If you're backing up the perf data using the archive method, then I believe it does stop perf data collection, though I'm not sure whether that should come with that 32-second spike. Missing perf data is a problem of its own, and our perf data had grown so large that backing it up was taking hours too, so we ended up switching to snapshot-based backups.
We have a threshold like yours set up, but we pair it with another metric (volume:throughput) to identify either the bully or the volume(s) most affected by the bully in terms of I/O. Otherwise, if every volume is seeing high latencies, you get that many more emails, even for volumes that aren't generating much I/O. There are some cases where the filer can still satisfy the bully volume's I/O below the latency threshold while no other volume is moving the amount of data specified in the alarm I mentioned, and some workloads still suffer (like a low-I/O, latency-sensitive process), so YMMV with that.
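To illustrate the pairing idea (just a sketch, with made-up threshold numbers and hypothetical $stats data that you'd populate from DFM however you collect it):

# Hypothetical per-volume stats gathered elsewhere; thresholds are
# made-up numbers for the example.
$latencyMs     = 20
$throughputKBs = 5000
$stats = @(
    New-Object PSObject -Property @{ Name = 'vol0'; Latency = 35; Throughput = 6178 }
    New-Object PSObject -Property @{ Name = 'vol1'; Latency = 32; Throughput = 120 }
)
# Volumes over BOTH thresholds are the likely bully (or the victims doing
# the most I/O); everything else is noise you don't want emailed about.
$stats |
    Where-Object { $_.Latency -gt $latencyMs -and $_.Throughput -gt $throughputKBs } |
    Sort-Object Throughput -Descending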
I switch back and forth between using the ONTAP module for filer-related tasks and using "dfm run cmd", especially with DFM monitoring 7-Mode filers. With the DFM server monitoring clustered ONTAP filers, more of the work is done with the ONTAP module, largely because the version of DFM we have in place for cDOT doesn't/won't work with some of the standard monitoring/conformance stuff we do with 7-Modes (dedupe and autosize checks, for example).
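For a rough flavor of the two approaches (a sketch, assuming the Data ONTAP PowerShell Toolkit is loaded; the dfm run cmd syntax is from memory, so double-check it against your DFM version):

# Via the ONTAP module, talking to the filer directly:
Import-Module DataONTAP
Connect-NaController filer1
Get-NaVolAutosize vol0      # autosize check
Get-NaSis /vol/vol0         # dedupe status

# Via DFM instead, running the 7-Mode commands through the DFM server:
dfm run cmd filer1 "vol autosize vol0"
dfm run cmd filer1 "sis status /vol/vol0"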
If you are looking to do some graphing like the dfm perf view command, then you may be running into the issue of PowerShell handling binary data differently than the regular command prompt. I'm not sure if that's what you mean by growth-rate trending, but when I was trying to use dfm perf view to generate graphs of space usage and so forth, that's what I had to work around. There's probably a .NET library to handle this, but I instead opted to have the script write a simple batch file and run that batch file in the regular prompt, because I knew I could get that working quicker. There are no style points in that, however.
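For reference, the workaround looked something like this (a sketch; the dfm arguments and paths are placeholders for whatever view you're generating):

# Have PowerShell write a one-off batch file, then run it in the regular
# prompt so the binary output doesn't get mangled.
$dfmCommand = 'dfm perf view <your-view-arguments-here>'
Set-Content -Path C:\temp\runview.bat -Value $dfmCommand
cmd.exe /c C:\temp\runview.bat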
I didn't run into any odd problems with pulling the dfm perf data output into PowerShell. I wrote it a while ago, so there are some things I might do differently today, but it's run-of-the-mill string manipulation and doesn't involve any use of objects or properties as far as absorbing/manipulating the output goes. Here's what I mean when I say DFM reports the max value last:
Timestamp filer1:/vol0:throughput
-------------------------------------------------------------------------------
2013-08-27 17:01:44 5018.000
2013-08-27 17:02:44 5072.550
2013-08-27 17:03:44 5385.433
2013-08-27 17:04:44 6178.200
Timestamp filer1:/vol0:throughput (max)
-------------------------------------------------------------------------------
2013-08-27 17:04:44 6178.200
You can see that DFM picks out the peak value at 17:04 and puts it two lines under the (max) header at the end, which is the mode that was specified. The three things I'm interested in are the timestamp, the value (6178), and the object name, which is vol0. I just manipulate the output so that it doesn't have that ------ line and organize the object name, timestamp, and value more neatly. I do use the New-Object cmdlet for that part, and I attach those parameters as NoteProperties. The rest is standard array-of-objects code: create an array, keep adding a new object to it with those 3 properties, and once you're done, sort the objects by value in descending order so you can view them from highest to lowest. If you have 50 volumes on a filer (or 50 of any object), the array will end up with 50 entries, each with the name, timestamp, and value it got from DFM.
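Here's a minimal sketch of that parsing approach, untested, assuming $raw holds the captured dfm output lines and that the layout matches the sample above; it only keeps the row(s) under the (max) header:

$results = @()
$object = ''
$inMax = $false
foreach ($line in $raw) {
    if ($line -match '^-{5,}') { continue }                   # drop the ------ separator
    if ($line -match '^Timestamp\s+(\S+)(\s+\(max\))?') {     # header carries the object name
        $object = ($matches[1] -split ':')[-2] -replace '.*/', ''   # vol0 from filer1:/vol0:throughput
        $inMax = [bool]$matches[2]
        continue
    }
    if ($inMax -and $line -match '^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+([\d.]+)') {
        $row = New-Object PSObject
        $row | Add-Member NoteProperty Name      $object
        $row | Add-Member NoteProperty Timestamp $matches[1]
        $row | Add-Member NoteProperty Value     ([double]$matches[2])
        $results += $row
    }
}
$results | Sort-Object Value -Descending   # highest value first; 50 volumes = 50 entries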