I've been down this route once before. Word of caution: I would use only system-based events, because volume threshold events, for example, can come in as multiple triggers and will keep launching the script. My experiment failed because I was trying to automatically trigger perf stats based on custom thresholds I had put in place. That didn't end well: it kept triggering the perfstat script every 10 seconds or so against the same controller, so I failed miserably.
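If you do go the custom-threshold route, a simple guard helps against that pile-up. This is only a sketch; the controller name, timestamp file location, and 30-minute window are my own assumptions, not anything from the event setup above:

# Hypothetical throttle: skip the run if a perfstat was already started for this
# controller within the last 30 minutes (adjust the window to taste).
$controller = "filer01"                               # hypothetical controller name
$stamp = "C:\perfstat\last-run-$controller.txt"       # assumed timestamp file
$window = New-TimeSpan -Minutes 30

if ((Test-Path $stamp) -and ((Get-Date) - (Get-Item $stamp).LastWriteTime) -lt $window) {
    Write-Host "perfstat already ran recently against $controller, skipping"
    return
}

Get-Date | Out-File $stamp
# Kick off your perfstat script here, e.g. via Start-Process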
The other thing is it's easy to filter things out of the pipeline, so we can easily find the volumes that don't have a schedule associated:

get-navol | get-nasnapshotschedule | ? {($_.Days -eq 0) -and ($_.Hours -eq 0) -and ($_.Weeks -eq 0)}

But are all your volumes' nosnap options set to on, or do you just control it through the schedule? I was trying to craft something in one pipeline, but I think the if/else will have to do the trick (see the sketch below).
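For what it's worth, here's a hedged one-pipeline attempt. It assumes Get-NaVolOption is available in your toolkit version and returns the nosnap setting as a name/value pair; if it doesn't, the if/else approach is the safer bet:

# Sketch: volumes with no snapshot schedule, annotated with their nosnap setting.
# Get-NaVolOption and its Name/Value output are assumptions on my part.
Get-NaVol | ForEach-Object {
    $sched  = Get-NaSnapshotSchedule $_.Name
    $nosnap = (Get-NaVolOption $_.Name | Where-Object { $_.Name -eq "nosnap" }).Value
    if ($sched.Weeks -eq 0 -and $sched.Days -eq 0 -and $sched.Hours -eq 0) {
        New-Object PSObject -Property @{ Volume = $_.Name; NoSnap = $nosnap }
    }
}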
I have a full dashboard that will do this, but this will get you started:

$snapsched = Get-NaSnapshotSchedule $volume -ErrorAction "SilentlyContinue"

if ($snapsched.Weeks -eq 0 -and $snapsched.Days -eq 0 -and $snapsched.Hours -eq 0 -and $snapsched.Minutes -eq 0) {
    $snapshotsched = "No"
}
else {
    $snapshotsched = "Yes"
}

Write-Host $snapshotsched

This will tell you if there is a snapshot schedule. But
Peter, there are many ways to skin a cat here. I did it that way because I was using it in an Excel spreadsheet. But if you just want to see output, you can send it to ConvertTo-FormattedNumber:

PS C:\powershell> ((Get-NaEfficiency vol0).SnapUsage).Used | ConvertTo-FormattedNumber
55G

Or I wrote a function that I load as a module:

Function Convert-togb() {
    Param (
        [Parameter(Position=0,Mandatory=$True,ValueFromPipeline=$True)]
        #[int]$data
        $data
    )
    Process {
        [math]::Truncate($data/1gb)
    }
}

and I just send it to that:

PS C:\powershell> ((Get-NaEfficiency vol0).SnapUsage).Used | Convert-togb
51
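And if you want it across the board rather than for a single volume, something like this (just a sketch, assuming the function above is already loaded) rolls it up per volume:

# Sketch: snapshot used space in GB for every volume
Get-NaVol | ForEach-Object {
    New-Object PSObject -Property @{
        Volume     = $_.Name
        SnapUsedGB = (Get-NaEfficiency $_.Name).SnapUsage.Used | Convert-togb
    }
} | Format-Table -AutoSize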
Peter, I'm not following you on this. The above code works without issue. Can you paste what you ran and the output? (Take out any company specifics.)
OK, I'm suffering from a brain freeze today. I'm looking to find the active snapshot used space on volumes and I'm drawing an absolute blank. Will someone kick-start my brain for me? I tried get-nahelp *snapshot*, but nothing there.
What I do is wrap the dfm command line with PowerShell. You need to use something like dfm graph -F csv volume-usage-vs-total-1m (or -3m). Then you need to convert the CSV and dump it out; there's a rough sketch of that below.
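Roughly like this. The graph name comes from above; everything else (capturing dfm's stdout and assuming the first row is a CSV header) is my own assumption, so treat it as a sketch:

# Sketch: wrap the dfm CLI and turn the CSV output into PowerShell objects
$raw  = dfm graph -F csv volume-usage-vs-total-1m     # dfm.exe must be in your PATH
$data = $raw | ConvertFrom-Csv                        # assumes the first row is the header
$data | Format-Table -AutoSize
# From here you can Export-Csv it, chart it, or push it into Excel via the COM object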
So you don't combine write_latency and read_latency with the volume throughput? Now, do you create alarms based on this? I usually create templates as well. Let me play with this for a little.
What are your metrics for volume throughput? I would like to pair it together like you are saying, but I'm not quite sure how you set it up. Can you export the threshold template you set up and send it over? Check my post on elastic aggregate growth in the PowerShell forum; it should have the script attached showing what I did to determine growth-rate trending on aggregates to see when they fill up. I will try to dig up the link, or I can send you the code.
Excellent post... I was digging into this more and found the following: DFM stopped collecting data on this object from 23:00 until 2:51 AM the next day. I dumped the numbers for a latency counter from the CLI and noticed that gap. I checked other counters and saw the same thing, so trying to correlate those counters to find root cause will be very difficult. This is very aggravating. I also exported numbers from different objects, and there is a gap the entire time. I'm wondering if DFM backups are happening during that window, which would explain why no data is getting captured. I am checking into this now.

So my thought process from yesterday went out the window as well. I was working on multi-level triggers in PA, so I created a threshold with an emergency alert of 65,000 microseconds for 5 minutes; the idea was to have an event triggered in DFM and then have the alert kick off a perfstat. I'm obviously stuck on the last part, triggering the perfstat; I'm trying to wrap PowerShell around that piece. TBD...

I'm very interested to see what you use PowerShell for with DFM, as I use it a lot. I post a lot in the PowerShell forum for filer-based scripts, but I also posted a PowerShell script that does elastic growth-rate trending on aggregates from DFM and it got no traction, so I kind of gave up posting. I have some other cool scripts that I would be willing to discuss with you; you can PM me and we can chat about that.

I haven't had much success dumping data out of dfm perf data retrieve and wrapping PowerShell around it, so I'm curious how you handled that aspect. I also don't understand the statement where you said that NetApp reports the highest last?

Also, regarding this statement: "We found it beneficial to use variables for those parameters above so that we're not restricted to just running it against a filer or a volume (if you group filers by certain groups in DFM you can run it against that group), and so we can specify the counters we want and so we can find more than maximum. We almost always use max or mean switch and not the others, though." I'm really interested in hearing more about this. Are you creating custom objects and using Measure-Object, etc.?
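For that last part, here's the general shape of what I've been attempting with Measure-Object. Take it purely as a sketch: the dfm perf data retrieve arguments, the object/counter names, and the assumed "timestamp value" output format are all guesses on my side, not a working answer:

# Sketch only: pull a counter with "dfm perf data retrieve", parse the rows,
# then use Measure-Object for max/mean.  The -o/-C arguments and the
# whitespace-separated "timestamp value" output are assumptions.
$raw = dfm perf data retrieve -o "myfiler:/myvol" -C "volume:avg_latency"

$samples = $raw | ForEach-Object {
    $fields = @(($_ -split '\s+') | Where-Object { $_ })
    if ($fields.Count -ge 2 -and $fields[-1] -match '^[\d\.]+$') {
        New-Object PSObject -Property @{ Timestamp = $fields[0]; Value = [double]$fields[-1] }
    }
}

$samples | Measure-Object -Property Value -Maximum -Average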
Hey all, we had an interesting issue over the weekend where we saw NFS write latency spikes of up to 32,000,000 microseconds in our environment. We have PA enabled, but I'm not sure what other views/counters I should look at to try to determine whether or not this was a filer issue. Do we need a custom view for this? I'm also thinking of something like statit for the disks' ut% and xfers, to see if the disks were slammed. Is there a view for that as well? Thanks in advance.
Clinton - OK, my disclaimer is that I have yet to go down the cluster-mode road. But are you saying we can't use AD authentication via CIFS, like we do in 7-mode, to manage the filers via PowerShell?
OK, you can do a few things differently. First, I see you are just checking all your snapmirrors. What's your threshold? You should have a trigger point; who wants to look at all of their snapmirrors? Also, you can look into using the Excel COM object or piping it to an HTML page. If you search some of my scripts, that will get you started. There's a rough sketch of the threshold/HTML idea below.
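Something along these lines, for example. It's only a sketch: I'm assuming Get-NaSnapmirror exposes the lag as a LagTime property in seconds, and the 24-hour trigger point and output path are placeholders; swap in whatever threshold you actually care about:

# Sketch: only report snapmirrors whose lag exceeds the threshold, as an HTML page
$thresholdSeconds = 24 * 60 * 60     # assumed trigger point: 24 hours

Get-NaSnapmirror |
    Where-Object { $_.LagTime -gt $thresholdSeconds } |      # LagTime in seconds is an assumption
    Select-Object Source, Destination, Status, LagTime |
    ConvertTo-Html -Title "SnapMirror lag report" |
    Out-File C:\reports\snapmirror-lag.html                  # hypothetical output path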
I see no issue:

ac -path c:\test.log -value "---------------------"
ac -path c:\test.log -value "new reading $time"
ac -path c:\test.log -value "---------------------"
ac -path c:\test.log -value "`n---------------------"

Here's the file:

---------------------
new reading 10-08_13-11
---------------------

---------------------