Well, there are two schools of thought: you can do it this way, or feed the output into an array first.

Get-NcVserver nac-rwc01-db01 | Get-NcVol n01_cancdwd | Get-NcSnapshot |
    ? { $_.Name -like "*hotdb*" } |
    % {
        Write-Host "Deleting snapshot:" $_.Name
        Remove-NcSnapshot $_.Name -Confirm:$false
    }
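A sketch of the other school of thought, collecting into an array first so you can review before deleting. This assumes the same NetApp DataONTAP PowerShell Toolkit cmdlets as above, and that the snapshot objects expose Name and Created properties:

```powershell
# Collect the matching snapshots first so you can eyeball them before deleting
$snaps = Get-NcVserver nac-rwc01-db01 | Get-NcVol n01_cancdwd |
         Get-NcSnapshot | Where-Object { $_.Name -like "*hotdb*" }

$snaps | Select-Object Name, Created     # review the list first

foreach ($snap in $snaps) {
    Write-Host "Deleting snapshot:" $snap.Name
    Remove-NcSnapshot $snap.Name -Confirm:$false
}
```

The advantage of the array approach is that you get a chance to sanity-check the list before anything is destroyed.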
I wouldn't use a pattern in this way; you should use Where-Object. So, something like this after you get the snapshots:

| ? { $_.Name -like "*hotdb*" } | Remove-NcSnapshot -Confirm:$false

-Confirm:$false will suppress the prompt. Make sure you test the command first with -WhatIf.
There's a fundamental size difference between TiB and TB: one is base 2, the other is base 10. TiB = tebibyte (2^40 bytes); TB = terabyte (10^12 bytes). So, which one do you want? And I always prefer creating volumes in the unit we want to see, hence 2t or 3t, not in bytes.
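As a quick sanity check on the difference, note that PowerShell's own size suffixes are binary, so its 1tb constant is really a tebibyte:

```powershell
$tib = 1tb                   # PowerShell's 'tb' suffix is binary: 2^40 = 1,099,511,627,776
$tb  = [math]::Pow(10, 12)   # a decimal terabyte: 10^12 = 1,000,000,000,000
($tib - $tb) / $tb * 100     # roughly 9.95 percent difference
```

That ~10% gap is exactly why you want to be clear which unit a tool is reporting before you size a volume.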
Just know this: traditional volumes are a thing of the past; focus only on FlexVols. Also, these are pretty basic questions you're asking; you might want to do some additional reading on NetApp's documentation website.
I hate to be the bearer of bad news, but man, RAID groups (RGs) are the underlying building blocks of all the disk layouts on NetApp. You need to understand this before you even touch a NetApp, in my opinion. Start here - https://communities.netapp.com/docs/DOC-12850
First of all, you've said multiple things: your volume is at 100% and your quota is exceeded. I personally don't use GUIs, but I imagine you are looking at FilerView. We need to see a df -Vg of the volume, and then we might be able to assist. Also, your ONTAP version is VERY old; 7.3.4 is EONS old. If your controller can support it, you might want to consider upgrading.
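For reference, the checks I'd run on the controller look roughly like this (the volume name is a placeholder; df -Vg separates active file system usage from the snapshot reserve):

```
df -Vg myvol        # hypothetical volume name; usage in GB, volume view
snap reserve myvol  # is the snapshot reserve exhausted?
quota report        # which quota target is actually exceeded
```

Posting the df -Vg output tells us whether the volume itself is full or whether snapshots have eaten the space.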
As stated above, System Manager is a local install on your workstation that is used to connect to filers. OCUM is the manageability engine that manages the filers and does the alerting, and all sorts of jazz.
I see from your badges you're a NetApp employee, so you have more resources than I would as a customer. I would check with your sources, but I don't see how it can be a requirement for 8.2.1; I searched the release notes but nothing jumped out at me. As part of your deployment, no cluster (unless it's a 22xx) should go without ACP. I suggest your customer cable it up as in the Universal SAS and ACP Cabling Guide. It takes a few minutes to set up ACP, and then you will not have any issues.
Well, I'm always a fan of even-numbered RAID groups. Even though NetApp says you can go to 28 disks in a SAS RAID group, that risk profile is too high for me; 23 is pushing it a little as well. The highest I've gone is 24, in isolated cases. I suggest you read the following document: Reallocate Best Practices (TR-3929). I still don't understand why NetApp never thought to reallocate the entire aggregate when adding disks; that would be a big win. It's a tedious process, but if you want to stripe the data across all your disks, this is what you will have to do.
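The tedious per-volume process after growing the aggregate looks roughly like this in 7-Mode (the volume name is hypothetical; repeat for each volume on the aggregate):

```
reallocate measure /vol/dbvol      # gauge how badly the current layout is fragmented
reallocate start -f -p /vol/dbvol  # -f forces a one-time full pass; -p reallocates
                                   # physical blocks without bloating snapshot space
```

Run it during a quiet window; reallocation generates a lot of disk I/O while it spreads the data across the new spindles.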
Since they are already defined in your rc file, it should be pretty straightforward:

vfiler remove vfiler0 -i <ip address>
vfiler add vfiler1 -i <ip address>

Now, if something is using or has mounted those volumes, you will have issues. Is anything exported to those?
I'm not sure how critical your environment is, but I can tell you my opinion: all enterprise storage units need to have some sort of maintenance contract. Support is not free of charge; the forums are. If you have a maintenance issue on the 2040, you are stuck, and if it's serving production data that's a bad idea. As for upgrades, these are non-disruptive if you do them correctly. Basically, you copy the install file to the software directory and issue:

software update <installfilename> -r

Once done, you do a cluster takeover and giveback on each node and voila, you're done!
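The takeover/giveback sequence for a 7-Mode non-disruptive upgrade looks roughly like this (the image filename is hypothetical; the exact name comes from the download page and your Upgrade Advisor):

```
software update 8.1.4_image.tgz -r   # -r suppresses the automatic reboot
cf status                            # confirm the pair is healthy before failing over
cf takeover                          # run on the partner; this node reboots onto new code
cf giveback                          # once the node is back up, return its resources
# then repeat the takeover/giveback from the other node
```

Clients riding on NFS/CIFS see a brief pause during each takeover rather than an outage, which is why doing it in the right order matters.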
I checked hwu.netapp.com and it looks like your 2040s can go to 8.1.4. I would still request Upgrade Advisors if you can't grab them from the support staff, and do an upgrade. Also, I'm not going to be able to troubleshoot from the client perspective. If this issue still persists after the upgrade and reboot, you need to open a case with NetApp Support for further troubleshooting.
A 2040 filer is pretty old, so you might not have support on it, but you should be able to go to now.netapp.com and click My Support. You can generate your own AutoSupports on there as well. Run a compare of your NFS options on each controller to see if something is different. What's weird is you state it works fine to the other node in the cluster. Just turn on NFS stats and see if they point anything out; no need to post them.
OK, so let's tackle this first: if these are your filer stats under load, your filer is sleeping like a baby, neither CPU bound nor disk bound. As for upgrading, you need to have support on these boxes, and run Upgrade Advisors from My Support. For the HA pair, ACP is a plus; it helps out with the cluster shelves. For optimal cluster performance you want this at full connectivity, which it will be if it's cabled up properly (Google that part). I'm a little perplexed how the Linux box copies fine to two of the three heads but only gets very slow speeds from the one head. You can turn on NFS client statistics on that head, try the copy again, and see what's going on. Check your port configurations on the vif as well; make sure you don't have a duplex issue or something silly.
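Turning on per-client NFS statistics on the slow head looks roughly like this in 7-Mode (the client IP is a placeholder for your Linux box):

```
options nfs.per_client_stats.enable on   # start collecting per-client counters
nfsstat -z                               # zero the counters, then rerun the slow copy
nfsstat -h 10.0.0.50                     # hypothetical client IP; dump that client's ops
```

If the per-client counters look normal but throughput is still bad, that points back at the network path (the vif/duplex check above) rather than the NFS stack.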
Updating NetApp ONTAP is as easy as, well, anything. I can't comment until I know how big your aggregate is, how many disks you have, and some stats.
OK, let's tackle the first issue. If you say your aggregate is at 98%, that's obviously going to affect performance. You are also running code that should be updated if possible. What is your disk layout? Are you disk bound?

priv set diag
statit -b
(wait 9 seconds during peak workload)
statit -e

You should see what the disk utilizations are. Also run:

sysstat -m 1
Jon - Google jQuery and DataTables and you will see what we can produce. Also, you need to put the JS on either an Apache server or a Windows host and dump your script output there. If I get time today or sometime this week, I can hopefully dig through your script and see what I can do to help. Also, I can tell you right off the bat, there are easier ways to do things for sure. Here, I'll give you one quick tip: in Windows PowerShell 2.0, sending email is a lot easier. You want to use the Send-MailMessage cmdlet:

Send-MailMessage -To $recipients -From $sendMailAs -Subject $subjectLine -Body "put body here" -Priority $priority -SmtpServer $smtpServer
John, looks good. A couple of suggestions: I don't know how in-depth you want to get, but wouldn't it be easier to create custom objects and pipe them to the ConvertTo-Html cmdlet with a custom header? Now, if you really want to hit it out of the park, what I've done is use jQuery with the DataTables plugin on the rendered page, and it really blows away any tables you can create with standard HTML. I can post some of what I've done if you're interested.
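A minimal sketch of the custom-object idea, using only standard cmdlets. The volume names and numbers are made up, and the [PSCustomObject] accelerator needs PowerShell 3.0+ (on 2.0 you'd use New-Object PSObject -Property instead):

```powershell
# Hypothetical report data; in real life you'd build these rows from Get-NcVol output
$rows = @(
    [PSCustomObject]@{ Volume = 'vol1'; UsedGB = 120; TotalGB = 200 },
    [PSCustomObject]@{ Volume = 'vol2'; UsedGB = 80;  TotalGB = 100 }
)

# -Head injects the custom header/CSS into the generated page
$head = '<style>table{border-collapse:collapse} th,td{border:1px solid #999;padding:4px}</style>'
$rows | ConvertTo-Html -Head $head -Title 'Volume Report' | Out-File volreport.html
```

Because ConvertTo-Html emits a plain table, dropping the jQuery/DataTables includes into that same -Head string is how you'd get the sortable, searchable version later.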
This is interesting; I never thought to do it that way. I'm a script junkie, but I used to use the Excel API to dump to a dashboard and then use the power of Excel to do the calcs. I'm going to play around with this today if I have time, to see if I can also come up with a solution.