Hi David, Regarding the debate, I told them that we could not only save space but also increase performance due to more spindles. However, in their opinion the best practice is always to separate them, and they don't care about the space since, in their words, it is cheap. I guess I am losing the debate, because they need a written NetApp document that clearly states a large aggr is a valid option, or even the better one, and that is what we lack. They are just not convinced without docs.
Hi David and resqme914, I have two types of filers, filerA and filerB. We just started to use them, so the data on them is not significant. I need to end up with exactly what aggr0 was originally: a 3-disk RAID-DP aggr0 as the root aggregate.

Filer A currently has only one large aggr0 (2 x RAID-DP, 23 disks each) and a total of 2 spares. This filer is for DR, and there is only one DR volume on it now, which I think can be removed and its SnapMirror re-created later. I will do the following:
- create a RAID4 aggr_root on the 2 spares
- run through the "root vol move" steps above, and reboot
- destroy aggr0, which releases all of its disks
- use 3 disks to create a RAID-DP aggr0, then run the "root vol move" steps above again to move the root volume from aggr_root to this aggr0, and reboot
- destroy aggr_root
- create a separate aggr1 with 2 x RAID-DP, 21 disks each
- re-create the SnapMirror for that DR volume from the primary

Filer B has 2 aggregates now, aggr0 and aggr1, plus 4 spares. aggr1 has data, but aggr0 has no real data yet other than the root volume. aggr0 has 80 disks (4 RAID-DP raid groups of 20 disks each). Since I am not going to touch aggr1, it should stay intact through the whole process, right? The steps would be:
- create a RAID-DP aggr_root using 3 of the 4 spares
- run through the "root vol move" steps above, and reboot
- destroy aggr0
- create a separate aggregate "aggr2" with a total of 76 disks (4 RAID-DP raid groups of 19 disks each), leaving 5 spares

Please verify these steps, and correct me if anything is wrong. Thank you!
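For filer A, this is roughly how I picture the commands. The disk names and the temporary volume size are placeholders, and the middle part is only my reading of the "root vol move" steps you posted, so please correct anything that is off:

filerA> aggr create aggr_root -t raid4 -d 0a.16 0a.17    # temporary root aggr on the 2 spares
filerA> vol create root_tmp aggr_root 20g                # placeholder size, big enough for vol0
filerA> options ndmpd.enable on
filerA> ndmpcopy /vol/vol0 /vol/root_tmp                 # copy the current root volume's contents
filerA> vol options root_tmp root                        # mark it as root, then reboot
filerA> vol offline vol0
filerA> aggr offline aggr0
filerA> aggr destroy aggr0                               # frees all 46 disks
filerA> aggr create aggr0 -t raid_dp 3                   # new 3-disk RAID-DP root aggr
... then repeat the root vol move into aggr0, reboot, destroy aggr_root, and finally:
filerA> aggr create aggr1 -t raid_dp -r 21 42            # 2 x RAID-DP raid groups of 21 disks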
Hi resqme914, I have 2 filers here, both with 2 shelves of 24 disks each. After our conversation, I felt it made sense to create one large aggregate "aggr0" and keep the root volume vol0 in the same aggregate. Based on that idea, on each filer I already created aggr0 with 2 RAID-DP raid groups of 23 disks each, plus 2 spares. Now people here don't like this idea at all and insist on separating the root aggregate, citing "best practice" and "better performance", and for them these concerns outweigh the capacity savings. I may have to separate them. I don't have a strong case to defend the idea because I lack ONTAP documents to support it. Here is my question: given what I have already done, what can I do to separate them? I am thinking I can use the 2 spares, so the question becomes: what type of ONTAP raid group can consist of only 2 disks? If I can create a raid group from the 2 spare disks, I can then create an aggregate, say aggr1, use the steps you provided above to move vol0 to aggr1, destroy the current aggr0, create a new aggregate, say aggr2, as a 3-disk RAID-DP, and then move vol0 to aggr2. Does that sound right? And which raid type in ONTAP supports a group of only 2 disks? On the other 2 filers there are 4 spares, so if this idea works I could use a similar approach and separate them even more easily. I am sorry to keep this thread going, but I really appreciate all your messages.
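To show what I am picturing for the temporary aggregate: as far as I know RAID4 is the only type that can live on just 2 disks (1 data + 1 parity), so something like this, with placeholder disk names:

filer> aggr status -s                              # confirm which 2 disks are spare
filer> aggr create aggr1 -t raid4 -d 0b.22 0b.23   # temporary 2-disk RAID4 aggregate
filer> vol create newroot aggr1 20g                # placeholder size, just big enough for vol0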
Scott, Your detailed checklist is very helpful; we could not find such a list anywhere else. We currently use a 2-node filer pair for corporate shares, as a dedicated CIFS server, so we are debating whether we really need to convert them to vfilers. Of course, we could gain some benefits by doing that. However, since the purpose of these two filers is pretty straightforward, and there are no development filers involved, converting production to vfilers might be challenging. We are also planning to set up corporate user home directories on a NetApp filer. Maybe we will have a need for vfilers then, since the filers would be serving two purposes. What would you suggest?
So this number, 2320G, and whatever larger number it later grows to, did indeed include the data we had just deleted? And that is why you expect the size to eventually grow to 2911G, i.e. by the same amount we just removed? And so we could say that snapshots also keep track of deleted data? Sorry, Bill, I am a slow man...
Hi Bill, I have read your message a few times. Based on my understanding, I honestly still don't see why the snapshot space keeps getting bigger: the 2320GB figure below kept climbing for about half an hour and reached as much as 3219G before I had to delete the snapshot with the snap delete command.

source> df -rg vol1
Filesystem               total    used   avail  reserved  Mounted on
/vol/vol1/              8193GB  8148GB     0GB       0GB  /vol/vol1/
/vol/vol1/.snapshot        0GB  2320GB     0GB       0GB  /vol/vol1/.snapshot

So it did not error out quickly, and I am not sure whether this is the result of the scheduled resync in /etc/snapmirror.conf, because the number started to climb as soon as we deleted the 1TB of data. The "transferring" status is on the source side, when I run "snapmirror status" against the source volume. There are "vol1 is full" messages in the /etc/messages file, and also messages that the destination volume is full and the transfer could not be made. The only type of message in /etc/log/snapmirror is that the DR volume is full and the transfer could not be made. So both volumes were full. Another basic question, please forgive me: the 2320GB here is really the total amount of data that all the snapshot pointers point to, not the amount of space the pointers themselves occupy, right? Because the pointers alone could not occupy that much space. If that is right, then it means that once the 1TB of data was removed, the growing snapshots started pointing to those removed blocks, and therefore the amount of data the snapshots reference keeps getting bigger?
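In case it helps the discussion, these are the commands I think I can use to watch which snapshot is holding on to the blocks (please correct me if I have them wrong):

source> snap list vol1                          # each snapshot's %used and date
source> snap delta vol1                         # how much data changed between snapshots
source> snap reclaimable vol1 <snapshot-name>   # space that would come back if that snapshot were deleted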
>In addition, when you delete a large amount of data, the process that "transfers" that data to the snapshots (block reclamation) is not instantaneous -

Can I understand this sentence as follows: a snapshot keeps track not only of data that was just added, but also of data that was just removed, and therefore I will see the just-removed data being "transferred" into the snapshot area, and that is why I see the snapshot space growing? I did quiesce the snapmirror before breaking it off. However, the relationship is still defined in the snapmirror.conf file. As you said, that is why the "transferring" would still be going on, but it will eventually error out? Thank you very much for staying with me for so long.
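Just so I am sure I did (or will do) the right sequence to stop the transfers completely, this is roughly what I have in mind; the volume and filer names are placeholders for mine:

dr_filer> snapmirror quiesce dr_vol              # let the current transfer finish/park
dr_filer> snapmirror break dr_vol                # make the destination writable
# then comment out or remove the dr_vol line in /etc/snapmirror.conf on dr_filer,
# otherwise the schedule keeps trying to resync
source> snapmirror release vol1 dr_filer:dr_vol  # lets the source drop its snapmirror snapshot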
Hi Bill, I need to turn to you for your help again, to continue our conversation about the size of the snapshot produced by SnapMirror. The issue is that the size of the snapshot gradually increases as more data gets removed from this 8TB volume. If I keep running df -rg on the volume, the size gets larger and larger after I removed just 1TB of data. Why? Even though I broke off the snapmirror on the DR site, the size of the snapshot (on the source) is still increasing, and if I run "snapmirror status volume" the status shows "transferring". Why? I thought that if I broke off the snapmirror, the transferring should stop.
I have a FAS2140 2-node HA pair and am thinking about converting it to multiple vfilers. Currently the environment is mainly CIFS shares, with multiple volumes and also qtrees. My question is: once I create multiple vfilers, where will these existing volumes/data be located? Will they stay on vfiler0? And is it true that I can size the rest of the vfilers however I want, as long as they don't exceed the total capacity? Thanks for your advice in advance!
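To make the question concrete, this is the kind of thing I imagine doing (the names and IP address are made up), where one of the existing CIFS volumes gets assigned to a new vfiler:

filer> vfiler create vf_cifs -i 10.10.10.50 /vol/vf_cifs_root /vol/cifs_vol1
       # /vol/vf_cifs_root becomes the vfiler's root; /vol/cifs_vol1 moves from vfiler0 to vf_cifs
filer> vfiler status -a          # anything not listed under a named vfiler is still owned by vfiler0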
The following is a very good document on DR procedures: https://kb.netapp.com/support/index?page=content&id=1011195&actp=search&viewlocale=en_US&searchid=1356550972797 One question, though, about this statement. At the beginning of the article there is a note: "Note: Replace volume names with fully-qualified qtree names if working with qtrees" What does that mean, and what exactly do I need to do? As I understand it, when a real DR happens, I need to change the DNS entry to point the source name "systemA" to the DR storage, and I also need to make sure all the volumes/qtrees/shares on the DR filer are named the same as those on the primary, so that when I used to access the primary as systemA:/volume1, after the DR I can use the same reference. Those are the things I need to do.
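If I read that note correctly, it just means that wherever the article uses a volume name, a qtree relationship would instead use the full /vol/<volume>/<qtree> path, for example (made-up names):

# volume SnapMirror entry in /etc/snapmirror.conf on the DR filer
systemA:vol1                systemB:vol1_dr                - 0 * * *
# the same entry for a qtree, using fully-qualified qtree names
systemA:/vol/vol1/qtree1    systemB:/vol/vol1_dr/qtree1    - 0 * * *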
Hi Bill, I understand everything you said about deleting and then re-adding the share, with the exception of the following: how do I migrate the data up one level? Should I do this on the filer side or on the Windows side?

> If you want to move the share and have it look the same, you'll need to migrate the data up one level, then change the shares.
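Just to make sure I understand what "migrate the data up one level" means, is it something like this on the filer side? This is purely a guess on my part, not something I have run:

filer> options ndmpd.enable on                 # ndmpcopy needs ndmpd turned on
filer> ndmpcopy /vol/vol1/share1 /vol/vol1     # my guess: copy the share1 directory's contents up to the volume root
# ...and then remove the old share1 directory once everything is verified?

Or would you do the copy from a Windows client instead?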
I have a share "share1":

>cifs shares share1
Name       Mount Point         Description
----       -----------         -----------
share1     /vol/vol1/share1    Created on 10/16/2013

What can I do to move share1 up to the volume level, so that it ends up like the following?

Name       Mount Point         Description
----       -----------         -----------
share1     /vol/vol1           Created on 10/16/2013
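Assuming the data is already where it needs to be, I suppose re-pointing the share itself would just be the following (syntax as I understand it):

filer> cifs shares -delete share1
filer> cifs shares -add share1 /vol/vol1 -comment "Created on 10/16/2013"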
I need another bit of help from the experts. Under "Management" and then "Host Users", I cannot add domain users on a newly added filer, and I get the message below. I am sure the domain user itself is fine, since it already exists on the other filers. This may not be a big issue, since I can use the useradmin domainuser command to do the same thing, but I am wondering what setting is causing the issue, and whether Operations Manager is in the right state. CIFS has already been set up, the AD entry is created, and I can run cifs lookup on the domain ID on the filer without an issue.

Error : Could not add domain user domainname\username to the usergroup(s). Reason: Look up failed for the given domain user
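For reference, this is the workaround I mean, run directly on the filer (the group is just an example):

filer> useradmin domainuser add domainname\username -g Administrators
filer> useradmin domainuser list -g Administrators      # should now list the user's SID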
I logged in to the NetApp Management Console today, and the 4 newly added filers were still not there. So, on only ONE of the 4 new filers, I ran:

options httpd.admin.enable on
options httpd.admin.ssl.enable on

The other items suggested by Roger Bergling had already been done. I then logged out of the Management Console and logged back in, and now all 4 new filers show up under "Global". I just don't know why this worked, since I only ran those two options commands on one filer, so why did all 4 filers suddenly come up?
Hi resqme914, I did the same as you advised: P.A. is enabled and the transport is httpsOk, and I logged out of the Management Console and back in. Still the same...
Hi kryan, it failed on the following; the others seem to have passed. What should I do, and is my problem related to this error?

perfAdvisorTransport    Failed (perfAdvisorTransport set to httpOnly, but host uses https)
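From the wording of that check, my guess is that the per-host option has to be changed on the DFM server, something like the following (the hostname is a placeholder, and httpsOk is the value from my other reply; please correct me if the option name or value is wrong):

dfm host get filer5                                 # check the current perfAdvisorTransport value
dfm host set filer5 perfAdvisorTransport=httpsOk    # allow Performance Advisor to use https
dfm host diag filer5                                # re-run the checks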
Hi there, I did that and it was successful; I don't have an issue with adding the host. The issue is that when I click "Manage Performance", the new host doesn't show up in any of the groups there. I can add it to groups in Operations Manager, but it still won't show up under the groups under "View" in the left pane of the NetApp Management Console.
I have added a new filer into the NetApp Management Console. However, when I click "Manage Performance" and then "View", the filer is not under "Global". I can see the others. How do I get the new one to show up here?
I used choice (1):

(1) Create the filer's machine account in the "Computers" container (CN=Computers, the Windows default)

Then the AD admin moved the new filer from "Computers" to the same container where the current filer is located. Would that cause any issue, since he made the change on the AD side? Do I need to do anything on the filer to reflect the change? The group policy of the existing filer in AD is empty. We have clicked through the properties of both the newly created and the existing filer in AD and made sure the settings under the "Security" tab are all the same. Also, in /etc/cifsconfig_share.cfg there are a lot of commands similar to the following:

"cifs shares -add Marketing... "
"cifs access "Market..."

Should "Marketing" here, for instance, be defined somewhere in AD? Could you please tell me where exactly I can find these in AD? Thank you!
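To make my question clearer, I believe the complete lines follow a pattern like this (the path and the group name here are made up by me, since the real ones are cut off above):

cifs shares -add Marketing /vol/vol1/marketing -comment "Marketing share"
cifs access Marketing "DOMAIN\Marketing Users" Full Control

So what I am really asking is whether names like "Marketing" in these lines are supposed to exist as objects in AD, or only the DOMAIN\... users and groups that cifs access refers to.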
http://www.netapp.com/us/media/tr-3437.pdf This doc, on pages 11-12, more or less says the best raid group size would be 12-20 disks, and so does https://communities.netapp.com/message/102130. But I feel the same as you: using the whole shelf for the raid group makes sense. I have also read other threads that suggested using the whole shelf.
Hi resqme914, I have 78 data disks left (with 3 disks for spares) because I would have to use 12 other disks for the 2nd aggregate. I know creating 2 aggregates is not an optimal approach. However, we have backup data whose load hurts the filer's performance, so everyone here wanted to separate the backup data from the rest of the production data. Now I have another scenario that needs your advice. On the DR site we have two heads, each head with 2 shelves of 24 drives. I am thinking of creating one aggregate on each filer, with one shelf per RAID-DP raid group, i.e. 24 disks per group. Would that be against best practice? I have heard the best raid group size would be 12-20 disks. What would you say?
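In command form, what I have in mind per DR head is roughly the following. I realize this would leave no spares, and I am not even sure a raid group size of 24 is allowed for our disk type, so this is only a sketch of the idea:

filer> aggr status -s                           # confirm the 48 disks are available
filer> aggr create aggr0 -t raid_dp -r 24 48    # one 24-disk RAID-DP raid group per shelf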