Hi Bill, Yes, it does make sense. I guess I will have to accept this size of snapshot. I have two more quick questions: a) Since I don't have any space left in the aggregate, I would have to remove the snapshot, let the backups keep running, and then reinitialize the snapmirror, right? b) If I add FC drives into this SATA aggregate, would that be alright, or are there any performance issues? Thank you!
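Just so you can sanity-check me, this is roughly the sequence I have in mind for (a) (7-Mode commands; the snapshot name is whatever the old SnapMirror base snapshot turns out to be, so treat it as a placeholder):

netapp2>  snap list vol1                           (find the old snapmirror base snapshot holding the 1911GB)
netapp2>  snap delete vol1 <old_base_snapshot>     (free the space on the source so the backups can keep running)
drfiler1> vol restrict vol1                        (destination has to be restricted before a new baseline)
drfiler1> snapmirror initialize -S netapp2:vol1 drfiler1:vol1

Please correct me if the order or any of the commands is wrong.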
Hi aborzenkov, >Backup applications can delete large amount of expired data in short time. Does that mean the deletion of a large amount of expired data could cause such a large snapshot (1911GB) within an hour? I thought that once the data is gone, we no longer need to track it, so the snapshot should not be that large. I am still trying to logically explain why we have such a large snapshot in an hour (thanks for pointing this out). Please let me know. Thank you!
No, it is not 5 days. I explained that already. Though I posted the message on 10/14, I had already broken the snapmirror off on 10/9 because the volume was full on 10/9. So all the outputs reflect the situation before 10/9.
Hi Bill, There is one issue left in my mind. >So your 1911G snapshot essentially contains the _changes_ to the volume between when the snapshot was taken and now - _not_ the changes since initialization. We have a schedule that updates the volume every minute on the destination (I know now that it is too extreme), and as I understand it, each update also triggers a snapshot. So this 1911GB would be the result of a snapshot taken within a minute! That amount of data changing in one minute seems impossible. >2TB change in 8TB in 5 days _seems_ a bit high I don't know where you got "5 days" from? At this point, I should tell you what the volume is for. This volume is presented to a Windows server as a share and is used for Acronis backups: two weeks' retention, full backups on the two Sundays, and incrementals on the other days. Does this tell you something? Thank you!
Okay. One thing I am not so sure of. Is it even possible for the snapshot to take so much space, 1911GB? How can the snapshot be so big? It seems unlikely that so much data changed between two snapshots.
Got your point. I could not grow the source, since the aggregate where the volume is located is completely full. Could I remove the 1911GB snapshot first? It seems to me that this snapshot may already be corrupted. If I do, would I then have to reinitialize, or could I resync?
Hi Bill, Thank you so much for such detailed explanations, which cleared up quite a few confusions in my mind. As you indicated, there must be something wrong with the snapmirror, and now I feel the snapshot should not be so big (1911GB). The total volume size is about 8TB. I am sorry, but I have not stated my situation accurately:

a) The snapmirror for this volume is scheduled as follows, not every half hour as I said earlier:

netapp2:vol1    drfiler1:vol1    -    0-59/59 * * *

How should I read this schedule? Does the update start every hour based on it? Maybe this schedule caused the problem of the volume filling up every month or so.

b) The snap list output you saw was as of 10/09, when the volume got full and I broke the snapmirror off. The following are the outputs you asked for, again as of 10/09:

drfiler1> snap list vol1
Volume vol1
working...
  %/used       %/total  date          name
----------  ----------  ------------  --------
 0% ( 0%)    0% ( 0%)   Oct 09 06:00  drfiler1(0151735037)_vol1.806
 0% ( 0%)    0% ( 0%)   Oct 09 05:59  drfiler1(0151735037)_vol1.805

drfiler1> df -rg vol1
Filesystem               total       used      avail   reserved  Mounted on
/vol/vol1/              8193GB     8135GB       57GB        0GB   /vol/vol1/
/vol/vol1/.snapshot        0GB        0GB        0GB        0GB   /vol/vol1/.snapshot

drfiler1> snapmirror status vol1
Snapmirror is on.
Source          Destination      State         Lag         Status
netapp2:vol1    drfiler1:vol1    Broken-off    126:33:39   Idle

Thanks again for your patience.
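Also, just so I am reading the snapmirror.conf line correctly: I assume the four fields after the dash are minutes, hours, days-of-month, and days-of-week, like cron, and that the "/" step syntax works the way it does in cron, so "0-59/59" would mean minutes 0 and 59 of every hour. Is that right, or does it end up meaning every minute? For comparison, if I wanted an update every 15 minutes, I believe the line would look like this (please correct me):

netapp2:vol1    drfiler1:vol1    -    0,15,30,45 * * *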
Okay. Understood. Thanks for the clarifications. I am sorry, but I still could not fully resolve the questions in my mind. This snapmirror has been scheduled to run every half hour on the destination, and it seems to be working according to "snapmirror status". So, does the following line from "snap list" contain every single snapshot (understood, a frozen image copy) since the first initialization, and is that why it is so big, 1911GB? If all snapshots have already been transferred to the destination (I guess this is how the destination keeps the original copy plus all changes made on the source), why do we need to keep all these snapshots on the source? Thanks for your patience.

filer2> snap list vol1
Volume vol1
working...
  %/used       %/total  date          name
----------  ----------  ------------  --------
23% (23%)   23% (23%)   Oct 09 06:00  drfiler(0151735037)_vol1.806 (snapmirror)
Okay. Where could you see "two snapshots" on the source? I only see one. The snapmirror was established a month ago, and I can only see one line in the output of "snap list". So, does this snapshot, at 1911GB, include all changes/overwrites since the first initialization?
The vol1 is 100% full again. My question is not about solving the volume-full issue, but more about understanding the snapmirror at a more granular level. Please see the following outputs on the source filer "filer2". It seems to me that the snapshot created by the snapmirror is 1911GB, and the rest of the space is taken by the volume (the file system) itself. Could anybody please explain the output of "snap list vol1" to me in detail?

- What exactly does the snapshot include: a complete copy of vol1, plus all snapshots since the first full copy? Why do I have to keep the full copy on the source filer after it has already been copied over to the DR site?
- Has this listed snapshot already been copied to drfiler1, or just the full set of snapshots?
- Is there any way to list the data in detail, i.e., what is the full copy of the volume and what are the snapshots, and when was each snapshot taken?

Thanks for your help!

filer2> df -rg vol1
Filesystem               total       used      avail   reserved  Mounted on
/vol/vol1/              8193GB     8148GB        0GB        0GB   /vol/vol1/
/vol/vol1/.snapshot        0GB     1911GB        0GB        0GB   /vol/vol1/.snapshot

filer2> snap list vol1
Volume vol1
working...
  %/used       %/total  date          name
----------  ----------  ------------  --------
23% (23%)   23% (23%)   Oct 09 06:00  drfiler(0151735037)_vol1.806 (snapmirror)
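Related to the last bullet: is there a command that shows how much space each snapshot is actually holding? I believe 7-Mode has "snap delta" and "snap reclaimable", something like the following (syntax from memory, so please correct me):

filer2> snap delta vol1                                      (rate of change between each snapshot and the next)
filer2> snap reclaimable vol1 drfiler(0151735037)_vol1.806   (space that would be freed by deleting this snapshot)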
Thanks for your idea. I would now correct the settings as follows:

/etc/snapmirror.conf on D1:
s1rep:vol1    d1:vol1    -    0,15,30,45 * * *

/etc/snapmirror.conf on D2:
s2rep:vol2    d2:vol2    -    0,15,30,45 * * *

In /etc/snapmirror.allow: d1rep
options snapmirror.access legacy

I also need to make sure there is a route from s1rep to d1 via d1rep. "traceroute -s s1rep d1rep" can show me the route, right?
I have two filers in an HA configuration; the primary connections and host names are S1 and S2 respectively, with two dedicated 10G connections for replicating data to DR, named S1REP and S2REP. The same at DR: two filers named D1 and D2, and two dedicated 10G connections, D1REP and D2REP. The following is what I am going to set up. Please kindly advise if anything is MISSING or INCORRECT.

/etc/snapmirror.allow on S1 and S2:
D1REP (or its IP)
D2REP (or its IP)

/etc/snapmirror.allow on D1 and D2:
S1REP (or its IP)
S2REP (or its IP)

/etc/snapmirror.conf on D1:
S1REP:vol1    D1REP:vol1    -    0,15,30,45 * * *
...

/etc/snapmirror.conf on D2:
S2REP:vol2    D2REP:vol2    -    0,15,30,45 * * *

No S1, S2, D1, or D2 should be involved in the snapmirror configuration. Thank you very much in advance!
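To put it all together, this is roughly what I think the files would end up containing (only vol1/vol2 shown; one thing I am unsure of is whether the destination field in snapmirror.conf should be the replication interface name D1REP/D2REP as below, or the filer's real hostname D1/D2 - please correct me):

# /etc/snapmirror.allow on S1 and S2
D1REP
D2REP

# /etc/snapmirror.conf on D1
S1REP:vol1    D1REP:vol1    -    0,15,30,45 * * *

# /etc/snapmirror.conf on D2
S2REP:vol2    D2REP:vol2    -    0,15,30,45 * * *

And I believe snapmirror also has to be turned on, with "options snapmirror.enable on", on all four filers.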
Need you guys' help again, to continue my story. Finally I was able to get the DC admin to come to my desk and enter his ID and password. It works. However, I am getting the following message. Which of the options should I choose?

CIFS - Logged in as admin@abcnet.cit.com.
The user that you specified has permission to create the filer's machine account in many (754) containers. Please choose the method that you want to use to specify the container that will hold this account.
(1) Create the filer's machine account in the "Computers" container (CN=Computers, Windows default)
(2) Choose from the entire list
(3) Choose from a subset of containers by specifying a search filter

Here is some background: Currently, we have CIFS running on an existing pair of filers, and we want to migrate CIFS to the new pair and eventually retire the existing one. So we need to keep all information, including the DC information. What should I do from here: should I choose (1) and create a new object under the "Computers" container, or choose (2)? I don't know what (2) is; is it something I could use to pick the same container as the existing pair of filers? I don't know much about the DC, and the DC admin is not sure what I am asking, so I once again turn to you for help!
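One thought, in case it is relevant: I was going to ask the DC admin to check which container the existing filer's machine account currently sits in, so I could pick that same one from the list behind option (2). From a domain-joined Windows box I believe that would be something like the following (OLDFILER1 is just a placeholder for the existing filer's NetBIOS name):

C:\> dsquery computer -name OLDFILER1

which should print the full DN, including the OU/container. Does that sound like a reasonable way to decide between (1) and (2)?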
I contacted Support, but they could not give me a clear answer other than sending a few links and wanting me to figure it out myself. I could not. I have 2 SAS ports on each filer (2 x FAS3250) and two dedicated SAS stacks. Can I use FC ports for a SATA stack, or are there any other ports I can use for a SATA stack?
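For what it is worth, this is how I was planning to check what ports I actually have on each FAS3250 before deciding (please tell me if there is a better way):

filer1> sysconfig -a            (lists the onboard ports and what is in each expansion slot)
filer1> storage show disk -p    (shows which adapter/port the existing shelf stacks are cabled to)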
Hi All, Our users have full access to a share, abc$, and on the Windows side they have full access to all folders under abc$ except one folder. Is this something the NetApp admin can do to allow them to access that folder, or is this something the owner of the folder on the Windows side has to grant? Thanks!
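Just to show what I can see from the filer side: I believe the only thing I can control there is the share-level permission, e.g. (the group name below is a placeholder):

filer> cifs shares abc$                                          (show the share and its share-level ACL)
filer> cifs access abc$ "DOMAIN\Backup Users" "Full Control"     (change the share-level rights)

If the share-level ACL already gives everyone full access, then I am guessing the block on that one folder is an NTFS permission that the folder's owner has to change from the Windows side (folder Properties > Security), not something I can fix on the filer. Is that right?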
Yes, I have Flash Cache on this 3250. Would that have any impact on adding the SATA shelf? And do I have free ports for the shelf so it can have its own stack? Thanks for your message!
Hi resqme914, sorry to bother you again on this topic. Now I have a total of 78 SAS data drives left. You mentioned that I can add all these drives into the existing aggr0, so I don't need to move vol0. My question is: since there is already a raid group in aggr0 with a size of 3 drives, how many raid-dp groups do I need to create in aggr0, and how should I size them? Here are the steps I can think of; please correct me.
1. Expand the existing raid group first by adding 18 more drives, so that group will still include the root vol0. How do I determine which 18 of the 78 drives should be added to it?
2. Create 3 more raid-dp groups of 20 drives each with the rest.
Also, would a raid-dp group size of 20/21 satisfy best practice? Thank you, as always.
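Here is a rough sketch of the commands I had in mind, with my guessed disk counts (assuming the existing 3-disk group is named rg0; 17 + 60 uses 77 of the 78 drives and leaves one extra spare - please correct the numbers):

filer> aggr status -r aggr0              (check the existing raid group layout first)
filer> aggr options aggr0 raidsize 20    (set the target raid-dp group size)
filer> aggr add aggr0 -g rg0 17          (grow the existing 3-disk group out to the full 20)
filer> aggr add aggr0 60                 (the remaining drives should form three new 20-disk raid-dp groups)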
I know I need to ask our sales rep, but I just wanted to get a rough idea first and then take it a step further. We need to dump about 10TB of backup data to the storage. It would be cheap if we used SATA drives (maybe 1TB drives?), but currently we only have SAS (600GB each). So I am trying to work out how much difference there is between these two shelves and make a decision from there. Thanks!
I am sorry, but I could not quite follow you. Currently, we are using a 2-port LACP group on each node of the 2-node HA filer pair. A VLAN is created on these two ports on each filer, so the two VLANs from the two filers point to two different switches. If one switch goes down, we will lose one filer, since, as I understand it, in a case like this the failover would not happen. Could you please elaborate on your idea, or recommend a document on preventing a single point of failure at the switch? Thanks a lot!
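To be concrete, the current filer-side config on each node looks roughly like this (interface and VLAN names are placeholders, not our real ones):

filer1> ifgrp create lacp ifgrp0 -b ip e1a e1b
filer1> vlan create ifgrp0 100
filer1> ifconfig ifgrp0-100 <ip> netmask 255.255.255.0 partner ifgrp0-100

with both member ports cabled to the same switch, which is exactly the single point of failure I am worried about.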
aborzenkov, you sound pretty familiar with these options. You are right, we should talk to the networking guys, but we heard different things from them, so that's why I wanted to get some ideas myself. It sounds like a lot of effort, or costly, on the networking side. Then what can we do on the NetApp side? If we don't go with the vPC approach on the network side, what can we do to resolve the single point of failure at the switch? When the switch goes down, it takes the entire LACP group (2 ports) down with it.
Thanks for your message. Is vPC usually enabled or disabled under normal circumstances? What impact would it have on the switch if I wanted to enable it? The reason I am asking is that our networking guys told me that two ports in an LACP group cannot be connected to two different Nexus switches, so I am wondering what difficulties are stopping them from doing so. Thanks for your further advice!
I have two 10G ports on a NetApp filer, and they are formed into an LACP group. To keep redundancy, can these two ports go to two different Cisco Nexus 5545 switches? Thanks for your information!
Hello ipsita, Thanks for checking. I just did it: quiesce and break the snapmirror, destroy the lun, then resync the snapmirror. The lun has been snapmirrored to DR and everything looks good. The conclusion is that I have to break the mirror before I can destroy the lun; an online volume won't do it, since the volume was already online but read-only. Thanks to all for your help.
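For anyone who finds this thread later, the sequence I ran on the destination filer was roughly the following (volume, lun, and filer names are placeholders):

drfiler> snapmirror quiesce vol1
drfiler> snapmirror break vol1
drfiler> lun offline /vol/vol1/lun0       (the lun has to be offline before it can be destroyed)
drfiler> lun destroy /vol/vol1/lun0
drfiler> snapmirror resync -S sourcefiler:vol1 drfiler:vol1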