2013-09-10 02:09 PM
I have an N6040 filer (FAS3140) serving up CIFS and FCP. We have way too many CIFS volumes on a particular aggregate, and our engineers are trying to work with 15-20GB simulation files. Saving these files is absolutely brutal since the data stays queued up in NVRAM waiting for the smallish SATA aggregate to write. This causes a bit of latency across the whole system. The current aggregate is 19 SATA disks, which should be good for ~1500 IOPS (at 80 IOPS per 7k spindle), but we are pushing 2.5x that during high-use intervals. We have another aggregate with much lower utilization, a perfect candidate for a volume move.
I'd like to take one or two of these big CIFS volumes and move them to aggregates with lower IO utilization so that the workload can spread out a bit more evenly and reduce the congestion on the front-end and hopefully flush the data out of NVRAM much quicker. Our engineers have gigabit from desktop to core and our filer heads only have a pair of gigabit ports each so their copy jobs often fail when everyone decides to save their work and go for lunch at the same time.
Since the new aggregate is 64-bit and the old is 32-bit, an actual vol move is out of the question. We are licensed for snapmirror but do not want to take CIFS down for all of our volumes.
If I create a new volume and do a "snapmirror initialize -S filer2:engdata filer2:engdata2" it should copy all of the data over with the security perms intact, then I should be able to stop sharing the old volume and start sharing the new volume with the same name so as to not require any changes to scripts or batch files, correct?
I'm a bit unclear on the last part as we've only ever done a "cifs terminate" on the filer in question for reboots or what have you. Also, pretty rusty with my CIFS skills.
2013-09-10 03:33 PM
What you say is correct - if you do a snapmirror initialize, a snapmirror break, then a cifs shares delete on the source and a cifs shares add on the destination, you have migrated the data and the share name is the same. BUT your data is likely to be inconsistent - anything written to the source after the baseline transfer finishes won't be on the destination - and you will still need to change scripts, etc. to access the new volume at the new host.
Do the snapmirror initialize, wait for it to get into a snapmirrored state, remove the share on the source, and do a snapmirror update before the snapmirror break and creating the new share. This will take care of the inconsistent data. You can do snapmirror updates without removing the source share, too. What I've done in the past is do an update, then another update as soon as the first finishes - this tells you how long a final sync will take, which is how long the share will have to be down. Coming up to the window, keep doing updates so that when your window starts, you know the final one will take about the same time. Then cifs shares delete, snapmirror update, snapmirror break, cifs shares add.
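To make the sequence concrete, here's a rough sketch in 7-Mode commands. Volume names (engdata, engdata2), the aggregate name (newaggr), and the size are placeholders taken from or invented for this thread - substitute your own, and note the destination volume must be restricted before the initialize:

```
# Create and restrict the destination volume, then run the baseline
vol create engdata2 newaggr 500g
vol restrict engdata2
snapmirror initialize -S filer2:engdata filer2:engdata2

# Repeat updates back-to-back to gauge how long the final sync will take
snapmirror update filer2:engdata2
snapmirror update filer2:engdata2

# Cutover window: stop client access, final sync, break, re-share
cifs shares -delete engdata
snapmirror update filer2:engdata2
snapmirror break engdata2
cifs shares -add engdata /vol/engdata2
```

Because the new share is added with the old share name, client mappings and scripts that reference \\filer2\engdata keep working after the break.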
If your current shares are accessed by the controller hostname, you're going to need to change scripts to point to the new hostname. If you use DNS aliases, you can repoint DNS, but only if you're migrating ALL shares for that alias. You'll also need to set the cifs.netbios_aliases option - but again, only if you're using aliases, and only if you're migrating ALL the shares for that alias. If you're not using aliases, now is a good time to implement them, since you'll be changing all the configs anyways. Then next time you have to migrate, you won't have to worry about it!
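If you do go the alias route, the filer side of it looks roughly like this in 7-Mode (the alias name engshares is hypothetical, and the DNS record itself lives on your DNS server, not the filer):

```
# On the filer: make it answer to the alias for CIFS/NetBIOS name resolution
options cifs.netbios_aliases engshares

# On your DNS server (not the filer): point the alias at the controller, e.g.
#   engshares.example.com.  CNAME  filer2.example.com.
# Clients then map via the alias instead of the hostname:
#   net use S: \\engshares\engdata
```

The payoff is exactly what's described above: on the next migration you repoint the alias instead of touching every script and batch file.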
Hope that helps
2013-09-13 07:20 AM
That's exactly the info I needed. Luckily the source and destination filers are the same - I just need to move the volume between aggregates to something less used. I have a few low-utilization aggrs available that will do the trick, so I can snapmirror from one to the other, remove the share, and re-create it on the new volume, saving myself the headache of redoing scripts, etc.
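For the same-controller case, a few prerequisites are worth checking before kicking off the baseline - a sketch, assuming the hypothetical destination aggregate newaggr (the source still has to grant snapmirror access to the destination hostname even when both ends are the same filer):

```
# Prerequisite checks before the intra-filer baseline
options snapmirror.enable on
options snapmirror.access host=filer2   # or "all"; source must trust the destination
aggr status -b newaggr                  # confirm the destination aggr is 64-bit
df -A                                   # confirm the aggr has room for the volume
```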