Network and Storage Protocols

Moving CIFS Shares from one aggr to another on same controller


Running Data ONTAP 8.1.2 in 7-Mode, and I need to move a few CIFS shares from one aggr to another on the same controller due to space issues.  I have used SnapMirror to perform this task in the past.

Looking to see if anyone else has other options to perform this task. 

Can vol move or vol copy work with CIFS shares?  From what I read, these commands will only work on LUNs.

vol move or vol copy would work fine, though ideally access to the volume(s) should be restricted to ensure a consistent copy.  ndmpcopy would work as well, with the same caveat.  These would still require moving your CIFS shares around.
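
For reference, here is roughly what those two approaches look like on a 7-Mode console. Volume, aggregate, and size names below are placeholders, so double-check the exact syntax on your release before running anything:

```
filer> vol create destvol aggr2 500g      # new volume on the target aggregate
filer> vol restrict destvol               # vol copy requires a restricted destination
filer> vol copy start srcvol destvol
filer> vol online destvol

filer> ndmpd on
filer> ndmpcopy /vol/srcvol /vol/destvol  # file-level alternative; destination stays online
```

Either way, as noted above, you still have to restrict client access during the copy and recreate the shares against the new path afterwards.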

snapmirror is, I think, the best option, as it minimizes your downtime to just the amount of time for a final update, which is typically very small.

If you terminate cifs, you can move volumes around (current -> old, new -> current) without affecting your CIFS shares; otherwise, when you move your volumes the share will follow the volume.  You can pretty easily get the commands necessary to recreate the CIFS shares from cifsconfig_share.cfg.  Some of the share config might be in the registry, too - look for options.cifsinternal.share.
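
As far as I recall, /etc/cifsconfig_share.cfg in 7-Mode stores the share definitions as the cifs shares -add commands themselves, so one way to regenerate them for the new location is a simple path substitution. A sketch, where vol1 / vol1_new and the sample line are placeholders - inspect your own cfg file and eyeball the result before replaying it on the filer:

```shell
# One saved share-definition line, as it might appear in /etc/cifsconfig_share.cfg
# (hypothetical example -- check the real file on your own filer; contents vary).
line='cifs shares -add "data" "/vol/vol1/data" -comment "team share"'

# Rewrite the volume path for the new location (vol1 -> vol1_new are placeholder names).
printf '%s\n' "$line" | sed 's|/vol/vol1|/vol/vol1_new|g'
# -> cifs shares -add "data" "/vol/vol1_new/data" -comment "team share"
```

Run the same sed over the whole file to produce a list of commands you can review and replay after the move.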

Hope that helps



Thanks Bill. I figured SnapMirror was the best option.  Unfortunately, I can't stop the CIFS service.


Very useful replies on this thread. I need to do the same in my own environment. I did a test vol copy with CIFS exported; however, I had to stop sharing the CIFS shares before it would allow the vol copy to work. 


2 additional questions I have on this:


  • Before I start a vol copy, you say you can terminate CIFS. If I do this, what is the command to terminate CIFS on a filer, and would this involve needing to recreate the permissions which are contained in the cifsconfig_share.cfg file?


If anyone else knows the answer, please feel free to comment.


You can run cifs terminate -v volname to shut down CIFS only for that volume.
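
From memory of the 7-Mode CLI, the sequence looks something like the following (vol1 is a placeholder; verify the flags with cifs help terminate on your own filer first):

```
filer> cifs terminate -v vol1     # stop CIFS access to vol1 only
  ... do the vol copy / rename ...
filer> cifs restart               # resume CIFS service
```

The share definitions themselves are kept in cifsconfig_share.cfg, so terminating CIFS does not by itself lose them - you only need to recreate shares whose paths change.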



I have always done these migrations using robocopy, xcopy, and SnapMirror. I personally think SnapMirror is better than any of them when you are doing NetApp-to-NetApp CIFS migrations. But when you go to Cluster-Mode you have DataMotion to migrate volumes nondisruptively.
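
For the client-side route, a typical robocopy invocation looks something like this (server, share, and log names are placeholders; note that /MIR deletes extra files on the destination, so make sure it points where you think it does):

```
C:\> robocopy \\filer\share_old \\filer\share_new /MIR /COPYALL /R:2 /W:5 /LOG:C:\robocopy.log
```

/COPYALL carries the NTFS ACLs, ownership, and auditing info along with the data, which usually needs backup-operator or admin rights on both ends.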



Hi Kathy,

Just like you, I'd like to copy a CIFS share from one aggregate to another. You guys are talking about SnapMirror, and I'm confused... to my understanding, SnapMirror will copy a whole volume, not just a single share. So besides robocopy, I don't see how you can achieve this.

If someone has found a way to use SnapMirror to copy a single CIFS share, please let me know how you did it.



Hi, sorry. I was referring to moving a volume that held a single Windows share.

Hi Benoit,

There are two scenarios: 1) the share is on a volume, 2) the share is in a qtree.
If your share is on a volume you can do a volume SnapMirror (VSM); if it is in a qtree, you can use a qtree SnapMirror (QSM). Either way there will be a little downtime at the final cutover.
So, if you want to move a share from one aggregate to another, the best option is SnapMirror.

For example, assume your share is on volume vol1. On the destination aggregate, create a test volume (vol1_test) and set up the SnapMirror relationship.

Once everything is done, break the mirror, rename the original volume out of the way, rename the destination volume to match the source (vol1), and then create the share with the same name.
That's it.


There is a third scenario - that matches Benoit's issue, I think - where the share is configured to a path within a volume that is not a qtree - /vol/vol1/dir1/dir2, for example.

Benoit is correct in that snapmirror will only replicate volumes and qtrees, and will not work in this scenario - unless the option exists to snapmirror the entire volume containing the share, then delete what's not needed on the destination.

For this, ndmpcopy will work, with the caveats that I stated above.  I've always used client side copies for this sort of thing, though - rsync or robocopy.  These give the best solution when you need to minimize downtime, since you can do the baseline while the source is live, and only have to restrict access for the final sync and cutover.

As I said above, you would have to manually reconfigure the actual share.