Moving a Volume with CIFS Shares to Another Aggregate

pjorg

Hi all,

I have a FAS3040-R5 filer running 7.3.2 with two different SATA aggregates.  I'm running out of space on one, and the other has plenty.  I'd like to move my largest volume (which contains data that is exposed to users via CIFS shares) from the full aggregate to the empty one.  I've been referencing this post: http://communities.netapp.com/message/4488

The volume in question is called ABC.  What I've done so far is to create a new volume called ABC_new on the destination aggregate, and created a SnapMirror relationship between the old and new volumes that mirrors the changes over to the destination once an hour.  So far, so good.
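
For reference, here's roughly what that setup looks like on the console (the aggregate name aggr2 and the size are placeholders for my environment, and the last line is the hourly schedule entry in /etc/snapmirror.conf):

    filer> vol create ABC_new aggr2 5t
    filer> vol restrict ABC_new
    filer> snapmirror initialize -S filer:ABC filer:ABC_new

    # hourly updates, minute 0 of every hour, in /etc/snapmirror.conf:
    filer:ABC filer:ABC_new - 0 * * *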

What is the next step here?  I can rename the volumes ("ABC" -> "ABC_old" & "ABC_new" -> "ABC") but am concerned that the CIFS shares will "follow" the old volume through the rename instead of pointing to the new volume.

Also, (though this is of less concern) there is a SnapMirror relationship between the old volume and our DR site, which I would like to be able to keep without having to reinitialize the relationship (this volume is nearly 5 TB, and it will take a while to re-sync all the data).  This is not a show-stopper, though.

Thanks.


9 REPLIES

pascalduk

Renaming the volumes will keep any CIFS shares pointing to the same volume under its new name. You need to delete the shares and recreate them during the actual migration.

There is also no need to start a new baseline transfer for your volume SnapMirror relationship to the DR site. The steps required are described here: http://now.netapp.com/NOW/knowledge/docs/ontap/rel727_vs/html/ontap/onlinebk/4mirro32.htm#1003095
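
In short, the documented procedure amounts to something like this (filer names "filer" and "drfiler" and the DR volume name "ABC_dr" are placeholders). Because the new volume was mirrored from the old one, it still contains the snapshots the DR mirror is based on, so after the cutover and renames a resync on the DR side picks up from the common snapshot instead of needing a new baseline:

    # run on the DR filer after the renames; /etc/snapmirror.conf there
    # can stay as-is, since the source volume name is unchanged
    drfiler> snapmirror resync -S filer:ABC drfiler:ABC_dr
    drfiler> snapmirror status ABC_dr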

jb2cool01

If you halt the CIFS service before renaming the volumes, you won't have to do anything special with the shares; they will remain static. If you rename a volume with CIFS running, the shares will automatically change their paths to follow it.

So what I'd do is (see the console sketch after these steps):

  • Arrange for some downtime.
  • Disable CIFS to make sure no more data changes during the transfer.
  • Perform a final SnapMirror update.
  • Break the mirror.
  • Rename the original volume to ABC_old.
  • Rename the new volume from ABC_new to ABC.
  • Start the CIFS service again.
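
On the console that would look something like this (volume names as per this thread; double-check the syntax against your ONTAP version):

    filer> cifs terminate              # stop CIFS so no data changes mid-cutover
    filer> snapmirror update ABC_new   # final incremental transfer
    filer> snapmirror quiesce ABC_new
    filer> snapmirror break ABC_new    # ABC_new becomes writable
    filer> vol rename ABC ABC_old
    filer> vol rename ABC_new ABC
    filer> cifs restart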

chriskranz (Accepted Solution)

This would disrupt all your CIFS traffic, however. I would do it slightly differently...

  • Take a copy of your cifsconfig_share.cfg file in \\filer\c$\etc (or just use the snapshots later)
  • Remove the CIFS share for this particular volume (arranging downtime on it beforehand)
  • Do a final snapmirror update
  • Change the volume names around as you require
  • Recreate the CIFS share, either by hand if the permissions aren't complex, or by referencing the copy / snapshot of the cifsconfig_share.cfg file

If you have other CIFS shares, this approach avoids any major disruption across the whole system. A command sketch follows below.
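
Something like this, assuming a single share called "abc" on the volume (the share name and comment are examples; your real definitions live in cifsconfig_share.cfg, which handily stores them as the exact "cifs shares -add" commands you'd need to replay):

    filer> rdfile /etc/cifsconfig_share.cfg   # note the existing share definitions
    filer> cifs shares -delete abc            # downtime starts for this share only
    filer> snapmirror update ABC_new
    filer> snapmirror quiesce ABC_new
    filer> snapmirror break ABC_new
    filer> vol rename ABC ABC_old
    filer> vol rename ABC_new ABC
    filer> cifs shares -add abc /vol/ABC -comment "ABC data"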

jb2cool01

Sound advice as always Chris, I guess that's why I'm not an IE.

pjorg

Thanks Chris.  We do in fact have many other CIFS shares running off this filer.  This will take place during a scheduled maintenance window anyway, so downtime isn't so much of a concern, but it's still good to be as surgically precise as possible.

Also, this volume actually has more than one CIFS share referencing it, so recreating them would be a big hassle; your hint about the cifsconfig_share.cfg file is exactly what I was looking for.

Thanks, all.  I will post back here with the final results when I'm through.

andrew_braker

Hello

Thanks for all the information in this thread, it will be really useful for migrating some large CIFS volumes to a new aggregate.

So far in testing it's worked fantastically. Keeping the mirror right up to date means we only need a short outage window to complete the final update/quiesce/break and volume renames.

However! I've recently learned about a background process that WAFL runs on volume SnapMirror destinations called deswizzling. In short, deswizzling optimises the block metadata in the flexvol to make reads as fast as possible for the flexvol residing on the destination aggregate. It's better explained here: https://kb.netapp.com/support/index?page=content&id=3011866&locale=en_US

This interested me, but also slightly worried me, because I'm going to be using this SnapMirror destination volume (on the new aggregate) as the volume users will access for their data, and I don't want any slower read speeds due to this "swizzle" caused by SnapMirror replicating the flexvol.

Since the initial large replication transfer I have let the deswizzle process correct all the block metadata. However, on the more recent SnapMirror updates I don't have time to let deswizzle complete before the next transfer, so some blocks will be left with slow-path reads. From my research, a snapmirror break kicks off a deswizzle on the now read/writeable volume, so I figured I'd just let that run to completion over the weekend of the migration (users would be able to access their data, but in theory some blocks would have slow-path reads until deswizzle finished). But there's a slight problem: it looks to me like when you do the vol rename (giving the new volume the original volume's name) after the snapmirror break, the deswizzle process seems to end for that volume!
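
For anyone wanting to watch this themselves, the deswizzling scanner can be observed from advanced privilege mode (this is how I've been checking whether the rename stops it; advanced mode should be used with care, and the exact scan names may vary by version):

    filer> priv set advanced
    filer*> wafl scan status ABC       # look for a "volume deswizzling" scan
    filer*> priv set admin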


Does that mean I will be stuck with some blocks with forever slow-path reads?! I don't know! Has anyone thought about this particular consequence of using volume SnapMirror to migrate workloads between aggregates, or am I just looking too deep?

I also read this document: https://www.usenix.org/legacy/event/usenix08/tech/full_papers/edwards/edwards_html. Of interest is section 4.1, Volume Mirroring, which shows what I'm talking about (you may need to read the preceding section on Dual Block Numbers first). The destination block pointer has a PVBN of U (unknown), meaning the block's location in the aggregate is not known at the block-pointer level (unlike on the source, where this information has been kept), so the destination container map must be read instead (think of it as the block's location at the virtual block level in the flexvol, I think). So more read ops are required to find a block in the aggregate for a flexvol that was a SnapMirror destination (and has not had deswizzle correct the metadata).

https://www.usenix.org/legacy/event/usenix08/tech/full_papers/edwards/edwards_html/snapmirror-xfer-i.png

Sorry to confuse anyone... I'm pretty confused myself.

RANJBASSI

Hi

We are using Data ONTAP 8.2.3p3 on our FAS8020 in 7-mode and we have 2 aggregates, a SATA and SAS aggregate.

I want to decommission the SATA aggregate, as I want to move that tray to another site. If I have a flexvol containing 3 qtrees with CIFS shares, can I use data motion (vol copy) to move the flexvol to a different aggregate on the same controller without major downtime?

I am aware that there may be a small downtime while the CIFS share is terminated, but I plan to do this work outside core business hours.

Is this possible?

Many Thanks

PIYUSHBANSAL198722

1.) If you use vol copy, the destination volume is in a restricted state during the copy (and remains restricted even after it completes, until you bring it online), so you cannot set up the CIFS share for the destination volume until the whole operation is finished. With SnapMirror you can add the CIFS share as soon as the baseline transfer completes. (See the vol copy sketch at the end of this post.)

2.) With vol copy you need to make sure that no data is written to the source during the whole operation (for data consistency), which requires a longer downtime than SnapMirror: you can keep running SnapMirror updates to the destination while the source volume is still serving data, then during the downtime quickly finish the final transfer, break the mirror, and delete the old CIFS share on the source.

So the best option for CIFS migration seems to be SnapMirror. I've never used cluster mode, so I'm not sure about vol copy for NAS there; I assume it is also disruptive for NAS volumes.

3.) Don't forget to rename the destination volume (to the same name as the source, with the source becoming <volumename>_old or something) and to create the CIFS share for the destination volume with the same name as the source (after deleting the old source share, of course). This matters especially if you have quotas enabled for the users accessing this share.
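
For comparison, a vol copy based move would look roughly like this (aggregate name and size are examples; note the destination stays restricted for the whole copy, which is point 1 above):

    filer> vol create ABC_new aggr_sas 5t
    filer> vol restrict ABC_new
    filer> vol copy start -S ABC ABC_new   # -S carries the snapshots across
    filer> vol copy status                 # monitor progress
    filer> vol online ABC_new              # only after the copy completes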

PIYUSHBANSAL198722

I've done some NAS and SAN migrations using SnapMirror but never had an issue reported pertaining to this slow-path behaviour after migration...
