ONTAP Discussions

Moving an aggregate that contains the root volume and LUNs

fabian213
6,994 Views

Hello,

We are moving an aggregate that contains the root volume and various LUNs (and shares) from a FAS3020 (cluster) to a FAS3050 (cluster). We are not swapping heads; we are physically moving the shelves. I've read the procedure for moving aggregates, but I don't think it takes LUNs and volumes into consideration. Is there a procedure or document that gives some insight into this kind of move? How will the new filer recognize the LUNs? Will we have to recreate the shares and igroups? Is there a way to copy all of that info from the 3020 to the 3050?

Thanks.


15 REPLIES

chriskranz
6,948 Views

The shares (either CIFS or NFS) will be easy to transfer across: read and copy the contents of /etc/cifsconfig_share.cfg and /etc/exports. So long as the volume names stay the same, you'll be all good with that.
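For example, a quick way to capture both files from the 3020 console before the move (a minimal sketch; the contents then get pasted or re-created on the 3050):

    rdfile /etc/cifsconfig_share.cfg     (CIFS share definitions)
    rdfile /etc/exports                  (NFS export rules)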

The LUNs may be a bit more tricky. These will be taken offline when you move them across to prevent any LUN conflicts. As they are moved from a foreign system, there's no easy way for the filer to check for iGroup and mapping conflicts, so it's easier to simply offline them. I believe iGroups and LUN mappings are all in the registry for the filer, so I'm not sure how easy this would be to simply copy across. I often end up listing all the LUN mappings and iGroups beforehand, and then editing the output in Notepad to generate some commands to reload these on the next filer. This is okay for 20-30 LUNs, but if you have a lot more, then this might not be practical.
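For example, the listing on the 3020 could be captured with something like this (a minimal sketch; the output is just saved off as a reference for rebuilding the config later):

    lun show -m        (LUN paths, mapped iGroups and LUN IDs)
    lun show -v        (full LUN details, including serial numbers)
    igroup show        (iGroup names, types and member initiators)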

Are you looking to move the root volume and have the new filer boot from it? This is a fairly easy move again, but you'll have to rename it. When it's online on the new filer, just run "vol options foreign_vol0 root" and the filer will check that it's viable and then enable it on boot. You can rename the root volume while it's live too, so don't worry too much about the names as you move stuff around. You shouldn't need to re-install Data ONTAP as you won't have changed architectures.
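If you did go down that route, the sequence on the new filer would look roughly like this (a sketch; "foreign_vol0" and "newroot" are just example names):

    vol online foreign_vol0          (the moved volume arrives offline)
    vol rename foreign_vol0 newroot  (optional; pick any free name)
    vol options newroot root         (checked now, takes effect at the next boot)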

fabian213
6,949 Views

Thank you for the info, it is greatly appreciated. As for the root volume, no, we are not planning to use it on the new FAS; that FAS already has a root. There aren't that many LUNs, so manually adding all the initiators and mappings isn't too big a deal.

So, if I get this straight: we can physically move the shelves to the new FAS, rename the aggr via maintenance mode, copy the share info over (shouldn't this be done prior to moving the shelf?), re-add the initiators and LUN mappings (can't we copy a file out of /etc?), and we should be good?

chriskranz
6,994 Views

First check whether the filer is in software ownership mode (run "storage show" and look at the last line to see if SANOWN is enabled).

If it's hardware ownership, boot the 3020 into maintenance mode and remove the ownership before you move the disks across.

Then move the disks across with the 3050 live. You can hot-plug the disks; the aggregate will get offlined automatically as it'll be foreign. It'll also get renamed (for instance aggr0 will become "aggr0(0)", which can get confusing!). You can rename this to aggr1 or whatever ("aggr rename aggr0(0) aggr1"). The moved root volume will get flagged up as well, but that's fine; rename it and online it. You can create a new share for it, or just do "rdfile /vol/foreign_vol0/etc/cifsconfig_share.cfg" on the console and then copy and paste the contents.
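Put together, the console steps on the 3050 would look something like this (a sketch only; the "(0)"/"(1)" suffixes and the foreign_vol0 name are just examples of how renamed duplicates can appear):

    aggr rename aggr0(0) aggr1                           (give the foreign aggregate a free name)
    aggr online aggr1                                    (bring it back online)
    vol rename vol0(1) foreign_vol0                      (the moved root volume also needs a free name)
    vol online foreign_vol0
    rdfile /vol/foreign_vol0/etc/cifsconfig_share.cfg    (copy the old share definitions from here)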

You can of course copy all the share information before moving the stuff across, but it doesn't have to be done then. Either way, I'd keep the foreign vol0 in place for a while, just in case there's any config info you haven't remembered to copy across or pull out.

Unfortunately the iGroups and LUN mappings aren't in a file, so these can't simply be copied across. You will have to recreate these, sorry!
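Recreating them is only a couple of commands per LUN. Something along these lines, using made-up volume, iGroup and initiator names (the ostype and LUN IDs need to match what was configured on the 3020):

    igroup create -i -t windows exch_hosts iqn.1991-05.com.microsoft:server1
    lun map /vol/lunvol/lun0 exch_hosts 0     (keep the same LUN ID as before)
    lun online /vol/lunvol/lun0               (moved LUNs come across offline)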

fabian213
6,949 Views

We won't be able to add the new disks live, because it's a new loop. Also, the 3050 has its root vol on aggr3; when we boot into maintenance mode, we should be able to specify which root to use. Also, the 3020 currently has hardware ownership, so we should be able to power down the 3020 and move the shelves over.

As for the initiators, a manual recreation is fine. We'll have to use the new IQN, right? Also, I imagine we'll have to disconnect (before the move) and reconnect all the LUNs (via SnapDrive) on the server, given the new IQN.

Thanks for all the help!!

chriskranz
6,949 Views

Okay cool, going into maintenance mode will do the job. You shouldn't need to define a root volume, as the new aggregate will be marked as foreign from the start. I'd probably rename the vol0 before moving it across, actually, just in case. You can also rename aggregates hot, so you may want to rename this before moving it too.
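For instance, on the 3020 before powering it down (a sketch; the new names are arbitrary, the point is just to avoid clashing with the 3050's own vol0 and aggr0):

    vol rename vol0 vol0_3020
    aggr rename aggr0 aggr0_3020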

As for the iGroups, yeah, just input the IQN. Copy what is already set up on the 3020. A nice trick here: if you set it up identically to the 3020 (with the full viaRPC.iqn...... naming), then from the host simply create a new iSCSI session and all the disks will be ready and waiting! I did this exact thing a few weeks ago; works a treat! Take care to get the LUN IDs and mappings the same; it helps to plan this and make lots of notes. Just in case, I'd also jot down the drive letters as they are mapped to the hosts.
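In other words, recreate the SnapDrive-style iGroup names exactly as the 3020 shows them. A sketch with a made-up host (the viaRPC.iqn... name below just illustrates the pattern; copy the real ones from "igroup show" on the 3020):

    igroup create -i -t windows viaRPC.iqn.1991-05.com.microsoft:server1 iqn.1991-05.com.microsoft:server1
    lun map /vol/lunvol/lun0 viaRPC.iqn.1991-05.com.microsoft:server1 0     (same LUN ID as on the 3020)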

From the hosts, I'd unmap all the disks and disconnect the iSCSI session after you have noted all the connections down. This will prevent any stale or locked sessions to unknown hosts once it's moved. When you've moved everything across, create a new session and if you followed the steps above you'll see a bunch of disks already mapped. Reboot the host and everything should come online.
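On a Windows host the session handling can be done with the built-in iscsicli tool, assuming the Microsoft iSCSI initiator is in use (a rough sketch; SnapDrive or the initiator GUI achieves the same thing):

    iscsicli SessionList                          (record what is connected today)
    iscsicli LogoutTarget <session id>            (drop the old sessions before the move)
    iscsicli QAddTargetPortal <3050 iSCSI IP>     (after the move, point the host at the new filer)
    iscsicli QLoginTarget <3050 target IQN>       (the pre-mapped disks should then reappear)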

Alternatively you can do it very manually. Make a note of the drive letters and LUNs, disconnect them all before the move, then just use SnapDrive to reconnect them all after the move. A bit more long-winded, but it depends what you are comfortable doing, I guess. SnapDrive can be scripted, and it's often not a bad idea to have a script on each host that will reconnect all the disks. It can make DR a lot easier too.
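If you do script it, SnapDrive's sdcli is the tool. Roughly along these lines, although the exact flags here are an assumption and vary by SnapDrive version, so check "sdcli disk help" on your hosts first:

    sdcli disk list                     (note LUN paths and drive letters)
    sdcli disk disconnect -d K          (before the move)
    sdcli disk connect -p 3050:/vol/lunvol/lun0 -d K -dtype dedicated -I server1 iqn.1991-05.com.microsoft:server1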

Hope this all helps you out, and good luck with the move!

fabian213
6,949 Views

Sounds like a plan. We'll export the LUN IDs and mappings to a file for reference. That will, hopefully, allow us to automate this process a bit more. Thanks again for the info; it's going to help us a great deal!

fabian213
6,047 Views

The cifsconfig_share.cfg file doesn't contain qtree information. Is there a file that does? Will all the qtrees have to be recreated?

chriskranz
6,047 Views

The QTrees are part of the volume, so these will move across with the storage.

Do you have any quotas set on the qtrees?

fabian213
6,047 Views

No quotas on the qtrees. I think there's a config file to move over if there were quotas, but thankfully we can avoid that. So when the shelves get moved, the qtrees will automatically populate? I thought they would be treated like CIFS or LUNs and need reconfiguring. Thanks for the heads up.

chriskranz
6,047 Views

Nope, the qtrees are directories within a volume, so they will be moved across like any other data area and should get picked up on the other end. No problems at all!
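A quick sanity check after the move (the volume name is just an example):

    qtree status foreign_vol0     (lists each qtree with its security style and oplock setting)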

pascalduk
6,949 Views

fabian213 wrote:

We won't be able to add the new disks live, because it's a new loop.


Hot-adding new loops is supported in ONTAP 7.2+: http://now.netapp.com/NOW/knowledge/docs/ontap/rel7261/html/filer/ds14hwsv/appa5.htm

I have done it multiple times last year without any problems in our clusters.

fabian213
6,949 Views

Very useful! Is this limited to ESH4 shelves? Have you tried adding AT-FCX shelves?

Now, the document says it can be done, but it doesn't provide steps on how to do it. I assume the steps would be the same as hot-adding a shelf to an existing loop. Does this work in an HA environment?

amiller_1
6,949 Views

Yes to all of the above.

Not limited to ESH4 shelves (it works for AT-FCX too); the steps are just cabling it up correctly; and it works fine with MPHA (since you don't actually add a disk loop as MPHA: you add the loop with a single connection to each filer first and then do the MPHA part as you would when adding MPHA normally).

fabian213
6,047 Views

Very cool. Thank you for the info. Cheers.

pascalduk
6,047 Views

amiller@dnscoinc.com wrote:

Yes to all of the above.

Not limited to ESH4 shelves (it works for AT-FCX too); the steps are just cabling it up correctly; and it works fine with MPHA (since you don't actually add a disk loop as MPHA: you add the loop with a single connection to each filer first and then do the MPHA part as you would when adding MPHA normally).


I can confirm that it works in the real world for ESH2 / ESH4, even in a stretch MetroCluster, and for AT-FCX. Note that in a cluster you should (temporarily) disable automatic takeover on a disk-shelf-count mismatch (options cf.takeover.on_disk_shelf_miscompare).
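On both heads that would be something like the following (a sketch; turn it back on once both controllers see the new shelves):

    options cf.takeover.on_disk_shelf_miscompare off     (before adding the loop)
    options cf.takeover.on_disk_shelf_miscompare on      (after the shelf counts match again)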


BTW, I think it already worked with pre-7.2 releases, but it was not documented before.
