2013-05-09 02:00 AM
Moving the root volume is easy in 7-mode, but how do you move a node's root volume to a different aggregate in clustered ONTAP?
I can't even create a volume and assign it to the node's vserver (Cmode-02 is the node's vserver; at least vol0 belongs to it):
Cmode::> volume create -vserver Cmode-02 -volume vol0new -aggregate aggr1_Cmode02 -size 160g -state online
Error: command failed: This operation is not supported on 7-mode volume 'vol0new'.
2013-05-09 02:17 PM
I found the answer in the Logical Storage Management Guide; there's a section called 'Rules governing node root volumes and root aggregates':
"Contact technical support if you need to designate a different volume to be the new root volume
or move the root volume to another aggregate."
Hope this helps.
2013-05-10 03:28 AM
There's a documented procedure for this, but it's a PITA to do; best to contact your local support people for the instructions. If you're an ASP/reseller or have access to Net2, it might be available there. Otherwise your partner TSE should be able to point you in the right direction.
The root volume has to reside on an aggregate all by itself; it's down to how node failover works in clustered ONTAP. As a result, NetApp is cautious about letting you create volumes on a CFO aggregate (the root aggregate is CFO).
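If you want to see which of your aggregates are CFO and which are SFO, the HA policy is visible from the cluster shell. A minimal sketch, assuming clustered ONTAP 8.x; the aggregate names and output layout are illustrative:

```
Cmode::> storage aggregate show -fields ha-policy
aggregate      ha-policy
-------------- ---------
aggr0_Cmode01  cfo
aggr1_Cmode02  sfo
```

The node root aggregates show up as cfo; anything you put data on should be sfo.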
2013-05-10 06:57 AM
I know the docs instruct me to contact tech support.
I'm an ASP; can you point me to where I can find the procedure, or tell me what additional access I should ask my channel manager for? What is Net2?
A separate aggregate is too "expensive" on small systems like the 2240, but my question is only about moving the node root volume.
2013-05-23 01:14 AM
I contacted tech support and they guided me through moving the root volume.
The general idea is: make the new aggregate the root aggregate, then reboot, and ONTAP will create a new root volume on that aggregate. Then recover the cluster configuration to accept the new root volume. It is NOT like copying the configuration from /etc as in 7-mode.
I can tell you the procedure is not easy, is disruptive, and involves maintenance mode, cluster shell, node shell, and system shell commands.
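For anyone curious what that involves, here is a heavily simplified sketch of the flow described above. This is NOT the supported procedure: the aggregate name is made up, and the exact commands and recovery steps vary by ONTAP version, which is exactly why support involvement is required:

```
*> aggr options aggr_new root     # maintenance mode: mark the new aggregate as root
*> halt                           # reboot; on boot, ONTAP creates a new root
                                  # volume on aggr_new
Cmode::> set -privilege advanced  # cluster shell: the cluster configuration then
                                  # has to be recovered to accept the new root,
                                  # which is the part support walks you through
```

The configuration recovery at the end is the part you really don't want to improvise.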
So contact tech support, or better yet, don't plan on moving a node's root volume at all.
2013-08-20 02:05 PM
Yeah, expensive or not, you HAVE to have a dedicated root aggregate in cluster mode (try it with 4 TB SATA drives).
You can put volumes into the root aggregate if you want to (not supported, but it does work), but don't expect failover to work. The root aggregate is a CFO aggregate, and as such doesn't behave how you would expect in an SFO failover. What this means is you effectively lose your volumes on a CFO aggregate during failover. It's hard to explain without a whiteboard and some furious arm waving, but if you want more info, try your TPM; he should be able to walk you through the whys and hows.
Net2 is a documentation library (http://net2.netapp.com/); you can download it to your laptop and have a lot of procedural documentation for common tasks. It's quite useful. As an ASP, I would have thought your TPM should have told you about it.
2013-08-20 11:33 PM
Yes, you have to use root aggregates, so you need at least 4 drives in RAID4 for two aggregates. You can talk to your sales manager about replacing those 4 TB drives with 1 TB ones to save a penny; I don't think it helps much.
For a 2220 HA system with 12 disks, it is a nightmare to dedicate two root aggregates.
Failover (takeover) with data volumes on root aggregates (CFO) works the same as with SFO in the case of a controller failure.
SFO works much better than CFO for planned takeover and giveback; takeover/giveback takes 1-2 seconds or even less.
Another useful feature, introduced in 8.2, is aggregate relocation. This feature works with dedicated root aggregates only.
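For reference, aggregate relocation is driven from the cluster shell; a sketch assuming 8.2 syntax, with hypothetical node and aggregate names:

```
Cmode::> storage aggregate relocation start -node Cmode-01 -destination Cmode-02 -aggregate-list aggr1_data
Cmode::> storage aggregate relocation show
```

It moves ownership of a data (SFO) aggregate between the two nodes of an HA pair without a full takeover, which is why it can't touch the root aggregate.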
There is a KB article on how to move the root aggregate/volume in 8.2; it is much easier now.
What is TPM?
2013-09-10 02:37 PM
Yeah, I agree, it's annoying, but I understand the logic behind it. Hopefully somewhere in the near future we'll get mirrored drives in the controllers for the root aggregates.
Regarding CFO: really? I haven't had a chance to test it directly, but I was under the impression that CFO aggregates didn't fail over at all? That they waited until the owning host came back.
TPM = Technical Partner Manager