2013-07-26 08:24 AM
As we're in the planning process to move from 7-mode to c-mode, the question of which disks to use for the root volumes comes up.
Seems to me that one constant with NetApp has always been a root volume per controller, with the recommendation that it be contained in its own aggregate. This doesn't appear to be changing.
Isn't it common practice to have a 3-disk RAID-DP aggr with two hot spares available? So in a dual-controller example, 6 disks need to be sacrificed for the two root volumes from whatever type of disk you have available. That may not mean much for large organizations with lots of disk shelves and storage, but for a small setup it can be a rather significant portion of your overall usable storage taken away.
All that said, it would be nice to see ALL shelves be hybrid, with some SSD slots on each shelf devoted to root aggr/vol creation while maintaining the architecture of the shelf in every other way.
Since the root volumes are specialized in comparison to all other volumes, it seems there should be some logical way to create architecture specifically for them. Given that it now appears to be a nightmare to move the root volume in c-mode, a more static, 'forced' root vol architecture would make a lot of sense.
Is this a crazy thought?
2013-07-26 08:30 AM
A couple things...
1) There is an initiative to make moving the root volume not as much of a nightmare.
2) The notion of wasted disks to a root vol is a point of discussion and something we'd like to see addressed.
2013-08-09 09:45 AM
Any idea on NetApp's response to a best practice for the root volume? At my last job, we were actually told to put the root volume on a shared aggregate because of space issues... I don't really understand the need for the dedicated disks... although in cluster mode I understand the need better than for 7-mode (if that makes sense). But even in cluster mode, the aggrs stay with the filer; the volumes are the only things that "move"... so in cluster mode, is it recommended to keep the root vol on a separate aggr?
2013-08-09 09:50 AM
Dedicated disks for the node root vol are the recommendation in clustered Data ONTAP.
Some of the reasons:
- The node root vol is a 7-mode-style volume and resides on an aggr with a CFO HA policy. These fail over differently (and more slowly) than SFO aggrs, which are meant for data.
- CFO aggrs also will not veto failover for things like CIFS, locks, SnapMirror, etc., so you could see an outage for data volumes residing on a CFO aggr.
- Node root volumes contain the replicated database used to make a cluster a cluster, so you want dedicated spindles to service that and not create any contention from external IO.
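If you want to see the CFO/SFO distinction on a running cluster, the HA policy is visible per aggregate. A minimal sketch (exact output layout varies by release):

```
::> storage aggregate show -fields ha-policy
```

Node root aggregates should report cfo, while data aggregates report sfo.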
2013-08-09 10:28 AM
Interesting. I vaguely remember a NetApp-er saying something about a dedicated root aggregate being enforced in 8.2 c-mode, but the product documentation doesn't seem to confirm this.
One more downside of keeping data inside the root aggregate: Aggregate Relocation doesn't work on root aggregates.
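For context, Aggregate Relocation (ARL) moves ownership of an SFO aggregate between the nodes of an HA pair without a full takeover. A hedged sketch (node and aggregate names here are hypothetical):

```
::> storage aggregate relocation start -node node1 -destination node2 -aggregate-list aggr_data1
```

Since root (CFO) aggregates are excluded from ARL, any data parked on a root aggregate can only be moved via volume moves.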
2013-08-13 06:49 AM
So, I was thinking about this. The ultimate goal is to be able to move workloads around the cluster. The root volume doesn't need to be mobile anymore in cluster mode (for upgrades and such); basically it should be a set-it-and-forget-it thing. The only thing I don't like is that you actually lose 5 disks for the root aggr (3 + 2 spare), unless that config has changed?
The CLUSTER root volume is the only one that concerns me. Does the cluster information get striped across all the root volumes, or...? I ask because eventually the original heads (that hold the first root aggr/vol) will be the ones that get replaced first (in theory). Is there a dcpromo-type process that migrates that information out to the other root volumes?
2013-08-13 06:55 AM
There is no cluster root volume; there are only node root volumes. The node root volume contains mroot, which holds the cluster configuration, logs, and replicated databases. The databases replicate across each node root volume so that every node holds a consistent copy.
There is no need to migrate information out, unless you wish to keep the logs. The config and databases all would get replicated to the new root vol.
The node root volume is a "set and forget" thing. And it doesn't affect the ability to move workloads around.
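For anyone curious about the replicated databases mentioned above, their ring status can be inspected at advanced privilege. A hedged sketch (column layout varies by release):

```
::> set -privilege advanced
::*> cluster ring show
```

This lists each replicated database unit and which node is currently the master for it, which is a reasonable way to confirm the replication is healthy after a head swap.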
The 5-disk aggr requirement is new in 8.2. It can be bypassed for the node root volume via the advanced-level option "-force-small-aggregate", which allows a 3-disk root aggr.
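For reference, the bypass would look something like this. A hedged sketch: the aggregate and node names are hypothetical, and whether the option takes a true/false value or acts as a bare flag may vary by release.

```
::> set -privilege advanced
::*> storage aggregate create -aggregate aggr0_root -node node1 -diskcount 3 -force-small-aggregate true
```

That gets you back to the familiar 3-disk RAID-DP root aggr discussed earlier in the thread, at the cost of overriding the 8.2 default sizing check.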