ONTAP Hardware

Root volume disk type

skyfoster

We recently added several shelves of SAS disks to our FAS3240 (DOT 8.0.2P6) array, which previously had only SATA.  Our VAR recommends moving the root volume (vol0) to SAS and gave us a blanket "it's best practice" as the reasoning, but I'd like to know a) whether there is any official NetApp documentation behind this recommendation, and b) what your thoughts are on the merits of doing this.  I'm looking for anything I can pass up the chain as validation for why we should make this change.

I've read through what I can find in the Storage Management Guide ("Recommendations regarding the root volume") and the forums here and elsewhere, but I can find no mention of any difference between disk types for the root volume.  Thoughts?

Sky


scottgelb

I could see doing that so that less usable capacity is thrown away on a dedicated root... often we see a dedicated SAS node and a dedicated SATA node, where node 1's root is SAS and node 2's root is SATA.  In that case, putting a SAS root on the SATA node would also mean putting SAS spares on that node, which would not save space.

Did your VAR clarify when you asked them for more detail on their recommendation? There may be a valid reason, but I can't tell without more information about your setup.

thomas_glodde

Hi there,

Usually you want the root volume on the faster disk type, so that whenever files are read from or written to the root volume, they get the best performance possible. We usually do not create a dedicated (two- or three-disk) root aggregate unless the system has 500+ disks.

Besides that, I'd suggest thinking about what Scott told you: have node 1 with all the SAS disks and node 2 with all the SATA ones. Be aware that writing WAFL filesystem consistency points (they happen at least every 10 seconds) requires flushing all filesystems at the same time, so your SAS disks might have to wait for the SATA ones (only under very high load conditions). That's why we try to keep the disk types on separate heads. Do not put one SAS and one SATA shelf on node 1 and one SAS and one SATA shelf on node 2; better to go with two SAS shelves on node 1 and two SATA shelves on node 2 if possible.
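If you want to see whether consistency points are actually hurting you on your heads, the 7-Mode sysstat output gives a quick view (just an example invocation; the exact column layout varies a little between releases):

    sysstat -x 1

Watch the CP time and CP type columns; if I remember the codes right, a steady stream of "B" (back-to-back) CPs is the symptom where the faster disks end up waiting on the slower aggregates to finish flushing.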

Regards,

Thomas

skyfoster

Thanks Scott and Thomas for responding to this.

We have nine SATA shelves (four on node 1, five on node 2) and 13 SAS shelves (seven on node 1, six on node 2).  Everything's in production use, so I'm not about to try to split them up between nodes at this point, but we do have enough SAS spares to create a new root aggregate on them and move vol0.  What I want to know is WHY this would be needed.  I'm perfectly willing to do it, mostly because I trust my VAR but also because it fits in with some other improvements we're making, but I thought I'd see what the NetApp Community's thinking on this was.
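For reference, the rough plan I have in mind follows the usual 7-Mode root-move steps; the aggregate and volume names below are just placeholders, and I'd verify the exact sizes and procedure against the docs (and our VAR) before touching production:

    aggr create newrootaggr -T SAS 3       # small dedicated RAID-DP aggregate on the SAS spares
    vol create newvol0 newrootaggr 250g    # sized to at least the documented root volume minimum for a FAS3240
    ndmpd on
    ndmpcopy /vol/vol0 /vol/newvol0        # copy the current root contents
    vol options newvol0 root               # flag the new volume as root; takes effect on the next boot
    reboot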

Thomas, you mention that one possible reason would be to get the best performance possible, but I guess I'd like to know just how much performance vol0 needs and why you think it would need the best.  Obviously, we've not had any performance issues as yet, but I'd like to have some background on this in case I'm asked 😉
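In the meantime I'll try to get a feel for how busy vol0 actually is; something like

    stats show volume:vol0

should dump the per-volume counters (ops, latency), if I have the 7-Mode stats syntax right.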

scottgelb

With the first iteration of SnapMirror Sync, root was used for the NVLOG/CP log, so the performance of root was more of a consideration... but later ONTAP versions put the logging in the target aggregate/volume instead of root.  I don't know of a performance bottleneck with root... unless there is some really heavy CIFS auditing, but even then I haven't seen an issue.  Has anyone ever had a case where only root had a performance issue?  I'm sure there are some, but I can't recall any we have run into where other volumes weren't also affected.

thomas_glodde

Well, as usual, it is a recommendation, partly for the general performance reasons Scott and I talked about. Besides that, back in the day, when a filer crashed with a corrupt filesystem, you would need to scan the entire aggregate containing the root volume before bringing the filer back online again. I think nowadays ONTAP is able to handle that and can boot up even with a big root aggregate.

If you have the option and time to move vol0, feel free to do so. If not, you can stick with your setup as it is; no real harm done unless NetApp Global Support says otherwise ;o)
