ONTAP Hardware

Shelf maintenance and vserver root volumes


Hello, comrades!


So, one of the shelves attached to my FAS3250 (CDOT 8.2.3) needs to be replaced. It's nothing urgent, but I do want to make sure I have assessed the impact of a 2-hour-or-so maintenance window where a whole bunch of disks won't be available.


This shelf is home to disks that comprise parts of two aggregates. These two aggregates are home to several dozen volumes, and we can arrange downtime for those volumes on the host side, no problem.


But I've also discovered that these two aggregates are also home to the root volumes of three different vservers. Am I right in thinking that we don't want those volumes to go offline, as it'll mess up access to all the volumes on all three of those vservers? 


What am I looking at here in regard to impact and downtime? I reckon I can either:


  • Migrate those vserver root volumes to aggregates not impacted by the shelf maintenance. This would be super great if I could do it non-disruptively.



  • Arrange for downtime not just for the several dozen volumes living on those aggregates, but for all three vservers whose root volumes live on those aggregates. This would not be super great.

What are the community's thoughts on this? Is there anything I'm simply not thinking of?
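For anyone wanting to check their own environment, something like the following should list what lives on the affected aggregates (aggregate names are placeholders; I believe the "aggregate" field of vserver show reports where the root volume lives, but verify against your version's man pages):

```
cluster1::> volume show -aggregate aggr_a,aggr_b -fields vserver,volume,type
cluster1::> vserver show -fields rootvolume,aggregate
```

Any vserver whose root volume shows up on aggr_a or aggr_b is affected by the maintenance.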



Yes, you can migrate any volume non-disruptively to any other aggregate. An alternative is to set up LS mirrors for the root volumes and promote a mirror copy to be the new root. LS mirrors are a best practice anyway.
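A rough sketch of the LS-mirror-and-promote approach, with placeholder vserver/volume/aggregate names (double-check exact syntax against the 8.2.3 man pages before running anything):

```
cluster1::> volume create -vserver vs1 -volume vs1_root_m1 -aggregate aggr_safe -type DP -size 1g
cluster1::> snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_m1 -type LS
cluster1::> snapmirror initialize-ls-set -source-path vs1:vs1_root
cluster1::> snapmirror promote -destination-path vs1:vs1_root_m1
```

After the promote, the mirror copy on the safe aggregate becomes the vserver's root volume and the old root on the affected aggregate is out of the picture.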


Yep, I inherited this environment, and some vserver root volumes have LS mirrors and some don't. I reckon I can set up LS mirrors for the root volumes that don't have them, then promote one of the mirrors to root well before the maintenance window. That takes care of the root volumes, which is what I'm most concerned about. The rest of the volumes I can migrate to other aggregates and basically vacate the two aggregates affected by shutting the shelf down.


All that said, the entire workflow could look like this, maybe?


  1. Create LS mirrors for the three affected root volumes on other aggregates
  2. Promote those mirrors to be the new root volumes
  3. Vacate the two affected aggregates by performing vol moves (assuming sufficient space elsewhere)
  4. Confirm the two aggregates have no volumes, and offline the aggregates
  5. Perform the maintenance
  6. Online the two affected aggregates
  7. Move the volumes back to their original aggregates
  8. Create new LS mirrors for the three affected root volumes, to adhere to best practice

How does that kind of action plan look?


It's easier to just move the volumes now for this maintenance and worry about LS mirrors later. You'll need to move every volume off the aggregates before you can offline them anyway, so destroy any existing mirrors that live there.
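Tearing down an existing LS mirror copy that sits on one of the affected aggregates might look like this (placeholder paths; confirm against your version's documentation):

```
cluster1::> snapmirror delete -destination-path vs1:vs1_root_m1
cluster1::> volume offline -vserver vs1 -volume vs1_root_m1
cluster1::> volume delete -vserver vs1 -volume vs1_root_m1
```

You can recreate the mirror set on surviving aggregates after the maintenance is done.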