We have a couple of FC DSMK4 and DS4243 shelves connected to a 3020 and a 3040 cluster respectively. These systems are running ONTAP 7.3.6.
Now we have to move these shelves to a 3240 running 8.0.2P4 7-Mode.
If someone has done this already, could you please help me with the steps? In my 3240 cluster I have a disk shelf to which I can SnapMirror the data of one shelf from the 3020 or 3040. So I was thinking I could SnapMirror one shelf at a time to the 3240 cluster, since SnapMirror to a higher ONTAP version is supported.
Once the SnapMirror of one shelf is complete, I will remove the live volumes on that shelf/aggregate from exports, do the final snap update through SnapMirror, then somehow take the aggregate offline, unassign ownership of the disks, physically move the shelf to the 3240, assign ownership to the new filer, and bring the aggregate online.
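For reference, a rough 7-Mode console sketch of that cutover; the hostnames and volume names here are placeholders, not from the actual setup:

```
# On the destination (3240): final incremental transfer
dst3240> snapmirror update -S src3020:vol1 dst_vol1
dst3240> snapmirror break dst_vol1      # make the destination writable

# On the source (3020/3040): stop serving the moved volume
src3020> exportfs -u /vol/vol1
src3020> vol offline vol1
```

The open question, as discussed below, is what to do with the aggregate and disk ownership after this point.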
But there could be a few issues:
1. Will a shelf with 7.3.6-level firmware work on an 8.0.2P4 cluster, or do I have to upgrade the shelf firmware first, while it is still connected to its old filer? The problem is that I cannot do that on the 3020 (it does not support 8.x), though I can on the 3040.
2. Will the old filer allow me to take the aggregate on that shelf offline? The good thing is that my aggregates do not span multiple shelves in this case.
3. Can I do this online on the primary side, so that only the volumes on the aggregate I intend to move are affected, the rest of the shelves and aggregates are untouched, and I do not have to shut down the old filers?
4. If the above is not possible, how should I proceed?
Thanks for your help; I will post a summary.
There is no hot removal of shelves, so you have to halt the 3020/3040 anyway, and the aggregate will not be online when you remove it that way.
Disk firmware (the all.zip file) can be installed prior to moving the shelf: you can push the latest firmware to all systems first and wait for the update to complete.
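The disk firmware push in 7-Mode is roughly the following (verify the details against the README that ships with all.zip):

```
# Extract the contents of all.zip into /etc/disk_fw on the
# filer's root volume, then trigger the update manually
# (it may require advanced privilege):
filer> priv set advanced
filer*> disk_fw_update
filer*> priv set admin
```

The background updater will also pick up new firmware files on its own after a while; running `disk_fw_update` just forces it.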
You have to be careful, if the 3020/3040 stay in production, to remove only the disks belonging to the complete aggregates you are moving (a complete aggregate move, which will include all volumes in the aggregate(s)).
One more note: while it is possible to add shelves with an existing aggregate online, it is nearly impossible to assign all the disks at once, which will result in a "broken" aggregate and a rebuild as soon as enough of its disks are present. So if possible, it is better to assign the disks offline (from maintenance mode).
It is possible to offline the aggregate on the source filer to avoid the situation described, but that is a diag-level command, so as usual you should ask support before using it.
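The maintenance-mode assignment described above would look something like this on the destination head (the disk name is an example):

```
*> disk show -n          # list unowned disks from the moved shelf
*> disk assign all       # or per disk, e.g.: disk assign 0b.16
*> halt                  # then boot normally; the aggregate should
                         # come up intact with all its disks owned
```

Assigning everything in one pass while halted avoids the partial-assignment rebuild problem that can occur when assigning disks on a running system.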
Thanks for your reply. But I think I need to redefine the issue so you can give me a precise solution.
1. You mentioned that I cannot remove the aggregate from a live system, but if I have taken all the volumes offline, I should be able to do an aggr offline through the diag-level command. However, I am getting this error:
aggr offline: Cannot offline aggregate 'aggr_mirror' because it contains
one or more flexible volumes
How can I take the aggregate offline so that I can transfer the shelf to the new system?
Also, just for the sake of asking: can I destroy the volumes, take the aggregate offline, and then remove the shelf (which contains only that aggregate)?
2. You mentioned that I can update the disk/shelf firmware even on a 3020/3040 running 7.3.6 to, say, the 8.0.1 level; but the 3020 does not support 8.0.1, so will the upgrade still be supported for the 3020's shelves?
3. What is the best procedure for this activity? I am not sure I have a solution. I can bring down the new filer to add the disk shelves, but I do not want to bring down my running primary 3020/3040 clusters. Yes, I can do a takeover and bring down one filer head at a time, but I cannot bring down both cluster heads at the same time.
4. I have another option: SnapMirror one shelf (aggregate) at a time to the spare shelf in the new cluster, take the volumes offline, destroy the aggregate on the primary, bring the SnapMirrored volumes online, and then physically transfer the first (now destroyed) aggregate's shelf to the new cluster. I can then reuse the transferred shelf to repeat the activity, so that it acts as the SnapMirror destination for the next aggregate on the primary storage.
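Per aggregate, this leapfrog option would roughly be the following (all names are hypothetical):

```
# Baseline, final update, then cut over to the new cluster's spare shelf
dst3240> snapmirror initialize -S src3040:vol_a dst_vol_a
dst3240> snapmirror update -S src3040:vol_a dst_vol_a
dst3240> snapmirror break dst_vol_a     # destination becomes writable

# Free up the source shelf
src3040> vol offline vol_a              # repeat for every volume in aggr_a
src3040> vol destroy vol_a
src3040> aggr offline aggr_a
src3040> aggr destroy aggr_a            # disks become spares, ready to move
```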
I hope I have been able to explain the requirement.
Unfortunately, there is no hot removal of shelves even with cluster takeover and giveback, regardless of the data on the shelves or the removal of disk assignments.
The same all.zip disk firmware file and procedure apply across ONTAP versions, so you can upgrade on either controller, source or target, after the move.
Just for my understanding, what happens if I do the following?
1. I destroy all the volumes of a particular aggregate, then destroy the aggregate (this aggregate occupies one complete shelf). Once done, I do a cf takeover, and in diag mode on the downed head I unassign the disks and then remove the SAS cable on the head side. I then do a cf giveback from the partner and repeat the activity from the partner side. Will this result in a panic and shutdown of the entire primary cluster, or will it just log some errors while the filer continues serving data from the other running aggregates?
2. When I connect the shelf to the new 8.0.2P4 cluster, can I do a disk/shelf firmware upgrade, or will the new filer be unable to handle a disk shelf running old firmware?
You are at risk of a panic doing that. Both controllers should be down to remove a shelf. For loop maintenance it is OK to use cluster takeover, but not in this case. Opening a support case is also a good idea for any maintenance like this.
The old firmware will upgrade. The RAID labels will also be upgraded to match the 8.0 system.
On the second point, do you mean that as soon as I connect the disk shelf to the new filer running 8.0, it will automatically upgrade the disk/shelf firmware and the RAID labels?
Also, any idea why I am unable to take my aggregate offline through the diag command, even after offlining all the volumes?
The disk firmware update is initiated as soon as a disk is assigned to a filer. Labels are upgraded automatically if the disk is part of an aggregate. The standard recommendation when moving disks between filers is to make the disks spares (zeroing the spares beforehand is a bonus). I do not think shelf firmware is updated automatically when a new shelf is connected; it should be updated during boot, though.
If you are removing all the volumes anyway, you can simply destroy the aggregate; it is easier, and there is no need to mess with unsupported diag-level commands.
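In other words, the supported sequence is to empty and destroy the aggregate rather than force it offline. With placeholder names, that is:

```
filer> vol offline vol1       # repeat for every volume in the aggregate
filer> vol destroy vol1
filer> aggr offline aggr1     # now succeeds: no volumes left
filer> aggr destroy aggr1     # its disks become spares, safe to
                              # unassign and physically move
```

This also explains the "contains one or more flexible volumes" error seen earlier: `aggr offline` refuses as long as any FlexVol, online or offline, still exists in the aggregate.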
Just to summarize:
We can add a disk shelf to a running filer online, but removing a disk shelf will cause a panic, so this feature is not available from NetApp. I wish it were; I found a BURT for it, and hopefully NetApp will add this feature in the future.
Also, when moving disk shelves from 7.3.x to 8.1 or 8.0.2, you simply move the shelf to the filer running 8.x and the new system will upgrade the disk shelf firmware automatically. I have done this, and it works.
Also, I faced a challenge when upgrading a filer directly from 7.3.6 to 8.1P1, so I had to do it in two steps:
7.3.6 --> 8.0.2
8.0.2 --> 8.1P1
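Each step uses the standard 7-Mode upgrade commands; the image filename below is an example, use the one you actually downloaded:

```
filer> software update 8.0.2_image.zip -r   # install; -r skips the automatic reboot
filer> reboot
# once running 8.0.2, repeat with the 8.1P1 image
```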
Thanks to Scott and aborzenkov for their valuable input.