I do not have an exhaustive list. Some that come to mind are snap reserve, fractional reserve, and automatic snapshot creation. Possibly space reservation as well.
You can't. This is a question about intended usage. The wizard will set some options differently depending on it; each of these options can be changed independently at any time, including after initial volume creation. The question is there to set some recommended defaults, nothing more.
"I can delete them from NetApp storage." No, you cannot. You can delete the whole snapshot, but there is no way to delete an individual file from a snapshot.
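To be clear, removing a whole snapshot is a single command; here is a sketch in 7-Mode syntax, with the volume and snapshot names as placeholders:

```
snap list vol1              # list snapshots on the volume
snap delete vol1 nightly.0  # delete one entire snapshot
```

There is no per-file variant of this; the snapshot is an all-or-nothing unit.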
You can't reduce LUN size without first reducing the size of the filesystem on that LUN. 10% used space from the host's point of view does not mean only 10% is consumed on storage. Check actual space consumption with "lun show -v"; you cannot go below this value even if you disable space reservation for the LUN (without using deduplication and/or compression). Storage vMotion, as suggested in the article I mentioned, is really the most straightforward way to reduce space consumption.
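For example, in 7-Mode syntax (the LUN path is a placeholder):

```
lun show -v /vol/vol1/lun1                    # "Occupied Size" is the actual space consumed
lun set reservation /vol/vol1/lun1 disable    # optionally disable space reservation
```

Even with the reservation disabled, the volume still has to hold the occupied blocks, which is why the Occupied Size is the floor.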
This is an audit staging volume. As long as you are going to destroy the aggregate anyway, the volume should be safe to delete. See the FAQ "What is a staging volume and how to use it to troubleshoot issues?" for details.
"The KB wants to setup a Fail Over Only load balance policy" Actually, this KB says: "All load balance policies are supported. Round robin with subset is generally recommended and is the default for arrays with ALUA enabled".
Data ONTAP does not provide any way to associate the gateway with a specific interface; I assume it takes whatever interface comes first. So the only way to fix it seems to be:

1. Delete the default route
2. Unconfigure e0M
3. Add the default route and check that it is now associated with the interface you want
4. Configure e0M again

But keep in mind that you have no control over what happens after a reboot. The only reason to have e0M is when you need a dedicated management network. Having the e0M address on the same network as the data network makes very little sense and only creates problems, as you found.
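The steps above could look roughly like this in 7-Mode syntax; the addresses are placeholders, and using "ifconfig ... down" to take e0M out of the picture is my assumption:

```
route delete default
ifconfig e0M down                               # unconfigure e0M for now
route add default 10.0.0.1 1                    # re-add the gateway
netstat -rn                                     # verify which interface the route is tied to
ifconfig e0M 10.0.0.50 netmask 255.255.255.0 up # bring e0M back
```

As noted, there is no guarantee this ordering survives a reboot.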
Find three spare disks (if necessary, remove ownership from them), boot the new head into maintenance mode, assign ownership of these disks to the new head, then boot into the special boot menu and select option 4 (or 4a, depending on version) to initialize the root volume. After reboot you will need to do the usual setup of a new controller. Make sure to install the same Data ONTAP version (including patch number) as is already in use before doing this for 8.x, or netboot the same version if using 7.x. In the case of 7.x, do not forget to "update" Data ONTAP and download the correct kernel to the boot device.
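A minimal sketch of the disk-ownership steps, assuming 7-Mode maintenance-mode syntax and a placeholder disk name:

```
# on the current owner (or from maintenance mode), if the disk is owned:
disk remove_ownership 0a.16

# on the new head, booted into maintenance mode:
disk assign 0a.16

# then halt, boot into the special boot menu, and choose
# option 4 (or 4a) to zero disks and create the root volume
```

Repeat the remove/assign pair for each of the three disks before running the initialization.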
See KB 1014631 for the procedure to reinitialize a clustered-mode filer. You should really open a new thread instead of piggybacking on a question that is 4 years old and applies to a completely different environment.
There are two independent problems here: staggered disk startup and spare selection. Whatever the reason for the degraded plex is, the spare selection does not look right.
Well, NetApp ships SATA shelves with 2 PSUs, and the official statement is that 2 PSUs are enough for SATA. It is true in the sense that a shelf with SATA drives will function with a single PSU. But as we have seen, it can cause issues. It is really independent of MetroCluster (although having two pools makes it worse). Consider a short power outage: the FAS and shelves will power on when it is over, and this may trigger excessive reconstructions that could have been avoided. And even with 4 PSUs there is no guarantee that all 4 of them will be powered if an outage happens.

But returning to the question of spare selection: it looks like a bug. According to the KB I mentioned, "Data ONTAP will search for suitable spares in the opposite pool only if the aggregate is mirror-degraded or is resyncing, with the plex containing the failed disk serving as the source of the resync." According to your description, that was not the case: you have "failed" disks in the target part of a resync in a mirrored aggregate. It should not use spares from a different pool then. In your place I would open a separate case, pointing to this KB article and asking them to clarify.