What does the volume usage show on the filer? Are there any snapshots that could be consuming the free space? How are the quotas managed, via the quotas file on the filer?
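Assuming 7-Mode, these are the commands I would start with to answer those questions (vol_name is a placeholder for your volume):

```shell
# Volume usage, including the snapshot reserve line
df -h vol_name

# List the snapshots and how much space each one holds
snap list vol_name

# Quota usage and limits as currently enforced on the filer
quota report
```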
The showmount command shows the contents of the rmtab file. In theory, when something is unmounted, the entry should be removed, but that isn't guaranteed, and you end up with stale entries. I don't think there is a way to clean them up, except by rebooting or restarting the NFS service. You might be able to manually remove the entries from the file, depending on the version of Data ONTAP, but there is a risk of confusing the NFS service if you remove entries that are still in use.
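For reference, this is how I would inspect it on a 7-Mode system (filer01 is a placeholder hostname):

```shell
# Exports the filer is advertising
showmount -e filer01

# Client mounts as recorded in rmtab - this is where the stale entries live
showmount -a filer01

# View the rmtab file itself from the filer console
rdfile /etc/rmtab
```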
Snapshot growth comes from deletions or changes. Would any sort of backup file have been written to the file share? It will be difficult to pinpoint the exact cause of the snapshot sizes, especially if there are a lot of files and folder owners in the volume.
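On 7-Mode, a couple of commands can at least narrow down when the growth happened and which snapshot is holding the space (vol_name and nightly.0 are example names):

```shell
# Rate of change between snapshots - shows which interval had the churn
snap delta vol_name

# Space that would be freed by deleting a specific snapshot
snap reclaimable vol_name nightly.0
```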
Just for clarification: Volume A (the source volume), does this have the snapshots you are trying to identify? If it has the default snapshot schedule assigned, Data ONTAP will be creating the snapshots. Volume DR_A (the destination volume) should show the same snapshots as the source volume if it is a regular mirror, or it could have additional ones if it is a vault mirror. The snapmirror relationship will also generate its own snapshots; that relationship can be created via Unified Manager or the CLI. You can compare the snapmirror-label to see if it matches the snapshots you are seeing. Have you run volume show for both the source and destination volumes?
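Assuming 7-Mode, this is roughly how I would compare the two sides (vol_A and DR_A as in your description):

```shell
# On the source: the snapshot schedule Data ONTAP is applying
snap sched vol_A

# Snapshot lists to compare side by side
snap list vol_A     # run on the source filer
snap list DR_A      # run on the destination filer

# Relationship state, including the snapmirror-created snapshots
snapmirror status -l
```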
You have to split the ownership if you want 12 disks used by each node. You can have an odd number of disks in a raid group; that's no problem. Whether you have 1 or 2 spares is a personal choice. One spare would give you more usable capacity. If you know you could get a replacement disk quickly, then 1 spare should be fine. If it might take a while to get a replacement on site, you may want more than 1.
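On 7-Mode the ownership split would look something like this (0a.16 and node1 are example names; repeat the assign for each disk):

```shell
# Show disks not yet owned by either node
disk show -n

# Assign a specific disk to a node
disk assign 0a.16 -o node1

# Verify the resulting raid groups and spare counts
sysconfig -r
```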
When you mirror a volume, all the snapshots on the source are copied over to the destination. The snapshots you are seeing are likely replicated from the source.
You can have a look at the HA documentation, but in short: if you do a takeover, the controller taking over will handle the workloads for the other controller. The controller being taken over will automatically reboot and wait for the giveback. CIFS sessions will be terminated, though, so that may have an impact depending on the applications using the CIFS shares.
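Assuming a 7-Mode HA pair, the sequence is just:

```shell
# Check HA status before doing anything
cf status

# Take over the partner's workload (the partner reboots and waits)
cf takeover

# Return service to the partner once it is back up and ready
cf giveback
```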
Can you show the config for the ifgrp? Are the network switches Cisco, and do you have CDP enabled to verify that the ports on the NetApp are connected to the expected switch ports? What status does the LACP port channel show for the port on the network switch? You could also try replacing the cables; that would be a quick thing to check.
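From the filer side (7-Mode, with ifgrp_name as a placeholder), these would show the link and LACP state plus what the switch reports:

```shell
# Link state, LACP status, and member ports of the ifgrp
ifgrp status ifgrp_name

# What the connected switch reports per port (requires CDP to be enabled)
cdpd show-neighbors
```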
Can you provide more information such as where the source and destination filers are (across a WAN, type of connection)? Are there currently snapmirrors already in place that work ok? What does it show in the snapmirror log on both the source and destination filer?
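Assuming 7-Mode, the log and status can be pulled like this on each filer:

```shell
# Read the snapmirror log from the filer console (7-Mode location)
rdfile /etc/log/snapmirror

# Current relationship state, lag, and last transfer details
snapmirror status -l
```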
As noted above, the user and admin guides will help clarify things immensely. In terms of SnapVault and SnapMirror status: an Idle status is fine; the lag time is the key indicator of an issue. If the lag time is less than the schedule interval (for example, a daily vault with a lag time under 24 hours), then usually everything is OK. If there are other issues, the status will reflect that.
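On 7-Mode, the Lag column in these two commands is what I would watch:

```shell
# Each line shows State (Idle is fine) and Lag since the last update
snapvault status
snapmirror status
```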
Did you have a look at the snapmirror.log on the source to see if there were any related messages that might be more useful? You could also try breaking and then resyncing one of the relationships over the new WAN link to see if that kick-starts things.
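For one relationship, the break/resync would be something like this on 7-Mode (src_filer, dst_filer, and the volume names are placeholders):

```shell
# On the destination filer: break the mirror relationship
snapmirror break dst_filer:dst_vol

# Then resynchronise it from the source over the new link
snapmirror resync -S src_filer:src_vol dst_filer:dst_vol
```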
It depends on what you mean by performance impact. The clone is on the same filer and the same aggregate as the parent volume, so any extremely heavy activity on the clone could have an impact on any other volume on that filer or aggregate, depending on the model of filer and the size of the aggregate. But having the clone read the same blocks as the parent could actually be beneficial if the blocks are already in cache from another server's read request.
It is possible it will fail. But disks have areas set aside for remapping failed blocks, and until that area is mostly used, the disk will keep functioning. If the disk does fail, you will get an AutoSupport and the relevant messages. As long as you have a spare disk available, the failure will be handled by the NetApp without issue.
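Two quick 7-Mode checks to confirm the current state and that a spare is available:

```shell
# Any failed disks?
vol status -f

# Spare disks currently available to cover a failure
aggr status -s
```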
Maybe you have already done this, but I would ensure the shelves are disconnected from both filers, then confirm the shelf IDs and set all speeds to 2 or 4 (have you verified the adapter you are using is capable of 4?), then power cycle the shelves. After that, connect the shelves to each other, and then connect them to the controllers one link at a time, verifying that the controller can see the shelves after each individual connection. If you can't see all the disks on the first connection to the filer, look at the cable connections, replace cables, or reseat the modules before attempting any further connections to the controllers.
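After each individual connection, these 7-Mode commands should confirm the controller sees what you expect:

```shell
# Loop map of shelves and disks as seen on each adapter
fcadmin device_map

# Confirm every expected disk is visible before adding the next link
sysconfig -r
```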
Does the new mount point disappear after you log off, or does it stick around until manually unmounted? It does seem odd that it is doing that.
Are you able to physically check the disk shelves and make sure the shelves have unique IDs and the modules are set to the same speeds in the loop?