For the GUI issue, do you have OnCommand System Manager installed on your admin station? That version of ONTAP did not have an embedded GUI, but if you install it locally you can add the filers and manage them via your browser.
Along the same lines as the other replies, please look at the output of these commands:
'snap sched -A' (this will show the scheduled snapshots for all aggregates, along with the retention)
set to 0
'snap reserve -A' (this will show any space reservation for snapshots at the aggregate level)
all set to 0
'snap list -A' (this will show any existing snapshots at the aggregate level)
All are deleted
'snap sched -V' (this will show the scheduled snapshots for all volumes, along with the retention)
snap sched set to 0 2 6@8,12,16,20
'snap reserve -V' (this will show any space reservation for snapshots at the volume level)
snap reserve set to 5% on all Volumes
'snap list -V' (this will show any existing snapshots at the volume level)
shows all from the schedule above for each volume
What you should have for maximum available space is 0% snap reserve across the board, no snapshots scheduled or existing on the aggregates, and hopefully minimal snapshots on the volumes. Also, when you list the volume snapshots, you should be able to identify the snapshots created by the schedules by their names (hourly, daily, and weekly, with a number appended to indicate the generation). Any additional snapshots you see were either created manually or by other tools (such as SnapManager).
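To get there, the 7-mode commands look something like this (the volume name vol1 is a placeholder; double-check your 'snap list' output before deleting anything):

```
snap reserve vol1 0                 # drop the volume snap reserve from 5% to 0
snap sched vol1 0 2 6@8,12,16,20    # keep a trimmed schedule, or use "0 0 0" to disable entirely
snap delete -a vol1                 # delete ALL snapshots on vol1 (irreversible)
```

Repeat per volume, and re-run 'snap list -V' afterwards to confirm the space came back.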
So the volumes should not be set to 5% snap reserve?
Also, if you have block LUNs on these systems, they may also be thick provisioned. You can run 'lun show -v' to list all of your LUNs and look for the attribute Space Reservation and make sure they show Disabled so they are thin.
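If any LUNs do show Space Reservation: enabled, switching them to thin in 7-mode is along these lines (the LUN path is a placeholder from my example, not your system):

```
lun set reservation /vol/vol1/lun1 disable   # stop reserving the full LUN size up front
lun show -v /vol/vol1/lun1                   # verify Space Reservation now shows Disabled
```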
lun show -v returns a blank command line
If everything is already configured properly and you did remove all the excess snapshots (all of them, period), then you may be in a situation where you have to migrate some data off quickly. You can likely expand your aggregates with additional disk shelves, but that's not typically a quick decision unless you happen to have some lying around unused.
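For reference, once a shelf is cabled up and its disks are assigned, growing an aggregate in 7-mode is a short operation (the aggregate name and disk count here are placeholders):

```
aggr status -r aggr1    # review the current RAID layout and available spares first
aggr add aggr1 14       # add 14 spare disks to aggr1 (disks cannot be removed later)
```

Just be aware the add is one-way: disks can't be pulled back out of an aggregate without destroying it.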
Well, since my CPUs are spiking high when I run 'sysstat -M', I do think that adding an additional shelf may crash this large NetApp. We have moved all mission systems and data over to a VERY old NetApp shelf that is much smaller, and things are running excellently there: no issues and no aggregates filling up.
The data currently on there is nowhere near its capacity. We also zip and compress data monthly off to another NetApp disk shelf.
Once you get past this initial emergency space issue, do make sure you set up some monitoring and alerting to ensure you don't get in this situation again. NetApp has some good tools; you should be able to use OnCommand Core (previously named DFM) to monitor these 7-mode systems.
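Even before OnCommand is in place, a simple cron job on an admin host can cover the basics. A minimal sketch, assuming key-based SSH to the filer and that 'df -A' puts the capacity percentage in the fifth column (the hostname, threshold, and mail recipient are all placeholders to adjust):

```shell
#!/bin/sh
# Alert when any aggregate crosses a usage threshold.
# FILER, THRESHOLD, and the recipient address are placeholders.
FILER=filer1
THRESHOLD=90

ssh "$FILER" df -A |
awk -v limit="$THRESHOLD" 'NR > 1 && $5 ~ /%$/ {
    pct = $5; sub(/%/, "", pct)       # strip the % sign for comparison
    if (pct + 0 >= limit) print $1, $5
}' |
while read -r aggr used; do
    echo "Aggregate $aggr on $FILER is at $used" |
        mail -s "NetApp space alert: $FILER/$aggr" storage-team@example.com
done
```

Run it from cron every 15 minutes or so; it stays quiet until an aggregate passes the threshold.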
Does this install on a Red Hat system?