I have installed two instances of the ESX version of the 8.3 Simulator, and within a few days I run into space problems on the root volume. See the messages below from the console. I have also run into this same problem with instances installed in ESX in a Partner's lab. It appears that there are logs or some other files that are continually being created and filling up the root volume. The Partner dug a little deeper and reports that it looks like the Simulator is built on a Linux image, and that it is the Linux image that is actually having the space problem.
login as: admin
Using keyboard-interactive authentication.
Password:
***********************
**  SYSTEM MESSAGES  **
***********************
CRITICAL: This node is not healthy because the root volume is low on space (<10MB). The node can still serve data, but it cannot participate in cluster operations until this situation is rectified. Free space using the nodeshell or contact technical support for assistance.
At this point, I cannot issue any meaningful commands to make this happen. Every command I issue seems to get thwarted because databases such as the VLDB are offline. Can you give me some additional guidance on what commands to issue to accomplish your suggestion?
aggr status
The root aggr will have "root" in the options list. Typically it's aggr0.
aggr add aggregate_name 3@1g
This assumes the default 1 GB disks were used; adjust as necessary.
vol status
The root vol will have "root" in the options list. Typically it's vol0.
vol size root_volume +2290m
The size increase available may vary depending on the type of disks used; 2560m and 2290m are the most common. Try 2560m first; if that fails, fall back to 2290m. If that also fails, the error message will report the maximum size in KB.
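Putting those steps together, the nodeshell session might look roughly like this (aggr0 and vol0 are assumed names here; substitute whatever aggr status and vol status actually report):

```
aggr status              # find the aggregate with "root" in its options list
aggr add aggr0 3@1g      # add three of the default 1 GB virtual disks
vol status               # find the volume with "root" in its options list
vol size vol0 +2560m     # grow the root volume; fall back to +2290m if this fails
```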
You may or may not need a second reboot to remove the recovery flag in the loader. If one is required, you will be told when you log in from the node shell.
After a clean reboot, go back and disable aggregate snapshots and volume snapshots on the root, delete any existing snapshots, and clean out old logs and AutoSupport files in the mroot.
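From the nodeshell, the snapshot cleanup described above might look something like the following sketch (again assuming aggr0/vol0 are the root aggregate and volume; exact syntax can vary by release):

```
snap sched -A aggr0 0 0 0    # disable scheduled aggregate snapshots
snap delete -A -a aggr0      # delete all existing aggregate snapshots
snap sched vol0 0 0 0        # disable scheduled volume snapshots
snap delete -a vol0          # delete all existing volume snapshots
```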
I think in that case you will need to get into the systemshell as the diag user, go to /mroot/etc, and remove the log directory recursively (rm -rf /mroot/etc/log). Once this is done, do a df -h . on the /mroot directory and note the decreasing usage. Once it drops below 100%, exit the shell and reboot. It should come back up; then add disks to the aggregate and space to the root volume vol0 as previously mentioned.
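As a sketch, that systemshell cleanup would run roughly as follows (the diag user must already be unlocked; start from the clustershell):

```
set -privilege diag        # enter diagnostic privilege level
systemshell -node local    # drop into the BSD systemshell as the diag user
cd /mroot/etc
rm -rf /mroot/etc/log      # recursively remove the log directory
df -h .                    # check /mroot usage; watch it drop below 100%
exit                       # leave the systemshell, then reboot the node
```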
I have a similar issue with a strange twist. I also get the low-space problem; however, when I query the root aggregate size, it reports 3.38 GB available out of a total of 4.17 GB. Only 19% used, so it's strange that it's complaining about space when there seems to be plenty free. I cleared all my snapshots and changed the snapshot schedule (snap sched vol0 0 0 0) a while ago when I added extra disks and expanded the space on aggr0, and I'm still getting the same problem. All aggregates on my second node are showing a status of 'unknown'.