I have a question about the vol0 usage size. It relates to the following:
The vol0 usage is continuously increasing.
Is there any way to reduce its size, or do I need to keep adding disks?
Solved! See The Solution
14 REPLIES
Do I need to increase the pool0 size continuously, or is there some method to decrease the vol0 size?
Thank you for your suggestion,
but nothing changed.
In this case, is pool size expansion required?
At the moment, adding a disk is not permitted, so do I need to release a disk from another RAID group and add that instead?
What is the aggr0 size and space usage (aggr status)?
If space is low on aggr0, you can add a disk to increase its size, then grow vol0:
vol size vol0 +1g
Reference:
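For illustration (not part of the original reply), the same check can be made from the clustershell; a minimal sketch, assuming the aggregate is aggr0_ontapsim_01 and the node is ontapsim-01 (names taken from or inferred from this thread):
::> storage aggregate show -aggregate aggr0_ontapsim_01 -fields size,usedsize,percent-used
::> run -node ontapsim-01 df -A aggr0_ontapsim_01    (nodeshell view of the aggregate)
::> run -node ontapsim-01 df vol0                    (space used inside the root volume)
If the aggregate still has free space, vol0 can simply be grown with the command above; otherwise add disks to the aggregate first.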
Currently, the aggr status is as follows; `aggr0_ontapsim_01` is the aggregate that currently holds vol0.
shypervisor has accepted the solution
You can add more disks to the simulator:
https://vmstorageguy.wordpress.com/2015/08/18/adding-more-disks-to-the-netapp-ontap-simulator/
Before doing this procedure, properly shut down the VM and take a snapshot of it, then make the changes. After adding disks to the simulator you need to increase the aggr0 size by adding disks, then grow vol0 with the newly added capacity.
When you feel all is OK, delete the snapshot.
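For example, a rough command sketch (not from the linked post; the aggregate name is from this thread and the node name ontapsim-01 is a placeholder), once the new virtual disks are visible to ONTAP:
::> storage disk show -container-type spare                  (confirm the new disks appear as spares)
::> storage disk assign -all true -node ontapsim-01          (only if they show up unowned)
::> storage aggregate add-disks -aggregate aggr0_ontapsim_01 -diskcount 2
::> run -node ontapsim-01 vol size vol0 +2g                  (grow the root volume into the new space)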
Good luck 👍
Thank you for your help.
Is there any guideline for vol0 sizes? It seems to be monotonically increasing.
If I keep the number of volumes under 20, will the vol0 usage stay under 2 GB?
Sorry if I misunderstood your question, but the only three times I have encountered problems with vol0 were related to:
1. ONTAP upgrades.
2. The vol0 default snapshot schedule (see the sketch after this list).
3. A customer configuring vol0 to serve data as a mount point for CIFS, NFS, etc.
See the ONTAP documentation topic "Freeing up space on a node's root volume".
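A minimal way to check and trim point 2 from the nodeshell, assuming a node named ontapsim-01 (placeholder):
::> run -node ontapsim-01 snap sched vol0                   (show the current snapshot schedule)
::> run -node ontapsim-01 snap list vol0                    (list snapshots and the space they hold)
::> run -node ontapsim-01 snap sched vol0 0 0 0             (disable scheduled snapshots on vol0, if desired)
::> run -node ontapsim-01 snap delete vol0 <snapshot-name>  (delete a single snapshot)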
Thank you for your suggestion.
The document is helpful. It seems that vol0's golden-volume usage is not controllable from the user operation side.
To summarize:
The controllable part of the root vol0 usage is the snapshots.
The other usage is not controllable; in that case, disks need to be added.
Sorry to take over this thread, but for me it is neither the snapshots nor vol0 serving data. I just started with NetApp administration a few months ago, so I am still a beginner.
I have a brand-new NetApp ONTAP 9.8 Sim, downloaded on Feb. 18, 2021, running within VMware Player on my Windows 10 workstation.
Nevertheless, after starting it and creating the 2-node cluster, my sim keeps filling up vol0. The first setup was full within 8 hours and broke the cluster entirely overnight. This repeated two times, because my daily business did not allow me to spend much time on it. On my fourth try I was at least able to add virtual disks, increase vol0, and disable and delete the aggr snapshots. vol0 is still filling up, but now I have more time to delete. I have not even started to create a data aggregate yet.
Thanks to some explanations here on how to clean up vol0 in diag mode, I found a lot of sktrace.log files within /mroot/etc/log/mlog. Even after being deleted, they keep growing by about 100 MB every ~5 hours.
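(For reference, not part of the original post: that directory is reached through the diag-privilege systemshell, roughly as follows, assuming the diag user has already been unlocked and given a password:)
::> set -privilege diag
::*> systemshell -node sim20c10-02
% cd /mroot/etc/log/mlog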
sim20c10-02% ls -lha *sktrace*
-rw-r--r-- 2 root wheel 12M Mar 2 13:54 sktrace.log
-rw-r--r-- 1 root wheel 100M Feb 27 03:00 sktrace.log.0000000021
-rw-r--r-- 1 root wheel 100M Feb 27 09:02 sktrace.log.0000000022
-rw-r--r-- 1 root wheel 100M Feb 27 14:38 sktrace.log.0000000023
-rw-r--r-- 1 root wheel 18M Feb 27 15:42 sktrace.log.0000000024
-rw-r--r-- 1 root wheel 100M Feb 27 21:22 sktrace.log.0000000025
-rw-r--r-- 1 root wheel 100M Feb 28 02:17 sktrace.log.0000000026
-rw-r--r-- 1 root wheel 100M Feb 28 07:57 sktrace.log.0000000027
-rw-r--r-- 1 root wheel 100M Feb 28 13:22 sktrace.log.0000000028
-rw-r--r-- 1 root wheel 43M Feb 28 15:42 sktrace.log.0000000029
-rw-r--r-- 1 root wheel 100M Feb 28 20:56 sktrace.log.0000000030
-rw-r--r-- 1 root wheel 100M Mar 1 02:00 sktrace.log.0000000031
-rw-r--r-- 1 root wheel 100M Mar 1 06:51 sktrace.log.0000000032
-rw-r--r-- 1 root wheel 100M Mar 1 11:30 sktrace.log.0000000033
-rw-r--r-- 1 root wheel 84M Mar 1 15:42 sktrace.log.0000000034
-rw-r--r-- 1 root wheel 100M Mar 1 20:07 sktrace.log.0000000035
-rw-r--r-- 1 root wheel 100M Mar 2 00:23 sktrace.log.0000000036
-rw-r--r-- 1 root wheel 100M Mar 2 04:34 sktrace.log.0000000037
-rw-r--r-- 1 root wheel 100M Mar 2 09:17 sktrace.log.0000000038
-rw-r--r-- 1 root wheel 100M Mar 2 13:20 sktrace.log.0000000039
The logs contain entries like this:
sim20c10-02% tail sktrace.log (the date and log sequence number are omitted from each entry)
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128d7d340
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e275c0, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128d7d340, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e441c0, sct=0, delta=0x1d
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128a97800 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128baa0c0 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128baa0c0
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128b22640 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128b22640
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128baa0c0
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128b22640
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128a97800, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128b22640, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128db1500 cdb=0x28:0000a408:0008
[1:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128baa0c0, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128db1500, sct=0, delta=0x14
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e275c0, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128d7d340, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e441c0, sct=0, delta=0x1d
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128a97800 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128baa0c0 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128baa0c0
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128b22640 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128b22640
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128baa0c0
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128b22640
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128a97800, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128b22640, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128db1500 cdb=0x28:0000a408:0008
[1:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128baa0c0, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128db1500, sct=0, delta=0x14
Where can I disable this level of debug messages? Again, the cluster is only utilized by itself: there is no data aggregate, no CIFS/NFS sharing, no export policy.
Thanks in advance.
BR Martin
Can you update your simulator to 9.8P1?
I have many instances of the simulator running without problems; I think your specific problem may be a bug.
There is a command you can try from diag mode to disable sktrace:
debug sktrace tracepoint modify -node * -module * -enabled false
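(For context, not part of the original reply: debug commands like this are only visible at diagnostic privilege level, so switch to it first:)
::> set -privilege diag
::*> debug sktrace tracepoint modify -node * -module * -enabled false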
Hope this helps.
Good luck!
Cool,
I have not updated the SIM yet, but at least the sktrace command already helped me.
debug sktrace tracepoint modify -node * -module VHA_DISK -level INFO -enabled false
sktrace.log is only 5.4 KB after several hours.
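(Side note, not from the original post: the same modify command with -enabled true should turn the tracepoint back on if the traces are ever needed again:)
::*> debug sktrace tracepoint modify -node * -module VHA_DISK -level INFO -enabled true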
Many thanks!
BR Martin
Re: Unable to recover the local database of Data Replication Module - NetApp Community
I followed all the advice; the simulators kept falling into a stalled state over and over, with space and "recover database" events.
I installed 9.8 simulators on both VMware Workstation and ESXi 7.0; both had the same issues.
After installing 9.7 simulators, all was OK.
