Simulator Discussions

Question about Simulator vol0 usage size

shypervisor

I have a question about the vol0 usage size. It relates to the following thread:

https://community.netapp.com/t5/Simulator-Discussions/Question-about-Simulator-database-recovery/m-p/163260

 

The vol0 usage size is continuously increasing.

Is there any way to reduce it, or do I need to keep adding disks?

 

20210201_ontap_simulator.PNG

1 ACCEPTED SOLUTION

jcolonfzenpr

You can add more disks to the simulator:

 

https://vmstorageguy.wordpress.com/2015/08/18/adding-more-disks-to-the-netapp-ontap-simulator/

 

Before doing this procedure, properly shut down the VM and take a snapshot of it, then make the changes. After adding disks to the simulator, you need to grow aggr0 by adding the new disks to it, then grow vol0 with the newly added capacity.
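As a rough sketch of the CLI side of that procedure (the node name placeholder and the disk count below are illustrative, not taken from this thread):

```
::> storage disk assign -all true -node <node-name>            # claim the new unowned virtual disks
::> storage aggregate add-disks -aggregate aggr0 -diskcount 3  # grow the root aggregate
::> system node run -node local vol size vol0 +2g              # then grow vol0 from the nodeshell
```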

 

When you are confident everything is OK, delete the snapshot.

 

Good luck 👍 

Jonathan Colón | Blog | Linkedin


14 REPLIES

shypervisor

Do I need to keep increasing the pool0 size, or is there some method to decrease the vol0 size?

jcolonfzenpr

Try deleting the vol0 snapshots and disabling the default vol0 snapshot schedule:

 

run local

snap delete -a vol0

snap sched vol0 0 0 0

snap sched vol0
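To confirm the snapshot space actually came back (a nodeshell sketch; the exact df output format varies by release):

```
run local
snap list vol0    # should now list no scheduled snapshots
df -h vol0        # compare used/available before and after the cleanup
```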

Jonathan Colón | Blog | Linkedin

shypervisor

Thank you for your suggestion,

But nothing changed.

202102_ontapsim.PNG

shypervisor

In this case, is a pool size expansion required?

At the moment, adding a disk is not permitted, so do I need to release a disk from another RAID group and add that?

jcolonfzenpr

What are the aggr0 size and space usage?

 

aggr status?

 

If free space on aggr0 is low, you can add disks to increase its size, then grow vol0:

 

vol size vol0 +1g

 

Reference:

https://community.netapp.com/t5/Simulator-Discussions/Simulator-ONTAP-8-3-Root-Volume-Space-Problem/m-p/109880/highlight/true#M1653
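Before growing vol0, it is worth confirming how much room aggr0 really has; a nodeshell sketch:

```
run local
df -A aggr0       # aggregate-level used/available space
aggr status -s    # spare disks that could be added
vol size vol0     # current vol0 size
```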

Jonathan Colón | Blog | Linkedin

shypervisor

Currently, the aggr status is as follows; `aggr0_ontapsim_01` contains the current vol0.

 

20210202_ontapsim_aggr.PNG


shypervisor

Thank you for your help.

 

Is there any guideline for vol0 sizing? Its usage seems to be monotonically increasing.

If I keep the number of volumes under 20, will the vol0 usage stay under 2 GB?

jcolonfzenpr

Sorry if I misunderstood your question, but the only three times I have encountered problems with vol0 were related to:

1. ONTAP upgrades.

2. The default vol0 snapshot schedule.

3. A customer configuring vol0 to serve data as a mount point for CIFS, NFS, etc.

 

Freeing up space on a node’s root volume:

https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-sag%2FGUID-F937AFE5-197D-4C12-A0D6-628276D5C6D9.html

Jonathan Colón | Blog | Linkedin

shypervisor

Thank you for your suggestion.

 

The document is helpful. It seems the vol0 golden-volume usage is not controllable from the user's side.

 

To summarize:

The controllable part of the root vol0 usage is the Snapshot space.

The rest of the usage is not controllable; in that case, disks need to be added.

Martin-Sto

Sorry to take over this thread,

 

but for me, it is neither the snapshots nor vol0 serving data. I just started with NetApp administration a few months ago, so I am still a beginner.

I have a brand new NetApp ONTAP 9.8 sim, downloaded on Feb. 18, 2021, running within VMware Player on my Win10 workstation.

 

Nevertheless, after starting it and creating the 2-node cluster, my sim keeps filling up vol0. The first setup was full within 8 hours and broke the cluster entirely overnight. This repeated twice, because my daily business did not allow me to spend much time on it. On my fourth try, I was at least able to add virtual disks, increase vol0, and disable and delete the aggr snapshots. vol0 is still filling up, but now I have more time to delete files. I have not even started to create a data aggregate.

 

Thanks to some explanations here on how to clean up vol0 with diag mode, I found a lot of sktrace.log files in /mroot/etc/log/mlog. Even after deleting them, they still grow by about 100 MB every ~5 hours.

 

sim20c10-02% ls -lha *sktrace*

-rw-r--r-- 2 root wheel 12M Mar 2 13:54 sktrace.log
-rw-r--r-- 1 root wheel 100M Feb 27 03:00 sktrace.log.0000000021
-rw-r--r-- 1 root wheel 100M Feb 27 09:02 sktrace.log.0000000022
-rw-r--r-- 1 root wheel 100M Feb 27 14:38 sktrace.log.0000000023
-rw-r--r-- 1 root wheel 18M Feb 27 15:42 sktrace.log.0000000024
-rw-r--r-- 1 root wheel 100M Feb 27 21:22 sktrace.log.0000000025
-rw-r--r-- 1 root wheel 100M Feb 28 02:17 sktrace.log.0000000026
-rw-r--r-- 1 root wheel 100M Feb 28 07:57 sktrace.log.0000000027
-rw-r--r-- 1 root wheel 100M Feb 28 13:22 sktrace.log.0000000028
-rw-r--r-- 1 root wheel 43M Feb 28 15:42 sktrace.log.0000000029
-rw-r--r-- 1 root wheel 100M Feb 28 20:56 sktrace.log.0000000030
-rw-r--r-- 1 root wheel 100M Mar 1 02:00 sktrace.log.0000000031
-rw-r--r-- 1 root wheel 100M Mar 1 06:51 sktrace.log.0000000032
-rw-r--r-- 1 root wheel 100M Mar 1 11:30 sktrace.log.0000000033
-rw-r--r-- 1 root wheel 84M Mar 1 15:42 sktrace.log.0000000034
-rw-r--r-- 1 root wheel 100M Mar 1 20:07 sktrace.log.0000000035
-rw-r--r-- 1 root wheel 100M Mar 2 00:23 sktrace.log.0000000036
-rw-r--r-- 1 root wheel 100M Mar 2 04:34 sktrace.log.0000000037
-rw-r--r-- 1 root wheel 100M Mar 2 09:17 sktrace.log.0000000038
-rw-r--r-- 1 root wheel 100M Mar 2 13:20 sktrace.log.0000000039

 

The logs contain entries like this:

sim20c10-02% tail sktrace.log (skipped the date and log number per entry)

[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128d7d340
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e275c0, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128d7d340, sct=0, delta=0x1d
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128e441c0, sct=0, delta=0x1d
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128a97800 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128baa0c0 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128baa0c0
[0:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128b22640 cdb=0x2a:0000a408:0008
[0:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128b22640
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128baa0c0
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128a97800
[0:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128b22640
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128a97800, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128b22640, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk send: cmdblk=0xfffff80128db1500 cdb=0x28:0000a408:0008
[1:0] VHA_DISK_INFO: vha_disk sent: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128baa0c0, sct=0, delta=0x1b
[1:0] VHA_DISK_INFO: vha_disk aio complete: cmdblk=0xfffff80128db1500
[1:0] VHA_DISK_INFO: vha_disk_main_handler: (normal dequeue) cmdblk=0xfffff80128db1500, sct=0, delta=0x14
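If the immediate goal is just to reclaim the space, the rotated files can be deleted from the systemshell (diag privilege required; a sketch, and new files will keep appearing until the tracepoints are quieted):

```
::> set diag
::*> systemshell -node local
% cd /mroot/etc/log/mlog
% rm sktrace.log.00*      # remove only the rotated copies; the active sktrace.log stays open
% exit
```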

 

 

Where can I disable this level of debug messages? Again, the cluster is only talking to itself: there is no data aggregate, no CIFS/NFS sharing, no export policy.

 

Thanks in advance.

 

BR Martin

jcolonfzenpr

Can you update your simulator to 9.8P1?

 

I have many instances of the simulator running without problems. I think your specific problem may be a bug.

 

There is a command you can try from diag mode to disable sktrace:

 

debug sktrace tracepoint modify -node * -module * -enabled false

 

Hope this helps.

 

Good luck!

Jonathan Colón | Blog | Linkedin

Martin-Sto

Cool,

 

I have not updated the sim yet, but at least the sktrace command has already helped me.

debug sktrace tracepoint modify -node * -module VHA_DISK -level INFO -enabled false

sktrace.log is only 5.4K after several hours.

Many thanks!

 

BR Martin

Peetrk

Re: Unable to recover the local database of Data Replication Module - NetApp Community

 

I did everything advised; the simulators kept falling into a stalled state over and over, with out-of-space and database-recovery events.

 

I installed the 9.8 simulators on both VMware Workstation and ESXi 7.0; both had the same issues.

 

After installing the 9.7 simulators, everything is OK.
