ONTAP Discussions

Snapmirror Finalizing Disk Utilization at 100%

swordfish

Hello,

 

I initialized multiple SnapMirror transfers last Friday. All of them completed except three, which have been in the finalizing state for the last three days. All three are being replicated to a 46-disk SATA aggregate. Disk utilization on that aggregate is consistently at 100% and read latency is 32 ms. I have a deadline to complete the transfers by Thursday. Is there a way to see how much more time the finalizing process will take? Or is there a way to pause two of the current transfers and complete them one by one, so they are easier on the aggregate? Will this help? These volumes are 90 TB, 35 TB and 12 TB in size, and all of them are about 90% used.

 

Thank you


KylieCampbell

Unfortunately, you will not be able to speed up this process in any way.
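One thing you can do, though, is watch where each relationship is. A minimal example from the cluster shell (the destination path here is just a placeholder, not from your setup):

::> snapmirror show -destination-path dst_svm:dst_vol -fields state,status,total-progress

total-progress shows how many bytes the current transfer has moved so far, and status will sit at Finalizing while the post-transfer work runs, but there is no built-in estimate of how long that phase will take.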

paul_stejskal

A few KBs will help. Can you get some outputs of things like wafltop, wafl scan status, qos statistics workload resource disk show?

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Systems/FAS_Systems/How_to_address_High_Disk_Utilization

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/How_to_collect_WAFLTOP_output_from_CLI

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_is_the_%22wafl_scan_status%22_command%3F
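Of those, wafl scan status is a node-shell command at advanced privilege. A minimal way to pull it (node name is a placeholder):

::> node run -node <nodename>
> priv set advanced
> wafl scan status

It lists the WAFL scanners currently running on each volume, which is exactly the kind of background work that can keep disks busy after a transfer completes.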

 

Those will give me an idea of what is going on, and maybe there is something we can tweak.

 

If you have a case # open, share it and we can try to help out on that side too.

swordfish

All the SnapMirror transfers completed successfully, but disk utilization is still at 100%. I'm not sure what post-processing is running to cause this. I have case 2008741196 open.

paul_stejskal

Please still get that output. Honestly, qos statistics workload resource disk show is probably the simplest of those commands to collect.

swordfish

 

xxxxxx::*> qos statistics workload resource disk show -node xxxxx-01
Workload            ID   Disk Number of HDD Disks   Disk Number of SSD Disks
--------------- ------ ------ ------------------- ------ -------------------
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0
-total-              -     0%                 120     0%                   0

 

 


 

paul_stejskal

Wow that wasn't helpful. 😞

 

Try wafltop. The sample command would be wafltop show -v io -i 10 -n 10 from the node shell

::> node run -node 1

> priv set advanced; wafltop show -v io -i 10 -n 10

swordfish

Here we go:

 

I/O utilization
---------MB Read---------- ---------MB Write--------- --------IOs Read---------- --------IOs Write---------
Application MB Total Standard ExtCache Hybrid Standard ExtCache Hybrid Standard ExtCache Hybrid Standard ExtCache Hybrid
----------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- --------
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:dense:SHARING_MSG_PREFETCH: 173282 162843 10439 0 0 0 0 41687808 2672419 0 0 0 0
aggr_xxxxx_01_sata_03::file i/o:WAFL_WRITE: 1074 0 0 0 1074 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03::other:DENSE_PREFETCH: 1039 1014 25 0 0 0 0 5444 152 0 0 0 0
aggr_xxxxx_01_sata_03::file i/o:WAFL_READ: 107 107 0 0 0 0 0 630 34 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:dense:SHARING_MSG_LOAD_AUX: 79 77 2 0 0 0 0 19828 514 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:dense:SHARING_MSG_LOAD_REC: 27 27 0 0 0 0 0 7035 0 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:dense:SHARING_MSG_LOAD_MET: 9 8 1 0 0 0 0 1863 189 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_xxxxxxxxx:walloc:WAFL_BLOG_FREE: 8 4 4 0 0 0 0 115 79 0 0 0 0
mroot_cidmst20_01::other:WAFL_CPWF_WALLOC_PRE: 3 0 0 0 3 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03::other:WAFL_CPWF_WALLOC_PRE: 3 0 0 0 3 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_xxxxxxxxx:other:WAFL_VBC_CREATE: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03::other:WAFL_APPLY_MAPS: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:dense:SHARING_MSG_LOAD_DON: 1 1 0 0 0 0 0 371 1 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_nfs_xxxxxxxxx_180000_dest:other:WAFL_CPWF_WABE_VOL_P: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03::walloc:WAFL_POST_CLEAN_INOD: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_xxxxxxxxx:scanner:WAFL_SCAN_BLKS_USED: 1 0 0 0 1 0 0 6 0 0 0 0 0
aggr_xxxxx_01_sata_03:vol_prd_xxxxxxxxxx:other:WAFL_BLKFREE_VVOL_BM: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_prd_xxxxxxxxxx:scanner:WAFL_SCAN_BLKS_USED: 1 1 0 0 0 0 0 12 0 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_prd_xxxxxxxx:other:WAFL_CP_APPF_UPDATE_: 1 0 0 0 1 0 0 0 0 0 0 0 0
aggr_xxxxx_01_sata_03:clone_vol_prd_xxxxxxxx:scanner:WAFL_SCAN_BLKS_USED: 1 1 0 0 0 0 0 24 1 0 0 0 0

paul_stejskal
Dedupe is running.

sis status
sis stop /vol/<volname> (repeat for each volume with an active operation)
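If the node shell isn't handy, a rough cluster-shell equivalent would be (vserver and volume names are placeholders):

::> volume efficiency show -fields state,status,progress
::> volume efficiency stop -vserver <svm> -volume <volname>

Stopping it only pauses the post-process dedupe scan; the operation can be started again later with volume efficiency start once the aggregate has some headroom.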