ONTAP Discussions

Volumes missing from cluster shell but visible in node shell

Brian_McCullough

Just ran into this during a migration from FAS (EOS) to AFF. I had migrated all the volumes off with volume move and was just about to offline/delete a SAS aggregate when I noticed the aggregate volume count reported 3 volumes. Volume show lists no volumes for that aggregate in the cluster shell, but when I drop into the node shell, there they are. Running 9.8P21 on this lab cluster, but I will fix that when I eject the FAS nodes and uplift the AFF to GA. These three volumes are empty (so no worry about data loss) and were not on the list to be moved (not visible in the cluster shell), so they are not leftovers from the move. Thought I would post here to see if there is a community-known recovery path or node shell fix before moving on to node decom (wipe).


---- the outputs ----

c01::*> storage aggregate show -aggregate aggr_sas_c01_01 -fields volcount
aggregate       volcount
--------------- --------
aggr_sas_c01_01 3

c01::*>

c01::*> storage aggregate show-space -aggregate aggr_sas_c01_01

Aggregate : aggr_sas_c01_01

Feature                                Used  Used%
-------------------------------- ---------- ------
Volume Footprints                   69.33MB     0%
Aggregate Metadata                   4.19GB     0%
Snapshot Reserve                         0B     0%
Total Used                           4.25GB     0%

Total Physical Used                 681.8GB     1%

 

c01::*>

c01::*> volume show -is-constituent * -vserver * -volume * -aggregate aggr_sas_c01_01
There are no entries matching your query.

c01::*>

c01::*> system node run -node c01-01
Type 'exit' or 'Ctrl-D' to return to the CLI
c01-01>

c01-01> vol status
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, nvfail=on, space_slo=none
                                64-bit
share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab online raid_dp, flex create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none
share_312f0165_3be4_4299_a72b_9c9b62161b55 online raid_dp, flex create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none
share_f287bdf6_0907_45ee_9638_1fa1c7703904 online raid_dp, flex create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none
c01-01>

c01-01*> vol offline /vol/share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab/
vol offline: command not supported on cluster volume 'share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab'.
c01-01*>
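
(For context: the nodeshell rejects the offline because these are cluster volumes, which are normally unmounted, offlined, and deleted from the cluster shell rather than the nodeshell. A rough sketch of that usual cluster shell sequence is below, with <vserver> and <volume> as placeholders; it only works once the volume is actually visible in the cluster shell, which is exactly what is broken here.)

 volume unmount -vserver <vserver> -volume <volume>
 volume offline -vserver <vserver> -volume <volume>
 volume delete -vserver <vserver> -volume <volume>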


3 REPLIES

TMACMD

Try this (don’t act on it, just try it)

 set diag

 debug vreport show
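
(In short, debug vreport compares the cluster's volume location database (VLDB) records with what WAFL reports on each node and lists anything present in one but not the other. If a mismatch shows up, the matching fix command can reconcile it; a minimal sketch, using the object format that appears in the output further down:)

 set -privilege diagnostic
 debug vreport show
 debug vreport fix -type volume -object <vserver>:<volume>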

Brian_McCullough

Never used that one before. It did report a mismatch, and vreport fix removed the mismatch. The vol count still shows 3 and the volumes are online in the node shell (still unable to offline/delete them there; have not tried force yet).

 

c01::*> debug vreport show
aggregate Differences:

Name Reason Attributes
-------- ------- ---------------------------------------------------
aggr_sas_c01_01 Present both in VLDB and WAFL with differences
Node Name: c01-01
Aggregate UUID: 347f6bcf-e429-4969-923e-6a83f4e7c9f9
Aggregate State: online
Aggregate Raid Status: raid_dp
Aggregate HA Policy: sfo
Is Aggregate Root: false
Is Composite Aggregate: false
Differing Attribute: Volume Count (Try to fix volume differences using vreport for volume, and check again.)
WAFL Value: 3
VLDB Value: 0

volume Differences:

Name Reason Attributes
-------- ------- ---------------------------------------------------
os2-manila-svm:share_312f0165_3be4_4299_a72b_9c9b62161b55 Present in WAFL Only
Node Name: c01-01
Volume DSID:2538 MSID:2151338372
UUID: b52508a4-b892-11ef-a737-00a0985ee5ac
Aggregate Name: aggr_sas_dcw_uswl_c01_01
Aggregate UUID: 347f6bcf-e429-4969-923e-6a83f4e7c9f9
Vserver UUID: 571781ef-da94-11eb-8494-00a0985ee440
AccessType: READ_WRITE
StorageType: REGULAR
Constituent Role: none

os2-manila-svm:share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab Present in WAFL Only
Node Name: c01-01
Volume DSID:2537 MSID:2151338371
UUID: 2e33a797-b892-11ef-a737-00a0985ee5ac
Aggregate Name: aggr_sas_dcw_uswl_c01_01
Aggregate UUID: 347f6bcf-e429-4969-923e-6a83f4e7c9f9
Vserver UUID: 571781ef-da94-11eb-8494-00a0985ee440
AccessType: READ_WRITE
StorageType: REGULAR
Constituent Role: none

os2-manila-svm:share_f287bdf6_0907_45ee_9638_1fa1c7703904 Present in WAFL Only
Node Name: c01-01
Volume DSID:2539 MSID:2151338373
UUID: 6e014c6b-b893-11ef-a737-00a0985ee5ac
Aggregate Name: aggr_sas_dcw_uswl_c01_01
Aggregate UUID: 347f6bcf-e429-4969-923e-6a83f4e7c9f9
Vserver UUID: 571781ef-da94-11eb-8494-00a0985ee440
AccessType: READ_WRITE
StorageType: REGULAR
Constituent Role: none

4 entries were displayed.

c01::*>

c01::*> debug vreport fix -type volume -object os2-manila-svm:share_312f0165_3be4_4299_a72b_9c9b62161b55

c01::*> debug vreport fix -type volume -object os2-manila-svm:share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab

c01::*> debug vreport fix -type volume -object os2-manila-svm:share_f287bdf6_0907_45ee_9638_1fa1c7703904

c01::*> debug vreport show
This table is currently empty.

Info: WAFL and VLDB volume/aggregate records are consistent.

c01::*>

c01::*> storage aggregate show -aggregate aggr_sas_c01_01 -fields volcount
aggregate       volcount
--------------- --------
aggr_sas_c01_01 3

c01::*>



The one thing I had not checked was whether the volumes were now visible in the cluster shell, and they were. Thanks for the tip and the path to a solution. Now to migrate these last volumes (a sketch of the remaining cleanup follows the output below).

c01::> volume show -is-constituent * -vserver * -aggregate aggr_sas_c01_01
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
os2-manila-svm
          share_312f0165_3be4_4299_a72b_9c9b62161b55
                       aggr_sas_c01_01
                                    online     RW          2GB     1.90GB    0%
os2-manila-svm
          share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab
                       aggr_sas_c01_01
                                    online     RW          2GB     1.90GB    0%
os2-manila-svm
          share_f287bdf6_0907_45ee_9638_1fa1c7703904
                       aggr_sas_c01_01
                                    online     RW          1GB    972.4MB    0%
3 entries were displayed.

c01::>
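
(To close the loop on "migrate these last volumes": with the volumes visible in the cluster shell again, the remaining cleanup follows the usual path. A rough sketch, where aff_aggr_01 is a placeholder for the destination AFF aggregate and the offline/delete steps assume the shares really are empty and unused, as noted above:)

 volume move start -vserver os2-manila-svm -volume share_312f0165_3be4_4299_a72b_9c9b62161b55 -destination-aggregate aff_aggr_01

or, since the volumes are empty, drop them instead:

 volume unmount -vserver os2-manila-svm -volume share_312f0165_3be4_4299_a72b_9c9b62161b55
 volume offline -vserver os2-manila-svm -volume share_312f0165_3be4_4299_a72b_9c9b62161b55
 volume delete -vserver os2-manila-svm -volume share_312f0165_3be4_4299_a72b_9c9b62161b55

and once volcount reaches 0, the original plan of offlining and deleting the SAS aggregate:

 storage aggregate offline -aggregate aggr_sas_c01_01
 storage aggregate delete -aggregate aggr_sas_c01_01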

 
