Looking at a cluster running clustered Data ONTAP 8.2.1P1, I've noticed that the volume names reported by "qtree status" in the nodeshell (7-Mode shell) sometimes don't match the volume names reported in the cDOT CLI (or anywhere else, for that matter).
For example (names changed to protect the guilty):
cluster01::> node run -node local qtree status testvol0(1)
Volume       Tree              Style  Oplocks   Status
--------     --------          -----  --------  ---------
testvol0(1)                    unix   enabled   normal
testvol0(1)  database_undo001  unix   enabled   normal
cluster01::> node run -node local vol status testvol0(1)
         Volume State      Status            Options
    testvol0(1) online     raid_dp, flex     create_ucode=on, convert_ucode=on,
                           cluster           schedsnapname=create_time, guarantee=none,
                           sis               fractional_reserve=0
                           64-bit
                Containing aggregate: 'cluster01c01_aggr2'
cluster01::> volume show -volume testvol0(1)
There are no entries matching your query.
cluster01::> volume show -volume testvol0
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster01 testvol0     cluster01_aggr2
                                    online     RW        320GB    320.0GB    0%
cluster01::> node run -node local qtree status testvol02
Volume       Tree              Style  Oplocks   Status
--------     --------          -----  --------  ---------
testvol02                      unix   enabled   normal
testvol02    qtreename01       unix   enabled   normal
cluster01::> node run -node local vol status testvol02
         Volume State      Status            Options
      testvol02 online     raid_dp, flex     nvfail=on, create_ucode=on, convert_ucode=on,
                           cluster           schedsnapname=create_time, guarantee=none,
                           sis               fractional_reserve=0
                           64-bit
                           hybrid
                Containing aggregate: 'cluster01c01_aggr1'
cluster01::> volume show -volume testvol02
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster01svm
          testvol02    cluster01c01_aggr1
                                    online     RW      42.50GB    42.50GB    0%
Why does this variation exist? Why isn't there a 1:1 mapping between volume names across all parts of cDOT?
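For scripting around it in the meantime, here is a minimal sketch of the workaround I'm using. It assumes (based only on the behaviour above) that the nodeshell appends a parenthesized numeric suffix like "(1)" to disambiguate, and that stripping it recovers the cluster-shell name — I'd welcome confirmation that this assumption is safe:

```shell
#!/bin/sh
# Hypothetical helper: strip a trailing "(N)" ordinal from a nodeshell
# volume name before querying the cluster shell with it.
# Assumption: "testvol0(1)" in the nodeshell corresponds to "testvol0"
# in the cluster shell, as the transcripts above suggest.
node_name='testvol0(1)'
cluster_name=$(printf '%s' "$node_name" | sed 's/([0-9]*)$//')
echo "$cluster_name"
# Then e.g.: volume show -volume "$cluster_name"
```

A name without the suffix (e.g. "testvol02") passes through unchanged, so the same filter can be applied to every nodeshell name.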