Just ran into this during a migration from FAS (EOS) to AFF. I had migrated all the volumes off with volume move and was just about to offline/delete a SAS aggregate when I noticed the aggregate volume count reported 3 volumes. volume show lists no volumes for that aggregate in the cluster shell, but when I drop into the node shell, there they are. Running 9.8P21 on this lab cluster, but I'll fix that when I eject the FAS nodes and uplift the AFF to GA. These three volumes are empty (so no worry about data loss) and were not on the list to be moved (they aren't visible in the cluster shell), so they didn't come from that. Thought I would post here to see if there is a community-known recovery path or node shell fix before moving on to node decommission (wipe).

---- the outputs ----

c01::*> storage aggregate show -aggregate aggr_sas_c01_01 -fields volcount
aggregate        volcount
---------------- --------
aggr_sas_c01_01  3

c01::*> storage aggregate show-space -aggregate aggr_sas_c01_01

      Aggregate : aggr_sas_c01_01

      Feature                          Used        Used%
      -------------------------------- ----------- ------
      Volume Footprints                69.33MB         0%
      Aggregate Metadata               4.19GB          0%
      Snapshot Reserve                 0B              0%

      Total Used                       4.25GB          0%
      Total Physical Used              681.8GB         1%

c01::*> volume show -is-constituent * -vserver * -volume * -aggregate aggr_sas_c01_01
There are no entries matching your query.

c01::*> system node run -node c01-01
Type 'exit' or 'Ctrl-D' to return to the CLI

c01-01> vol status
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, nvfail=on, space_slo=none
                                64-bit
share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab
                online          raid_dp, flex         create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none
share_312f0165_3be4_4299_a72b_9c9b62161b55
                online          raid_dp, flex         create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none
share_f287bdf6_0907_45ee_9638_1fa1c7703904
                online          raid_dp, flex         create_ucode=on, convert_ucode=on,
                                cluster               schedsnapname=create_time, guarantee=none,
                                64-bit                fractional_reserve=0, space_slo=none

c01-01*> vol offline /vol/share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab/
vol offline: command not supported on cluster volume 'share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab'.
c01-01*>
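In case it helps anyone comparing notes, here's a minimal sketch of the cross-checks I'd run next before any wipe, at diagnostic privilege. The second command is expected to come back empty, matching the cluster shell view above; the volume lost-found command set is a diag-level feature whose availability and syntax vary by release, so treat that line as an assumption and verify it against your version's man pages:

c01::*> set -privilege diagnostic
c01::*> volume show -aggregate aggr_sas_c01_01 -fields dsid, msid
There are no entries matching your query.
c01::*> volume lost-found show -node c01-01

If the share_* volumes show up only in lost-found (i.e. WAFL knows about them but the VLDB doesn't), the usual advice is to involve support before deleting anything rather than guessing at the cleanup syntax.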
Hi all, I’d like to confirm whether a FAS2820 with no internal drives installed, and only one external DS212 shelf fully populated with SSDs, can enable ADPv2 (root-data-data). I’d also like to clarify the underlying principle and whether there is any official documentation to support this.🤔
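For reference, a minimal sketch of how one might verify after initialization whether ADP actually partitioned the external drives (generic cluster name assumed; in recent ONTAP releases, partitioned drives report a container type of "shared"):

cluster1::> storage disk show -fields container-type, owner
cluster1::> storage aggregate show-status

If container-type comes back "shared", ADP partitioning is in effect on that system. Re-partitioning is normally done from the boot menu (option 9, Configure Advanced Drive Partitioning) during initialization. This only confirms behavior on a given box, though; whether the platform/shelf combination is officially supported is exactly what Hardware Universe should answer.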
Hello. I have an ongoing feature request with Veeam regarding SnapLock and the handling of SnapMirror labels. Veeam says that the SnapLock functionality will work as soon as NetApp has completed its new USAPI plugin. Is there any approximate timeline for when such a plugin might be ready?
I don't know if I should be asking this here or on LogicMonitor. I have a pair of C800s in a MetroCluster. I've added each array in LogicMonitor, but I'm getting alerts for some volumes, all ending in -mc, which I believe are the standby copies of volumes on the second array; because they're not in use, they report a status of 4, which is unknown. Any idea whether there is a way I can get LogicMonitor to understand them as copies and ignore them, and, if they ever swap roles (i.e. the standby becomes the active side and vice versa), to know it can also ignore the new standby's unknown status? Or is there a way I should be adding the arrays in LogicMonitor as a cluster, so it's aware that each volume has an active and a standby side?
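For what it's worth, a hedged sketch of the kind of instance-name filter one might try first (the exact filter mechanism and attribute names depend on which NetApp DataSource/module is in use, so treat all of this as an assumption, not LogicMonitor's documented behavior):

Active Discovery filter on the volume DataSource:
  attribute: instance name
  operation: regex (exclude matches)
  value:     .*-mc$

Note this only suppresses the copies by name; it wouldn't follow a switchover where the -mc side becomes active, which is the second half of the question.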
Hello, and sorry for my English. On an SVM, I have a native FPolicy that blocks dangerous extensions; it was a NetApp recommendation to apply this, so a lot of extensions are blocked. In this list, the extension .son is blocked, and I want to allow it. How can I stop blocking this specific extension? See the attached screenshot. Thanks a lot.
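A minimal sketch of the kind of change involved, assuming the policy is named blocker on SVM svm1 (both names hypothetical) and that the blocked list lives in the scope's file-extensions-to-include field; extensions are given without the leading dot, and the policy generally has to be disabled and re-enabled for scope changes to take effect, so verify against your release:

cluster1::> vserver fpolicy policy scope show -vserver svm1 -policy-name blocker -fields file-extensions-to-include
cluster1::> vserver fpolicy disable -vserver svm1 -policy-name blocker
cluster1::> vserver fpolicy policy scope modify -vserver svm1 -policy-name blocker -file-extensions-to-include <existing list minus son>
cluster1::> vserver fpolicy enable -vserver svm1 -policy-name blocker -sequence-number 1

The modify replaces the whole list, so paste back every extension from the show output except son.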