I’m facing an issue while deploying an ONTAP Select cluster (2-node HA) using ONTAP Select Deploy 9.18.1 in a VMware environment. During cluster creation, the Storage Pool dropdown is empty, even though I have available datastores that should meet the requirements.

Environment details:
- ONTAP Select Deploy: 9.18.1
- Deployment type: 2-node HA cluster
- License: Evaluation Mode
- VMware ESXi 8.0.2, build 23825572
- Storage configuration: tried Software RAID
- vCenter-managed ESXi hosts

What I verified:
- I created new shared datastores (DS_SHARED_11 and DS_SHARED_12)
- Each datastore: 2 TB capacity, 2 TB free space
- Datastores are visible and accessible in vCenter
- Both hosts are part of the same datacenter and see the same storage
- Minimum capacity requirement is met (UI shows ~1.9 TB required)

Issue:
Even with these datastores available, the “Select Storage Pool” dropdown does not list any options.

Additional notes:
- I previously had smaller datastores (~1 TB free) that were not listed, which made sense given the requirements
- After adding larger datastores (2 TB free), the issue persists
- Software RAID is not an option (no qualified disks)
- Hosts are currently added via vCenter (not standalone)

I’ve attached screenshots showing:
- The available datastores in vCenter
- The ONTAP Select Deploy UI where no storage pools appear

Question:
Is there any additional requirement or constraint (permissions, datastore type, host configuration, vCenter integration, etc.) that could prevent ONTAP Select Deploy from discovering these datastores?
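To rule out a pure capacity problem, here is a minimal sketch of the kind of free-space filter Deploy presumably applies when populating the dropdown. This is not Deploy's actual code; the datastore names and the ~1.9 TB threshold come from the post above, and the helper function is hypothetical:

```python
# Hypothetical sketch of a storage-pool capacity filter.
# NOT ONTAP Select Deploy's real logic -- just a sanity check of the numbers.

TB = 1024 ** 4  # vCenter/Deploy generally report binary TB (TiB)

def eligible_pools(datastores, required_bytes):
    """Return names of datastores whose free space meets the requirement."""
    return [name for name, free in datastores.items() if free >= required_bytes]

datastores = {
    "DS_SHARED_11": 2 * TB,   # 2 TB free, per the post
    "DS_SHARED_12": 2 * TB,
    "DS_OLD_SMALL": 1 * TB,   # the earlier ~1 TB datastore that was (correctly) hidden
}

required = int(1.9 * TB)      # the UI shows ~1.9 TB required

print(eligible_pools(datastores, required))  # -> ['DS_SHARED_11', 'DS_SHARED_12']
```

Both new datastores pass a capacity-only filter, which suggests the empty dropdown is coming from a different check entirely, e.g. host-to-datastore accessibility as seen by Deploy, the vCenter permissions granted to the Deploy service account, or the datastore type, which is exactly the question being asked.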
Hi everyone, I'm reviving an old FAS2040 for a homelab. I'm stuck on 8.0.2 and need to upgrade to 8.1.x before upgrading to 8.2.5. Please DM me a link if you can help. Thanks!
Just ran into this during a migration from FAS (EOS) to AFF. I had migrated all the volumes off with volume move and was about to offline/delete a SAS aggregate when I noticed the aggregate's volume count reported 3 volumes. volume show lists no volumes for that aggregate in the cluster shell, but when I drop into the node shell, there they are. Running 9.8P21 on this lab cluster; I'll fix that when I eject the FAS nodes and uplift the AFF to GA. These three volumes are empty (so no worry about data loss) and were not on the list to be moved (they aren't visible in the cluster shell), so they didn't come from that. Thought I would post here to see if there is a community-known recovery path or node-shell fix before moving on to node decom (wipe).

---- the outputs ----

c01::*> storage aggregate show -aggregate aggr_sas_c01_01 -fields volcount
aggregate        volcount
---------------- --------
aggr_sas_c01_01  3

c01::*> storage aggregate show-space -aggregate aggr_sas_c01_01

      Aggregate : aggr_sas_c01_01

      Feature                          Used       Used%
      -------------------------------- ---------- ------
      Volume Footprints                69.33MB    0%
      Aggregate Metadata               4.19GB     0%
      Snapshot Reserve                 0B         0%

      Total Used                       4.25GB     0%
      Total Physical Used              681.8GB    1%

c01::*> volume show -is-constituent * -vserver * -volume * -aggregate aggr_sas_c01_01
There are no entries matching your query.
c01::*> system node run -node c01-01
Type 'exit' or 'Ctrl-D' to return to the CLI

c01-01> vol status
         Volume State      Status            Options
           vol0 online     raid_dp, flex     root, nvfail=on, space_slo=none
                           64-bit
share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab
                online     raid_dp, flex     create_ucode=on, convert_ucode=on,
                           cluster           schedsnapname=create_time, guarantee=none,
                           64-bit            fractional_reserve=0, space_slo=none
share_312f0165_3be4_4299_a72b_9c9b62161b55
                online     raid_dp, flex     create_ucode=on, convert_ucode=on,
                           cluster           schedsnapname=create_time, guarantee=none,
                           64-bit            fractional_reserve=0, space_slo=none
share_f287bdf6_0907_45ee_9638_1fa1c7703904
                online     raid_dp, flex     create_ucode=on, convert_ucode=on,
                           cluster           schedsnapname=create_time, guarantee=none,
                           64-bit            fractional_reserve=0, space_slo=none

c01-01*> vol offline /vol/share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab/
vol offline: command not supported on cluster volume 'share_e5563b1d_b5a4_4fc3_a414_0b7b5254ddab'.
c01-01*>
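For what it's worth, the usual tool for this kind of cluster-shell/nodeshell discrepancy is the diag-level vreport utility, which compares the VLDB's view of volumes with the on-disk (WAFL) view and can reconcile the two. A hedged sketch from memory; the exact flags and object syntax vary by release, so verify against the output of the show step (ideally with NetApp Support) before running any fix:

```
c01::*> set diag
c01::*> debug vreport show
        (lists VLDB <-> WAFL inconsistencies, e.g. volumes that exist
         on disk but are missing from the VLDB, as appears to be the
         case with the three share_* volumes here)
c01::*> debug vreport fix -type volume -object <object-name-from-show-output>
```

Since the volumes are confirmed empty, the fix direction that removes the stale on-disk entries is the likely outcome, but letting vreport identify the inconsistency first is safer than attempting nodeshell vol commands, which (as the output above shows) refuse to operate on cluster volumes.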
Hi all, I’d like to confirm whether a FAS2820 with no internal drives installed, and only one external DS212 shelf fully populated with SSDs, can enable ADPv2 (root-data-data). I’d also like to clarify the underlying principle and whether there is any official documentation to support this.🤔
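Not an authoritative answer on the FAS2820/DS212 combination itself, but once the system is initialized, whether ADPv2 was actually applied can be checked from the cluster shell. A hedged sketch (the commands are standard ONTAP, but the aggregate name is illustrative and the partition details should be checked against the docs for your release):

```
::> storage disk show -fields container-type,owner
    (drives partitioned by ADP report container-type "shared";
     unpartitioned drives show e.g. "aggregate" or "spare")
::> storage aggregate show-status -aggregate aggr0_node1
    (with root-data-data, the root aggregate is built on the small
     root partitions rather than on whole disks)
```

If the drives come up as whole disks instead of shared, the platform/shelf combination did not qualify for ADP during initialization.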
Hello. I have an ongoing feature request with Veeam regarding SnapLock and the handling of SnapMirror labels. Veeam says that the SnapLock functionality will work as soon as NetApp has completed its new USAPI plugin. Is there any approximate timeline for when such a plugin might be ready?