I am trying to delete a problematic volume so that I can recreate it to host a LUN for a new datastore in my VMware vCenter 7.0.3 instance. I created the volume as an application volume, and I have discovered that because of this I cannot delete snapshots for the volume, nor the volume itself. Please advise.

Command to list snapshots associated with the application volume:

PKIFAS::*> volume snapshot show -vserver PKIFAS_02_SVM -volume SSD_PKIFAS_02_vsidata_ds_1 -fields dsid,owners,size
vserver       volume                     snapshot                  dsid          owners size
------------- -------------------------- ------------------------- ------------- ------ -------
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-weekly.2025-02-04_0430 2559800509844 -      553.9MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-weekly.2025-02-11_0430 3315714753940 -      536.8MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-11_0625  3328599655828 -      99.66MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-12_0625  3435973838228 -      59.77MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-13_0625  3543348020628 -      1.12TB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-14_0625  3650722203028 -      471.2GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-15_0625  3758096385428 -      1.51TB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-16_0625  3865470567828 -      126.5GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-16_2222 3934190044564 -      3.32GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-16_2322 3938485011860 -      111.9MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0022 3942779979156 -      744KB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0122 3947074946452 -      740KB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0222 3951369913748 -      740KB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0322 3955664881044 -      756KB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0422 3959959848340 -      336.7MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0522 3964254815636 -      18.66GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0622 3968549782932 -      16.56GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-daily.2025-02-17_0625  3972844750228 -      4.08GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0722 3977139717524 -      2.44GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0822 3981434684820 -      2.73GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_0922 3985729652116 -      92.75MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1022 3990024619412 -      5.02GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1122 3994319586708 -      2.92GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1222 3998614554004 -      2.86GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1322 4002909521300 -      3.23GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1422 4007204488596 -      3.13GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1522 4011499455892 -      6.43GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1622 4015794423188 -      45.57MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1722 4020089390484 -      3.66GB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1822 4024384357780 -      298.6MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_1922 4028679325076 -      48.66MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_2022 4032974292372 -      43.73MB
PKIFAS_02_SVM SSD_PKIFAS_02_vsidata_ds_1 pg-hourly.2025-02-17_2122 4037269259668 -      520KB

Command to delete all snapshots for the application volume:

PKIFAS::*> volume snapshot delete -vserver PKIFAS_02_SVM -volume SSD_PKIFAS_02_vsidata_ds_1 *

Error: command failed on vserver "PKIFAS_02_SVM" volume "SSD_PKIFAS_02_vsidata_ds_1" snapshot "pg-weekly.2025-02-04_0430": Snapshot copy "pg-weekly.2025-02-04_0430" cannot be deleted because it is part of protection group "SSD_PKIFAS_02" in application "SSD_PKIFAS_02". Use the "application snapshot delete" command to delete the Snapshot copy.

Warning: Do you want to continue running this command? {y|n}:

According to https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Application_snapshots_can_not_be_deleted, the "application snapshot delete" command has been deprecated and was last supported in ONTAP 9.5; to resolve the issue, you must open a NetApp technical support case. We do not have an active support contract with NetApp, and we are on ONTAP 9.8P21.

Command to delete the volume:

PKIFAS::*> volume delete -vserver PKIFAS_02_SVM -volume SSD_PKIFAS_02_vsidata_ds_1 -force true -foreground true

Error: command failed: Volume "SSD_PKIFAS_02_vsidata_ds_1" in Vserver "PKIFAS_02_SVM" is part of application "SSD_PKIFAS_02". You must remove the volume from application "SSD_PKIFAS_02" using "application volume remove" (privilege: advanced) command before you can delete this volume.
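For what it's worth, the "volume delete" error itself names the way out: detach the volume from the application at advanced privilege, after which the application should no longer block deletion. A minimal sketch of that sequence follows; the exact parameters of "application volume remove" are assumed from the error text, so verify them with "application volume remove ?" on 9.8P21 before running:

PKIFAS::> set -privilege advanced
PKIFAS::*> application volume remove -vserver PKIFAS_02_SVM -application SSD_PKIFAS_02 -volume SSD_PKIFAS_02_vsidata_ds_1
PKIFAS::*> volume delete -vserver PKIFAS_02_SVM -volume SSD_PKIFAS_02_vsidata_ds_1 -force true -foreground true

Since the goal is to recreate the volume anyway, deleting the whole volume disposes of the protection-group snapshots along with it, sidestepping the deprecated "application snapshot delete" path entirely.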
We have configured and installed SnapCenter v6.0 on Windows Server 2022. When configuring the SMTP server and testing emails, we got the error below:

An error occurred while attempting to establish an SSL or TLS connection. The host name (172.19.64.12) did not match any of the names given in the server's SSL certificate:
• *.ngahr.com
• ngahr.com
The remote certificate was rejected by the provided RemoteCertificateValidationCallback.

Has anyone had this issue and gotten it fixed? Kindly help out.
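The error is a plain name mismatch: SnapCenter is contacting the SMTP server by IP (172.19.64.12), but the certificate the server presents is only valid for *.ngahr.com and ngahr.com, so .NET's certificate validation rejects it. The usual fix is to enter the SMTP server in SnapCenter by an FQDN the wildcard covers (for example smtp.ngahr.com, a hypothetical name here; use whatever DNS record actually points at that host) instead of the IP. To confirm which names the certificate really carries, you can inspect it from any host with openssl; the STARTTLS-on-587 assumption below may need adjusting to port 25 or 465 for your server:

openssl s_client -connect 172.19.64.12:587 -starttls smtp -showcerts </dev/null | openssl x509 -noout -subject -ext subjectAltName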
We have been using a FAS2750 for years; since last week it has been running ONTAP 9.15.1P7 (the upgrade was necessary because we have acquired a C60 as a successor system and want to transfer the data there soon). As of today, the AIQUM (9.16) that we are using can no longer connect to the FAS2750: under "Storage Management" / "Cluster Setup", the "Operation State" only shows "failed". Yesterday, a new certificate was issued by the AIQUM and transferred to the FAS2750. Today we also had to issue a new certificate for the AIQUM, which was likewise transferred to the FAS2750. Could there be a connection? We have already deleted the two newly created certificates on the 2750 and recreated the certificate on the AIQUM using "regenerate", without success.
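Given the timing, the certificate churn is the prime suspect: AIQUM talks to the cluster over HTTPS, and for certificate-based setups the cluster must still hold an entry matching the certificate AIQUM now presents, so a regenerated AIQUM certificate with a stale copy left on the cluster breaks the trust. A sketch of how to take inventory on the 2750 (the prompt name is illustrative, and whether the AIQUM certificate lands as type client-ca is an assumption worth verifying):

FAS2750::> security certificate show -type server
FAS2750::> security certificate show -type client-ca

If a leftover AIQUM certificate shows up there, "security certificate delete" can remove it; after that, removing the cluster from AIQUM's Cluster Setup and re-adding it is often the quickest way to force a clean trust exchange.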
We are thinking of transitioning from VMware to Hyper-V in the midst of Broadcom's greedy business model. I was searching for something similar to SnapCenter that I could use for Hyper-V, but it seems that SnapManager will be EOL this year, and SnapDrive, which SnapManager requires, is already EOL. Will the NetApp team start working toward a SnapCenter-like solution for Hyper-V, or even resume production and support of SnapManager/SnapDrive? Any feedback or information is much appreciated. Thanks.
Hi All, I'm doing a piece of work where we have installed two new controllers into our production six-node cluster, and I'm moving all the storage and anything the first four nodes host over to the new nodes. The majority of our virtual machines and storage are presented via iSCSI. It has come to my attention that I would need to create new LIFs on the new nodes, which means I would then have to configure each virtual machine with the new iSCSI entry IP, which would be a pain. The easiest and quickest solution would be for us to move the iSCSI LIF that's on node 1 to node 5/6. I'm not sure if this is possible, and if it is, would it cause any downtime or disruption to any services? Thanks.
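On ONTAP, iSCSI (and FC) LIFs cannot be live-migrated the way NAS LIFs can; the supported pattern is to take the LIF administratively down, re-home it to the new node, and bring it back up, letting host MPIO carry I/O over the remaining paths in the meantime. Because the LIF keeps its IP address, the initiators need no reconfiguration. A sketch with assumed SVM, LIF, and port names:

PROD::> network interface modify -vserver svm_iscsi -lif iscsi_lif_n1 -status-admin down
PROD::> network interface modify -vserver svm_iscsi -lif iscsi_lif_n1 -home-node node5 -home-port e0e
PROD::> network interface modify -vserver svm_iscsi -lif iscsi_lif_n1 -status-admin up

Before taking the LIF down, confirm every host still has active paths through other LIFs, since I/O on that path pauses while it is down. One related caution: with Selective LUN Map, LUNs whose volumes move to the new HA pair may also need node 5/6 added as reporting nodes ("lun mapping add-reporting-nodes") so that paths through the new nodes are advertised at all.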