The cloud targets are not being returned from the NetApp REST API, as shown below from MetroCluster!

GET https://XX.XX.XX.XX/api/cloud/targets

{
  "records": [],
  "num_records": 0,
  "_links": {
    "self": {
      "href": "/api/cloud/targets"
    }
  }
}
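For reference, the same query can be issued with explicit credentials and all fields requested. This is only a sketch, assuming PowerShell 7+ and a hypothetical cluster-admin account; the address placeholder is kept from above:

# Hedged sketch: $cred is a hypothetical cluster-admin credential;
# -SkipCertificateCheck only bypasses TLS validation for a quick test.
$cred = Get-Credential
Invoke-RestMethod -Method Get -Uri "https://XX.XX.XX.XX/api/cloud/targets?fields=*" -Credential $cred -Authentication Basic -SkipCertificateCheck

Note that the empty response above is still an HTTP success; it simply means the endpoint found no cloud target records to return for that account and cluster.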
Hello,
We upgraded in August to ONTAP version 9.4P1.
Today I was checking some configuration on our cluster and noticed that two cluster LIFs are not on their home port:
network interface show -role cluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
x-01_clus1 up/up 169.254.4.144/16 x-01 e0b false
x-01_clus2 up/up 169.254.130.246/16 x-01 e0c true
x-01_clus3 up/up 169.254.131.229/16 x-01 e0b true
x-01_clus4 up/up 169.254.168.120/16 x-01 e0d true
x-02_clus1 up/up 169.254.89.228/16 x-02 e0b false
x-02_clus2 up/up 169.254.106.140/16 x-02 e0c true
x-02_clus3 up/up 169.254.26.197/16 x-02 e0b true
x-02_clus4 up/up 169.254.4.232/16 x-02 e0d true
8 entries were displayed.
As you can see here, two LIFs are not at home. The home port would be e0a.
network port show -role cluster
Node: x-01
Ignore
Speed(Mbps) Health Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a Cluster Cluster up 9000 auto/10000 healthy false
e0b Cluster Cluster up 9000 auto/10000 healthy false
e0c Cluster Cluster up 9000 auto/10000 healthy false
e0d Cluster Cluster up 9000 auto/10000 healthy false
Node: x-02
Ignore
Speed(Mbps) Health Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a Cluster Cluster up 9000 auto/10000 healthy false
e0b Cluster Cluster up 9000 auto/10000 healthy false
e0c Cluster Cluster up 9000 auto/10000 healthy false
e0d Cluster Cluster up 9000 auto/10000 healthy false
8 entries were displayed.
network interface failover-groups show -vserver Cluster -failover-group Cluster
Vserver Name: Cluster
Failover Group Name: Cluster
Failover Targets: x-02:e0a, x-02:e0b, x-02:e0c, x-02:e0d,
x-01:e0a, x-01:e0b, x-01:e0c, x-01:e0d
Broadcast Domain: Cluster
network interface show -vserver Cluster -lif x-01_clus1 -instance
Vserver Name: Cluster
Logical Interface Name: x-01_clus1
Role: cluster
Data Protocol: none
Network Address: 169.254.4.144
Netmask: 255.255.0.0
Bits in the Netmask: 16
Subnet Name: -
Home Node: x-01
Home Port: e0a
Current Node: x-01
Current Port: e0b
Operational Status: up
Extended Status: -
Numeric ID: 1024
Is Home: false
Administrative Status: up
Failover Policy: local-only
Firewall Policy:
Auto Revert: true
Sticky Flag: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
(DEPRECATED)-Load Balancing Migrate Allowed: false
Load Balanced Weight: load
Failover Group Name: Cluster
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Cluster
Is Dynamic DNS Update Enabled?: -
When I try to revert the cluster LIF on the correct node (I know that you can do this only on the local node), I get the following error:
network interface revert -vserver Cluster -lif x-01_clus1
->
Error: command failed: LIF "x-01_clus1" failed to migrate: failed to move cluster/node-mgmt LIF.
Also, migrating to that port fails with the same error.
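For reference, some checks I still want to run to verify the cluster network and the switch neighbors before retrying the revert (sketch only, not run yet; cluster ping-cluster may require advanced privilege on some releases):

cluster ping-cluster -node x-01
network device-discovery show -node x-01
network port show -node x-01 -port e0a -fields health-status,link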
On the cluster switch, I saw some errors on that interface:
show interface 0/1
Packets Received Without Error................. 5389711331
Packets Received With Error.................... 1139
Broadcast Packets Received..................... 3128
Receive Packets Discarded...................... 0
Packets Transmitted Without Errors............. 6369143239
Transmit Packets Discarded..................... 0
Transmit Packet Errors......................... 0
Collision Frames............................... 0
Number of link down events..................... 6
I shut down that port and cleared the counters, but even then I wasn't able to revert the LIF...
I then shut down the home port e0a and enabled it again:
network port modify -node x-01 -port e0a -up-admin false
network port modify -node x-01 -port e0a -up-admin true
-> didn't help! Same issue.
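To see whether e0a itself reports errors on the node side, I also want to look at the nodeshell counters (sketch, not run yet; ifstat -z e0a would zero the counters first):

system node run -node x-01 ifstat e0a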
Then I tried to migrate with the force flag:
net int migrate -vserver Cluster -lif x-01_clus1 -destination-node x-01 -destination-port e0a -force
->
Warning: Migrating LIF "x-01_clus1" to node "x-01" using the "force" parameter might cause this LIF to be configured on multiple nodes in the cluster. Use the "network interface show -vserver
Cluster -lif x-01_clus1" command to verify the LIF's operational status is not "up" before using this command.
Do you want to continue? {y|n}: y
Error: command failed: LIF "x-01_clus1" failed to migrate: failed to move cluster/node-mgmt LIF.
Same problem, not able to revert.
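Next I plan to check the EMS log around the failed attempts and then retry the revert for all cluster LIFs once the ports look clean (sketch only):

event log show -node x-01 -severity ERROR
network interface revert -vserver Cluster -lif *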
Does any of you know this problem and have a solution for me?
We haven't changed the NetApp configuration recently; the only change was the upgrade from 9.3 to 9.4.
Best regards
Florian
I have a PowerShell script that has been working well for the past 4 years. Recently I migrated the script to a new environment and it started giving errors.

What the script does: it logs in to the source filer A, changes the snapshots' SnapMirror labels to "daily", and then SnapVaults these snapshots to the destination filer B.

When I run it, the script is able to log in to both filer A and filer B and replicate the snapshots over, but it fails on getting the snapshots and gives me 400 errors as below:

PS C:\Users\adm_gwen> powershell.exe E:\Scripts\Veeam-NetAppSnaplock\Set-SM-Label-Update_SV-With-File_Input.ps1 -PrimaryCluster "PUG3AXRSTR1.pue1m.ad" -PrimarySVM "PUG3AXRSTR1-vs01" -ClusterUser veeam -ClusterPass "E:\Scripts\Veeam-NetAppSnaplock\password.txt" -PassKey "E:\Scripts\Veeam-NetAppSnaplock\AES.key" -SecondaryCluster "pcs1bxdsbr.pcs1m.ad" -SecondarySVM "pcs1bxdsbr-uge-vs01" -VolumeListFile "E:\Scripts\Veeam-NetAppSnaplock\SiteA_pug3axrstr_oasys.txt"

[04.03.2025 15:01:22] Starting new log file
[04.03.2025 15:01:22] Trying to load NetApp Powershell module
[04.03.2025 15:01:22] Loaded NetApp Powershell module sucessfully
[04.03.2025 15:01:22] Trying to connect to SVM PUG3AXRSTR1-vs01 on cluster PUG3AXRSTR1.pue1m.ad
[04.03.2025 15:01:24] Connection established to PUG3AXRSTR1-vs01 on cluster PUG3AXRSTR1.pue1m.ad

Get-NcSnapshot : The remote server returned an error: (400) Bad Request.
At E:\Scripts\Veeam-NetAppSnaplock\Set-SM-Label-Update_SV-With-File_Input.ps1:239 char:3
+   Get-NcSnapshot -SnapName *Veeam* | Set-NcSnapshot -SnapmirrorLabel ...
+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-NcSnapshot], WebException
    + FullyQualifiedErrorId : System.Net.WebException,DataONTAP.C.PowerShell.SDK.Cmdlets.Snapshot.GetNcSnapshot

[04.03.2025 15:01:24] Trying to connect to SVM pcs1bxdsbr-uge-vs01 on cluster pcs1bxdsbr.pcs1m.ad
[04.03.2025 15:01:28] Connection established to pcs1bxdsbr-uge-vs01 on cluster pcs1bxdsbr.pcs1m.ad
[04.03.2025 15:01:28] File E:\Scripts\Veeam-NetAppSnaplock\SiteA_pug3axrstr_oasys.txt was found
[04.03.2025 15:01:29] Volume tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_even01_vol was found
[04.03.2025 15:01:29] SecondarySVM: pcs1bxdsbr-uge-vs01
[04.03.2025 15:01:29] SecondaryVolume: tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_even01_vol
VERBOSE: Updating SnapMirror with destination //pcs1bxdsbr-uge-vs01/tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_even01_vol.

NcController      : pcs1bxdsbr.pcs1m.ad
ResultOperationId : 76301282-f933-11ef-bcd2-d039eabd3a83
ErrorCode         :
ErrorMessage      :
JobId             :
JobVserver        :
Status            : succeeded
Uuid              :
Message           :

[04.03.2025 15:01:29] Volume tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_odd01_vol was found
[04.03.2025 15:01:29] SecondarySVM: pcs1bxdsbr-uge-vs01
[04.03.2025 15:01:29] SecondaryVolume: tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_odd01_vol
VERBOSE: Updating SnapMirror with destination //pcs1bxdsbr-uge-vs01/tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_bc_odd01_vol.

NcController      : pcs1bxdsbr.pcs1m.ad
ResultOperationId : 768f5208-f933-11ef-bcd2-d039eabd3a83
ErrorCode         :
ErrorMessage      :
JobId             :
JobVserver        :
Status            : succeeded
Uuid              :
Message           :

[04.03.2025 15:01:30] Volume tp_pug3axrstr1_vs01_pgu3arj_esx_ds_data_nbc_even01_vol was found

Any help is appreciated!
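For reference, a scoped variant of the failing call that I could test is below. It is only a sketch; the volume loop and the $PrimaryCluster / $ClusterCred variables are placeholders and not part of the original script. Limiting Get-NcSnapshot to one volume at a time keeps each request small and shows whether the 400 error is tied to a specific volume:

# Hedged sketch, assuming the NetApp PowerShell Toolkit is already loaded
# and $PrimaryCluster / $ClusterCred hold the cluster name and credential.
Connect-NcController -Name $PrimaryCluster -Credential $ClusterCred | Out-Null

foreach ($vol in (Get-NcVol)) {
    try {
        # Same pipeline as line 239 of the script, but scoped to a single volume
        Get-NcSnapshot -Volume $vol.Name -SnapName *Veeam* |
            Set-NcSnapshot -SnapmirrorLabel "daily"
    }
    catch {
        Write-Warning "Failed on volume $($vol.Name): $_"
    }
}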
Hi,
I created this discussion: https://community.netapp.com/t5/ONTAP-Discussions/Google-Cloud-NetApp-Volume-performance/m-p/458864#M44712
Another user accepted a reply as a solution, but that reply was not a solution to my issue. Does this mean my discussion is closed to other solutions? Why is the original poster not the only person who can mark a reply as a solution?
We have received a request from our colleagues who take care of the macOS systems: Bonjour is based on mDNS over UDP and finds the file server on our macOS devices. I can connect to the file server with my ID by double-clicking, but I can only access certain data pools, i.e. with correctly enforced permissions for my ID. I am now wondering where this service comes from, whether this is an oversight, or whether Bonjour is already being used for other purposes. Or does NetApp communicate with mDNS and UDP by default, so that it only looks like Bonjour in macOS? If Bonjour is already being used, we could test it as an alternative to SMB.
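One way to see exactly what is being advertised (sketch, run from any Mac on the same subnet; the quoted instance name is a placeholder for whatever shows up in the browse step):

dns-sd -B _smb._tcp local.                        # browse SMB servers announced via Bonjour/mDNS
dns-sd -L "<advertised name>" _smb._tcp local.    # resolve one entry to its hostname and port

If the resolved record points at the SVM's data LIF address, the announcement at least refers to the storage system; if it points elsewhere, some other device on the network is announcing the share.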