Dear NetApp experts! Could you please help us troubleshoot the following error? Issue: "Unexpected NDMP error communicating with tape server" (the tape server is a Commvault MediaAgent). Detailed description: our NDMP backup runs very slowly (approximately 2 MB/s) and eventually enters a pending state with the error above. Has anyone experienced this issue or have any insights? How do we troubleshoot a failing and slow NDMP connection?
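For reference, these are the first-pass checks I plan to run on the cluster side while the job is slow (a sketch based on the ONTAP 9 command reference, so the exact names/scope may differ on your version; <backup_svm> is a placeholder for our backup SVM):

vserver services ndmp show -vserver <backup_svm>      (confirm NDMP is enabled on the SVM)
vserver services ndmp status                          (list the active NDMP sessions)
event log show -message-name *ndmp*                   (look for NDMP errors around the time the job goes pending)
network interface show -vserver <backup_svm> -fields address,curr-port   (check which LIF/port carries the NDMP traffic)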
Hi, I have 8 clusters talking happily to NABox 4.0, except one cluster. They are all set to auto-pick between REST and ONTAPI, but this cluster didn't like that, so I forced it to use REST and it is now talking fine. However, I see a few error messages popping up in that cluster's event log saying:

ERROR httpd.api.expired.session.limit: A Manage ONTAP(R) API request (GET /api/cluster/counter/tables/volume:node/rows) has expired after 184 seconds. Last error: This request exceeds the configured session limit (20) for your application ("Harvest/24.11.1").

I am showing only one instance, but there are errors for each API request. All my clusters are configured the same way for the API security session limit:

::> security session limit show -interface ontapi
Interface Category    Max-Active
--------- ----------- ----------
ontapi    application         20
ontapi    location            20
ontapi    request             20
ontapi    vserver             10
4 entries were displayed.

There is no firewall between the NABox and the cluster. Has anyone seen this error before? Thanks
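If the root cause is simply that Harvest opens more concurrent sessions than the default limit of 20 for its application, one thing I am considering is raising the per-application limit, along these lines (only a sketch: the value 40 is arbitrary, and the interface/category may need adjusting for my ONTAP version):

::> security session limit modify -interface ontapi -category application -max-active 40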
Hello all, I'm posting here since Harvest's GitHub page seems to be reserved for bug reports and feature requests. We're using Harvest to monitor our NetApp systems, together with Prometheus and Grafana. Everything is working fine except for SnapMirror, where I can't seem to collect anything. I use the "readonly" role with Harvest to collect metrics (which has almost full access to /api). I've also tried creating a custom role with read-only access to /api/snapmirror, but that doesn't change anything: when I access http://<harvest_ip>:12991/metrics, I don't get anything regarding SnapMirror. I've tried to see whether I had any logs from the SnapMirror module with this command:

$ harvest start cluster-01 --foreground --verbose --collectors zapi --objects SnapMirror

But there are no WARN or ERR lines. In this case the metrics page only shows me metadata:

metadata_component_status{poller="cluster-01",version="21.05.4",hostname="REDACTED",type="exporter",name="Prometheus",target="prometheus1",reason="initialized"} 0
metadata_component_count{poller="cluster-01",version="21.05.4",hostname="REDACTED",type="exporter",name="Prometheus",target="prometheus1",reason="initialized"} 0
metadata_component_status{poller="cluster-01",version="21.05.4",hostname="REDACTED",type="collector",name="zapi",target="SnapMirror",reason="running"} 0
metadata_component_count{poller="cluster-01",version="21.05.4",hostname="REDACTED",type="collector",name="zapi",target="SnapMirror",reason="running"} 0
metadata_target_status{poller="cluster-01",version="21.05.4",hostname="REDACTED",addr="REDACTED"} 0
metadata_target_ping{poller="cluster-01",version="21.05.4",hostname="REDACTED",addr="REDACTED"} 0.178
metadata_target_goroutines{poller="cluster-01",version="21.05.4",hostname="REDACTED",addr="REDACTED"} 10
metadata_exporter_time{poller="cluster-01",exporter="Prometheus",target="prometheus1",hostname="REDACTED",version="21.05.4",task="http"} 586
metadata_exporter_count{poller="cluster-01",exporter="Prometheus",target="prometheus1",hostname="REDACTED",version="21.05.4",task="http"} 9

Is there something to activate on our NetApp system to get SnapMirror metrics? What might be the problem? Regards.
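For completeness, in case the ZAPI collector needs a classic command-directory role (rather than a REST /api path), this is roughly the kind of role I could try on the cluster (only a sketch; the role and user names "harvest-snapmirror" / "harvest" are examples, and I may be missing a command directory the collector also needs):

::> security login role create -role harvest-snapmirror -cmddirname "snapmirror" -access readonly
::> security login role create -role harvest-snapmirror -cmddirname "version" -access readonly
::> security login modify -user-or-group-name harvest -application ontapi -authentication-method password -role harvest-snapmirror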
Hi, I am having trouble configuring, saving, and then connecting to whatever IP address I set on e0a. The IP I am using is free on my network, and I am able to ping it after setting it, but I cannot connect via the web interface or via PuTTY. Then, as soon as I reboot, all of the configuration disappears. I assume there is a save command in Maintenance mode? Is there something really simple I am missing here? TIA for the support.
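For reference, this is roughly what I am doing from Maintenance mode (typed from memory, so the syntax may be slightly off); my question is essentially whether an address set this way is ever meant to survive a reboot, or whether it has to be configured elsewhere (boot menu / setup):

*> ifconfig e0a <free-ip> netmask <netmask>
*> ping <gateway-ip>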
Hello, to get a better knowledge/understanding of MetroCluster IP (MCC-IP), I am trying to configure one in my lab. I am using an AFF A220 HA pair and a Lenovo DM5000H HA pair, plus BES53248 switches => it is only for testing. All nodes/chassis are set to mccip (ha-config), the intercluster LIFs are OK, and cluster peering is OK.

A220_MCC1::> net int show
  (network interface show)
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
A220_MCC1
            A200_MCC1-01_icl01 up/up      192.168.0.101/16   A220_MCC1-01  e0c     true
            A200_MCC1-01_icl02 up/up      192.168.0.102/16   A220_MCC1-01  e0d     true
            A200_MCC1-02_icl01 up/up      192.168.0.103/16   A220_MCC1-02  e0c     true
            A200_MCC1-02_icl02 up/up      192.168.0.104/16   A220_MCC1-02  e0d     true
            A220_MCC1-01_mgmt1 up/up      10.72.12.150/21    A220_MCC1-01  e0M     true
            A220_MCC1-02_mgmt1 up/up      10.72.12.151/21    A220_MCC1-02  e0M     true
            cluster_mgmt       up/up      10.72.12.170/21    A220_MCC1-01  e0M     true
Cluster
            A220_MCC1-01_clus1 up/up      169.254.241.189/16 A220_MCC1-01  e0a     true
            A220_MCC1-01_clus2 up/up      169.254.116.19/16  A220_MCC1-01  e0b     true
            A220_MCC1-02_clus1 up/up      169.254.33.121/16  A220_MCC1-02  e0a     true
            A220_MCC1-02_clus2 up/up      169.254.129.164/16 A220_MCC1-02  e0b     true
11 entries were displayed.

A220_MCC1::> net port show
  (network port show)

Node: A220_MCC1-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: A220_MCC1-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

A220_MCC1::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
DM5000H_MCC2              1-80-000011           Available      ok

DM5000H_MCC2::> net int show
  (network interface show)
            Logical               Status     Network            Current         Current Is
Vserver     Interface             Admin/Oper Address/Mask       Node            Port    Home
----------- --------------------- ---------- ------------------ --------------- ------- ----
Cluster
            DM5000H_MCC2-01_clus1 up/up      169.254.161.255/16 DM5000H_MCC2-01 e0a     true
            DM5000H_MCC2-01_clus2 up/up      169.254.44.35/16   DM5000H_MCC2-01 e0b     true
            DM5000H_MCC2-02_clus1 up/up      169.254.248.44/16  DM5000H_MCC2-02 e0a     true
            DM5000H_MCC2-02_clus2 up/up      169.254.132.80/16  DM5000H_MCC2-02 e0b     true
DM5000H_MCC2
            DM5000H_MCC2-01_icl01 up/up      192.168.0.201/16   DM5000H_MCC2-01 e0c     true
            DM5000H_MCC2-01_icl02 up/up      192.168.0.202/16   DM5000H_MCC2-01 e0d     true
            DM5000H_MCC2-01_mgmt_auto up/up  10.72.12.166/21    DM5000H_MCC2-01 e0M     true
            DM5000H_MCC2-02_icl01 up/up      192.168.0.203/16   DM5000H_MCC2-02 e0c     true
            DM5000H_MCC2-02_icl02 up/up      192.168.0.204/16   DM5000H_MCC2-02 e0d     true
            DM5000H_MCC2-02_mgmt1 up/up      10.72.12.167/21    DM5000H_MCC2-02 e0M     true
            cluster_mgmt          up/up      10.72.12.178/21    DM5000H_MCC2-01 e0M     true
11 entries were displayed.
DM5000H_MCC2::> net port show
  (network port show)

Node: DM5000H_MCC2-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: DM5000H_MCC2-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

DM5000H_MCC2::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
A220_MCC1                 1-80-000011           Available      ok

The DR group is created:

A220_MCC1::> dr-group show
  (metrocluster configuration-settings dr-group show)
DR Group ID Cluster                    Node               DR Partner Node
----------- -------------------------- ------------------ ------------------
1           A220_MCC1
                                       A220_MCC1-02       DM5000H_MCC2-02
                                       A220_MCC1-01       DM5000H_MCC2-01
            DM5000H_MCC2
                                       DM5000H_MCC2-02    A220_MCC1-02
                                       DM5000H_MCC2-01    A220_MCC1-01
4 entries were displayed.

But when I try to create the interconnect, I run into issues:

A220_MCC1::> metrocluster configuration-settings show
Cluster                    Node               Configuration Settings Status
-------------------------- ------------------ ---------------------------------
A220_MCC1
                           A220_MCC1-01       connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.1" and "10.1.1.3" on node
                             "A220_MCC1-01" in cluster "A220_MCC1".
                           A220_MCC1-02       connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.2" and "10.1.1.4" on node
                             "A220_MCC1-02" in cluster "A220_MCC1".
DM5000H_MCC2
                           DM5000H_MCC2-01    connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.3" and "10.1.1.4" on node
                             "DM5000H_MCC2-01" in cluster "DM5000H_MCC2".
                           DM5000H_MCC2-02    connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.4" and "10.1.1.3" on node
                             "DM5000H_MCC2-02" in cluster "DM5000H_MCC2".
4 entries were displayed.

A220_MCC1::> metrocluster configuration-settings interface show
DR                                                                      Config
Group Cluster Node                Network Address Netmask         Gateway State
----- ------- ------------------- --------------- --------------- ------- ---------
1     A220_MCC1
              A220_MCC1-02
                 Home Port: e0a-10
                                  10.1.1.2        255.255.255.0   -       completed
                 Home Port: e0b-20
                                  10.1.2.2        255.255.255.0   -       completed
              A220_MCC1-01
                 Home Port: e0a-10
                                  10.1.1.1        255.255.255.0   -       completed
                 Home Port: e0b-20
                                  10.1.2.1        255.255.255.0   -       completed
      DM5000H_MCC2
              DM5000H_MCC2-02
                 Home Port: e0a-10
                                  10.1.1.4        255.255.255.0   -       completed
                 Home Port: e0b-20
                                  10.1.2.4        255.255.255.0   -       completed
              DM5000H_MCC2-01
                 Home Port: e0a-10
                                  10.1.1.3        255.255.255.0   -       completed
                 Home Port: e0b-20
                                  10.1.2.3        255.255.255.0   -       completed
8 entries were displayed.
A220_MCC1::> sto fail show
  (storage failover show)
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
A220_MCC1-01   A220_MCC1-02   false    Waiting for A220_MCC1-02, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
A220_MCC1-02   A220_MCC1-01   false    Waiting for A220_MCC1-01, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
2 entries were displayed.

I guess I'm missing something, but I don't know what. If anybody has a clue or hint, it would be really helpful.

Edit:

A220_MCC1::> metrocluster interconnect adapter show
                                 Adapter Link
Node            Adapter Name     Type    Status IP Address  Port Number
--------------- ---------------- ------- ------ ----------- -----------
A220_MCC1-01    e0a-10           iWARP   UP     10.1.1.1    e0a-10
A220_MCC1-01    e0b-20           iWARP   UP     10.1.2.1    e0b-20
A220_MCC1-02    e0a-10           iWARP   UP     10.1.1.2    e0a-10
A220_MCC1-02    e0b-20           iWARP   UP     10.1.2.2    e0b-20
4 entries were displayed.

DM5000H_MCC2::> metrocluster interconnect adapter show
                                 Adapter Link
Node            Adapter Name     Type    Status IP Address  Port Number
--------------- ---------------- ------- ------ ----------- -----------
DM5000H_MCC2-01 e0a-10           iWARP   UP     10.1.1.3    e0a-10
DM5000H_MCC2-01 e0b-20           iWARP   UP     10.1.2.3    e0b-20
DM5000H_MCC2-02 e0a-10           iWARP   UP     10.1.1.4    e0a-10
DM5000H_MCC2-02 e0b-20           iWARP   UP     10.1.2.4    e0b-20
4 entries were displayed.
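For what it's worth, my next step is to verify that the BES53248 switches actually carry the two MetroCluster VLANs (10 and 20) on the node-facing ports and across the ISL between the two pairs, since the interfaces report "completed" but the nodes cannot ping each other's 10.1.1.x / 10.1.2.x addresses. A rough sketch of the checks I have in mind on the ONTAP side (command names taken from the documentation, so please correct me if any of them is not applicable here):

A220_MCC1::> metrocluster configuration-settings connection show
A220_MCC1::> network device-discovery show
A220_MCC1::> network port vlan show

plus the equivalent VLAN/tagging and MTU check on the switch side.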