I am not sure if this is the correct forum, but I did see other posts about NetApp volumes here; please point me to the correct forum if needed.

On Google Cloud Platform, I deployed a STANDARD service level 2048 GiB NetApp Volumes storage pool in us-central1, then created a 100 GiB volume, also in us-central1. I have a question about this volume's performance.

When I use the find command to traverse a directory, performance is noticeably slower on the NetApp volume than on the same data on the local drive. At first I thought I needed to increase throughput by raising my service level or making the volume bigger (which increases throughput). But after thinking about it, my find command is not moving data; it is just traversing the directory. So it seems this is not a throughput issue? Does anyone have insight into the performance difference here?

root@server1:~# time find /data_local/cache/4c71-4cb6-bd50-d80aa7/ -type f | wc -l
4995

real    0m0.119s
user    0m0.058s
sys     0m0.059s

root@server1:~# time find /data_NetappVol/cache/4c71-4cb6-bd50-d80aa7/ -type f | wc -l
4995

real    0m3.968s
user    0m0.078s
sys     0m0.219s
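One more thought: even though find reads no file data, over NFS each directory read and attribute lookup is still a network round trip (READDIR/READDIRPLUS, LOOKUP, GETATTR), so the elapsed time is dominated by per-operation latency rather than throughput. 3.968 s for 4995 files works out to roughly 0.8 ms per file, which looks like round-trip latency to me. A minimal sketch of how I could confirm this from the Linux client, assuming /data_NetappVol is the NFS mount point and nfsstat/mountstats (from the standard nfs-utils package) are installed:

# Snapshot NFS client op counters, run the traversal, then diff the counters
# to see how many LOOKUP/GETATTR/READDIR ops the find actually issued.
nfsstat -c > /tmp/nfs_before
time find /data_NetappVol/cache/4c71-4cb6-bd50-d80aa7/ -type f | wc -l
nfsstat -c > /tmp/nfs_after
diff /tmp/nfs_before /tmp/nfs_after

# Per-operation round-trip times for this mount (see the RTT columns):
mountstats /data_NetappVol

If the op counts are in the thousands and the average RTT is near 1 ms, that would confirm the slowdown is metadata latency, not throughput.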
Hello,

To get a better knowledge/understanding of MetroCluster IP (MCC-IP), I am trying to configure one in my lab. I am using an AFF A220 HA pair and a Lenovo DM5000H HA pair plus BES53248 switches (this is only for a test).

All nodes/chassis have been set to mccip mode, the intercluster LIFs are OK, and cluster peering is OK:

A220_MCC1::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
A220_MCC1
            A200_MCC1-01_icl01
                       up/up      192.168.0.101/16   A220_MCC1-01  e0c     true
            A200_MCC1-01_icl02
                       up/up      192.168.0.102/16   A220_MCC1-01  e0d     true
            A200_MCC1-02_icl01
                       up/up      192.168.0.103/16   A220_MCC1-02  e0c     true
            A200_MCC1-02_icl02
                       up/up      192.168.0.104/16   A220_MCC1-02  e0d     true
            A220_MCC1-01_mgmt1
                       up/up      10.72.12.150/21    A220_MCC1-01  e0M     true
            A220_MCC1-02_mgmt1
                       up/up      10.72.12.151/21    A220_MCC1-02  e0M     true
            cluster_mgmt
                       up/up      10.72.12.170/21    A220_MCC1-01  e0M     true
Cluster
            A220_MCC1-01_clus1
                       up/up      169.254.241.189/16 A220_MCC1-01  e0a     true
            A220_MCC1-01_clus2
                       up/up      169.254.116.19/16  A220_MCC1-01  e0b     true
            A220_MCC1-02_clus1
                       up/up      169.254.33.121/16  A220_MCC1-02  e0a     true
            A220_MCC1-02_clus2
                       up/up      169.254.129.164/16 A220_MCC1-02  e0b     true
11 entries were displayed.

A220_MCC1::> net port show
  (network port show)

Node: A220_MCC1-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: A220_MCC1-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

A220_MCC1::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
DM5000H_MCC2              1-80-000011           Available      ok

DM5000H_MCC2::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            DM5000H_MCC2-01_clus1
                       up/up      169.254.161.255/16 DM5000H_MCC2-01 e0a   true
            DM5000H_MCC2-01_clus2
                       up/up      169.254.44.35/16   DM5000H_MCC2-01 e0b   true
            DM5000H_MCC2-02_clus1
                       up/up      169.254.248.44/16  DM5000H_MCC2-02 e0a   true
            DM5000H_MCC2-02_clus2
                       up/up      169.254.132.80/16  DM5000H_MCC2-02 e0b   true
DM5000H_MCC2
            DM5000H_MCC2-01_icl01
                       up/up      192.168.0.201/16   DM5000H_MCC2-01 e0c   true
            DM5000H_MCC2-01_icl02
                       up/up      192.168.0.202/16   DM5000H_MCC2-01 e0d   true
            DM5000H_MCC2-01_mgmt_auto
                       up/up      10.72.12.166/21    DM5000H_MCC2-01 e0M   true
            DM5000H_MCC2-02_icl01
                       up/up      192.168.0.203/16   DM5000H_MCC2-02 e0c   true
            DM5000H_MCC2-02_icl02
                       up/up      192.168.0.204/16   DM5000H_MCC2-02 e0d   true
            DM5000H_MCC2-02_mgmt1
                       up/up      10.72.12.167/21    DM5000H_MCC2-02 e0M   true
            cluster_mgmt
                       up/up      10.72.12.178/21    DM5000H_MCC2-01 e0M   true
11 entries were displayed.
DM5000H_MCC2::> net port show
  (network port show)

Node: DM5000H_MCC2-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: DM5000H_MCC2-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

DM5000H_MCC2::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
A220_MCC1                 1-80-000011           Available      ok

DR group created:

A220_MCC1::> dr-group show
  (metrocluster configuration-settings dr-group show)
DR Group ID Cluster                    Node               DR Partner Node
----------- -------------------------- ------------------ ------------------
1           A220_MCC1
                                       A220_MCC1-02       DM5000H_MCC2-02
                                       A220_MCC1-01       DM5000H_MCC2-01
            DM5000H_MCC2
                                       DM5000H_MCC2-02    A220_MCC1-02
                                       DM5000H_MCC2-01    A220_MCC1-01
4 entries were displayed.

But when I try to create the interconnect, I get errors:

A220_MCC1::> metrocluster configuration-settings show
Cluster                    Node               Configuration Settings Status
-------------------------- ------------------ ---------------------------------
A220_MCC1
                           A220_MCC1-01       connection error
                             Error: Did not receive a "ping" response using the
                                    network addresses "10.1.1.1" and "10.1.1.3"
                                    on node "A220_MCC1-01" in cluster
                                    "A220_MCC1".
                           A220_MCC1-02       connection error
                             Error: Did not receive a "ping" response using the
                                    network addresses "10.1.1.2" and "10.1.1.4"
                                    on node "A220_MCC1-02" in cluster
                                    "A220_MCC1".
DM5000H_MCC2
                           DM5000H_MCC2-01    connection error
                             Error: Did not receive a "ping" response using the
                                    network addresses "10.1.1.3" and "10.1.1.4"
                                    on node "DM5000H_MCC2-01" in cluster
                                    "DM5000H_MCC2".
                           DM5000H_MCC2-02    connection error
                             Error: Did not receive a "ping" response using the
                                    network addresses "10.1.1.4" and "10.1.1.3"
                                    on node "DM5000H_MCC2-02" in cluster
                                    "DM5000H_MCC2".
4 entries were displayed.

A220_MCC1::> metrocluster configuration-settings interface show
DR                                                                     Config
Group Cluster Node    Network Address Netmask         Gateway         State
----- ------- ------- --------------- --------------- --------------- ---------
1     A220_MCC1
              A220_MCC1-02
                 Home Port: e0a-10
                      10.1.1.2        255.255.255.0   -               completed
                 Home Port: e0b-20
                      10.1.2.2        255.255.255.0   -               completed
              A220_MCC1-01
                 Home Port: e0a-10
                      10.1.1.1        255.255.255.0   -               completed
                 Home Port: e0b-20
                      10.1.2.1        255.255.255.0   -               completed
      DM5000H_MCC2
              DM5000H_MCC2-02
                 Home Port: e0a-10
                      10.1.1.4        255.255.255.0   -               completed
                 Home Port: e0b-20
                      10.1.2.4        255.255.255.0   -               completed
              DM5000H_MCC2-01
                 Home Port: e0a-10
                      10.1.1.3        255.255.255.0   -               completed
                 Home Port: e0b-20
                      10.1.2.3        255.255.255.0   -               completed
8 entries were displayed.
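Since the interface config shows "completed" but the node-to-node pings on 10.1.1.x fail, and the home ports are the VLAN-tagged e0a-10/e0b-20, I suspect the path between the nodes, i.e. the BES53248s, which need to carry VLANs 10 and 20 end to end. A sketch of the additional checks I am running from the ONTAP side (these are standard ONTAP 9 commands, nothing specific to my lab):

# Per-connection status of the MetroCluster IP mesh, to see exactly
# which node pairs fail:
A220_MCC1::> metrocluster configuration-settings connection show

# Verify via CDP/LLDP that e0a/e0b really land on the expected
# BES53248 ports:
A220_MCC1::> network device-discovery show -port e0a
A220_MCC1::> network device-discovery show -port e0b

Storage failover is also affected: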
A220_MCC1::> sto fail show
  (storage failover show)
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
A220_MCC1-01   A220_MCC1-02   false    Waiting for A220_MCC1-02, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
A220_MCC1-02   A220_MCC1-01   false    Waiting for A220_MCC1-01, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
2 entries were displayed.

I guess I'm missing something, but I don't know what. If anybody has a clue or hint, it would be really helpful.

Edit:

A220_MCC1::> metrocluster interconnect adapter show
                                 Adapter                        Link
Node           Adapter Name      Type    Status  IP Address     Port Number
-------------- ----------------- ------- ------- -------------- -----------
A220_MCC1-01   e0a-10            iWARP   UP      10.1.1.1       e0a-10
A220_MCC1-01   e0b-20            iWARP   UP      10.1.2.1       e0b-20
A220_MCC1-02   e0a-10            iWARP   UP      10.1.1.2       e0a-10
A220_MCC1-02   e0b-20            iWARP   UP      10.1.2.2       e0b-20
4 entries were displayed.

DM5000H_MCC2::> metrocluster interconnect adapter show
                                  Adapter                       Link
Node            Adapter Name      Type    Status  IP Address    Port Number
--------------- ----------------- ------- ------- ------------- -----------
DM5000H_MCC2-01 e0a-10            iWARP   UP      10.1.1.3      e0a-10
DM5000H_MCC2-01 e0b-20            iWARP   UP      10.1.2.3      e0b-20
DM5000H_MCC2-02 e0a-10            iWARP   UP      10.1.1.4      e0a-10
DM5000H_MCC2-02 e0b-20            iWARP   UP      10.1.2.4      e0b-20
4 entries were displayed.
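Since the interconnect adapters themselves report UP on every node, the next thing I plan to verify is the switch configuration itself. Assuming the BES53248s run EFOS with the NetApp MetroCluster IP RCF applied (and that the RCF uses VLANs 10 and 20; exact CLI syntax may differ by EFOS release), something like this should show whether those VLANs exist and are tagged on the node-facing ports:

# On each BES53248 (EFOS CLI):
(BES53248) # show vlan brief
(BES53248) # show running-config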
Hi Team,

I recently created an NVMe namespace, and during the process an associated volume was automatically created. Below are the details:

NVMe namespace path: /vol/namespace_nvme/namespace_nvme_1
Associated volume:   namespace_nvme

I'm trying to understand the performance differences between the NVMe namespace (/vol/namespace_nvme/namespace_nvme_1) and the associated volume (namespace_nvme). Specifically, is the performance of the NVMe namespace the same as that of the associated volume, or are there key differences in how each performs?

Looking forward to your insights! Thank you!
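For what it's worth, this is how I have been looking at the two layers so far; both are standard ONTAP 9 commands, and <svm> is a placeholder for my SVM name. My working assumption is that the namespace is an object stored inside the FlexVol, so its I/O flows through the volume:

# Show the namespace object itself:
::> vserver nvme namespace show -vserver <svm> -path /vol/namespace_nvme/namespace_nvme_1

# Volume-level latency/IOPS/throughput that the namespace I/O passes through:
::> qos statistics volume performance show -vserver <svm> -volume namespace_nvme

Is it correct that the namespace cannot perform better than its containing volume, and that any difference would come from the protocol path (NVMe/FC or NVMe/TCP) rather than the volume layer?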
Hello, and sorry for my English. Do ONTAP upgrades include disk firmware updates, or do I have to update the disk firmware independently? Thanks a lot.
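For reference, this is how I am checking the firmware currently running on my disks (standard ONTAP CLI):

# List the firmware revision per disk:
::> storage disk show -fields firmware-revision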
Dear NetApp experts,

Could you please help us troubleshoot the following error?

Issue: Unexpected NDMP error communicating with tape server (the tape server is a Commvault MediaAgent server).

Detailed description: Our NDMP backup runs very slowly (approximately 2 MB/s) and eventually enters a pending state with the error above.

Has anyone experienced this issue, or does anyone have any insights? How can we troubleshoot a failing and slow NDMP connection?
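For context, here is what we have checked on the ONTAP side so far; these are standard ONTAP 9 commands, and <node> is a placeholder for the node name:

# Confirm NDMP is enabled and whether it runs node-scoped or SVM-scoped
# (Commvault must be configured to match):
::> system services ndmp show
::> vserver services ndmp show

# Look for speed or MTU mismatches on the ports carrying the NDMP data
# connection (a classic cause of very slow transfers):
::> network port show

# Watch node load from the nodeshell while a backup runs:
::> system node run -node <node> -command "sysstat -x 1"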