Hi there, I am not sure I picked the correct board, but here goes. We have a lot of older data on several Windows CIFS shares that needs to be cleaned up, preferably deleted. Rather than just searching the folder structure, we were wondering if we could use NetApp's built-in File System Analytics REST APIs to help us develop and run scripts. My problem is that we don't have much in-house expertise to create these. I am looking for guidance on what I could share from your input here with a consultant who would know what to do with it. We are looking to get rid of data that hasn't been modified or accessed in 6 years. My thinking is that a script could scan our CIFS shares, give us a report, and then give us the option to delete. Is this doable? Feel free to ask me questions about this request. Thanks, Scott
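As a starting point to hand to a consultant, a minimal sketch of the reporting half might look like the following. This is only an illustration under assumptions: it assumes ONTAP 9.9.1 or later with File System Analytics enabled on the volume, Python with the requests library, and placeholder cluster address, credentials, and volume name; the exact endpoint and field names should be verified against the cluster's own REST reference at https://<cluster-mgmt>/docs/api.

#!/usr/bin/env python3
# Hypothetical sketch only: cluster address, credentials, volume name, and the
# files/analytics endpoint shape are assumptions -- verify against the
# cluster's REST API reference before relying on any of this.
import requests
from requests.auth import HTTPBasicAuth

CLUSTER = "https://cluster-mgmt.example.com"   # assumption: cluster mgmt LIF
AUTH = HTTPBasicAuth("apiuser", "password")    # assumption: read-only API account
VERIFY_TLS = False                             # lab only; use a CA bundle in production


def volume_uuid(volume_name):
    """Look up the UUID of a volume by name."""
    r = requests.get(f"{CLUSTER}/api/storage/volumes",
                     params={"name": volume_name, "fields": "uuid"},
                     auth=AUTH, verify=VERIFY_TLS)
    r.raise_for_status()
    return r.json()["records"][0]["uuid"]


def directory_report(uuid, path=""):
    """Return directory entries with their File System Analytics data
    (assumes FSA is enabled on the volume)."""
    r = requests.get(f"{CLUSTER}/api/storage/volumes/{uuid}/files/{path}",
                     params={"fields": "name,type,analytics"},
                     auth=AUTH, verify=VERIFY_TLS)
    r.raise_for_status()
    return r.json().get("records", [])


if __name__ == "__main__":
    uuid = volume_uuid("cifs_vol01")           # assumption: volume behind the CIFS share
    for entry in directory_report(uuid):
        # When present, the analytics object holds accessed-time and
        # modified-time histograms that a 6-year cutoff can be read from.
        print(entry.get("name"), entry.get("type"), entry.get("analytics"))

The deletion step could then be a separate, reviewed pass over the report rather than something the scan does automatically.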
Hi, I'm trying to use Ansible to create an FC target LIF for a vserver, but with no luck. The playbook is below; the error is:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error: the python NetApp-Lib module is required. Import error: No module named 'netapp_lib'"}

Not sure what is missing, any idea? Playbook:

---
- hosts: localhost
  vars:
    netapp_hostname: cluster01
    netapp_username: svmadmin
  vars_prompt:
    - name: netapp_password
      prompt: "Please enter Netapp password (hidden from output)"
      private: yes
  tasks:
    - name: Create FCP interface
      netapp.ontap.na_ontap_interface:
        vserver: svmfcp31
        state: present
        interface_name: svmtest_0g_tgt
        interface_type: fc
        data_protocol: fcp
        current_node: node-01
        current_port: 0g
        role: data
        failover_policy: disabled
        force_subnet_association: false
        admin_status: up
        hostname: "{{ netapp_hostname }}"
        username: "{{ netapp_username }}"
        password: "{{ netapp_password }}"
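For anyone hitting the same thing: the message means the netapp_lib Python package (installed from the "netapp-lib" pip package, which the netapp.ontap collection lists as a requirement for its ZAPI-based modules) is not importable by the interpreter Ansible uses on localhost. A minimal check, run with the same interpreter Ansible reports (ansible_python_interpreter), might look like this sketch:

#!/usr/bin/env python3
# Minimal sketch: confirm that the interpreter Ansible uses for localhost can
# import netapp_lib (provided by the "netapp-lib" pip package).
import sys

try:
    import netapp_lib  # noqa: F401
    print("netapp_lib is importable by", sys.executable)
except ImportError:
    print("netapp_lib is missing for", sys.executable)
    print("try:", sys.executable, "-m pip install netapp-lib")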
Hello,

To get a better knowledge/understanding of MetroCluster IP (MCC-IP), I am trying to configure one in my lab. I am using an AFF A220 HA pair and a Lenovo DM5000H HA pair plus BES53248 switches; it is only for testing. All nodes/chassis have been put in MCC-IP mode, the intercluster LIFs are OK, and cluster peering is OK.

A220_MCC1::> net int show
  (network interface show)
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
A220_MCC1
            A200_MCC1-01_icl01 up/up      192.168.0.101/16   A220_MCC1-01  e0c     true
            A200_MCC1-01_icl02 up/up      192.168.0.102/16   A220_MCC1-01  e0d     true
            A200_MCC1-02_icl01 up/up      192.168.0.103/16   A220_MCC1-02  e0c     true
            A200_MCC1-02_icl02 up/up      192.168.0.104/16   A220_MCC1-02  e0d     true
            A220_MCC1-01_mgmt1 up/up      10.72.12.150/21    A220_MCC1-01  e0M     true
            A220_MCC1-02_mgmt1 up/up      10.72.12.151/21    A220_MCC1-02  e0M     true
            cluster_mgmt       up/up      10.72.12.170/21    A220_MCC1-01  e0M     true
Cluster
            A220_MCC1-01_clus1 up/up      169.254.241.189/16 A220_MCC1-01  e0a     true
            A220_MCC1-01_clus2 up/up      169.254.116.19/16  A220_MCC1-01  e0b     true
            A220_MCC1-02_clus1 up/up      169.254.33.121/16  A220_MCC1-02  e0a     true
            A220_MCC1-02_clus2 up/up      169.254.129.164/16 A220_MCC1-02  e0b     true
11 entries were displayed.

A220_MCC1::> net port show
  (network port show)

Node: A220_MCC1-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: A220_MCC1-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

A220_MCC1::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
DM5000H_MCC2              1-80-000011           Available      ok

DM5000H_MCC2::> net int show
  (network interface show)
            Logical                   Status     Network            Current         Current Is
Vserver     Interface                 Admin/Oper Address/Mask       Node            Port    Home
----------- ------------------------- ---------- ------------------ --------------- ------- ----
Cluster
            DM5000H_MCC2-01_clus1     up/up      169.254.161.255/16 DM5000H_MCC2-01 e0a     true
            DM5000H_MCC2-01_clus2     up/up      169.254.44.35/16   DM5000H_MCC2-01 e0b     true
            DM5000H_MCC2-02_clus1     up/up      169.254.248.44/16  DM5000H_MCC2-02 e0a     true
            DM5000H_MCC2-02_clus2     up/up      169.254.132.80/16  DM5000H_MCC2-02 e0b     true
DM5000H_MCC2
            DM5000H_MCC2-01_icl01     up/up      192.168.0.201/16   DM5000H_MCC2-01 e0c     true
            DM5000H_MCC2-01_icl02     up/up      192.168.0.202/16   DM5000H_MCC2-01 e0d     true
            DM5000H_MCC2-01_mgmt_auto up/up      10.72.12.166/21    DM5000H_MCC2-01 e0M     true
            DM5000H_MCC2-02_icl01     up/up      192.168.0.203/16   DM5000H_MCC2-02 e0c     true
            DM5000H_MCC2-02_icl02     up/up      192.168.0.204/16   DM5000H_MCC2-02 e0d     true
            DM5000H_MCC2-02_mgmt1     up/up      10.72.12.167/21    DM5000H_MCC2-02 e0M     true
            cluster_mgmt              up/up      10.72.12.178/21    DM5000H_MCC2-01 e0M     true
11 entries were displayed.
DM5000H_MCC2::> net port show
  (network port show)

Node: DM5000H_MCC2-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -

Node: DM5000H_MCC2-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Cluster      Cluster          up   9000 auto/10000  healthy
e0b       Cluster      Cluster          up   9000 auto/10000  healthy
e0c       Default      Default          up   1500 auto/10000  healthy
e0d       Default      Default          up   1500 auto/10000  healthy
e0e       Default      Default          down 1500 auto/-      -
e0f       Default      Default          down 1500 auto/-      -
14 entries were displayed.

DM5000H_MCC2::> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
A220_MCC1                 1-80-000011           Available      ok

The DR group has been created:

A220_MCC1::> dr-group show
  (metrocluster configuration-settings dr-group show)
DR Group ID Cluster                    Node               DR Partner Node
----------- -------------------------- ------------------ ------------------
1           A220_MCC1                  A220_MCC1-02       DM5000H_MCC2-02
                                       A220_MCC1-01       DM5000H_MCC2-01
            DM5000H_MCC2               DM5000H_MCC2-02    A220_MCC1-02
                                       DM5000H_MCC2-01    A220_MCC1-01
4 entries were displayed.

But when I try to create the interconnect, I get errors:

A220_MCC1::> metrocluster configuration-settings show
Cluster                    Node               Configuration Settings Status
-------------------------- ------------------ ---------------------------------
A220_MCC1                  A220_MCC1-01       connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.1" and "10.1.1.3" on node
                             "A220_MCC1-01" in cluster "A220_MCC1".
                           A220_MCC1-02       connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.2" and "10.1.1.4" on node
                             "A220_MCC1-02" in cluster "A220_MCC1".
DM5000H_MCC2               DM5000H_MCC2-01    connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.3" and "10.1.1.4" on node
                             "DM5000H_MCC2-01" in cluster "DM5000H_MCC2".
                           DM5000H_MCC2-02    connection error
                             Error: Did not receive a "ping" response using the
                             network addresses "10.1.1.4" and "10.1.1.3" on node
                             "DM5000H_MCC2-02" in cluster "DM5000H_MCC2".
4 entries were displayed.

A220_MCC1::> metrocluster configuration-settings interface show
DR                                                                    Config
Group Cluster      Node             Network Address Netmask         Gateway State
----- ------------ ---------------- --------------- --------------- ------- ---------
1     A220_MCC1    A220_MCC1-02
                     Home Port: e0a-10
                                    10.1.1.2        255.255.255.0   -       completed
                     Home Port: e0b-20
                                    10.1.2.2        255.255.255.0   -       completed
                   A220_MCC1-01
                     Home Port: e0a-10
                                    10.1.1.1        255.255.255.0   -       completed
                     Home Port: e0b-20
                                    10.1.2.1        255.255.255.0   -       completed
      DM5000H_MCC2 DM5000H_MCC2-02
                     Home Port: e0a-10
                                    10.1.1.4        255.255.255.0   -       completed
                     Home Port: e0b-20
                                    10.1.2.4        255.255.255.0   -       completed
                   DM5000H_MCC2-01
                     Home Port: e0a-10
                                    10.1.1.3        255.255.255.0   -       completed
                     Home Port: e0b-20
                                    10.1.2.3        255.255.255.0   -       completed
8 entries were displayed.
A220_MCC1::> sto fail show
  (storage failover show)
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
A220_MCC1-01   A220_MCC1-02   false    Waiting for A220_MCC1-02, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
A220_MCC1-02   A220_MCC1-01   false    Waiting for A220_MCC1-01, Takeover is
                                       not possible: Storage failover
                                       interconnect error, NVRAM log not
                                       synchronized, Disk inventory not
                                       exchanged
2 entries were displayed.

I guess I'm missing something but don't know what. If anybody has a clue or hint, it would be really helpful.

Edit:

A220_MCC1::> metrocluster interconnect adapter show
                             Adapter Link
Node            Adapter Name Type    Status IP Address  Port Number
--------------- ------------ ------- ------ ----------- -----------
A220_MCC1-01    e0a-10       iWARP   UP     10.1.1.1    e0a-10
A220_MCC1-01    e0b-20       iWARP   UP     10.1.2.1    e0b-20
A220_MCC1-02    e0a-10       iWARP   UP     10.1.1.2    e0a-10
A220_MCC1-02    e0b-20       iWARP   UP     10.1.2.2    e0b-20
4 entries were displayed.

DM5000H_MCC2::> metrocluster interconnect adapter show
                             Adapter Link
Node            Adapter Name Type    Status IP Address  Port Number
--------------- ------------ ------- ------ ----------- -----------
DM5000H_MCC2-01 e0a-10       iWARP   UP     10.1.1.3    e0a-10
DM5000H_MCC2-01 e0b-20       iWARP   UP     10.1.2.3    e0b-20
DM5000H_MCC2-02 e0a-10       iWARP   UP     10.1.1.4    e0a-10
DM5000H_MCC2-02 e0b-20       iWARP   UP     10.1.2.4    e0b-20
4 entries were displayed.
I am not sure if this is the correct forum, but I did see other posts about NetApp volumes here; please point me to the correct forum if needed. On Google Cloud Platform, I deployed a STANDARD service level 2048 GB NetApp storage pool in us-central1, then created a 100 GB volume, also in us-central1. I have a question about this volume's performance. When I use the find command to traverse a directory, it is noticeably slower on the NetApp volume than on the same data on the local drive. At first I thought I needed to increase my throughput by raising the service level or making the volume bigger (which increases throughput). But after thinking about it, my find command is not moving data, it is just walking the directory tree, so it does not seem to be a throughput issue. Does anyone have insight into the performance difference here?

root@server1:~# time find /data_local/cache/4c71-4cb6-bd50-d80aa7/ -type f | wc -l
4995

real    0m0.119s
user    0m0.058s
sys     0m0.059s

root@server1:~# time find /data_NetappVol/cache/4c71-4cb6-bd50-d80aa7/ -type f | wc -l
4995

real    0m3.968s
user    0m0.078s
sys     0m0.219s
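For comparison, here is a minimal sketch (paths are placeholders, reusing the two mount points above) that times individual metadata calls rather than data transfer. A find-style traversal is dominated by per-entry directory-listing and attribute-lookup round trips to the remote storage, so per-operation latency, not the volume's throughput allowance, is what drives the elapsed time:

#!/usr/bin/env python3
# Minimal sketch: measure average per-file os.stat() latency under each root.
# Paths are placeholders taken from the post; pass your own on the command line.
import os
import sys
import time


def avg_stat_latency(root, limit=2000):
    """os.stat() up to `limit` files under root and return mean latency in seconds."""
    latencies = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            t0 = time.perf_counter()
            os.stat(os.path.join(dirpath, name))
            latencies.append(time.perf_counter() - t0)
            if len(latencies) >= limit:
                return sum(latencies) / len(latencies)
    return sum(latencies) / len(latencies) if latencies else float("nan")


if __name__ == "__main__":
    roots = sys.argv[1:] or ["/data_local/cache", "/data_NetappVol/cache"]
    for root in roots:
        ms = avg_stat_latency(root) * 1000
        print(f"{root}: ~{ms:.3f} ms per stat() call")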
We are currently using Rubrik CDM with SnapDiff for NAS backups. According to NetApp, SnapDiff will no longer be supported after version 8.17. Rubrik is pushing us toward a different backup solution, NAS Cloud Direct, for NAS backups, but that is a duplicated investment and a lot of data movement. Does anyone have more details about NetApp's SnapDiff support? It looks like if we lose SnapDiff, NAS backup on Rubrik CDM will still work, but on shares with many millions of files the indexing and metadata scan will take hours to complete, which causes user access issues on those shares. Please share any information or workarounds you have. Thanks in advance.