We are trying to set up zero-touch provisioning for NetApp cDOT systems.
Does NetApp support any option to boot ONTAP non-interactively (perhaps using netboot) and configure interfaces, user accounts, and the cluster?
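For context, the direction I have been considering is the netapp.ontap Ansible collection. A minimal sketch, assuming the node already has a reachable node-management address (e.g., DHCP-assigned); all names and variables below are placeholders, not a verified procedure:

    # Sketch only: assumes the netapp.ontap collection is installed and the
    # node-management LIF is already reachable (e.g., a DHCP-assigned address).
    - name: create the cluster non-interactively (placeholder values)
      netapp.ontap.na_ontap_cluster:
        state: present
        cluster_name: mycluster          # placeholder
        hostname: "{{ node_mgmt_ip }}"   # placeholder variable
        username: admin
        password: "{{ admin_password }}" # placeholder variable
        https: true
        validate_certs: false

Interfaces and user accounts could then be layered on with na_ontap_interface and na_ontap_user, but that still assumes the node is booted and reachable, which is the netboot part I have not solved.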
Device: NetApp A220.

In our staging environment, I ran through the initial management network configuration via the USB-connected console, then configured everything else via the web portal (data NICs, SVM, first LUN). I did this in order to document the process for production.

I realize there is a way to reset the configuration and wipe the disks in a secure way, but there is no sensitive data on the SAN. Is there any way to get the configuration back to the way I received it initially without requiring a secure wipe of the disks (hours of unnecessary zeroing)? I.e., I would like to use the exact same steps I followed when I first configured the device to deploy it in production, as it will be an audited process.

I found this link that seems OK, but I have heard that in some situations resetting to factory config leaves the SAN in a state that is not always the way the vendor shipped it, e.g., licenses may be preloaded from the vendor: https://dailysysadmin.com/KB/Article/8724/how-to-wipe-or-decommission-a-netapp-san-to-clear-config-and-wipe-or-zero-disks/

Thanks in advance!
Marcus
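For reference, the boot-menu path the linked article describes looks roughly like this (a sketch, not a verified procedure; option labels vary slightly by ONTAP version):

    ::> system node halt -node <node_name>
    LOADER> boot_ontap menu
    # then choose option 4 ("Clean configuration and initialize all disks")

Note that option 4 re-initializes (zeroes) the data disks, which is exactly the step I am hoping to avoid; my understanding is that spares that are already zeroed are typically skipped, but I cannot confirm that.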
Hello everyone,

I am trying to rename the cluster management LIF with Ansible via na_ontap_rest_cli and get an error message back. Does anyone know the solution to this problem?

    - name: run ontap rest cli command
      netapp.ontap.na_ontap_rest_cli:
        hostname: "{{ dhcp_node_a }}"
        command: 'network/interface/rename'
        verb: 'PATCH'
        params: { 'vserver': 'nas-p01' }
        body: { 'lif': 'cluster_mgmt', 'newname': 'nas-p01_mgmt' }

    FAILED! => {"changed": false, "msg": "Error: {'message': 'Field \"lif\" is not supported in the body of a PATCH.', 'code': '262203', 'target': 'lif'}"}

I also tried the na_ontap_interface module, but it returned an error that the cluster_mgmt interface could not be found.

    - name: rename the management LIF
      netapp.ontap.na_ontap_interface:
        hostname: "{{ dhcp_node_a }}"
        vserver: nas-p01
        state: present
        from_name: cluster_mgmt
        interface_name: nas-p01_mgmt
        use_rest: always

    FAILED! => {"changed": false, "msg": "Error renaming interface nas-p01_mgmt: no interface with from_name cluster_mgmt."}

The interface is displayed on the console. This is a newly set up cluster, where access is via the previously assigned DHCP IP address.

    nas-p01::> net int show
      (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    nas-p01
                cluster_mgmt up/up    x.x.x.x/27         nas-p01b      e0M     true
                nas-p01a_mgmt up/up   x.x.x.x/27         nas-p01a      e0M     true
                nas-p01b_mgmt up/up   x.x.x.x/27         nas-p01b      e0M     true
                nas-p01b_mgmt_auto up/up x.x.x.x/24      nas-p01b      e0M     true
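One thing I plan to try next, based on the error text: the CLI passthrough may want the identifying fields as query parameters rather than in the body, so that only the new value is sent in the body. An untested sketch, same placeholders as above:

    - name: run ontap rest cli command (untested sketch)
      netapp.ontap.na_ontap_rest_cli:
        hostname: "{{ dhcp_node_a }}"
        command: 'network/interface/rename'
        verb: 'PATCH'
        # assumption: identifiers go in params, only the new name in body
        params: { 'vserver': 'nas-p01', 'lif': 'cluster_mgmt' }
        body: { 'newname': 'nas-p01_mgmt' }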
We have two CIFS shares on a vserver. Both were working until yesterday, but now only one of the paths connects at a time with its respective credentials. If we try to connect to the other share, we get the error below:

Error: The network folder specified is currently mapped using a different user name and password. To connect using a different user name and password, first disconnect any existing mappings to this network drive.

The issue may be due to trying to connect to the same server with different credentials; both the share1 and share2 paths were working until yesterday, and the issue occurred today. Kindly help resolve this, as it will impact our scheduler.

I suspect that earlier one CIFS share was connecting through IP1 and the other through IP2 (or vice versa), thus allowing both shares to connect from the same source server. Currently the connection is going to the same IP, either IP1 or IP2. I have tested the scenario, and it worked when I connected one share via IP1 and the other via IP2 with the respective service accounts.

Please advise on the fix, since per our norms we should not use IP addresses directly; the naming convention should be used.
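For context, this looks like standard Windows behavior: a client allows only one set of credentials per server name per session. The workaround I am considering (all names below are placeholders, not a confirmed fix) is to publish a second name for the SVM, e.g., a DNS CNAME or a NetBIOS alias, so each share is reached via a distinct hostname:

    ::> vserver cifs server add-netbios-aliases -vserver <svm_name> -netbios-aliases nas-alias2

    rem On the client, map each share via a different server name:
    net use X: \\nas-alias1\share1 /user:DOMAIN\svc_account1 *
    net use Y: \\nas-alias2\share2 /user:DOMAIN\svc_account2 *

This keeps hostnames (not IPs) in the mappings, though Kerberos may require the alias to be registered as an SPN.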
Greetings to all,

I am trying to configure the S3 protocol on a NetApp AFF C250 running ONTAP 9.12.1P1 and got the error below:

An eligible broadcast domain was not found, and the network interface could not be added.

I validated the list of IPs from both cluster nodes (e0c and e0d) via broadcast-domain show and network port show:

Node 1 is up and healthy on e0c and e0d.
Node 2 is up and healthy on e0c and e0d.

I used the network port IPs of node 1 e0c and node 2 e0c while creating the SVM and got the above error; I tried the e0d pair as well. I am unable to fix the error, and I would appreciate it if anyone could share experience or knowledge that can resolve it. Thank you.
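For reference, here is what I plan to verify next: whether the ports I chose are all members of one broadcast domain in the SVM's IPspace (a sketch; the domain and port names are examples, not my actual config):

    ::> network port broadcast-domain show
    ::> network port show -fields broadcast-domain, ipspace
    # If a port is missing from the expected domain, add it:
    ::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain Default -ports node1:e0c,node2:e0c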