I apologize if my question was discussed in the past, but I couldn't find anything similar.
I am trying to understand the best practices for NAS client configuration between Commvault and NetApp. We have NetApp storage systems running ONTAP 9.x and Commvault as the backup system. NDMP is configured and enabled, and essentially we want to send backups from volumes to tape.
Regarding the network/data flow, I have two different pieces of information that I don't seem to fully understand.
"For any ONTAP version, when you configure a NAS client or when you configure a storage array, provide the host name or IP address of a data port (for example, e0A, or e0B), instead of using the management port (for example, e0M). If you use the management port, then reassign the host name or IP address of the management port to a data port on the file server."
"In order for Commvault IntelliSnap for NetApp to discover volumes on an SVM, you must add the cluster management LIF of the cluster that contains the SVM to the CommCell console in the Array Management dialog box."
Isn't the Commvault statement contradictory to what the NetApp paper is saying?
You can perform NDMP backup from any LIF as permitted by the firewall policies. If you use a "data LIF", you must select a LIF that is not configured for failover. If a data LIF fails over during an NDMP operation, the NDMP operation fails and must be run again.
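As a quick way to check this on the cluster shell, you can list the failover settings of candidate LIFs and, for a LIF dedicated to NDMP, disable failover entirely. This is a sketch; the SVM and LIF names are placeholders:

```
::> network interface show -fields failover-policy,failover-group
::> network interface modify -vserver svm1 -lif ndmp_lif1 -failover-policy disabled
```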
Page 48 says that data LIFs don't have access to tape.
Intercluster LIF: A LIF that is used for cross-cluster communication, "backup", and replication. You must create an intercluster LIF on each node in the cluster before a cluster peering relationship can be established. These LIFs can only fail over to ports in the same node. They cannot be migrated or failed over to another node in the cluster.
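For reference, intercluster LIFs are created per node roughly like this (node names, ports, and addresses below are made up; on newer 9.x releases, `-service-policy default-intercluster` replaces `-role intercluster`):

```
::> network interface create -vserver cluster1 -lif ic_node1 -role intercluster -home-node cluster1-01 -home-port e0c -address 192.0.2.11 -netmask 255.255.255.0
::> network interface show -role intercluster
```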
1- What is NetApp's best practice for NDMP backup? Use a data LIF or not?
2- If data LIFs can't access tape devices, how does the backup operation dump the data to tape?
3- Am I missing something to fully understand the NDMP process/capabilities?
I read the documents; in fact, I posted my references from Commvault and NetApp. That is one of the reasons why I created this thread: to me, some statements are conflicting. The link isn't helpful since it is related to configuration, which isn't my problem. Is this something you can help answer, @Vijay_ramamurthy?
There are two types of connections needed: a control connection and a data connection.
If the backup application supports Cluster Aware Backup (CAB), you can configure NDMP as SVM-scoped at the cluster (admin SVM) level, which enables you to back up all volumes hosted across different nodes of the cluster. Otherwise, you can configure node-scoped NDMP, which enables you to back up all the volumes hosted on that node.
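Assuming your backup application supports CAB, switching to SVM-scoped NDMP at the cluster (admin SVM) level looks roughly like this (the cluster name is a placeholder):

```
::> system services ndmp node-scope-mode status
::> system services ndmp node-scope-mode off
::> vserver services ndmp on -vserver cluster1
```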
You must identify the LIFs that will be used for establishing a data connection between the data and tape resources, and for a control connection between the admin SVM and the backup application. After identifying the LIFs, you must verify that firewall and failover policies are set for the LIFs, and specify the preferred interface role.
Thanks for taking the time to respond, but I don't think I made my question clear. I am not looking for the differences between node scope and SVM scope. I am trying to understand why Commvault recommends a "data port (for example, e0A, or e0B), instead of using the management port (for example, e0M)" while NetApp recommends "Ensure that the firewall policy is enabled for NDMP on the intercluster, cluster-management (cluster-mgmt), and node-management (node-mgmt) LIFs" — see the NetApp and Commvault reference links above.
In the past, e0M was only a 100 Mbit port; nowadays it is 1 Gbit, but data ports (e0c, e0d, etc.) are at minimum 10 Gbit. So Commvault recommends using those data ports because of the supported speed.
Back to NetApp: you can route NDMP backup traffic through intercluster (the default), cluster-mgmt, or node-mgmt LIFs. You would host your intercluster LIFs on 10G ports, and perhaps your cluster-mgmt LIF as well when you are using VLANs. Node-management remains the last resort, since those LIFs are always hosted on e0M.
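To double-check which firewall policies actually allow NDMP, you can inspect them on the CLI; if the service is missing from the policy applied to your backup LIFs, it can be added. The policy name and allow-list below are illustrative only:

```
::> system services firewall policy show -service ndmp
::> system services firewall policy create -policy intercluster -service ndmp -allow-list 0.0.0.0/0
```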
Back to Commvault: communication to NetApp runs through cluster management (tunneling) or SVM management. NDMP backup traffic will always run through the configured LIFs (intercluster by default) on the node where the backed-up volume is hosted. So make sure those LIFs are reachable from the MediaAgent of the primary copy of the storage policy in use.
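A simple reachability test from the MediaAgent host is to probe the NDMP listener (TCP port 10000 by default) on the relevant intercluster LIF; the IP below is just an example:

```
nc -zv 192.0.2.11 10000
```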
NetApp's best practice is to use the intercluster LIF, which is also a data LIF. All replicated data ("snaps" from the ONTAP perspective) uses these LIFs to transfer data between systems. The document also says to use LIFs that are not part of a failover group — which, if configured properly, are your intercluster LIFs.
It is confusing, but I read Commvault's "data LIFs" and NetApp's intercluster LIFs as the same thing.
I have worked with Commvault, so maybe I can shed some more light on this.
During my CommVault days, I came across text/wording that wasn't always interpreted correctly and had to ask the documentation team to make corrections. They were quick to correct them as well.
You have made a valid and very good observation: the wording used in the CommVault documentation is wrong from the outset.
1) The name 'ONTAP' is used from 9.x onwards; before that it was called cDOT, and 'Data ONTAP' refers only to 7-Mode. In cDOT/ONTAP, data is served using LIFs, no longer via physical Ethernet ports. However, they used 'ONTAP' in capitals, which means they are referring to 9.x releases, yet they contradict that by using names like e0A and e0B, which are physical ports.
2) In cDOT/ONTAP we have node-scope and svm-scope (recommended): svm-scope is governed by '-preferred-interface-role'; once a LIF that matches the criteria is found, NDMP attempts to make the data connection.
The default value for this option is intercluster,data for data vServers, and intercluster,cluster-mgmt,node-mgmt for admin vServers.
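You can view and, if needed, override this per SVM (the SVM name here is a placeholder):

```
::> vserver services ndmp show -fields preferred-interface-role
::> vserver services ndmp modify -vserver cluster1 -preferred-interface-role intercluster,cluster-mgmt,node-mgmt
```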
3) The intentions were good, but not very well documented. For example, "Use e0A or e0B, instead of using the management port (for example, e0M)" only applies to 7-Mode filers. In CommVault/SnapProtect, you can do NDMP or IntelliSnap backups. For NDMP backups, when the NDMP client is added, it is usually by hostname (management IP), which usually resolves to e0M. However, they haven't given a proper reason why you shouldn't use e0M. The reason is that e0M is for management purposes and could be a 100 Mbps port, and when you start the NDMP backup, it uses e0M for both the control and data pipes. This could lead to performance issues. Therefore, they insist on using data ports — e0a or e0b, or for that matter anything except e0M.
As far as a 7-Mode NDMP client is concerned, you can use the e0M port to detect the NDMP host in CommVault — nothing wrong here. However, it is important that you further configure the data pair (DPIP) so that the NDMP data connection is established between 'e0a' and the backup application.
I usually suggest writing to the CommVault doc team to bring it to their attention. Even if you open a ticket with them, you can ask them to raise a concern with the documentation team.
I am a bit out of touch with NDMP stuff, but it's a very interesting subject.