
Netapp zoning and DR

Hello, I am new to NetApp and would like to get a few clarifications.

 

We are doing a DR test for a customer, and the vCenter and VMs will be built at the time of the test. There will be 4 ESX hosts to be recovered during the test. The LUNs are SnapMirrored.

The process would be to break the replication.

Questions:

1) Can I create one igroup and map all the LUNs to that igroup, since the ESX hosts need to see all the LUNs?

 

 

2) What is the command on the NetApp side to see the WWPNs of the ESX hosts once they are zoned?

 

3) Also, how should I zone the SVM LIFs to the ESX hosts? I would appreciate some sample steps.

 

Kindly assist, as I am still learning NetApp.

 

Thanks

Re: Netapp zoning and DR

Here is a link to the SAN Administration Guide for ONTAP 8.3. You tagged this as ONTAP 8; not much has changed between the versions.

https://library.netapp.com/ecm/ecm_download_file/ECMP1636035

 

Here is a VMware vSphere with NetApp guide:

https://www.netapp.com/us/media/tr-4597.pdf

 

How to configure VMware vSphere 6.x on Data ONTAP 8.x (there are some very important configurations in this):

https://kb.netapp.com/support/s/article/ka31A00000014F7QAI/How-to-configure-VMware-vSphere-6-x-on-Data-ONTAP-8-x?language=en_US

 

 

To answer a couple of your questions:

I create one igroup and put all of my ESX cluster host initiators in it: one igroup per ESX cluster that shares the storage. Four hosts with two HBA ports each would mean 8 initiators in the igroup. This ensures that all LUNs mapped to the igroup have the same LUN ID across every host. You can create a separate igroup for each host, but then you have to manage the LUN IDs individually.
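As a rough sketch (the vserver name, igroup name, initiator WWPNs, and LUN path below are all made up for illustration), building that single shared igroup on the DR side would look something like:

cluster::> lun igroup create -vserver svm_dr -igroup esx_dr_cluster -protocol fcp -ostype vmware -initiator 21:00:00:24:ff:00:00:01,21:00:00:24:ff:00:00:02

cluster::> lun igroup add -vserver svm_dr -igroup esx_dr_cluster -initiator 21:00:00:24:ff:00:00:03

cluster::> lun map -vserver svm_dr -path /vol/dr_vol01/lun01 -igroup esx_dr_cluster -lun-id 0

Repeat the igroup add for the remaining initiators (8 total for 4 dual-port hosts) and the lun map for each LUN.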

 

You can use these commands to see some of the initiator, target, LIF, and igroup info. Go back through the SAN Administration Guide and follow all best practices.

cluster::> net int show -vserver <vserverName>   (shows the target WWPNs of the FCP LIFs)

cluster::> wwpn show                             (shows all the initiators logged in)

cluster::> vserver fcp initiator show            (shows the initiators logged in per target; useful to verify zoning is correct)

cluster::> vserver fcp interface show            (also shows the target WWPNs of the FCP LIFs and which node/port they are on)

cluster::> fcp wwpn-alias set                    (assigns aliases to each of your initiators)

cluster::> igroup show -instance                 (shows more verbose details about your igroups, including whether the initiators are logged in)

 

 

Zone each host with 1 initiator + all SVM FC target LIFs in each zone. If you have 4 hosts, then you should have 4 zones. Soft (WWPN-based) zoning is best. Make sure you zone the host initiators with the SVM LIFs and NOT the physical target ports. Do you have 2 FC fabrics or only 1? Most have 2, so you would have to create a zone on both fabrics for each host.
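For example, on a Brocade fabric the per-host zone could be created with something like this (the alias, zone, and config names are hypothetical, and this assumes aliases for the SVM LIF WWPNs already exist):

switch:admin> alicreate "a_host1_p1", "21:00:00:24:ff:00:00:01"

switch:admin> zonecreate "z_host1_svm1", "a_host1_p1; a_svm1_fc_lif1; a_svm1_fc_lif3"

switch:admin> cfgadd "cfg_fabric_a", "z_host1_svm1"

switch:admin> cfgsave

switch:admin> cfgenable "cfg_fabric_a"

Then repeat on the other fabric with the host's second initiator and that fabric's SVM LIFs.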

Re: Netapp zoning and DR

Thank you so much for the information.

 

One more question:

 

You mentioned to "Zone each host with 1 initiator + all SVM FC target lifs in each zone".

So can I have each ESX host zoned to 8 SVM FC targets?

Is there a way to check from the Brocade switch end to determine the number of FC targets present?

 

The reason being: initially I was told there were 4 aliases created in the switch:

a_DRNACLUS01N01_0c; a_DRNACLUS01N01_0d;

a_DRNACLUS01N02_0c; a_DRNACLUS01N02_0d

 

Now I see aliases for two more nodes present:

 

a_DRNACLUS01N03_0c; a_DRNACLUS01N03_0d;

a_DRNACLUS01N04_0c; a_DRNACLUS01N04_0d

 

Could N03 & N04 be part of a different SVM? Only one SVM would be part of the test.

 

I hope my question is clear.

 

Regards

 

Re: Netapp zoning and DR

 

If your SVM spans nodes 1, 2, 3, and 4, then you should have FC LIFs configured for those 4 nodes, homed to ports 0c and 0d. Don't zone the hosts to ports 0c and 0d directly; zone them to the logical interfaces for that SVM. You can zone them into all of your node LIFs if you wish to move the volumes/LUNs between the nodes. ONTAP 8.3.2 uses NPIV and presents virtual WWPNs for each of the SVMs on those ports. I have moved ESXi LUNs (datastores) non-disruptively many times between all of my nodes.
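Those non-disruptive moves are just standard volume moves, something like this (volume and aggregate names are hypothetical):

cluster::> volume move start -vserver svm1 -volume dr_datastore01 -destination-aggregate aggr_node2_data

cluster::> volume move show

Because the host is zoned to LIFs on every node, the paths simply shift to the new owning HA pair and the datastore stays online.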

 

 

What is the output of your ::> fcp interface show ?

 

 

I have a cluster with 4 nodes and an SVM that spans all 4 nodes. Each node has 2 FCP ports, 0c and 0d, so I have a total of 8 Fibre Channel ports. My SVM has 8 Fibre Channel LIFs, homed across those ports. The 8 ports are split between 2 fabrics, so I have 4 ports on each fabric. My hosts have a dual-port FC card attached to each fabric. So each host has a zone on each fabric with the host initiator plus 4 SVM FC LIFs.

 

Fabric A Zone = 

a_host1_p1       (port 1 on the ESXi host)

a_svm1_fc_lif1   (homed to node 1 0c)

a_svm1_fc_lif3  (homed to node 2 0c)

a_svm1_fc_lif5   (homed to node 3 0c)

a_svm1_fc_lif7  (homed to node 4 0c)

 

Fabric B Zone =

a_host1_p2       (port 2 on the same ESXi host)

a_svm1_fc_lif2   (homed to node 1 0d)

a_svm1_fc_lif4   (homed to node 2 0d)

a_svm1_fc_lif6   (homed to node 3 0d)

a_svm1_fc_lif8   (homed to node 4 0d)
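On a Brocade switch, those two zones could be created per fabric with something like this (the zone names are hypothetical):

FabricA:admin> zonecreate "z_host1_a", "a_host1_p1; a_svm1_fc_lif1; a_svm1_fc_lif3; a_svm1_fc_lif5; a_svm1_fc_lif7"

FabricB:admin> zonecreate "z_host1_b", "a_host1_p2; a_svm1_fc_lif2; a_svm1_fc_lif4; a_svm1_fc_lif6; a_svm1_fc_lif8"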

 

This allows the host to see all 8 paths. Repeat this for all hosts. Make sure all of those LIFs are in the same FC port set. ONTAP will use SLM (Selective LUN Mapping), so the host will only see the paths on the owning HA pair as Active/Optimized paths. Look up SLM in the SAN Admin Guide.
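A sketch of the port set binding (the port set, LIF, and igroup names are hypothetical):

cluster::> portset create -vserver svm1 -portset ps_svm1_fc -protocol fcp -port-name svm1_fc_lif1,svm1_fc_lif2,svm1_fc_lif3,svm1_fc_lif4,svm1_fc_lif5,svm1_fc_lif6,svm1_fc_lif7,svm1_fc_lif8

cluster::> lun igroup bind -vserver svm1 -igroup esx_cluster1 -portset ps_svm1_fc

With the igroup bound to the port set, LUNs mapped to that igroup are only advertised on those LIFs, and SLM further limits reporting to the owning HA pair.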