ONTAP Discussions

cdot beginner questions

eladgotfrid1

Coming from 7-mode, I'm having a hard time grasping the cluster-mode concepts.

If I have 2 shelves and a FAS80xx switchless HA pair, am I better off dividing the disks into two aggregates, one per node, to take advantage of both nodes? Or does it not matter anymore with cdot?

 

If using NAS only, and I create a data SVM per node and configure a LIF on each, which data LIF should clients connect to? If there is 1 volume on the aggregate created on disks owned by node1 and 1 volume on the aggregate created on disks owned by node2, should clients mount the NFS volumes via the corresponding LIF? When do I use DNS load balancing? Is there such a thing as a cluster-wide data LIF?

 

Thanks!

8 REPLIES

Re: cdot beginner questions

AlexDawson

"It depends"

 

There's never a single right answer, and your second question feeds into the first. For a single workload - like a single VMware datastore, or a single CIFS workload with a lot of data for a small number of people - the benefits of one aggregate on one node may outweigh the performance benefits of splitting across both nodes.

 

However, if you have a single CIFS workload on there used by many people, using FlexGroups to spread the IO across more volumes may provide benefits. If you're doing CIFS for general NAS and NFS for VMware, you might want two aggregates, each on a different controller, with two SVMs, one for each workload.
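As a rough sketch of the FlexGroup option (the SVM and volume names here are hypothetical placeholders, and `-auto-provision-as flexgroup` needs ONTAP 9.2 or later):

```
::> volume create -vserver svm_cifs -volume fg_data -auto-provision-as flexgroup -size 40TB -junction-path /fg_data
```

This lets ONTAP lay out constituent volumes across the aggregates on both nodes while presenting a single namespace to CIFS clients.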

 

How are you planning on using the system?

Re: cdot beginner questions

eladgotfrid1

The purpose is to hold VMware VMDKs over NFS with many ESX hosts as clients.

Re: cdot beginner questions

pedro_rocha

Hello,

 

I would divide the disks between the controllers. You could have an ESX SVM that spans the aggregates on both nodes. That way you would have both controllers working (rather than forcing everything through one node), balanced workloads, and you more or less eliminate a single point of failure. I know there's failover, but some workloads can react in an undesirable way; in a node failure or panic, half of the datastores wouldn't feel anything, while the other half would feel a glitch.
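A minimal CLI sketch of that layout, assuming hypothetical node, aggregate, and SVM names (disk counts depend on your shelves and root aggregate needs):

```
::> storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 22
::> storage aggregate create -aggregate aggr2 -node cluster1-02 -diskcount 22
::> vserver create -vserver svm_esx -rootvolume svm_esx_root -aggregate aggr1
```

The SVM itself is cluster-wide: volumes placed on aggr1 and aggr2 are all presented under the same SVM and namespace, even though each aggregate is owned by one node.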

 

I would, though, give it all to one node if I needed an amount of contiguous space that could only be achieved with both shelves (bear in mind, of course, that you still need to create a root aggregate with a few drives on each node).

 

Regards,

Pedro

Re: cdot beginner questions

eladgotfrid1

What does "ESX SVM that could span the aggregates on both nodes" mean?

 

Re: cdot beginner questions

aborzenkov

@eladgotfrid1 wrote:

Coming from 7-mode I have some hard time grasping the cluster mode concepts.

If I have 2 shelves and a FAS80xx switchless HA pair, am I better off dividing the disks between 2 aggregates between the two nodes to take advantage of both nodes? Or does it not matter anymore with cdot?

 


That did not change. An aggregate belongs to a node, and all IO to volumes on that aggregate is served by that single node. To utilize both nodes you need two aggregates.

 

The primary difference is that a single "vFiler" (called an SVM here) may contain volumes from different aggregates/nodes and transparently present them via the same LIF. But this incurs the overhead of redirecting requests over the internal cluster interconnect if the LIF is not on the node that owns the aggregate.
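You can check whether a LIF is currently on the same node as the volumes it serves (direct path) or on another node (indirect path over the interconnect), for example (SVM name is a placeholder):

```
::> network interface show -vserver svm_esx -fields home-node,curr-node
::> volume show -vserver svm_esx -fields aggregate,node
```

If a volume's owning node differs from the node where the client's LIF lives, traffic for that volume takes the indirect path.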

 

So there is not much difference in planning, really. Coming from 7-Mode, you will probably feel more at home with two SVMs, each owning resources on one node.

Re: cdot beginner questions

paul_stejskal

If you only have 2 shelves, you'll more than likely become spindle-bound before you become CPU-bound. A single node does make sense; run it active/passive. Now if it's AFF, that's different!

Re: cdot beginner questions

eladgotfrid1

What happens during failover? Are both LIF IPs still accessible? Is it the same for both data and management LIFs?

Re: cdot beginner questions

cruxrealm

Your question is a bit loaded, and there are always multiple ways to do it:

 

Coming from 7-mode I have some hard time grasping the cluster mode concepts.

If I have 2 shelves and a FAS80xx switchless HA pair, am I better off dividing the disks between 2 aggregates between the two nodes to take advantage of both nodes? Or does it not matter anymore with cdot?

 

-> I suggest reading through TR-4597: VMware vSphere for ONTAP: https://docs.netapp.com/us-en/netapp-solutions/hybrid-cloud/vsphere_ontap_ontap_for_vsphere.html

 

-> If I were doing it, I would create 2 aggregates, each owned by a node, create 1 volume on each aggregate, assign them to the SVM, and then create the datastores on the volumes, assuming your system is active/active.
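A hedged sketch of those steps, continuing with placeholder names and sizes (aggr1/aggr2 owned by node1/node2 respectively):

```
::> volume create -vserver svm_esx -volume ds_node1 -aggregate aggr1 -size 2TB -junction-path /ds_node1
::> volume create -vserver svm_esx -volume ds_node2 -aggregate aggr2 -size 2TB -junction-path /ds_node2
::> vserver nfs create -vserver svm_esx -v3 enabled
```

Each ESX host would then mount /ds_node1 and /ds_node2 from the SVM as two NFS datastores, one served by each controller.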

 

If using NAS only, and if I create a data SVM per node and configure a LIF on it, what data LIF should the clients connect to? So if there is 1 volume on the aggr created on the disks owned by node1 and 1 volume on the aggr created on the disks owned by node2, should the clients mount the NFS volumes on the corresponding LIF? When do I use DNS load-balancing? Is there such a thing as a cluster data lif?

 

-> Clients connect to the data LIF assigned to the SVM. A typical setup has at least 1 LIF per node; you then DNS load-balance across them using your site DNS or NetApp's internal load balancing. Using the NetApp load balancing, however, needs a lot more configuration (e.g. DNS delegation, etc.): https://docs.netapp.com/us-en/ontap/pdfs/fullsite-networking-app_sidebar/ONTAP_____Network_Management_Documentation.pdf
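For example, one data LIF per node might look like this (addresses, ports, and names are hypothetical; on ONTAP 9.6 and later you would use `-service-policy default-data-files` in place of `-role`/`-data-protocol`):

```
::> network interface create -vserver svm_esx -lif nfs_lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port a0a-100 -address 192.168.1.11 -netmask 255.255.255.0
::> network interface create -vserver svm_esx -lif nfs_lif2 -role data -data-protocol nfs -home-node cluster1-02 -home-port a0a-100 -address 192.168.1.12 -netmask 255.255.255.0
```

With simple site-DNS round robin, you would publish one hostname with A records for both addresses, so clients spread across the two LIFs.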

