VMware Solutions Discussions
We are setting up a 2-node cluster running cDOT 8.3.1 for production, which will replicate data via SnapMirror to the DR site. As I don't have much experience running VMware over NFS, I need your input.
Here is what I'm proposing
1. Datastore volumes and Oracle DB volumes will have a dedicated 4 x 10GbE LACP link in RED (flow control off, jumbo frames; MTU 9000).
2. CIFS user home folder volumes / typical Windows app volumes will have another dedicated 4 x 10GbE LACP link in BLUE (flow control off, no jumbo frames; MTU 1500).
3. SnapMirror to the DR site will have dedicated 4 x 1GbE links in GREEN (paired in an ifgrp from each node). The reason for this is that I recall NetApp's best practice being multiple separate links rather than one LACP link made up of 4 x 1GbE ports, in order to maximize throughput. (A rough sketch of the ifgrp commands I'm planning is below.)
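For reference, here is roughly what I have in mind on the ONTAP side for the RED group. This is only a sketch: the node name cluster01-01, the member ports e0e-e0h, and the ifgrp name a0a are all placeholders, and the same would be repeated on the second node.

# RED: flow control off on the physical members, then a 4 x 10GbE LACP ifgrp
network port modify -node cluster01-01 -port e0e,e0f,e0g,e0h -flowcontrol-admin none
network port ifgrp create -node cluster01-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node cluster01-01 -ifgrp a0a -port e0e
network port ifgrp add-port -node cluster01-01 -ifgrp a0a -port e0f
network port ifgrp add-port -node cluster01-01 -ifgrp a0a -port e0g
network port ifgrp add-port -node cluster01-01 -ifgrp a0a -port e0h
# in 8.3 the MTU (9000 for RED) is applied through the broadcast domain
network port broadcast-domain create -broadcast-domain bd-red -mtu 9000 -ports cluster01-01:a0a,cluster01-02:a0a

BLUE would look the same with its own ifgrp and a broadcast domain at MTU 1500.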
Q1. Is this setup overkill, having two LACP links for 1 and 2?
Q2. As we add more datastore volumes in the future, is it better to have a dedicated LIF and IP for each datastore volume? I remember one of the NetApp docs suggesting this.
Q3. Is it recommended to have a separate VLAN for datastore and DB traffic? I'm planning to set up separate VLANs for VMware ESXi traffic and CIFS user folder / Windows data traffic.
We're not going to have a private IP for the datastore volumes, but I believe separating the VMware traffic from the CIFS traffic as in 2 should be sufficient. Any gotchas you can suggest or recommend?
Hello @nasmanrox,
I think it's important to remember that best practices are recommendations, not laws. Make sure you are making the right decisions for your infrastructure and, more importantly, your application requirements, based on real needs, not just because it's what the best practices say.
Hope that helps. Please feel free to reach out if you have any questions.
Andrew
Thanks for your input. What are your thoughts on my SnapMirror link setup to the DR site? Does 2 x 1GbE LACP sound about right? That is, two 2 x 1GbE LACP links to DR instead of one LACP link made up of 4 x 1GbE ports.
If you don't have more than 1Gb of bandwidth between the primary and DR sites, then I don't think it will have a significant impact. Just make sure it's set up for redundancy... it could be as simple as ensuring the broadcast domain has 2+ ports (connected to different switches) on each node and the LIF is assigned to the correct failover group, so that if one link fails the LIF will move to the other.
LACP would accomplish the same thing, with the added benefit of spreading the traffic across more than one link... remember, though, that a single flow from a primary node intercluster LIF IP address to a DR node intercluster LIF IP address will always hash to a single link in the LACP aggregate, so a single flow will only get the maximum throughput of a single link. Assuming the DR site is an HA pair and the volumes are equally spread across the nodes, it would approximately balance the connections across the links. Keep in mind, though, that it balances connections, not the amount of traffic... e.g. if one volume is 1TB in size and another is 1GB, they may use different links (assuming the DR volumes are on different nodes), but one of those connections will be busy for a lot longer.
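To make the redundancy-only option concrete, here's a minimal sketch; the cluster name, port names, and addresses are made up, and each intercluster LIF can fail over to any other port in its broadcast domain:

# two 1GbE ports per node, cabled to different switches, in one broadcast domain
network port broadcast-domain create -broadcast-domain bd-ic -mtu 1500 -ports cluster01-01:e0i,cluster01-01:e0j,cluster01-02:e0i,cluster01-02:e0j
# one intercluster LIF per node
network interface create -vserver cluster01 -lif icl01 -role intercluster -home-node cluster01-01 -home-port e0i -address 10.1.1.11 -netmask 255.255.255.0
network interface create -vserver cluster01 -lif icl02 -role intercluster -home-node cluster01-02 -home-port e0i -address 10.1.1.12 -netmask 255.255.255.0
# confirm each LIF's failover targets include the port on the other switch
network interface show -role intercluster -failover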
Andrew
One more question. Is it recommended to have separate SVMs for VMware datastore volumes and CIFS user home folder volumes? Or is it OK to share the same SVM as long as we use different VLANs?
My personal opinion is that SVMs are for separation of privileges when you're delegating storage management tasks. For example, if the team managing the CIFS/SMB shares is separate from the team managing the VMware datastores, then separate their permissions using SVMs. If it's the same team managing all of it, or storage management isn't delegated by the storage admin team, then there's no significant reason for multiple SVMs.
That being said, there are some edge cases... for example, if you want to use SVM DR and have the different data types fail over independently, then it certainly makes sense to use multiple SVMs.
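If it helps, the SVM DR flavor looks roughly like this on 8.3.1. This is a sketch only; the SVM names and the schedule are placeholders:

# on the DR cluster: a stub SVM that receives the source SVM's config and volumes
vserver create -vserver svm_vmware_dr -subtype dp-destination
# SVM-level SnapMirror; identity-preserve true carries the SVM configuration across
snapmirror create -source-path svm_vmware: -destination-path svm_vmware_dr: -type DP -identity-preserve true -schedule hourly
snapmirror initialize -destination-path svm_vmware_dr: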
Andrew
Delegation and SVM DR are good reasons to create multiple SVMs. Security best practices and IPspaces, too.
As datastores are usually on a private network (not routed) and CIFS shares are accessible to users, I would not put both on a single SVM. In fact, with NFS, by default the protocol is reachable from every LIF attached to an SVM; only the export policy prevents users from accessing the data. With iSCSI, the reachable IP addresses are chosen by the admin.
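To illustrate, the export policy is what does the real fencing on NFS; a sketch with made-up names and subnets:

# only the ESXi NFS vmkernel subnet can reach the datastore volume
vserver export-policy create -vserver svm1 -policyname esxi_only
vserver export-policy rule create -vserver svm1 -policyname esxi_only -clientmatch 192.168.10.0/24 -protocol nfs -rorule sys -rwrule sys -superuser sys
volume modify -vserver svm1 -volume ds01 -policy esxi_only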
I think there are lots of reasons for multiple SVMs (see the sketch after this list):
Multitenancy
Delegation
Security
SVM limits
Features (e.g. Infinite Volume)
SVM DR
Ease of use, readability, organisation
...
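For the multitenancy and security points, a sketch of a second SVM in its own IPspace; all names and the VLAN port are hypothetical:

# a dedicated IPspace isolates the tenant's network, even with overlapping subnets
network ipspace create -ipspace ips_tenant1
network port broadcast-domain create -broadcast-domain bd-tenant1 -ipspace ips_tenant1 -mtu 1500 -ports cluster01-01:a0b-210,cluster01-02:a0b-210
vserver create -vserver svm_tenant1 -ipspace ips_tenant1 -rootvolume root_t1 -aggregate aggr1 -rootvolume-security-style ntfs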
You should cable each filer to each switch in an LACP config, as opposed to each filer being directly connected to a single switch in a port channel.
4 x 10GbE LACP for CIFS is probably overkill, but if you have the ports, then why not...
A dedicated LIF per volume is advantageous; however, it can become cumbersome to manage when you have lots of volumes.
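If you do go that route, it's one extra data LIF per datastore. A sketch (the SVM, ports, and addresses are examples), with each datastore then mounted on the ESXi side via "its" IP:

# one NFS LIF per datastore volume keeps each datastore on a predictable home port
network interface create -vserver svm1 -lif lif_ds01 -role data -data-protocol nfs -home-node cluster01-01 -home-port a0a-301 -address 192.168.10.11 -netmask 255.255.255.0
network interface create -vserver svm1 -lif lif_ds02 -role data -data-protocol nfs -home-node cluster01-02 -home-port a0a-301 -address 192.168.10.12 -netmask 255.255.255.0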
You should always at least separate protocols with VLANs. Separating datastore and DB traffic is better practice than not doing so.
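On the ONTAP side that's just tagged VLAN ports on the same ifgrp; the VLAN IDs 301/302 here are examples:

# one VLAN for the datastores, another for the DB network, both on ifgrp a0a
network port vlan create -node cluster01-01 -vlan-name a0a-301
network port vlan create -node cluster01-01 -vlan-name a0a-302
network port broadcast-domain create -broadcast-domain bd-ds -mtu 9000 -ports cluster01-01:a0a-301,cluster01-02:a0a-301
network port broadcast-domain create -broadcast-domain bd-db -mtu 9000 -ports cluster01-01:a0a-302,cluster01-02:a0a-302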
I made some changes to the Visio diagram. Does this look good?
@RPHELANIN wrote: You should cable each filer to each switch in an LACP config, as opposed to each filer being directly connected to a single switch in a port channel.
Looks better 🙂