
FAS2040 NFS CIFS design

TEST_DESIGNS

Hello All,

Just getting a dual-controller FAS2040 and a DS4243 shelf online, and I'm curious what network/port setup you all would recommend. The plan is to use NFS to 3 VMware hosts and possibly CIFS for data, and I'm trying to find the optimal balance between ports available for traffic and ports reserved for failover. The downside to CIFS for data is that those ports would then have to run to the house network rather than the isolated SAN network. Thoughts? I have 2 stand-alone (not stacked) switches set aside that do support LACP with jumbo frames. Just looking for basic setup ideas.

Let me know if you need more information.

Thanks.


nagunixharbor

Matt,

It seems to me that a few critical details are missing regarding what you're trying to do. You didn't mention the port speed of your switches. You mentioned VMware, and 1 Gb ports are the bare minimum for vMotion. (You also didn't mention whether your ESXi hosts are standalone or clustered.) If you have 1 Gb ports, you might have to vif a few ports together to get the vMotion performance your environment requires. Then there are the requirements of your production network, and you'll need a few ports for that as well. So switch/port speed is a big factor if you're going with NFS for VMware, and VMDK size will play into it too. Since the 2040 doesn't come with a lot of NICs, you might have to sacrifice some redundancy.

Consulting the VMware best practices for NetApp NFS would probably be a huge help in your planning.

Good luck

TEST_DESIGNS

Hey, thanks for the reply. I wasn't sure how much detail NetApp admins would need in order to comment on general best practices, and I didn't want to make my post overly long. I have read through the VMware document for NetApp NFS; what I'm really after is general NetApp config info. If you have the time and the knowledge, I'd appreciate your input.

The two SAN switches are 24-port 1 Gb switches with jumbo frame and LACP capability (Juniper EX2200, if you care to know). The VMware servers are indeed clustered, and they have 8x 1 Gb Ethernet ports to split between storage and the house network. I was definitely planning to vif a few ports. I'm just asking what NetApp-specific setup you would use to get the most bandwidth possible to the cluster while still providing decent speed for CIFS and redundancy against switch or controller failure. It was made quite clear to me over the past 3 months that the 2040 and its 8 data ports would handle this setup.

The production network is also 1Gb, and there are plenty of ports available.

Just wondering, with the 8 ports available on the 2040: which ports would you vif, which switch would you connect them to, and how would you configure failover on the 2040?

Thanks for your time.

radek_kubka

Hi Matt,

Have you seen this thread: https://communities.netapp.com/message/47811#47811 ? There is a lot of info there.

At a high level, I would probably create two vifs per controller, one for NFS and one for CIFS, each containing two ports.
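
For what it's worth, here is a rough Data ONTAP 7-Mode sketch of that layout on one controller. The port names (e0a-e0d), vif names, and IP addresses are made up for illustration, and on 8.x 7-Mode the command family is ifgrp rather than vif, so treat this as a starting point rather than a recipe:

    # /etc/rc (per controller) - example only, hypothetical names and addresses
    # two 2-port LACP vifs: one for NFS (SAN network), one for CIFS (house network)
    # each LACP vif is cabled to a single switch here
    vif create lacp nfs_vif -b ip e0a e0b
    vif create lacp cifs_vif -b ip e0c e0d
    # jumbo frames on the NFS vif only; the partner option lets the address
    # fail over to the matching vif on the other head during controller takeover
    ifconfig nfs_vif 192.168.10.11 netmask 255.255.255.0 mtusize 9000 partner nfs_vif
    ifconfig cifs_vif 10.10.10.11 netmask 255.255.255.0 partner cifs_vif

You would need matching LACP (802.3ad) port channels and MTU settings on the switch side, and the partner entries assume the other controller uses the same vif names.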

It's a pity, though, that your NFS switches are not stacked, as you won't be able to use cross-stack LACP.
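
A common 7-Mode pattern for non-stacked switches (again just a sketch with hypothetical names) is a second-level single-mode vif: one LACP vif per switch, with a single-mode vif layered on top, so a switch failure simply flips the active leg:

    # hypothetical example - one LACP vif per switch, single-mode vif on top
    vif create lacp sw1_vif -b ip e0a e0b
    vif create lacp sw2_vif -b ip e0c e0d
    vif create single nfs_vif sw1_vif sw2_vif

With only four data ports per 2040 controller that uses everything for one network, so if you want both NFS and CIFS the simpler fallback is a plain two-port single-mode vif with one leg to each switch: you keep switch redundancy, but only one link is active at a time.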

Regards,

Radek
