ONTAP Hardware

FAS2040 Network Configuration

genisysau

Hi

Just wondering if anyone can recommend or share any experience with regard to best practices for configuring the storage networking on a FAS2040 (dual controller).

We are looking to run CIFS, NFS and iSCSI on a FAS2040 within a virtual environment. Filer 1 will be running predominantly iSCSI traffic, whilst Filer 2 will run NFS and CIFS shares. Management of the filers will be administered via System Manager.

The NFS traffic will only be used for our virtual environment to store our virtual disks, CIFS will be used for general Windows shares / home folders and iSCSI traffic for our Microsoft Exchange databases (in order to use SnapManager for Exchange).

What we want to achieve is adequate throughput for each of these protocols whilst segregating the iSCSI traffic from the NFS/CIFS and management networks. Redundancy also needs to be kept in mind so that a failing physical switch or port doesn't take us down. I've read that we can create VIFs and sVIFs and use LACP as part of the design, but I'm not too sure how this can be achieved, as my networking expertise is basic at best.

What would happen in the event of a partner takeover with respect to the network configuration?

Any thoughts, recommendations or guidance would be greatly appreciated.

9 REPLIES

radek_kubka

Hi,

Just a quick thought:

In a VMware environment I would rather group iSCSI & NFS together (as my IP storage network) & separate CIFS out as a file-sharing protocol.

Regards,
Radek

mayumbarob

Hello genisysau,

The FAS2040 is a good workhorse. We are only running CIFS, but we have over 13,000 users. We procured two of these as an upgrade and, believe me, we are very impressed with them; I have them set up 17 miles apart over dark fiber in replication mode running synchronous SnapMirror.

There are several parts to your question, and I don't have all the answers.

VIFs are really easy to use once you understand the overall picture. The beauty of the FAS2040 (dual controller) is that it has a total of 8 data ports, four per head, plus two management ports, so this will help with what you are trying to do.

LACP is a link aggregation protocol, which really means using multiple network ports in parallel to increase the size of your network pipe and/or create redundancy for higher availability.

BUT YOUR NETWORK SWITCH HAS TO SUPPORT IT.

Before you complicate your configuration, just ask your network guys to provide simple static port aggregation on the switch, without any negotiation protocol like LACP, and let the ONTAP VIF code control all of that, because the commands are really straightforward on the NetApp side.

Just ensure that all interfaces to be included in the VIF are configured to the down status, which means that your management port has to be operational first, unless you have direct console access to the units.

You can test different VIF configurations as you decide how to separate the different protocol traffic (CIFS, NFS and iSCSI). The vif create command is not persistent across reboots unless it is added to the /etc/rc file, so once you are satisfied with the config, run setup to lock it into /etc/rc. A rough example is below.
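
Something like the following (just a sketch; the VIF name, ports and address are placeholders, and exact syntax may vary slightly between Data ONTAP 7.x releases):

    ifconfig e0b down
    ifconfig e0c down
    vif create multi vif_data -b ip e0b e0c
    ifconfig vif_data 192.168.20.10 netmask 255.255.255.0

Once you are happy with it, put the equivalent vif create and ifconfig lines into /etc/rc (or re-run setup) so the config comes back after a reboot.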

Some helpful stuff:

http://now.netapp.com/NOW/knowledge/docs/ontap/rel733/html/ontap/nag/GUID-708701F9-0ED1-4B0D-A1D8-8F0F4DEED03B.html

Ours is an enterprise-class solution; for even higher redundancy, the traffic goes across two switches:

Your network guys have to be fully on board with this solution, otherwise you can bring the entire network down with a broadcast storm: a state in which a message broadcast across a network results in even more responses, each response results in still more responses, and it snowballs. A severe broadcast storm can block all other network traffic, resulting in a network meltdown. Broadcast storms can usually be prevented by carefully configuring the network to block illegal broadcast messages.

http://now.netapp.com/NOW/knowledge/docs/ontap/rel733/html/ontap/nag/GUID-A293832B-C740-4A60-9FB0-0353676EE2A6.html

During your testing don't forget to add the route and gateway commands; that's not common knowledge. For example:
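
With a placeholder gateway address, adding a default route from the filer console looks something like this, and you can check the routing table with netstat:

    route add default 192.168.10.1 1
    netstat -rn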

Hope this helps,

Robert M.

genisysau

Hi

Thanks for the reply, and I appreciate your feedback. Further to your comments, I have the following questions.

I'm still a little unsure of the design for the filers from the network configuration side. I spoke with the network guys and they have advised that the switches we use support LACP, trunking and VLANs without an issue.

To give you a little more idea as to what we are doing, a summary is as follows:

We have the following as part of the infrastructure solution:


1 x IBM BladeCenter with 8 gigabit ethernet ports per blade server

8 x L2 Ethernet Switches in the Blade Chassis (IBM Server Connectivity Module)

1 x FAS2040 Dual Controller with 4 gigabit ethernet ports per controller


We are looking to use NFS shares primarily for our Citrix XenServer Storage Repositories, CIFS for general Windows shares & home directories, and iSCSI for our Microsoft Exchange environment.


Filer 1 (Controller A) will host the iSCSI LUN and Filer 2 (Controller B) will host the NFS and CIFS shares.


At the moment, as far as I understand, my thoughts around the network design are as follows:

e0a - Single Interface for CIFS & Management

e0b & e0c - MultiMode VIF for NFS traffic only

e0d - Single Interface for iSCSI traffic only

or

VIF0 (e0a & e0b) - MultiMode VIF for Management and NFS

VIF1 (e0c & e0d) - MultiMode VIF for iSCSI and CIFS traffic

or

VIF0 (e0a, e0b, e0c, e0d) - Multimode VIF to support all protocols

Typically, I would split the iSCSI traffic from the NFS and CIFS due to its chatty nature, or does that not matter with the NetApp filers?


Based on the above information, what would be the best way to design the networking? It could be through multiple VIFs, sVIFs, VLANs, aliases, etc. Can we load-balance traffic between both filers, or are they still considered two separate filers with separate networking?

Should the exact same network configuration be implemented on both filers (in case of partner takeover), even though we won't be hosting any CIFS or NFS shares on Filer 1 at this stage, or should we configure each filer's network according to its functional role and the protocols it will be serving?

Do you recommend turning on Jumbo Frames for iSCSI and/or NFS?

If you have any experience with this same or a similar solution (VMware ESX or Citrix XenServer) or can assist with any part of it, it will be greatly appreciated.

If it is easier to illustrate using a diagram with a sample set of IP addresses for segregating the storage network from the production network, that would be very useful.


Thanking you in advance.

aborzenkov

Should the exact same network configuration be implemented on both filers (in case of partner takeover), even though we won't be hosting any CIFS or NFS shares on Filer 1 at this stage, or should we configure each filer's network according to its functional role and the protocols it will be serving?


For takeover to work, the partner must have access to the same VLANs, and interfaces in the same VLANs must be configured to take over the partner's interfaces. The configuration need not be physically identical, but IMHO it really simplifies things to keep it the same on both heads.

As for the other questions - unfortunately the answer really is "it depends". I would avoid using single interfaces, as this just adds another SPOF (it can be mitigated by configuring negotiated failover, NFO, though). Easiest to manage is to just configure all interfaces as a single LACP VIF, but this could result in uneven traffic distribution. In any case do not expect to find a silver bullet - whatever you choose, monitor your configuration for overload and possible bottlenecks. A small example of the takeover-partner piece is below.
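
As a rough 7-mode illustration (the VIF/VLAN name and address here are placeholders), the partner keyword on ifconfig is what tells an interface which partner interface it should take over; each head carries the mirror-image line for the other head:

    ifconfig vif0-10 192.168.10.11 netmask 255.255.255.0 partner vif0-10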

mayumbarob

Hello genisysau,

Good questions, and a good add on from aborzenkov, thanks!

You are trying to achieve a lot of different service options with meager resources, and you want to service iSCSI, NFS and CIFS requests.

The FAS2040 has only 4 data NICs per controller; some of the above-mentioned configs will work, but without NIC redundancy you may incur downtime, which negates the purpose of a high-availability dual-controller system like this one.

When you use VIFs you still have to assign the VIF an IP address. If that's correct, then I believe you will have 3 separate IPs per controller, for the iSCSI, NFS and CIFS services, and you only have 4 NICs, so you need to go back to the drawing board and plan this right.

If you decide to use vif create lacp, make sure that LACP is enabled on your network switch. This has never worked well for us, because at any one time only one port was channelling traffic.

You can use vif create multi with the IP load-balancing option; that's what I recommend. For example:
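
Just a sketch with placeholder names - the -b ip flag selects IP-based load balancing on a multimode VIF:

    vif create multi vif1 -b ip e0a e0b e0c e0d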

To achieve high network throughput, jumbo frames have been enabled in many NetApp benchmark illustrations; the advantage is less CPU overhead because there are fewer headers to process.

Jumbo frames are most legitimately deployed between trunks rather than in client-server setups; your clients' and intermediate routers' NICs will have to be configured to use jumbo frames, and if they are not, communication will drop back to the client's frame size.
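
If you do go end to end with jumbo frames (switch ports, hosts and filer all at the same MTU), the filer side is just the mtusize option on ifconfig; for example, with a placeholder interface name:

    ifconfig vif1-20 mtusize 9000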

You cannot load-balance or bond the NICs across both filers; these are two separate filers with different identities and the same interface names (i.e. e0a, e0b, e0c and e0d), designed to be able to take over each other's roles (please clarify with NetApp), so the configurations should be the same.

I recommend going with Citrix desktops on ESX hosts instead of Citrix XenServer; we discovered that our full blade chassis could only give us 96GB of RAM. For 1000 desktops at 2GB of RAM each, VMware can oversubscribe to 140GB; XenServer could not.

dennis_von_eulenburg

Ok, here are my solutions (a rough command sketch follows each one):

1. Configure two multimode VIFs: one over the interfaces e0a+e0c (vif1) and one over e0b+e0d (vif2)

2. Create a single-mode VIF (vif3) over the new interfaces vif1 and vif2

3. Configure 3 VLANs on vif3: vif3-1, vif3-2 and vif3-3

4. Give each VLAN an IP address: one for iSCSI, one for NFS and one for CIFS

5. Connect e0a and e0c to bladeswitch 1 and e0b and e0d to bladeswitch 2 (don't forget to configure your VLANs and set the ports to tagged)

Pro: Network redundancy

Con: only 2 of the 4 ports are active at any one time
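
Roughly, for the first option, on one head (placeholder VLAN IDs and addresses, say VLAN 10 = iSCSI, 20 = NFS, 30 = CIFS; 7.3-style vif syntax):

    vif create multi vif1 -b ip e0a e0c
    vif create multi vif2 -b ip e0b e0d
    vif create single vif3 vif1 vif2
    vlan create vif3 10 20 30
    ifconfig vif3-10 192.168.10.10 netmask 255.255.255.0 partner vif3-10
    ifconfig vif3-20 192.168.20.10 netmask 255.255.255.0 partner vif3-20
    ifconfig vif3-30 192.168.30.10 netmask 255.255.255.0 partner vif3-30

The same lines go into /etc/rc so they survive a reboot.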

or

1. Configure one multimode VIF over the interfaces e0a+e0b+e0c+e0d (vif1)

2. Configure 3 VLANs on vif1: vif1-1, vif1-2 and vif1-3

3. Give each VLAN an IP address: one for iSCSI, one for NFS and one for CIFS

4. Connect e0a, e0b, e0c and e0d to bladeswitch 1

5. Set the filer option cf.takeover.on_network_interface_failure to on

Pro: 4 usable ports

Con: Cluster takeover on network failure
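
Roughly, for the second option (same placeholder VLAN IDs and addresses as above):

    vif create multi vif1 -b ip e0a e0b e0c e0d
    vlan create vif1 10 20 30
    ifconfig vif1-10 192.168.10.10 netmask 255.255.255.0 partner vif1-10
    ifconfig vif1-20 192.168.20.10 netmask 255.255.255.0 partner vif1-20
    ifconfig vif1-30 192.168.30.10 netmask 255.255.255.0 partner vif1-30
    options cf.takeover.on_network_interface_failure on

If I remember right, you also flag the interfaces that should trigger takeover with the nfo option of ifconfig, but check the Network Management Guide for your release.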

RAFAEL_GUEDES

Hi Dennis,

I'm planning a deployment like the one you suggested: aggregating all 4 interfaces of one head in an LACP VIF, but I have two Cisco 2960s in a stack, with two interfaces from filer1 going to switch1 and two interfaces from filer1 going to switch2.

One question about your second suggestion:

You turned on cf.takeover.on_network_interface_failure because you have two standalone switches, right? If I have switches which support cross-stack EtherChannel, this option is not necessary, right?

Tks!

ssibenetapp

Hello mayumbarob

Is it possible to replicate two FAS2040s with FC? I have read that it is not possible, because you need an extra I/O card that is not available for this model...

thanks

mayumbarob

genisysau, I forgot: for cluster failover, both filers should be licensed to run all of your protocols, CIFS, NFS …..
