Network and Storage Protocols

Disk assignment with multipath HA on a 2246

beaumontj

I am setting up our 2246-2 and have a couple of questions regarding disk assignment.  Per the documentation, I used the System Setup tool to discover and initially set up my system.

After setup I ran "disk show -v".  The results show that all but 6 of the 72 disks have the same owner: the A node in the HA pair.  Is this common and/or correct?  Should the disks be divided up between the two controllers?  If possible and best practice, I would like to let ONTAP auto-assign disks.

My next question is regarding the network interfaces hosting the pair.  When running setup, my 10Gb card was not listed as an available option, so I went ahead and configured the pair with the standard e0a.  Since then I have configured my 10Gb interfaces and would like to move the HA pair to them.  Is there documentation regarding this, or can someone provide some guidance?

Thanks!


CCOLEMAN_

Hello,

Since both NetApp controllers are set up as "active-active", it's typically best to divide the disks evenly to utilize both controllers. That said, it's also feasible to assign most of the disks to one controller and make larger aggregates to yield more space and IOPS.

A couple of questions:

Are they all the same type of disks?

Did you want to evenly divide the workloads on both controllers?

Are your controllers capable of handling the majority of your workload on one controller?

In response to the network question,

Can you SSH (e.g., with PuTTY) into one of the controllers, run "ifconfig -a", and paste back the results?

Thanks!

beaumontj

Are they all the same type of disks?

- Yes, 600 GB SAS.

Did you want to evenly divide the workloads on both controllers?

- I figured that would be best, but in talking with NetApp support they recommended letting ONTAP assign them.  The current assignment is the result of auto-assign.

Are your controllers capable of handling the majority of your workload on one controller?

- We are going to use a large portion for virtual disks, so maybe I should divide them up a bit.  I was hoping auto-assign would follow best practices, but I understand the configuration really depends on usage/situation.

Here is "ifconfig -a" from the A node.  I need to move from e0a to e1a or e1b.

e0a: flags=0x1f4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500

          inet 156.123.149.186 netmask 0xffffff00 broadcast 156.123.149.255

          partner inet 156.123.149.187 (not in use)

          ether 00:a0:98:30:b9:28 (auto-1000t-fd-up) flowcontrol full

e0b: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500

          ether 00:a0:98:30:b9:29 (auto-unknown-down) flowcontrol full

e0c: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500

          ether 00:a0:98:30:b9:2a (auto-unknown-down) flowcontrol full

e0d: flags=0x170c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500

          ether 00:a0:98:30:b9:2b (auto-unknown-down) flowcontrol full

e1a: flags=0x1f4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,LRO> mtu 1500

          inet 156.123.149.188 netmask 0xffffff00 broadcast 156.123.149.255

          ether 00:a0:98:1a:db:e8 (auto-10g_twinax-fd-up) flowcontrol full

e1b: flags=0x1f4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,LRO> mtu 1500

          inet 156.123.149.189 netmask 0xffffff00 broadcast 156.123.149.255

          ether 00:a0:98:1a:db:e9 (auto-10g_twinax-fd-up) flowcontrol full

e0M: flags=0x2b0c866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM,MGMT_PORT> mtu 1500

          ether 00:a0:98:30:b9:2d (auto-100tx-fd-up) flowcontrol full

e0P: flags=0x2b4c867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,ACP_PORT> mtu 1500 PRIVATE

          inet 192.168.1.158 netmask 0xfffffc00 broadcast 192.168.3.255 noddns

          ether 00:a0:98:30:b9:2c (auto-100tx-fd-up) flowcontrol full

lo: flags=0x1b48049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160

          inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1

          ether 00:00:00:00:00:00 (VIA Provider)

losk: flags=0x40a400c9<UP,LOOPBACK,RUNNING> mtu 9188

          inet 127.0.20.1 netmask 0xff000000 broadcast 127.0.20.1

CCOLEMAN_

Disk auto-assign is good for when a disk fails: it will automatically assign a global hot spare before the replacement disk arrives. But as you already mentioned, it's best to assign your disks to each controller based on your requirements.
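
If you do decide to rebalance by hand, here is a rough sketch of the console commands (the disk name and partner hostname below are placeholders, and a disk that already has an owner must be released by its current owner before it can be reassigned):

options disk.auto_assign off

disk show -n

disk assign 0b.01.5 -o partner_hostname

disk show -v

The first command stops ONTAP from grabbing new disks automatically, "disk show -n" lists any unowned disks, "disk assign" gives one of them to the partner, and "disk show -v" verifies the resulting layout.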

A good option for you:


Controller 1:

Make one big aggregate with a RAID group size of 23, so you will have two even RAID groups of 23 plus one spare, and assign these 47 disks to controller 1.

Controller 2:

Make another big aggregate with a RAID group size of 24, plus 1 spare (25 disks).

This will give you the most storage, the most IOPS, the best position for dedupe, and a fair balance between the controllers. (You can increase the RAID group size as you add more storage to keep the RAID groups even.)
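
As a sketch, creating those aggregates would look something like this from each controller's console (the aggregate name is just an example, and this ignores whatever disks your root aggregates already consume):

Controller 1:

aggr create aggr1 -t raid_dp -r 23 46

Controller 2:

aggr create aggr1 -t raid_dp -r 24 24

On controller 1 that is 46 disks in two RAID-DP groups of 23, leaving the 47th assigned disk as a spare; on controller 2 it is one group of 24 with the 25th disk as a spare.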

Can you run "rdfile /etc/rc" and paste the output? I will show you where to make the necessary Ethernet changes. Just curious: what kind of switches do you have? We can leverage all of these ports for redundancy.

beaumontj

Thanks so much for your help.

rc config:

ifconfig e0a `hostname`-e0a netmask 255.255.255.0 broadcast 156.123.149.255 flowcontrol full partner 156.123.149.187 up

ifconfig e0b flowcontrol full up

ifconfig e0c flowcontrol full up

ifconfig e0d flowcontrol full up

# e1a and e1b are the 10Gb ports
ifconfig e1a flowcontrol full up

ifconfig e1b flowcontrol full up

ifconfig e0M flowcontrol full up

ifconfig e0P `hostname`-e0P netmask 255.255.252.0 broadcast 192.168.3.255 flowcontrol full up

The SAN will connect directly to the dual blade-enclosure switches (Dell M8024-k 10GbE).


CCOLEMAN_

Hmmm... that is a weird-looking RC file.

You should probably run the "setup" command from each controller's console and go through setup again.

When it comes time to specify the network interfaces, use "e1a" and "e1b".

I would read through the network documentation, because you can use LACP or MULTI ifgrps for better throughput and redundancy. You can also just leave one port on standby in case a cable gets unplugged, a switch fails, or a port fails.
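
For reference, the three flavors look roughly like this (placeholder names; pick one mode per ifgrp, and LACP requires a matching port-channel config on the switch side):

ifgrp create single vif-1g e0c e0d

ifgrp favor e0c

ifgrp create multi vif-1g -b ip e0c e0d

ifgrp create lacp vif-1g -b ip e0c e0d

"single" keeps one link active and one on standby ("favor" picks the preferred link), "multi" is static multimode with both links active, and "lacp" is dynamic multimode where the switch negotiates the channel.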

http://www.netapp.com/templates/mediaView?m=tr-3802.pdf&cc=us&wid=83443832&mid=29853432 <== check this doc out.

Once you rerun setup, paste back your /etc/rc file again and we can double-check everything. I would do some digging on the Dell switches to see which "ifgrp" mode makes the most sense for you. (They're called "VIFs" in that document, but they're called "ifgrps" now.)

Ping us back with any questions! If you're uncomfortable with any of this, it might be best to reach out to a NetApp partner for help with your initial setup. Whoever sold you the NetApp will also be able to assist with this.

beaumontj

I have done a good bit of reading and am documenting a possible configuration.  I do have a couple of questions.

Q: I know you previously mentioned assigning disks evenly to the controllers as a possibility.  With HA multipath, my understanding is that both controllers will service requests for all disks in the partnership.  If that is true, does it matter which controller a disk is assigned to?

A start on a config...  *System is wired for HA multipath

Controller 1:

*Combining two 10Gb interfaces, connected to a private 10Gb switch, for iSCSI and NFS traffic to the VM farm.

ifgrp create lacp vif-private-1 -b ip e1a e1b
ifconfig vif-private-1 10.10.1.100 netmask 255.255.255.0 partner vif-private-2 mtusize 1500

*Combining two 1Gb interfaces for direct client NFS, CIFS, and iSCSI traffic.

ifgrp create lacp vif-public-3 -b ip e0c e0d
ifconfig vif-public-3 10.10.3.100 netmask 255.255.255.0 partner vif-public-4 mtusize 1500

vlan create vif-private 1

vlan create vif-public 3

Controller 2:

*Same as controller 1, with partners reversed.

ifgrp create lacp vif-private-2 -b ip e1a e1b

ifconfig vif-private-2 10.10.2.100 netmask 255.255.255.0 partner vif-private-1 mtusize 1500

ifgrp create lacp vif-public-4 -b ip e0c e0d

ifconfig vif-public-4 10.10.4.100 netmask 255.255.255.0 partner vif-public-3 mtusize 1500

vlan create vif-private 2

vlan create vif-public 4

I feel my VLAN config isn't quite right...

Q: I plan to use 10.x.x.x addressing for the vifs and VLANs.  When creating partner vifs, is there a preferred addressing scheme?  For instance, if vif-private-1 is 10.10.1.100, is there a requirement on what vif-private-2 should be?

Q: Do you know where I can find an example of a two-controller configuration in HA multipath mode?

Q: I plan to create a single VLAN for each vif.  Are there any additional addressing/VLAN considerations here?  At this time I do not think it is necessary to create VLANs per protocol.

We do actually have a NetApp partner scheduled to come onsite and help us configure the unit.  Personally, I would like to really understand how everything comes together before they get here; then I can ask constructive questions and better evaluate their guidance.

Thanks again for your help.

CCOLEMAN_

Perfect! Glad you're trying to get a better understanding; always happy to answer questions for this cause.

1. The controllers are an "active-active" pair, but each controller only serves data from the disks assigned to it, so you do have to balance the load as evenly as possible across both controllers. Multipath HA just gives each controller a redundant cable path to its own disks; it does not let both controllers serve the same disk. (See the sketch just after this list for a quick way to verify the cabling.)

2. If you're using 10Gb, it's easy enough to VLAN each protocol and put everything over the 10Gb ports, as it will be very challenging to max out that pipe. You can even do "multi-level" (second-level) VIFs and use the 1Gb interfaces as a failover to get better switch redundancy (see the sketch at the end of this post).

3. The VLANs look fine, but create them earlier in the RC file, per the example below.

4. Use any address you want, just make sure it's not on the same subnet as e0M <== DOUBLE QUAD CHECK THIS! Also, lower priority, but make sure ACP is not using the same first two octets as the SP.
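
Per point 1, a quick sanity check that the shelves really are cabled for multipath (run from either console):

storage show disk -p

Each disk should list both a primary and a secondary path; if the secondary column is blank, that shelf or stack has only a single path to its controller.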

Here is an example of a config that would be suitable for you:

---------------------------------------------------------------------------------------

#Created by ccoleman 11-30-2012

hostname hana01

ifgrp create lacp hana-01-10g -b ip e1a e1b

vlan create hana-01-10g 14 109

ifconfig hana-01-10g-14 10.140.4.56 netmask 255.255.255.224 partner hana-02-10g-14 mtusize 1500

ifconfig hana-01-10g-109 192.168.5.28 netmask 255.255.255.0 partner hana-02-10g-109 mtusize 1500

ifconfig hana-01-10g-109 alias 192.168.5.20 netmask 255.255.255.0 partner hana-02-10g-109 mtusize 1500

route add default 10.140.4.1 1

routed on

options dns.domainname domainname.local

options dns.enable on

options nis.enable off

savecore

--------------------------------------------------------

VLAN 14 = CIFS

VLAN 109 = NFS, with an alias added so I can alternate IPs when presenting datastores to VMware.
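
And per point 2 above, if you want the 1Gb ports as a standby behind the 10Gb pair, a second-level VIF would look roughly like this (names are placeholders; the idea is that each first-level ifgrp lands on a different switch, and the second-level single-mode ifgrp fails over between them):

ifgrp create lacp ifgrp-10g -b ip e1a e1b

ifgrp create lacp ifgrp-1g -b ip e0c e0d

ifgrp create single ifgrp-top ifgrp-10g ifgrp-1g

ifgrp favor ifgrp-10g

ifconfig ifgrp-top 10.10.1.100 netmask 255.255.255.0 partner ifgrp-top mtusize 1500

The "favor" line keeps traffic on the 10Gb leg whenever it is healthy; the 1Gb leg only takes over if the whole 10Gb ifgrp goes down.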
