Network and Storage Protocols

FAS2040 best practices question

borismekler
12,828 Views

Or two of them, actually.

Background - I'm preparing to migrate from a dual controller FAS2020 with 12x300GB drives to a dual controller FAS2040 with no internal drives and 24x300GB drives in a DS4243. Migration will be performed by moving the 12 internal drives into the empty FAS2040 chassis according to this procedure. The FAS2020 is currently configured with 1 controller owning 10 drives, 9 of them in a single RAID-DP aggregate, and the other controller owning 2 drives in a RAID4 aggregate. End result, immediately after the migration, I'll have my old network settings, 25 spare drives total, and 2 unconfigured NICs per controller. The system provides (and will continue to do so) NFS access to vSphere hosts, CIFS access to IIS web farms, and iSCSI LUNs to MSSQL and Exchange (100% virtual). Right now, on the FAS2020, e0a and e0b are configured into static multimode VIFs, with each controller plugged into a separate switch (switches don't support stacking, so I can't go across). iSCSI is using a single path to that VIF address.

What I'm planning for the new box:

  1. Networking. Change the VIF from static multimode to single mode, and plug e0a/e0b into separate switches. This will give me resiliency against a switch failure. Plug e0c into one switch and e0d into another, and configure them into separate VLANs that don't communicate across switches. On the vSphere hosts, configure two extra vSwitches, each with one physical NIC and a VM port group assigned the proper VLAN tag. Each VM that needs iSCSI access gets two vNICs, one in each of those networks, and MPIO is configured. (A rough command sketch follows after this list.)
  2. Disks. After the upgrade is complete, assign all 24 disks in the DS4243 shelf to the second controller and create a new aggregate using 23 disks, RG size 23. Migrate the root volume there and delete the old RAID4 aggregate. Give the two freed-up disks to the first controller and expand its aggregate from 9 to 11 disks. Use Storage vMotion to migrate most of the VMDKs (there's about 1.5TB total, pre-ASIS) to the second controller. End result: the first controller will own all 12 internal drives with 9 data, 2 parity and 1 spare, while the second controller will own all 24 external drives, with 21 data, 2 parity and 1 spare. Come next upgrade, in 4-6 years' time, even if it turns out to be impossible to move the first controller's internal drives to a new home, migrating the second one (with most of the data) should be as simple as plugging the shelf into a new head. (A second command sketch for this follows below as well.)
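For what it's worth, here is roughly what item 1 could look like in 7-Mode commands on each controller. The vif name and the IP addresses are just placeholders, on Data ONTAP 8.x 7-Mode the vif commands are spelled ifgrp instead, and the same lines would also need to go into /etc/rc to survive a reboot:

     > ifconfig vif0 down
     > vif destroy vif0                  (the existing static multimode vif has to be destroyed and recreated)
     > vif create single vif0 e0a e0b
     > ifconfig vif0 192.168.10.11 netmask 255.255.255.0 partner vif0 up
     > ifconfig e0c 192.168.101.11 netmask 255.255.255.0 partner e0c up
     > ifconfig e0d 192.168.102.11 netmask 255.255.255.0 partner e0d up

If the e0c/e0d switch ports end up as tagged trunks rather than access ports in the iSCSI VLANs, you'd add vlan create e0c <VLAN id> / vlan create e0d <VLAN id> first and put the addresses on the tagged interfaces instead.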
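And a rough sketch of item 2, again in 7-Mode syntax. The aggregate and volume names (aggr0, aggr1, root1, vol0, aggr_r4) and the 160g root volume size are placeholders, and the ndmpcopy-based root volume move is just one common way to do it, so worth checking against the current docs before running any of this. On the second controller:

     > disk assign all                          (grabs the 24 unowned DS4243 disks)
     > aggr create aggr1 -t raid_dp -r 23 23
     > vol create root1 aggr1 160g
     > ndmpcopy /vol/vol0 /vol/root1            (ndmpd needs to be on)
     > vol options root1 root
     > reboot
     > vol offline vol0 ; vol destroy vol0      (after the reboot, once root1 is the root volume)
     > aggr offline aggr_r4 ; aggr destroy aggr_r4
     > disk assign <freed disk> -o <first controller> -f

and then on the first controller:

     > aggr add aggr0 2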
10 REPLIES

mcope
12,775 Views

Nothing in your plan stands out as unusual.  Looks good.

A couple of suggestions for migrating the internal drives to the FAS2040 chassis:

1.  Boot into Maintenance Mode and use disk reassign -s <FAS2020 controller ID> -d <FAS2040 controller ID> to handle the disk assignment, so there's no way to confuse disk ownership (not too hard to do when only 3 drives belong to the second controller)

2.  While in Maintenance Mode, destroy the mailbox disks.  If you don't, ONTAP will complain about unsynchronized NVRAM logs, panic and core dump.

     Run these commands on both controllers:

     *> storage release disks

     *> mailbox destroy local

     *> mailbox destroy partner

During boot-up you will probably see some error messages about the unsynchronized NVRAM, no root aggregate found because it's marked as foreign, and so on.  These are OK.  Once you see messages indicating that ONTAP has noticed the NVRAM change, it will sort everything out and boot up just fine.  New cf mailbox disks will be created during boot-up and when you enable cf for the first time.
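Putting both suggestions together, the whole Maintenance Mode sequence on each FAS2040 controller would look something like this (the system IDs below are just placeholders - use the real old and new IDs - and disk show -v is only there to double-check ownership first):

     *> disk show -v

     *> disk reassign -s 135011111 -d 135022222

     *> storage release disks

     *> mailbox destroy local

     *> mailbox destroy partner

     *> halt

Then boot normally and let ONTAP work through the NVRAM messages as described above.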

borismekler
12,775 Views

Ah, so that is what "mailbox destroy local/partner" does - it's mentioned in the upgrade procedure document, but without an in-depth explanation of the command's purpose. What does "storage release disks" do exactly? It's not mentioned in the guide, and I can't find any clarification in the command reference PDF either. The Storage Management Guide mentions it as part of a procedure to convert from software to hardware disk ownership, but that's not what I'm after here.

kuether
12,775 Views

Hi Boris,

First of all, I agree with you if you say this answer is not exactly early.

The mailbox destroy command, when issued, deletes the corresponding mailbox disk between a cluster pair.

This disk contains metadata about the storage systems, e.g. RAID groups, configuration and so on.

This disk is also used if the interconnect fails on both links: if the cluster partner can still see the mailbox disks, it knows that the partner is still alive.

The "storage release disks" command releases, but does not remove, the ownership of a disk that has another system ID as its owner.

If you don't use this command (usable only in Maintenance Mode), you'll get an error along the lines of "the ownership can't be removed, this system is not the owner of this disk".

aborzenkov
12,774 Views

This disk contains metadata about the storage systems, e.g. RAID groups, configuration and so on.

This is the first time I've seen someone say that. You probably know better, being from NetApp, but I was always sure RAID group configuration is kept in the disk labels on every disk belonging to that RAID group, and definitely not in the mailbox; and I am not sure what other "configuration" you mean. Could you be more specific?

The simple proof that RAID group config is not kept in the mailbox is the fact that you can easily destroy the mailbox and still get all your aggregates back.

ERICMB_INT
12,774 Views

But can you destroy the mailbox, do a failover and THEN have all your RGs and aggr. intact?

Eric

mcope
12,774 Views

Each disk stores metadata indicating the raid group it belongs to and what aggregate that raid group belongs to.  The only way to remove the metadata is to convert the disk to a spare and zero it - or to reinitialize the entire system.  By putting this information on the disks, we aren't restricted to keeping a disk in a specific bay of a specific shelf.  When I was a NetApp customer in the 90's, my sales rep used to talk about playing 52 Pickup with disk drives.  You can take all the drives out of the shelves, shuffle them around, and then put them into any bay in any shelf, and WAFL won't complain as long as all the disks for an aggregate are present.  Keeping the metadata on disk allows us to do some really interesting things, like physical data migrations by moving all the disks in an aggregate to another system.  One of my old customers has ships and airplanes outfitted with low-end FAS systems to capture sensor data.  When a ship or plane goes into port, anywhere in the world, they disconnect the shelves, send them by FedEx back to the corporate office, attach them to a high-end FAS and copy off the data.  Then they scrub the disks and ship them to the next port of call.
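If you want to see that label metadata for yourself after shuffling disks, something like this in 7-Mode shows it (the aggregate name is whatever yours is called):

     > disk show -v

     > aggr status -r

     > aggr online <aggr name>

disk show -v lists ownership for every disk, aggr status -r shows each raid group and its member disks as read from the labels, and aggr online brings a relocated aggregate back online if it came in marked foreign and offline.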

mcope
8,244 Views

Forgot to mention, once you destroy the mailboxes you can't do a failover.  Destroying the mailboxes converts an HA pair into two standalone systems. You would have to reboot both controllers so WAFL could create new mailboxes before CF functionality is restored.
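Once both controllers have been rebooted, bringing CF back should just be a matter of something like:

     > cf status

     > cf enable

     > cf status

(cf status before and after is just to confirm each controller sees its partner again.)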

mcope
12,775 Views

The 'storage release disks' command removes any reservations on the disks to allow you to make changes.  In most situations, a graceful shutdown will remove the reservations, but I find it's better to run the command just to be sure.

The mailboxes are essentially a journal file stored on one parity drive and one data drive in the root aggregate. Each controller in an HA/Active-Active pair creates these mailboxes when the systems are first paired together.  There is NO configuration information stored in the mailbox (as you mentioned, that information is on the RAID label of each disk).  The mailboxes store information about the state of the HA pair and the activity each controller is performing at the moment.  If you are familiar with quorum disks in other 'cluster' configurations (e.g., Microsoft Cluster), the mailboxes serve the same purpose.  You have to destroy these mailboxes during a storage controller upgrade (head swap) because they are associated with the hardware IDs of the controllers.

D_BEREZENKO
12,775 Views

Hello

Could you tell us more about mailbox disks and how they work?

mcope
12,775 Views

Well...until a year or two ago they weren't even mentioned in the documentation (the mailbox commands still aren't).  Even today there's very little detailed information available.  This is from the Data ONTAP 8.1 Active-Active Configuration Guide:

They each have mailbox disks or array LUNs on the root volume that do the following tasks:

• Maintain consistency between the pair

• Continually check whether the other node is running or whether it has performed a takeover

• Store configuration information that is not specific to any particular node
