ONTAP Discussions

Moving a volume to an aggregate on the second HA within a 4-node CDOT environment

mdvillanueva
7,780 Views

Hi experts,

 

I have vol1 containing LUN1, which is connected to a Windows machine. My SVM1 is configured with two portsets: portsetHA1 and portsetHA2. The igroup for the Windows machine is bound to portsetHA1. vol1 is in HA1_aggr1 on HA1, and I want to move it to HA2_aggr1. Since I am moving vol1 to an aggregate across a different HA pair, will that create an outage on the LUN?

 

What can I do to avoid downtime?

 

I am new to CDOT so any help would be great.

 

Thanks!


5 REPLIES

bobshouseofcards
7,756 Views

Hi.

 

Your general configuration will not present a problem in accomplishing the move, but it does open up some potential failure scenarios that could lead to loss of access.

 

From your description I understand that you have two HA pairs, HA1 and HA2, in a single cluster. You have an aggregate within each HA pair. Each HA pair has a portset. And you're new to cDOT. All good. Let's start with a few basics then.

 

A cDOT cluster is, for all intents and purposes, a single storage system image to connecting clients. Your volumes can be located anywhere within the cluster on any aggregate. A request to access that data can come in on any "available" interface on any node, and the data will be located on the storage and delivered back through that interface to the connected client. That just works, using the private cluster interconnect network on the back end. It is completely transparent to the client.
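
 

To see that separation on the command line, here's a rough sketch (the SVM and volume names come from the original post; "cluster1" is just a placeholder prompt and exact output columns vary by ONTAP version) comparing where a volume physically lives versus where the SVM's interfaces are homed:

# where the volume's data actually sits
cluster1::> volume show -vserver SVM1 -volume vol1 -fields aggregate

# where the SVM's access interfaces live - independent of the aggregate above
cluster1::> network interface show -vserver SVM1 -fields home-node,home-port,curr-node

When the two don't line up, the cluster interconnect bridges the gap transparently.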

 

The "virtualization" of the storage is done through SVMs. SVMs manage the volume/LUN "space" that is presented and the interfaces (network, iSCSI, FC) through which the volumes/LUNs are presented. An SVM can also be limited as to where its volumes can live on the backend, if desired. So the magic of the physical storage is hidden behind the SVM.
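
 

As an aside, that "limit where volumes can live" piece is just the SVM's aggregate list. A minimal sketch, assuming you wanted SVM1 restricted to the two aggregates named in the original post (syntax may differ slightly by version):

# restrict which aggregates SVM1 volumes may be provisioned on
cluster1::> vserver modify -vserver SVM1 -aggr-list HA1_aggr1,HA2_aggr1

# confirm the list
cluster1::> vserver show -vserver SVM1 -fields aggr-list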

 

Now - you can tell the SVM "how" to present data. Take an IP interface, for example. An SVM may have only a single logical network interface, with a single logical IP address (not best practice of course, but it suffices for this discussion). That interface will be mapped on the backend to a physical network port on one physical node (no matter the size of the cluster). Any access to the data that the SVM owns must come through that single interface. The aggregate which holds the data could be controlled by any other node in the cluster, or it might be on the node where the logical interface lives. Either way, the data will be located and delivered.

 

When a request comes into a node via any interface, it is fully processed on that node - for instance authentication lookups, etc. When the node goes to disk to access specific data blocks, that's when the cluster interconnect comes into play as needed. Surprisingly, there is little difference between access to local disk and access to "remote" disk from a node perspective, although in extremely high IO environments it could make a small difference. cDOT is highly optimized for accessing data through an alternate node. But one could, in theory, overwhelm the request processing capability of a node if all the requests came into a single node and the other nodes were just serving disk traffic. Having multiple "access points" is a good thing for both performance and failure isolation, if the logical access points are mapped to physical access points in separate networks or SANs.

 

So how does a portset play into this? Just like in 7-mode, a portset limits the ports on which a given igroup will see mapped LUNs. In your example, you have a portset defined for each HA pair. So for LUN1, clients are zoned to WWNs in portsetHA1 and access the LUN through ports in the HA1 pair. It doesn't matter where the volume/LUN is physically located; your clients will still map the LUN through the portset on HA1 and access the data. The volume move is transparent and non-disruptive.
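
 

In practice the move itself is one command, and the igroup/portset binding is untouched by it. A sketch using the names from the original post:

# start the non-disruptive move of vol1 over to the HA2 aggregate
cluster1::> volume move start -vserver SVM1 -volume vol1 -destination-aggregate HA2_aggr1

# watch replication and cutover progress
cluster1::> volume move show -vserver SVM1 -volume vol1

# afterwards, the LUN mapping and portset binding are exactly as before
cluster1::> lun show -vserver SVM1 -volume vol1
cluster1::> lun igroup show -vserver SVM1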

 

The downside: you only have access to LUN1 through ports in the HA1 pair. So - rare, but what happens if the entire HA1 pair goes offline? For instance, you hit a bug or condition that panics one controller in HA1, and during the giveback you hit the same bug or condition on the other controller and it panics, leading to a moment in time when both controllers are down. Granted - it should be rare, but I've been there any number of times in both 7-mode and cDOT. So let's assume that you hit an odd condition where HA1 is completely off. Your data is still up and good on HA2, but by limiting access to a portset that only uses ports from HA1, you have lost access to the data, because HA1 is still the gatekeeper to that igroup and LUN.

 

A more robust portset definition might be to choose two ports from each HA pair to be in the portset (it certainly doesn't need to be all of the ports), or even at least one port per controller, assuming you have carefully split the ports up between SAN fabrics. Standard multipathing is going to locate and choose the best port to use for direct access to the LUN, no matter where you move it.
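
 

As a sketch of what widening the portset could look like - the two LIF names below are hypothetical, so substitute the FC LIFs that actually live on the HA2 controllers:

# add at least one port per controller from the second HA pair
cluster1::> lun portset add -vserver SVM1 -portset portsetHA1 -port-name fc_lif_node3a
cluster1::> lun portset add -vserver SVM1 -portset portsetHA1 -port-name fc_lif_node4a

# verify the resulting membership
cluster1::> lun portset show -vserver SVM1 -portset portsetHA1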

 

There are lots of ways to work this, and each has specific advantages and disadvantages - what works for you and your organization still counts for a lot.  But rest assured, so long as you have a means to access the SVM via network interface or block protocol interface, the data's physical location will be transparent to the client.

 

Hope this helps you out. 

 

Bob Greenwald

Lead Storage Engineer

Huron Legal | Huron Consulting Group

NCIE - SAN, Data Protection

 

 

 

mdvillanueva
7,749 Views

Hi,

 

Thank you so much for the detailed explanation. So, staying with my example, would it be best for me to just combine portsetHA1 and portsetHA2 into one portset?

 

In a single SVM, what benefit does a portset offer then?

 

If an igroup is not associated with any portset, but the client is configured to have sessions to each LIF on all nodes (both HA1 and HA2), would that achieve the same redundancy you mentioned, despite not having a portset?

bobshouseofcards
7,730 Views

Great follow-up questions.

 

Portsets come into play when you have a *lot* of ports to choose from using whatever block protocol is in play. Very common in FC land, rarer in the iSCSI world, but still available if desired.

 

So sticking with FC, let's do the math. The client has 2 HBAs for redundancy (the number of fabrics isn't so important). You have, let's say, a 6-node cDOT cluster, and each node has 4 HBAs into the switches. If fully zoned (already a big list of zones, especially if you do one-to-one zoning), that's 2 x 6 x 4 = 48 potential paths for every LUN without a portset, also assuming you defined one FC LIF to match every cluster node port in the SVM. Most OSes I know tend to choke on more than 16 or 32 paths per LUN, and even if they handle it, they might not fail over as cleanly while they try to resolve changes in the paths.
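
 

If you want to sanity-check that multiplier on a live system, counting the SVM's FC LIFs gives you the target side of the math ("DemoSVM" here is just an example name):

# every FC LIF is a potential target port for each zoned initiator HBA
cluster1::> network interface show -vserver DemoSVM -data-protocol fcp -fields home-node,home-port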

 

So, to limit the total paths presented, you can limit the available FC LIFs in the SVM. For instance, let's say for "DemoSVM" you create LIFs associated with every node, but with only 2 ports on every node. Well, that's 2 x 6 x 2 = 24 potential paths, zones, etc. Still potentially a lot. That's where portsets come in, as an additional layer of "limitation" if you will. In my own environment the default would be 128 paths on some host clients, because they have 4 HBAs to start and the node count gets pretty big.
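
 

A minimal sketch of that extra layer, with made-up names (ps_demo, the two LIFs, and demo_igroup are illustrative only):

# a portset containing only the target ports you want this igroup to see
cluster1::> lun portset create -vserver DemoSVM -portset ps_demo -protocol fcp -port-name fc_lif_n1a,fc_lif_n2a

# bind the igroup to it - the host's path count drops to 2 HBAs x 2 target ports
cluster1::> lun igroup bind -vserver DemoSVM -igroup demo_igroup -portset ps_demo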

 

In a smaller environment it becomes less clear why one would use portsets, as you indicate. But it remains a good idea to plan for a portset design up front, which is essentially planning for growth. You might, for example, create a "development" portset which has no or limited redundancy, or which in the future might exist only on specific nodes in the cluster where you will build out a less expensive class of disk. The equivalent "production" portset might then live on a higher class of nodes in the cluster, or might have more total paths to spread the workload out over more storage engines (read: "nodes") to process the IO. Initially, both portsets might simply contain all the nodes - but that can change in the future.

 

Another common use for portsets is boot from SAN - certain environments choke on too many paths in the boot-from-SAN world, so a portset can be used to limit the total presented paths. In such a setup you might create multiple igroups for the same client - one for the "boot" LUN and one for everything else. The boot igroup can be bound to a limited portset, and the regular igroup can be bound to a larger portset or not bound at all. I use that technique on those 4-port HBA servers I mentioned above - the boot igroup only lists 2 of the HBAs, and that igroup is bound to a limited portset to minimize path handling. It also improves boot time, with fewer paths to log in to during the boot process.
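
 

A sketch of that boot/data split, with hypothetical igroup, portset, LIF, and WWPN values:

# small portset and igroup for the boot LUN - only 2 of the server's HBAs, few target ports
cluster1::> lun portset create -vserver DemoSVM -portset ps_boot -protocol fcp -port-name fc_lif_n1a,fc_lif_n2a
cluster1::> lun igroup create -vserver DemoSVM -igroup ig_boot -protocol fcp -ostype windows -initiator 10:00:00:00:c9:aa:bb:01,10:00:00:00:c9:aa:bb:02
cluster1::> lun igroup bind -vserver DemoSVM -igroup ig_boot -portset ps_boot

# the "everything else" igroup lists all four HBAs and stays unbound (or binds to a larger portset)
cluster1::> lun igroup create -vserver DemoSVM -igroup ig_data -protocol fcp -ostype windows -initiator 10:00:00:00:c9:aa:bb:01,10:00:00:00:c9:aa:bb:02,10:00:00:00:c9:aa:bb:03,10:00:00:00:c9:aa:bb:04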

 

It can be hard to plan for that without a really clear direction laid out, of course, but I hope some of these ideas will help you. With respect to portsets, if you have a background in 7-mode and are just new to cDOT, then this is the easy part: the reasons to use portsets remain the same in both; the effects of scale are just larger in cDOT.

 

Hope this helps you.

 

Bob Greenwald

Lead Storage Engineer

Huron Legal | Huron Consulting Group

NCIE - SAN, Data Protection

mdvillanueva
7,697 Views

Thanks again for a clear explanation.

 

Our environment is pure iSCSI. Right now on each of my nodes I create an ifgrp of two 10GbE ports to make something like a0a-250 (I VLAN tag as well), and then on each node I added only one LIF for iSCSI traffic in the SAN-SVM. So in total I have four LIFs. May I have your recommendation on this? Should I add more LIFs to the SVM on each node, or is that sufficient for both redundancy and load balancing?

 

Thanks!

MV

bobshouseofcards
7,685 Views

Hi MV -

 

iSCSI introduces possibilities that FC doesn't have in terms of the path mechanisms.  In some ways it's more flexible and simpler to establish multi-pathing as a result.

 

Consider this scenario: assume a dedicated network segment for your iSCSI traffic. On your hosts, you need only one logical network interface (perhaps provided by a team, perhaps by failover networking, perhaps just a single NIC). You already have a single LIF on each node, provided by an ifgrp and a VLAN port. So there is network-level redundancy at the storage end, and tons of potential bandwidth into your cluster (at least 4x10GbE, possibly 8x10GbE depending on how you define your ifgrps).
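
 

To see that whole stack end to end on your cluster, something like this works (the node name is a placeholder; the SVM and port names come from your post):

# the ifgrp and the tagged VLAN port built on top of it
cluster1::> network port ifgrp show -node cluster1-01
cluster1::> network port vlan show -node cluster1-01

# the four iSCSI LIFs the SVM presents, one per node
cluster1::> network interface show -vserver SAN-SVM -data-protocol iscsi -fields home-node,home-port,address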

 

Your host can use its single logical NIC to connect to all four target IP addresses. Like FC, iSCSI interfaces don't migrate, so each interface you define on the nodes will stay local to that node. Well - that covers a whole ton of redundancy for network switches going offline, nodes changing, etc. Multipath control software at your hosts will determine which path to use based on which target node holds the LUN in question, but you can force a different balancing if desired by limiting which hosts use which targets.
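
 

On the cluster side you can watch those sessions and confirm which target LIF each one landed on; a quick sketch:

# all iSCSI sessions the SVM currently has, per target LIF
cluster1::> vserver iscsi session show -vserver SAN-SVM

# the target LIFs themselves - each stays home on its node and does not migrate
cluster1::> vserver iscsi interface show -vserver SAN-SVM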

 

So you have a ton of options for where to apply various controls. From a storage standpoint, I prefer to define all the paths I can (within reason) to storage, and then use portsets to dial things back for LUN availability. Mostly that is because it is something I can control, and it primarily affects me. If a network link to one iSCSI port on a node is becoming over-utilized, for example, I can take direct action with a portset to force certain hosts to use an alternate path by changing the associated portset. It might not be the "optimal" path, but if I have key servers that need dedicated bandwidth, I can make those adjustments.
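
 

Mechanically, that steering is just re-binding the igroup. A sketch with hypothetical names - an igroup binds to only one portset at a time, so the old binding comes off first (exact unbind parameters can vary slightly by version):

# move the host's igroup from the over-utilized port group to an alternate one
cluster1::> lun igroup unbind -vserver SAN-SVM -igroup ig_host1
cluster1::> lun igroup bind -vserver SAN-SVM -igroup ig_host1 -portset ps_alternate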

 

I grant that such situations tend to be highly specialized and therefore rare, but I like that the choices are there to be controlled at the storage end rather than trying to touch a bunch of hosts. But as always, everything needs to be in the context of what works for you and the environment in which you hang out.

 

 

Hope this helps you.

 

Bob Greenwald

Lead Storage Engineer

Huron Legal | Huron Consulting Group

NCIE - SAN, Data Protection

 

If particularly useful, kudos and/or accepted solutions are both welcome and very much appreciated.
