ONTAP Discussions

Reporting Nodes Added Automatically When Moving iSCSI-Based Volumes?

TMADOCTHOMAS
8,473 Views

A colleague of mine indicated he's never reconfigured reporting nodes when moving volumes containing LUNs from one HA Pair in a cluster to another HA Pair. My understanding from NetApp documentation has been that you have to add the destination nodes as reporting nodes in advance of a volume move if you want to retain optimized paths.

 

I created a volume as a test and moved it to another HA Pair without adjusting reporting nodes. Sure enough, the nodes I moved the volume to were added automatically to the list of reporting nodes for the LUN! This behavior contradicts NetApp documentation which indicates it must be done manually. Has anyone else encountered this? Does anyone know if there's a scenario where it wouldn't happen automatically?
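For what it's worth, I checked the reporting nodes before and after the move with a command along these lines (the SVM, LUN path and igroup names here are placeholders, not our real ones):

lun mapping show -vserver svm1 -path /vol/testvol/lun1 -igroup ig_host1 -fields reporting-nodes

Before the move it listed only the two nodes of the original HA pair; after the move the two destination nodes appeared as well.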

10 REPLIES

wronald
8,415 Views

Hi @TMADOCTHOMAS,

 

You are correct, this should be done manually based on our docs.

The first sentence is: "You can move a LUN across volumes within a storage virtual machine (SVM), but you cannot move a LUN across SVMs. LUNs moved across volumes within an SVM are moved immediately and without loss of connectivity."
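If I remember the syntax correctly, the documented manual step before the move is roughly this (the SVM, LUN path, igroup and destination aggregate names below are placeholders, not taken from the doc):

lun mapping add-reporting-nodes -vserver svm1 -path /vol/vol1/lun1 -igroup ig1 -destination-aggregate aggr_dest

That is supposed to add the destination HA pair as reporting nodes ahead of the volume move, so the host keeps optimized paths during and after the move.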

 

I have to try it out in my lab...

 

Regards,

Ron.

TMADOCTHOMAS
8,400 Views

@wronald, I should note that I did the volume move in System Manager. I suspect that's the difference; perhaps it would not work that way at the command line.

 

We are on ONTAP 9.2P2.
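When I get a chance I'll repeat the test from the CLI with something like this (the volume and aggregate names are placeholders for my test setup), then check the reporting nodes again afterwards:

volume move start -vserver svm1 -volume testvol -destination-aggregate aggr_ha2_01
lun mapping show -vserver svm1 -path /vol/testvol/* -fields reporting-nodes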

wronald
8,397 Views

Yes, that might be it.

But it should be documented somewhere, at least as a note.

I will try it tonight and get back to you to confirm whether it is "normal".

 

I will then trigger a document update on our side.

 

Cheers,

Ronald.

TMADOCTHOMAS
8,394 Views

Thanks!

gregorio
8,212 Views

I moved a few volumes using the CLI and the reporting nodes were added automatically. I am running ONTAP 9.2P1. Is this the new behavior, or do I need to add the reporting nodes manually to avoid an outage in the future?

johnhil
8,070 Views

This is the new behavior. As volumes and LUNs are moved from their original location to a new one, the reporting nodes are updated to include the new HA pair to prevent an outage. Once the new location is settled, it's recommended to remove the previous reporting nodes unless you still want them returned in ALUA queries.
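For example, something along these lines should trim a mapping back to just the HA pair that now owns the volume (the SVM, LUN path and igroup names are placeholders):

lun mapping remove-reporting-nodes -vserver svm1 -path /vol/vol1/lun1 -igroup ig1 -remote-nodes true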

unixnation
7,912 Views

Just following up to report that we have depended on this functionality when moving volumes hosting LUNs since the 8.3.x days, I think.

 

However, we did discover today that there is a scenario in which the list of reporting nodes does NOT appear to get updated.

 

We moved volumes today that had previously been moved between other HA pairs in the cluster without removing the old reporting nodes (so these volumes already had 4 reporting nodes configured). Using a vol move command, the volumes moved as normal; however, the nodes of the newly hosting HA pair were not added to the configured reporting nodes on the LUNs in these volumes.

 

Running a lun mapping add-reporting-nodes -vserver <SVM_name> -volume * -local-nodes true -lun * fixed our issue, adding the relevant reporting nodes and increasing the list to 6 reporting nodes. Only those LUNs that already had 4 reporting nodes configured were affected.

I'm not sure I've explained that particularly well - hopefully you get the idea! It would be good if @wronald could get the expected behavior properly documented and let us know if what we saw was a bug (we're currently on 9.1P11).

 

Cheers,

Steve

TMADOCTHOMAS
7,873 Views

Interesting @unixnation. So if we have a 4-node cluster and add an HA pair to replace an aging HA pair, reporting nodes would not automatically update as we moved LUNs to the new HA Pair. That is good to know for sure!

unixnation
7,832 Views

Hi @TMADOCTHOMAS

 

Not specifically because it was a 4-node cluster, but because the volumes had previously been moved between HA pairs to go from an NL-SAS aggregate to a SAS + Flash Pool aggregate. This resulted in all 4 nodes ending up in the reporting nodes list. The migration process to new nodes is exactly what we're going through now.

 

I suspect that if we'd removed the old reporting nodes this wouldn't have happened - something we should have done. However, I still think NetApp could easily protect the customer by just adding the new reporting nodes, seeing as it seems to work fine in the simple case. My view is that stuff like this is always going to get missed in busy environments with hundreds of volumes, especially if you move things around your clusters semi-regularly.

 

Something to be aware of!
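For what it's worth, the kind of periodic tidy-up I have in mind is something like this (the SVM name is a placeholder, and I'd test the wildcards on a single volume before letting them loose across the cluster):

lun mapping show -vserver svm1 -fields reporting-nodes
lun mapping remove-reporting-nodes -vserver svm1 -volume * -lun * -igroup * -remote-nodes true

i.e. review what each LUN is reporting, then strip the mappings back to the nodes that currently own each volume.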

 

Cheers,

Steve

TMADOCTHOMAS
7,283 Views

@unixnation, totally agree. Way too complicated. We have a 4-node AFF8080 cluster, and likely any replacements would be all-flash as well. If the system hasn't been improved by that time, I will likely manually add reporting nodes for each LUN in advance, just to be safe!
