EF & E-Series, SANtricity, and Related Plug-ins
Hi everybody,
this happens on different controllers installed at different sites, and on different models too (E5500 with SAS and SATA HDDs, and EF550).
Sometimes the "Needs Attention" warning appears in SANtricity Storage Manager, and the warning is due to the fact that some LUNs are reported as owned by controller B, which is not their default owner.
The involved LUNs belong to the same disk pool, but not all the LUNs of that disk pool are transferred.
Using the automatic redistribution to move all the LUNs back to the default controller A works with no issues, but after some time (hours and/or days) the array reports that the same LUNs have moved to controller B again.
From the host point of view nothing changes: the MPIO information still reports that the LUNs are on the default optimized path.
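For reference, this is how I check the current versus preferred owner from the array side, and how I push everything back. The SMcli syntax below is from memory, and the array IP and volume name are just placeholders, so treat it as a sketch rather than exact commands:

    # Show current and preferred controller ownership for one volume
    # (replace the IP and volume name with your own)
    SMcli 192.168.128.101 -c "show volume [\"DataVol01\"];"

    # Move all volumes back to their preferred (default) owners in one shot
    SMcli 192.168.128.101 -c "reset storageArray volumeDistribution;"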
What could it be?
Regards
hey guys,
sorry, not an answer but a call-out for help: has anyone got a clue about this issue?
I've got a customer with the same issue on an E2700 with VMware on the hosts. Everything looks good and performance is fine, but Recovery Guru comes up with the annoying message:
"Event Message: Volume not on preferred path due to AVT/RDAC failover Event Priority: Critical Component Type: Controller Component Location: Tray 99, Slot A"
until the volumes get redistributed.
I read some old material about AVT (Automatic Volume Transfer), which should be disabled when multiple hosts access the same LUNs; according to those articles, using the "VMware" host group type should take care of that.
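In case someone wants to verify the same thing from the CLI, this is roughly what I used. The host name and IP are placeholders, and the syntax is from memory, so check it against the CLI reference for your SANtricity version:

    # List the host type table so you can see which index/label is the VMware one
    SMcli 10.10.10.50 -c "show storageArray hostTypeTable;"

    # Set the host type on a host object ("esx01" is just a placeholder name)
    SMcli 10.10.10.50 -c "set host [\"esx01\"] hostType=\"VMWare\";"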
Any ideas on that?
The only article I was able to find was this one: https://kb.netapp.com/support/index?page=content&id=2018776&locale=en_US
which just says to install the MPIO driver and redistribute the volumes...
not a solution in my case.
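For what it's worth, the ESXi side looks clean here; this is the kind of check I ran (the naa device ID below is just a placeholder):

    # List all devices with their path selection policy and working paths
    esxcli storage nmp device list

    # Or narrow it down to a single LUN (placeholder device ID)
    esxcli storage nmp device list -d naa.60080e50001234560000123456789abc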
thanks in advance
-AJ
I've experienced the exact same issue. I opened a case, and at the end of the analysis this was the answer from support. I can also say that there are no network issues, multipath issues, or anything like that.
The bad thing is that SANtricity reports this as a "Critical Alert Message", which worries the customer! It would be better if the severity of this event were lowered.
The answer from support follows.
===================================================================================
From my analysis, you are not experiencing an anomaly. This behavior is normal operation of the storage array.
From the SANtricity Storage Manager Concepts guide for version 11.10:
https://library.netapp.com/ecm/ecm_download_file/ECMP1394573
“Most host multi-path drivers will attempt to access each volume on a path to its preferred controller. However, if this preferred path becomes unavailable, the multi-path driver on the host will failover to an alternate path. This failover might cause the volume ownership to change to the alternate controller.”
In the event viewer, I see: "Volume not on preferred path due to AVT/RDAC failover". I also see: "IO shipping implicit volume transfer".
These confirm my suspicion that I/O shipping is causing the condition.
According to my interpretation of the data, the behavior described above is what you are seeing. This leaves us wondering why the hosts are choosing to use the alternate path. Is there a networking issue? A host issue? I can't be certain based on the provided data. Your MPIO is certainly configured and working, but there are underlying problems somewhere causing this behavior.
Other entries in the event viewer give us a clue that there are likely underlying network issues:
Session terminated unexpectedly
Connection terminated unexpectedly
These are both iSCSI issues. Please check into the possibility of network issues.
I hope this answers your questions.
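===================================================================================
On our side, before accepting that answer we also double-checked the iSCSI sessions on the Linux hosts and pulled the event log for the case, roughly like this. The array IP is a placeholder and the SMcli syntax is from memory, so verify it against your CLI guide:

    # Show iSCSI session details, including session state, on an open-iscsi host
    iscsiadm -m session -P 3

    # Dump the array's major event log to a file to look for the
    # "Session terminated unexpectedly" entries
    SMcli 192.168.128.101 -c "save storageArray allEvents file=\"/tmp/mel.txt\";"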
Hi all
I have the exact same issue. Some setup information:
I did not change the settings from the section "Configuring the multipath software" in the SANtricity guide, because everything works perfectly. But I still get this message:
"Volume not on preferred path due to AVT/RDAC failover"
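For reference, my hosts are Linux with dm-multipath, and the device stanza I'm running (reproduced here from memory, so double-check it against the guide for your release) looks like this:

    # /etc/multipath.conf - typical E-Series stanza per the SANtricity docs
    # (from memory; verify against the multipath section of the guide)
    devices {
        device {
            vendor               "NETAPP"
            product              "INF-01-00"
            path_grouping_policy group_by_prio
            prio                 rdac
            path_checker         rdac
            hardware_handler     "1 rdac"
            failback             immediate
            no_path_retry        30
        }
    }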
Any news on this?
Regards
Solero
I have the same issue...
Has anybody fixed this problem yet?