@low944 wrote:
So with HA between controllers, would I need to run setup on each controller module via serial?
Yes, you do. You cannot set up HA without completing the basic setup of both controllers first. You also need to assign some disks to the second controller, at least for the root aggregate. You may also need a cf license, depending on the Data ONTAP version.
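As a rough 7-Mode sketch (the disk names and license key below are placeholders; exact syntax depends on your Data ONTAP version), the disk assignment and failover part looks something like this:

    second> disk assign 0a.00.10 0a.00.11 0a.00.12   # give the second controller enough disks for a root aggregate
    first>  license add XXXXXXX                      # cf license, if your release still requires one
    first>  cf enable                                # enable controller failover once both nodes finish setup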
Those are not the normal steps at all. You never need to disable HA to power off systems (in general, you should never disable HA on a two-node cluster unless you are following a well-defined procedure). I suspect the OP confused HA and storage failover.
Because storage failover was not inhibited, one node is in failover. Now, if it is also the node that holds epsilon (HA is disabled), this is really a problem. I agree, better to open a support case; it goes above normal forum level.
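To see where things stand, the state can be checked with standard commands along these lines (cluster name is a placeholder):

    cluster1::> storage failover show                # which node has taken over
    cluster1::> cluster ha show                      # whether two-node HA is configured
    cluster1::> cluster show -fields epsilon         # which node currently holds epsilon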
Could you attach the full VM definition for reference? Such instructions tend to become outdated as software changes; an actual working definition can always be applied even if defaults change.
If “these 16GB LIF's are not zoned with any of the host” is really what it sounds like, how are hosts supposed to log into these LIFs? Zones must include both the hosts (initiators) *and* the LIFs (targets).
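As an illustration (the switch syntax here is Brocade-style and the WWPNs and names are made up), a working zone pairs the host initiator with the LIF target:

    cluster1::> network interface show -vserver svm1 -data-protocol fcp -fields wwpn   # find the LIF target WWPNs

    switch> zonecreate "host1_svm1", "10:00:00:90:fa:xx:xx:xx; 20:01:00:a0:98:xx:xx:xx"
    switch> cfgadd "prod_cfg", "host1_svm1"
    switch> cfgenable "prod_cfg"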
If this is a new controller, there is no point in downgrading; just install the desired version from the boot menu. A downgrade is only needed when you want to preserve data.
You may want to clean up the disks beforehand to avoid label mismatch issues; if you currently have 9.4 installed, the boot menu offers that as well.
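For reference, the relevant boot menu entries look roughly like this (wording varies slightly between releases):

    (4) Clean configuration and initialize all disks.
    (7) Install new software first.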
Cabling behind an ATTO bridge can be traced. Here is an example of (7-Mode/nodeshell) storage bridge and environment shelf output:
storage bridge:
    SAS Port A:
        QSFP Vendor:        Molex Inc.
        QSFP Part Number:   112-00177+A0
        QSFP Type:          Passive Copper 2m  ID:01
        QSFP Serial Number: 417721061

environment shelf:
    SAS cable information by element:
        [1] Vendor: Molex Inc.
            Type: QSFP passive copper 2m  ID: 00  Swaps: 0
            Serial number: 417721061  Part number: 112-00177+A0
Just follow the serial numbers to match bridge ports to shelf ports. The shelf port numbers show up under the SAS adapter that is connected to the misconfigured SAS domain.
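If it helps, output like the above comes from nodeshell commands along these lines (exact names vary by Data ONTAP release, so treat this as an assumption):

    netapp> storage show bridge -v
    netapp> environment shelf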
I am aware of a single internal KB article that mentions it. Maybe some information is contained in the service partner training, but the materials I have do not include anything.
I get the feeling the exam is heavily oriented towards internal staff, not partners.
If there are at least two spares, it is possible to move the root to a new aggregate, avoiding a full re-init, and then destroy the existing root aggregate and reuse its disks for SFO aggregates. It will take about the same time, though, and is more involved than a simple reinstall.
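In ONTAP 9 the heavy lifting can be done with system node migrate-root; a minimal sketch, with node and disk names as placeholders you would substitute with your own:

    cluster1::> set -privilege advanced
    cluster1::*> system node migrate-root -node node1 -disklist 0a.00.20,0a.00.21,0a.00.22 -raid-type raid_dp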
I think that if CIFS is terminated, shares should not be affected. But I do not have much first-hand experience with CIFS; hopefully someone can chime in.
Yes, SnapMirror will work as described. Paths used in shares are not renamed automatically when you rename the source volume, so you will need to change them manually. In the case of NFS, any existing export will continue to refer to the original volume even after it is renamed, so you will need to re-export. Likely similar with CIFS.
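A minimal sketch of the cleanup after a rename, assuming 7-Mode here (volume and share names are placeholders):

    filer> vol rename vol1 vol1_new
    filer> exportfs -p rw /vol/vol1_new           # recreate the export under the new path
    filer> cifs shares -add data /vol/vol1_new    # recreate the CIFS share pointing at the new path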
7MTT uses the same TDP SnapMirror under the hood. According to the IMT, 7MTT CBT from 8.2.2 to 9.3 is supported, which means TDP should also work. This sounds more like some networking issue between the systems.
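For comparison, a hand-built TDP relationship looks roughly like this (system, SVM, and volume names are placeholders), which can help isolate whether the transfer path itself works:

    cluster1::> snapmirror create -source-path sevenmode-sys:vol1 -destination-path svm1:vol1_dst -type TDP
    cluster1::> snapmirror initialize -destination-path svm1:vol1_dst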
This requires double the amount of storage to relocate the data. It is equally possible to simply replace the old controllers one by one, reusing the existing storage. That also works online.
@shuklas wrote:
Please pass this feedback using the above link and the HWU team will reach out to you.
HWU is unclear about mixed SAS/SSD or SATA/SSD stack limits.
@Veldmaat wrote:
Indeed, an SSD stack is restricted to 4 shelves.
This can also be interpreted as "no more than 4 SSD shelves in a mixed stack".
Would you be willing to share the internal document link with me, so I can check that document for reference?
Sure. 6Gb SAS-2 SE Presentation, page 47, which explicitly gives a different limit for mixed stacks.
Could someone point me to a document that clearly states mixed stack limits? I know that a pure SSD stack is restricted to 4 shelves, but at least one document claims 6 shelves in a mixed stack (it is an internal document, so I'm happy to discuss it offline if appropriate). HWU shows limits for SAS/SATA/SSD separately, but again is silent about mixed usage.
I would say HWU should explicitly describe the limits in this case.
If both nodes in an HA pair fail, clients will lose access to data on the disks connected to that HA pair. Clients will also lose connectivity to the cluster if the LIF they use cannot fail over to another HA pair.
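You can check where each LIF is allowed to fail over with the standard CLI (the SVM name is a placeholder):

    cluster1::> network interface show -vserver svm1 -failover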
Well ... you have two possibilities.
1. Use cables between the nodes and the switches as a temporary interconnect. It is enough to ensure connectivity for the short time before you move the LIFs to the new switch. You will have two unused cables after you move the first switch.
2. Temporarily enable switchless cluster mode, which means ONTAP will not expect an ISL; see the sketch below.
With high probability simply moving the LIFs will work as well, but I'd open a support case to be sure.
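For option 2, a minimal sketch of toggling switchless mode (it is an advanced-privilege setting; I'd verify the exact syntax for your release):

    cluster1::> set -privilege advanced
    cluster1::*> network options switchless-cluster modify -enabled true
    cluster1::*> network options switchless-cluster show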
You could temporarily reconfigure the cluster as switchless; it should not require connectivity between different adapter pairs. Actually, you can simply leave your cluster switchless now that you have only two nodes, and avoid moving the switches completely.
Oh, and you must reconfigure your cluster as two-node HA now, otherwise failover won't work properly.
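A minimal sketch of the two-node HA part (standard commands; run once both nodes are in the cluster):

    cluster1::> cluster ha modify -configured true
    cluster1::> storage failover show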