ONTAP Discussions

ONTAP 9.6 cannot create mirror aggregate with SyncMirror

cluster-maido

Hi all,

To evaluate Local SyncMirror, I tried to create a mirrored aggregate on an
AFF8040A HA pair with four DS224C shelves.

First I tried the -diskcount option, but ONTAP chose disks outside the 1.1.*
and 2.11.* ranges I intended. So I selected five disks from each range
manually, like this:

# storage aggregate create -aggregate aggr4 -disklist 1.1.0,1.1.1,1.1.2,1.1.3,1.1.4 \
-mirror-disklist 2.11.0,2.11.1,2.11.2,2.11.3,2.11.4

But ONTAP reported:
Error: command failed: Aggregate creation would fail for aggregate "aggr4" on
node "netapp-n11-02". Reason: Current disk pool assignments do not
guarantee fault isolation for aggregates mirrored with SyncMirror. Other
disks in the loop are in a different pool than disks: "2.11.6",
"2.11.6". Use "disk show -v" to view and "disk assign" to change the
disk pool assignments.

Disk "2.11.6" is in pool1 and was not included on the command line.
I have no idea why "2.11.6" is relevant to creating the mirrored
aggregate.
Could you advise why this happens and how I can create a mirrored aggregate?
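
For reference, these are the commands I would use to inspect and change the
pool assignments the error message points to (a sketch using the standard
ONTAP 9 CLI; the disk name is just the one from my error):

---
# storage disk show -fields disk,pool,shelf,bay
# storage disk assign -disk 2.11.6 -pool 1
---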

---
# storage disk show
--snip--
1.1.0 3.49TB 1 0 SSD spare Pool0 netapp-n11-02
1.1.1 3.49TB 1 1 SSD spare Pool0 netapp-n11-02
1.1.2 3.49TB 1 2 SSD spare Pool0 netapp-n11-02
1.1.3 3.49TB 1 3 SSD spare Pool0 netapp-n11-02
1.1.4 3.49TB 1 4 SSD spare Pool0 netapp-n11-02
1.1.5 3.49TB 1 5 SSD spare Pool0 netapp-n11-02
1.1.6 3.49TB 1 6 SSD spare Pool0 netapp-n11-02
1.1.7 3.49TB 1 7 SSD spare Pool0 netapp-n11-02
1.1.8 3.49TB 1 8 SSD spare Pool0 netapp-n11-02
1.1.9 3.49TB 1 9 SSD spare Pool0 netapp-n11-02
1.1.10 3.49TB 1 10 SSD spare Pool0 netapp-n11-02
1.1.11 3.49TB 1 11 SSD spare Pool0 netapp-n11-02

2.11.0 3.49TB 11 0 SSD spare Pool1 netapp-n11-02
2.11.1 3.49TB 11 1 SSD spare Pool1 netapp-n11-02
2.11.2 3.49TB 11 2 SSD spare Pool1 netapp-n11-02
2.11.3 3.49TB 11 3 SSD spare Pool1 netapp-n11-02
2.11.4 3.49TB 11 4 SSD spare Pool1 netapp-n11-02
2.11.5 3.49TB 11 5 SSD spare Pool1 netapp-n11-02
2.11.6 3.49TB 11 6 SSD spare Pool1 netapp-n11-02
2.11.7 3.49TB 11 7 SSD spare Pool1 netapp-n11-02
2.11.8 3.49TB 11 8 SSD spare Pool1 netapp-n11-02
2.11.9 3.49TB 11 9 SSD spare Pool1 netapp-n11-02
2.11.10 3.49TB 11 10 SSD spare Pool1 netapp-n11-02
2.11.11 3.49TB 11 11 SSD spare Pool1 netapp-n11-02
---

regards,
mau

1 ACCEPTED SOLUTION


4 REPLIES

Ontapforrum

Hi,

 

Interesting case.

You mentioned you tried the '-diskcount' option but it chose disks out of
order / at random. Did that command actually go through? If yes, did you
happen to capture which disks it selected?

If the disk selection was nothing special [just random], then we can use the
'-force' option along with a regular disk list; that should not complain.

But I am curious: when creation does go through via the -diskcount option,
which disks does ONTAP select?

Thanks!

cluster-maido

Hi,

 

Thank you for your reply.

 

>You mentioned you tried with '-diskcount' option

Let me correct my earlier description: ONTAP chose disks from pool0 and
pool1 respectively. I don't know why it warned "Not enough spares on
node1", because there are spares assigned to node1.

ONTAP actually selected 6 disks from pool1 in 2.1.*, but then complained
about "2.1.0", "2.1.0" being in the same loop.
Do you have any idea why? Please share if you do.

Also, has NetApp released a recent TR on ONTAP 9 SyncMirror?
I can't find a TR or best-practice guide describing the detailed procedure
and how to cable nodes and disk shelves for SyncMirror.

 

---
netapp-n11::> storage aggregate create agg4 -mirror -diskcount 12

Info: The layout for aggregate "agg4" on node "netapp-n11-01" would be:

First Plex

RAID Group rg0, 6 disks (block checksum, raid_dp)
Usable Physical
Position Disk Type Size Size
---------- ------------------------- ---------- -------- --------
shared 2.0.12 SSD - -
shared 1.10.12 SSD - -
shared 2.0.0 SSD 1.74TB 1.74TB
shared 1.10.0 SSD 1.74TB 1.74TB
shared 2.0.13 SSD 1.74TB 1.74TB
shared 1.10.13 SSD 1.74TB 1.74TB

Second Plex

RAID Group rg0, 6 disks (block checksum, raid_dp)
Usable Physical
Position Disk Type Size Size
---------- ------------------------- ---------- -------- --------
shared 2.1.0 SSD - -
shared 2.1.1 SSD - -
shared 2.1.2 SSD 1.74TB 1.74TB
shared 2.1.3 SSD 1.74TB 1.74TB
shared 2.1.4 SSD 1.74TB 1.74TB
shared 2.1.5 SSD 1.74TB 1.74TB

Aggregate capacity available for volume use would be 5.94TB.

The following disks would be partitioned: 2.1.0, 2.1.1, 2.1.2, 2.1.3,
2.1.4, 2.1.5.
Warning: Not enough spares on node netapp-n11-01.

Do you want to continue? {y|n}: y
[Job 50] Job is queued: Create agg4.

Error: command failed: [Job 50] Job failed: Failed to create aggregate "agg4"
on "netapp-n11-01". Reason: Current disk pool assignments do not
guarantee fault isolation for aggregates mirrored with SyncMirror. Other
disks in the loop are in a different pool than disks: "2.1.0", "2.1.0".
Use "disk show -v" to view and "disk assign" to change the disk pool
assignments.
--
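
By the way, to chase the "Not enough spares" warning I also looked at the
spares per node. A command worth running (standard ONTAP 9 CLI; if the
option name differs on your release, plain "show-spare-disks" works too):

---
netapp-n11::> storage aggregate show-spare-disks -original-owner netapp-n11-01
---

This lists the spare and shared-spare disks the node can actually use for
aggregate creation.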

Ontapforrum

Hi,

 

Thanks for that information.

I cannot find any dedicated TR on this, maybe because it is the same concept
used in MetroCluster, and there are multiple TRs on that.

Coming back to aggregate mirroring, I am reading this ONTAP 9 doc:

https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-mcc-mgmt-dr%2FGUID-2FFFF8E3-5771-4248-A7DE-621DB28B30BA.html

As a workaround, can we try this:

1) Create the first aggregate [un-mirrored]:
cluster1::> storage aggregate create -aggregate test1 -diskcount 6

2) Once the first one is created, mirror that aggregate:
cluster1::> storage aggregate mirror -aggregate test1

If step 2 complains, use this command:
cluster1::> storage aggregate mirror -aggregate test1 -mirror-disklist <disk-list> -ignore-pool-checks true

What happens?

Also, let us know the output of:

cluster1::> storage shelf show

and "disk show -v" from all nodes.

I am also eager to know on what basis ONTAP selects disks when mirroring an
aggregate.

cluster-maido

Hi there,

I tried your workaround, but the result was the same, with a similar output,
and I still couldn't create a mirrored aggregate.
Following your further advice, though, I was able to build Local SyncMirror.

Here is the procedure, which is a little unusual and probably not the
formal one:
1. Power on shelf#1 in stack#1 and zero the disks on node#1.
2. Set up the cluster, then shut down node#1.
3. Power on shelf#1 in stack#2 and zero the disks on node#2.
4. Join node#2 to the cluster, then shut it down.
5. Power on all the disk shelves and nodes. After creating a local
aggregate, mirror it with the -ignore-pool-checks option you described
in step 2.

Before mirroring, it seems important how the disks for the root aggregate
in disk shelf#1 are assigned and chosen, but I can't find useful
information on how to do this.
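
For step 5, the commands were essentially the ones you suggested (a sketch;
the aggregate name and disk lists are from my setup and would differ
elsewhere):

---
netapp-n11::> storage aggregate create -aggregate aggr4 -disklist 1.1.0,1.1.1,1.1.2,1.1.3,1.1.4
netapp-n11::> storage aggregate mirror -aggregate aggr4 -mirror-disklist 2.11.0,2.11.1,2.11.2,2.11.3,2.11.4 -ignore-pool-checks true
---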

 

regards,
