ONTAP Hardware

FlashPool and MetroCluster - how to set it up?

iUser

Hi! 

  I'm using a fabric MetroCluster (FAS8200) with disk-only (HDD) aggregates, and I want to add a FlashPool cache to one of the aggregates. I've read TR-4070 and the ONTAP documentation (link), but I'm not sure about the procedure for converting an aggregate from HDD-only to hybrid in a MetroCluster configuration. Here's what I have now:

 

cluster_a::>  aggr show -fields mirror,node,aggregate,storage-type,hybrid-enabled,owner-name
aggregate          storage-type mirror node         hybrid-enabled owner-name   
------------------ ------------ ------ ------------ -------------- ------------ 
aggr0_cluster_a_01 hdd          true   cluster_a-01 false          cluster_a-01 
aggr0_cluster_a_02 hdd          true   cluster_a-02 false          cluster_a-02 
aggr1_a            hdd          true   cluster_a-01 false          cluster_a-01 
aggr2_a            hdd          true   cluster_a-02 false          cluster_a-02 
4 entries were displayed.

cluster_a::> disk show -type ssd       
                     Usable           Disk    Container   Container   
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.4.0               894.0GB     4   0 SSD     spare       Pool0     cluster_a-01
1.4.1               894.0GB     4   1 SSD     spare       Pool0     cluster_a-01
1.4.2               894.0GB     4   2 SSD     spare       Pool0     cluster_a-01
1.4.3               894.0GB     4   3 SSD     spare       Pool0     cluster_a-01
1.4.4               894.0GB     4   4 SSD     spare       Pool0     cluster_a-01
1.4.5               894.0GB     4   5 SSD     spare       Pool0     cluster_a-01
1.4.6               894.0GB     4   6 SSD     spare       Pool0     cluster_a-01
1.4.7               894.0GB     4   7 SSD     spare       Pool0     cluster_a-01
1.4.8               894.0GB     4   8 SSD     spare       Pool0     cluster_a-01
1.4.9               894.0GB     4   9 SSD     spare       Pool0     cluster_a-01
1.4.10              894.0GB     4  10 SSD     spare       Pool0     cluster_a-01
1.4.11              894.0GB     4  11 SSD     spare       Pool0     cluster_a-01
1.4.12              894.0GB     4  12 SSD     spare       Pool0     cluster_a-02
1.4.13              894.0GB     4  13 SSD     spare       Pool0     cluster_a-02
1.4.14              894.0GB     4  14 SSD     spare       Pool0     cluster_a-02
1.4.15              894.0GB     4  15 SSD     spare       Pool0     cluster_a-02
1.4.16              894.0GB     4  16 SSD     spare       Pool0     cluster_a-02
1.4.17              894.0GB     4  17 SSD     spare       Pool0     cluster_a-02
1.4.18              894.0GB     4  18 SSD     spare       Pool0     cluster_a-02
1.4.19              894.0GB     4  19 SSD     spare       Pool0     cluster_a-02
1.4.20              894.0GB     4  20 SSD     spare       Pool0     cluster_a-02
1.4.21              894.0GB     4  21 SSD     spare       Pool0     cluster_a-02
1.4.22              894.0GB     4  22 SSD     spare       Pool0     cluster_a-02
1.4.23              894.0GB     4  23 SSD     spare       Pool0     cluster_a-02
4.14.0                    -    14   0 SSD     remote      -         cluster_b-01
4.14.1                    -    14   1 SSD     remote      -         cluster_b-01
4.14.2                    -    14   2 SSD     remote      -         cluster_b-01
4.14.3                    -    14   3 SSD     remote      -         cluster_b-01
4.14.4                    -    14   4 SSD     remote      -         cluster_b-01
4.14.5                    -    14   5 SSD     remote      -         cluster_b-01
4.14.6                    -    14   6 SSD     remote      -         cluster_b-01
4.14.7                    -    14   7 SSD     remote      -         cluster_b-01
4.14.8                    -    14   8 SSD     remote      -         cluster_b-01
4.14.9                    -    14   9 SSD     remote      -         cluster_b-01
4.14.10                   -    14  10 SSD     remote      -         cluster_b-01
4.14.11                   -    14  11 SSD     remote      -         cluster_b-01
4.14.12                   -    14  12 SSD     remote      -         cluster_b-02
4.14.13                   -    14  13 SSD     remote      -         cluster_b-02
4.14.14                   -    14  14 SSD     remote      -         cluster_b-02
4.14.15                   -    14  15 SSD     remote      -         cluster_b-02
4.14.16                   -    14  16 SSD     remote      -         cluster_b-02
4.14.17                   -    14  17 SSD     remote      -         cluster_b-02
4.14.18                   -    14  18 SSD     remote      -         cluster_b-02
4.14.19                   -    14  19 SSD     remote      -         cluster_b-02
4.14.20                   -    14  20 SSD     remote      -         cluster_b-02
4.14.21                   -    14  21 SSD     remote      -         cluster_b-02
4.14.22                   -    14  22 SSD     remote      -         cluster_b-02
4.14.23                   -    14  23 SSD     remote      -         cluster_b-02
48 entries were displayed.

cluster_a::> 

As you can see, my aggregates and SSDs are owned evenly across the cluster nodes. My plan is to use the "Add Cache" button in the web interface to add a couple of SSDs to one of the aggregates. My question is: how many SSDs will actually be consumed if I choose to add 2 SSDs to an aggregate? I'm asking because I've read that the FlashPool cache is also mirrored across the two sites, so obviously I should use an equal number of SSDs at both sites for one aggregate. In that case, how is the other side of the cluster going to be configured? I mean, which SSDs (from which node) are going to be added to the aggregate?
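
For reference, here's how I was thinking of checking what an "Add Cache" operation would actually pick, assuming the standard add-disks and show-status commands (I haven't run these yet, and I'm not sure the -simulate parameter is available on my ONTAP release, so treat it as a sketch):

cluster_a::> storage aggregate add-disks -aggregate aggr1_a -disktype SSD -diskcount 4 -simulate true

cluster_a::> storage aggregate show-status -aggregate aggr1_a

If the cache is mirrored like the data plexes, I'd expect half of the selected SSDs to come from Pool0 and half from Pool1, so "2 SSDs" of cache would presumably cost 4 physical SSDs before RAID overhead.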

 

1 REPLY

iUser

Unfortunately, I couldn't find documentation covering this case, but the configuration looks quite simple, according to my colleague's advice:

 

 

cluster_a::> storage disk assign -disklist 1.4.0,1.4.1,1.4.2 -pool 0

cluster_a::> storage disk assign -disklist 4.14.0,4.14.1,4.14.2 -pool 1

cluster_a::> aggr modify -aggregate aggr1_a -hybrid-enabled true

I haven't done it yet, but it looks correct.
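
One thing the snippet above doesn't show is the step that actually adds the SSDs to the aggregate once hybrid-enabled is set. A rough sketch of what I'd expect, assuming the standard add-disks syntax (I haven't verified this on a MetroCluster, and the disk count is just a placeholder):

cluster_a::> storage aggregate add-disks -aggregate aggr1_a -disktype SSD -diskcount 4

On a mirrored aggregate I'd expect ONTAP to split the selection between Pool0 and Pool1 automatically; if you want to pick specific disks instead, add-disks also takes -disklist and, as far as I know, -mirror-disklist for the other plex, but check the man page for your release.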

 
