I have a FAS2552 running ONTAP 8.2.2P1 7-Mode with two controllers (an HA pair) and 24 SAS disks.
After setup, each controller sees all 24 disks, but I have to create at least two aggregates (one per controller, 12 disks each).
First question:
How can I get:
- just one big aggregate with all my disks?
- is this possible with ONTAP 8.2.2, or do I need to upgrade the system to ONTAP 8.3 and use ADP?
Second question:
One LUN (LUN1) is created on the first aggregate on the first controller, and another LUN (LUN2) is created on the second aggregate on the second controller. My ESXi host sees both LUNs ==> OK.
But when I disconnect the two iSCSI links on my first controller (e0c and e0d), the ESXi host stops seeing LUN1...
Is this normal? Shouldn't the second controller take over LUN1? Isn't that the purpose of HA?
Thank you if you can help me with these issues.
Please tell me if my explanation is not clear.
In 7-Mode, you can make one large aggregate on the one node that is actively serving data, but you still need a minimal root aggregate (3 disks) on the 'idle' node for HA.
In clustered Data ONTAP 8.3 and later, you can use ADP, which yields more usable capacity by removing the requirement to dedicate entire disks to the root aggregates. If you have the option to switch to clustered mode, you can take advantage of ADP by reinitializing the system on 8.3 or later.
In 7-Mode, takeover is not triggered by a simple loss of link by default.
In clustered Data ONTAP, MPIO with ALUA would fail the traffic over to an active/non-optimized path.
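For reference, you can check what multipathing setup the ESXi host is actually using for a LUN from the esxcli shell; a rough sketch (the device identifier below is just a placeholder for your LUN's naa. ID):

```
# List the NMP multipathing configuration for all devices; with ALUA in
# use, NetApp iSCSI LUNs should show the SATP as VMW_SATP_ALUA.
esxcli storage nmp device list

# Optionally switch a device to round-robin path selection so traffic
# spreads across the active/optimized paths (placeholder device ID).
esxcli storage nmp device set --device naa.600a0980XXXXXXXX --psp VMW_PSP_RR
```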
The SAN Configuration Guide for 7-Mode may be pertinent:
And for comparison, the clustered Data ONTAP version:
Question 1 - I would definitely recommend rebuilding your HA pair on ONTAP 8.3 or newer (9.0 is GA and 9.1 is due very soon). I say rebuild because there is no way to move from 7-Mode to clustered ONTAP in place, so you'll need to move any data you need off first and move it back afterwards (if you need temporary storage, your NetApp rep/reseller can probably help you with temporary "swing gear" to get it done). Then, using ADP, you can have a single large aggregate using 22 or 23 of the disks (2 or 1 spare) and also gain the performance of the extra spindles.

Do bear in mind that this makes the two nodes active/passive, as there would be no data access on the node opposite the data aggregate outside of failover scenarios, but with this small number of disks you wouldn't be able to overwork the CPU and memory of a single FAS2552 node anyway. If you later add more shelves, you can either add the disks to the same aggregate, or create a new one on the other node and spread the workload to make it active/active.
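As a rough sketch of what that single-aggregate build could look like once the pair is reinitialized on 8.3+ with ADP (the node and aggregate names are placeholders, and the exact disk/partition count depends on your spare policy):

```
::> storage aggregate create -aggregate aggr1_data -node cluster1-01 -diskcount 22 -raidtype raid_dp
::> storage aggregate show -aggregate aggr1_data
```

With ADP, the disk count here refers to data partitions rather than whole disks, since the root aggregates live on the small root partitions of the same drives.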
Question 2 - This is correct: in 7-Mode you have 2 paths from your host(s) to the first controller, and if you lose both of those paths your host(s) lose access. Using clustered ONTAP, you would have 4 iSCSI paths from the host(s) to the SVM (Storage Virtual Machine), as only the disk aggregate(s) are directly tied to the individual nodes of the cluster.

This also means that if you have multiple aggregates, you can safely move LUNs and volumes between them with no outage - the SVM simply moves its back-end access to the data when you move between aggregates. This is a big win in larger environments, and can also mean no outage when you refresh storage systems: you simply add new nodes to the cluster, perform volume moves, then retire the old nodes (of course I've left out some details, but you get the drift). The SVM is an abstraction above the physical hardware, just like using a hypervisor for compute resources.

7-Mode HA will fail the entire ONTAP stack over to the opposite node (which can be invoked manually using the cf takeover command, or automatically when ONTAP sees hardware failures), which does make the system highly available from a hardware-failure or software-upgrade point of view. But clustered ONTAP (now simply known as ONTAP again in 9.0+, since 7-Mode is deprecated) is much more capable of making network interfaces, protocols and your data highly available (and scales up to 24 nodes, not just 2).
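To illustrate, the non-disruptive move I mentioned is a single command in clustered ONTAP (the SVM, volume and aggregate names below are placeholders for your own):

```
::> volume move start -vserver svm1 -volume vol_lun1 -destination-aggregate aggr2_data
::> volume move show
```

The LUN stays online and mapped throughout; the host keeps talking to the same SVM LIFs while the data relocates underneath.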
I hope my long-winded explanation makes sense and helps, let me know if you need more info.
OK. Thank you for your reply.
About the data: it's a new infrastructure, so I can remove all LUNs, volumes, aggregates, etc. to set it up however I want.
If I understand correctly, the best solution for me is to upgrade to ONTAP 8.3 and move from 7-Mode to clustered ONTAP. Why wouldn't it work if I kept ONTAP 8.2.2 and simply moved from 7-Mode to clustered ONTAP? Because ADP is not available in ONTAP 8.2.2?
If I upgrade, could I get one big aggregate with 20 disks, for example (plus one for parity, another for dparity, and two spares)? With the aggregate for the system AND for the data on those 20 disks?
Is the only solution for full redundancy (e.g., if I disconnect both iSCSI links from one controller) to use ONTAP 8.3 and ADP?
And a last question: how can I find out whether I can move from 7-Mode to clustered ONTAP and upgrade to ONTAP 8.3?
Thank you, and sorry for my "broken English".
You can technically rebuild using Clustered ONTAP 8.2.2, but is there a reason you'd want to run this older software (released in November 2014)? There have been quite a number of big performance gains in the newer releases, as well as the normal bug fixes.
And yes, in your config you will have:
2 parity disks
20 data disks
ADP will create small partitions across the data and parity disks, but the impact on the overall capacity will be much smaller than having dedicated root aggregates like 7-mode requires.
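If it helps to see the arithmetic, here is a quick sketch of that layout (with ADP these roles apply to partitions rather than whole disks, so real usable numbers will differ slightly):

```python
# Rough disk arithmetic for the 24-disk FAS2552 layout described above.
# Assumption: RAID-DP costs two parity disks (parity + dparity) per RAID
# group, and two disks are kept aside as hot spares, per the thread.
total_disks = 24
spares = 2
raid_dp_parity = 2  # parity + dparity
data_disks = total_disks - spares - raid_dp_parity
print(data_disks)  # -> 20
```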
And yes, with cDOT you will have more redundancy than 7-mode.
As for the details on how to perform the upgrade/rebuild, I highly suggest you take advantage of the Upgrade Advisor in MyAutoSupport on the NetApp Support website. It can guide you to the exact steps needed. Also note that you will need to have a Cluster Base license key, which may have been generated when you bought your system; if not, you can request it from your NetApp sales team.
I don't think Upgrade Advisor has mode-change information.
We have KBs covering the procedures, but on the new KB platform I can't tell who has access. You may need to log in, or you may need a partner to get to these:
They reference 8.3, but the same applies to 9.