2018-02-16 09:25 AM
I am testing a FAS8200 with 2 shelves of 12 x 10 TB NL-SAS drives and 1 shelf of 12 x 3.8 TB SSDs.
I have simulated my sizing with Synergy:
All NL-SAS drives are root-data partitioned, for 2 root aggregates and 2 data aggregates.
With the SSDs, I want to dedicate some disks to a storage pool and the rest to an SSD data aggregate.
I am using the latest version, ONTAP 9.3.
Installation was done on the two NL-SAS shelves using boot menu options 9/9a/9b.
After cluster creation, I connected the SSD shelf, and all SSDs are now partitioned root-data-data.
They appear as "unknown".
I assigned the 3 disks for the storage pool, but I can't create it because the disks are not "spare" but "shared". :'(
And for my SSD aggregate, I assigned the data1 and data2 partitions to the same node, but when I try to create the aggregate with -simulate, two RAID groups are proposed. :'(
How can I prevent the SSD shelf from being partitioned?
If the SSD shelf is already partitioned, how can I force the creation of the storage pool?
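For context, this is roughly what I see (a sketch assuming typical ONTAP 9.3 syntax; the disk names, pool name, and aggregate name are illustrative):

```
::> storage disk show -container-type shared
    # the SSDs appear here as "shared", not "spare"

::> storage pool create -storage-pool sp1 -disk-list 2.0.0,2.0.1,2.0.2
    # fails: the disks are shared, not spare

::> storage aggregate create -aggregate ssd_aggr -disklist 2.0.3,2.0.4,2.0.5 -simulate true
    # proposes two RAID groups built from the data1/data2 partitions
```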
2018-02-16 03:35 PM
Can you please copy and paste the output of the -simulate command used to create the SSD aggregate?
To create a storage pool from partitioned SSDs, the drives will need to be unpartitioned first. Due to the potential risk this presents to systems with data present, we prefer that customers perform this under the supervision of our support organisation. If you're working with a NetApp or partner SE to evaluate this system, they can also access the instructions to do so.
2018-02-18 12:39 AM
For the moment there is no data on the NetApp; we are working on the deployment and can run some tests.
I also think this should not happen.
To repeat: I do not want to partition the SSD drives.
My installation process was as described in my first post.
And when I enter "storage disk show -partition-owner", the SSD disks appear partitioned (root-data-data).
I never entered any command to partition the SSD shelf, and I don't want it partitioned.
2018-02-18 05:16 PM
Thank you for the clarification of the steps you have taken.
My guess: if the system was shipped with both the NL-SAS and the SSD drives, we may have partitioned the SSDs as part of factory configuration; however, as they were not connected while you ran 9a/9b, the system partitioned the NL-SAS drives instead. When the SSDs were then connected, ownership was already set between your two nodes, so the extra partitions came online.
As I said earlier, we prefer that manual unpartitioning in ONTAP is performed by services partners, NetApp PS, or under supervision of our support center. In "diag" mode, you would set the ownership of all partitions on each SSD to the same node, then in the node shell run "disk unpartition" on the SSDs you wish to unpartition. If you are a services partner, this is detailed in this KB article.
Once the SSDs are unpartitioned, you would be able to use them to create native aggregates and/or storage pools.
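As a sketch only (exact option names vary by ONTAP release, and the disk and node names here are illustrative; please do this with support or a services partner):

```
::> set diag
::*> storage disk assign -disk 2.0.0 -owner node-01 -root true -force
::*> storage disk assign -disk 2.0.0 -owner node-01 -data1 true -force
::*> storage disk assign -disk 2.0.0 -owner node-01 -data2 true -force
::*> system node run -node node-01
> disk unpartition 2.0.0
```

Repeat for each SSD, then the whole disks show up as spares usable for a storage pool or a native aggregate.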
Hope this helps!
2018-02-18 11:41 PM
My guess is that if the system was shipped with the NL-SAS and the SSD drives, we may have partitioned the SSDs as part of factory configuration
Quoting https://fieldportal.netapp.com/download?file=doc%2FB41MXmgpRUuuplF2ZaTA_ADP%20SE%20Presentation_v3.0.1.pptx&dl=1 (how can I obtain a direct link to a Field Portal document?): "When both HDDs and SSDs are configured during initialization, only the HDDs are provisioned. SSDs remain unpartitioned for use as Flash Pool cache." Either this document is wrong, or the system was ordered as SSD-only with the second shelf as a loose delivery. BTW, if the document is correct, there was no need to disconnect the SSDs during initialization; the system would already have set them up exactly as desired.
2018-02-19 01:51 AM
If the original poster is interested in how the system got this way, our support centre may be able to assist through a review of the serial number history. (In the original post, they say they are testing this system; if it is not factory fresh, for example from a NetApp/partner demo pool, all bets are off on how the disks arrived.) In general, though, we are more interested in fixing problems than explaining them, unless the impact warrants a Root Cause Analysis, which does not seem to be the case here.
Although we have auto-setup rules, ADP is actually relatively flexible: you can unpartition or even manually partition drives (can, not should). As long as the system ends up in the right state, that is the focus. When the system contains 10 TB drives, which may take 24 hours or more to erase, getting it right is important, and it seems that is the case here.
While that fieldportal document is relatively widely available to partners, it is marked as confidential, so as an employee, I can't comment on it directly here.
2018-02-19 10:47 PM - edited 2018-02-19 10:49 PM
Sorry guys, I forgot to update the post.
I found the solution yesterday in maintenance mode; this message appeared during a "disk show":
Feb 19 08:19:47 [NODEST02:diskown.releasingMismatchedReservation:notice]: ownership calculation is releasing reservation on disk 0b.00.1P1 (S/N 97F0A00QT0KENP001) as it is owned by unowned (ID 4294967295).
The ID belonged to an unknown NetApp system.
After a forced disk assign and destroying the old offline aggr0 and aggr0(1), I was able to unpartition the disks.
I was not familiar with ADP on SSDs and had some doubts.
It was a good exercise.
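For anyone hitting the same issue, the recovery steps above looked roughly like this from maintenance mode (the disk name and system ID are illustrative, and syntax may differ slightly by release):

```
*> disk show -a
    # the "releasing reservation ... owned by unowned" message appears here
*> disk assign 0b.00.1 -s <local sysid> -f
    # force-assign the disk to this node's system ID
*> aggr status
    # shows the stale offline aggr0 / aggr0(1) left by the previous owner
*> aggr destroy aggr0(1)
```

After booting back into ONTAP, the SSDs could then be unpartitioned as described earlier in the thread.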