EF & E-Series, SANtricity, and Related Plug-ins

How long does it take to initialize the NL-SAS 6TB and 12TB?


Hi all,


We have to zero out our E-Series systems.


We need to erase the data on the following customer configuration.

NL-SAS 6TB x 360

E5660 + DE6660 x 5


NL-SAS 12TB x 72

E2812 + DE212C x 2
E2812 + DE212C x 2


I only know how to execute "Initialize" one at a time from the management GUI.


How long does it take to initialize the NL-SAS 6TB?
(On E5660)

How long does it take to initialize the NL-SAS 12TB?
(On E2812)



Is there any other way? Is there a good way to do it all together?

If there is a way to initialize the disks all at once,
how many hours would it take to finish initializing every disk?
(For example, suppose it takes 2 weeks to initialize them one by one using the methods of Q1 and Q2.
Would the method of Q3 finish in a week?)


Best Regards,





I don't deal with E-Series products (hopefully someone from the E-Series side will reply to all your specific queries), but it seems like 'initialization' (the term used in E-Series) is simply 'writing zeroes', as we call it in ONTAP. Just different wording, I am guessing. In ONTAP we zero a disk before it can be used to create or add to an aggregate; in E-Series, is it before creating or adding to a 'volume group'?


There is a KB that does not mention any particular product category, so I am guessing its numbers could apply to any product, since zeroing time depends on rotational speed, write transfer speed, capacity, and the load on the array.


But because the KB uses the term 'disk zero', I am guessing it was written with ONTAP in mind. Anyway, take a look: it lists estimated hours for different disk types, including NL-SAS at various capacities. I don't know whether it applies to your product, but once you begin initialization you will see whether it matches the KB's estimates.
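Since those estimates ultimately come down to capacity divided by sustained write speed, here is a minimal back-of-envelope sketch; the MB/s figures are my own assumptions for 7.2K NL-SAS sequential writes, not numbers from any KB:

```python
# Rough zeroing-time estimate: capacity / sustained write rate.
# The MB/s figures below are assumed values, not from the KB.

def zeroing_hours(capacity_tb, write_mb_per_s):
    """Hours to write a drive's full capacity once at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # drives are rated in decimal TB
    return capacity_mb / write_mb_per_s / 3600

for tb, rate in [(6, 180), (12, 230)]:
    print(f"{tb} TB at an assumed {rate} MB/s: ~{zeroing_hours(tb, rate):.1f} h")
```

Real times will be longer under array load, so treat this as a lower bound.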

Found one KB on speeding up initialization (no idea how much difference it makes):
For E-Series products, is it acceptable to disable Data Assurance during initialization to speed up initialization times?




Hi Ontapforrum,


Thank you for your reply.


Yes, we have to create a DDP (Dynamic Disk Pool) or a VG (Volume Group).

(This time I created a DDP on our E-Series.)

Next we create a Volume (LUN) on the DDP.

Then we can initialize the Volume (LUN).


I didn't know these KBs.
[FAQ: How long does it approximately take for disk zeroing?]

We can see the "difference in performance between different disk types".


[For E-Series products, is it acceptable to disable Data Assurance

during initialization to speed up initialization times?]

We can disable the DA function of Volume to speed up initialization.


I have new questions here.
If each Volume is initialized one by one from the GUI, how many initializations can run in parallel efficiently?
Is it 6, 8, or 12?


Best Regards,








To help answer your questions, could you clarify what you are ultimately trying to achieve? For example:


- Clear configuration from drives in an existing storage array to allow them to be moved to a different storage array?
- Securely erase secure-enabled drives such as FDE or FIPS?
- Write all zeroes on the entirety of each drive?


Depending on what your end goal is, we will need to follow specific requirements and procedures.

To clear the configuration from drives in an existing storage array so they can be moved to a different storage array, I recommend:
1. Delete the volumes and the volume group/dynamic disk pool that the drives belong to.
2. If the drives are secure-enabled, securely erase the drives using the procedure linked below.
3. Once the drives are in an Optimal, Unassigned state, simply unseat the drives without fully removing them.
4. Give the drives two minutes to fully spin down.
5. Move the drives to the other array. If you can, try to have the source and destination arrays on the same controller firmware release.
6. After moving the drives to the destination array, a staggered reboot of the controllers would be a good idea to clear any stale data.


If you only want to secure erase the drives, follow Steps 1 and 2 from the above procedure.


For writing all zeroes to the entirety of each drive, E-Series doesn't have a built-in write-zeroes/drive sanitize function. (The previously linked KB discussing disk zeroing is an ONTAP specific KB.) If you want to write all zeroes to the drives, you would have to create a RAID 0 volume group and volume and have a host-side application write zeroes to the volume. If you put the drives in several smaller volume groups as opposed to one large volume group, your application might be able to write to several volumes at once which would make the process faster.
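As a sketch of that host-side approach, a small script can stream zeroes to several volumes at once; the device paths below are hypothetical examples, and in practice a standard tool such as dd run once per volume works just as well:

```python
# Hypothetical sketch of host-side zeroing of E-Series volumes.
# The device paths in the usage comment are made up for illustration.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MiB per write

def zero_fill(path, length):
    """Overwrite the first `length` bytes of `path` with zeroes."""
    zeros = bytes(CHUNK)
    written = 0
    with open(path, "r+b") as f:
        while written < length:
            n = min(CHUNK, length - written)
            f.write(zeros[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())
    return written

def zero_fill_all(targets, workers=4):
    """Zero several volumes concurrently; targets is [(path, bytes), ...]."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: zero_fill(*t), targets))

# Example (hypothetical multipath device names):
# zero_fill_all([("/dev/mapper/e5660_vol01", 2 * 1024**4),
#                ("/dev/mapper/e5660_vol02", 2 * 1024**4)], workers=2)
```

Raising `workers` helps only until the drives or host links saturate, which matches the suggestion above to split the drives into several smaller RAID 0 volume groups.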


Just another note, the previously linked KB regarding disabling Data Assurance to speed up initialization is specifically discussing volume initialization which erases volume data but does not fully zero it. This does not secure erase it either. After a volume initialization, the volume keeps its WWN, host assignments, allocated capacity, and reserved capacity settings. 

Team NetApp


Thank you for your explanation.
This is great information.

We will eventually need to dispose of the equipment we have deployed at our customers' sites.
The customers asked us to prevent data leakage, so I was asked to erase the data.

We can handle zeroing and sanitizing in the case of FAS.
However, this time the configuration uses FlexArray:
the FAS8060 recognizes the E5660 and E2812 as shelves,
and the LUNs on the E-Series are recognized as FAS disks.

There is a limit to erasing from FAS (erasing applies only to physical disks), so I am trying to erase the data on the E-Series side.

Method 1 is SANtricity Storage Manager > Hardware > Disk >
Advanced > Initialize...
I tried to run it, but I can't. (It is grayed out and cannot be selected.)

Same issue appeared


Method 2 is SANtricity Storage Manager > Volume > Advanced >
Initialize...
We could start the job. However, the E5660 has 360 volumes,
and initializing one Volume at a time manually takes a very long time.
The labor cost would be too high, so this is not a realistic approach.
Therefore I am looking for a way to perform the data erasure efficiently.

Regarding the problem with Method 1 ("Initialize..." is grayed out),
I would like to know if there is a way to resolve it.
I understand this to be disk initialization (data-erasure work) on the E-Series,
something like a FAS disk sanitize.
(I do not care about the granularity or thoroughness of the data deletion this time.)

As for Method 2, working one volume at a time is not realistic,
so I think we have no choice but to run the initializations in bulk.
(For example, assuming one initialization takes 10 hours
and we can erase 10 volumes per day, it would take 36 days.)

I also understand the approach of writing zeroes to the E-Series from another server.
In that case, after the data is deleted from FAS, FAS unmounts the E-Series; we then mount the E-Series from the server and run the zero-write processing.
We would need to estimate the time for that, but we do not yet know the server specifications or how effective parallel processing would be.
Therefore I cannot tell the sales manager how many hours the work will take.
(Is it one week, one month, or three months?)
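To put a rough number on that, the workload can be modeled as batches of parallel erase jobs; this is only a sketch, and the batch model, the 10-hour figure, and 24-hour-a-day operation are all assumptions:

```python
# Assumed model: volumes are erased in full parallel batches, 24 h/day.
import math

def erase_days(volume_count, hours_per_volume, parallel_jobs):
    """Calendar days to erase all volumes in batches of parallel jobs."""
    batches = math.ceil(volume_count / parallel_jobs)
    return batches * hours_per_volume / 24

# 360 volumes at an assumed 10 hours each:
for jobs in (1, 10, 12):
    print(f"{jobs:2d} parallel jobs: {erase_days(360, 10, jobs):6.1f} days")
```

Once one real initialization has been timed, substituting the measured hours per volume and the observed practical parallelism gives the estimate the sales manager is asking for.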

In short, we would like an estimate of the working time.


Best Regards,




To be clear, the drive initialization function on E-Series does not write all zeroes to the drive. Instead, the function simply removes the configuration information from a drive that has already been moved from one array to a new array. You will not be able to initialize a drive in its native array.


See the Initialize (format) drive page in the E-Series and SANtricity 11 Documentation Center for more information.


The drive initialize function will only work on drives that meet all of the following criteria:

  • have been moved from one array to another
  • have pre-existing volume configuration information from the old array
  • have been automatically marked FAILED or INCOMPATIBLE by the new array


To re-iterate, E-Series does not have a function to write all zeroes to a drive.


If you want to delete only the volume configuration information prior to moving the drives, delete the volumes and volume groups/dynamic disk pools that reside on those drives while they are in the native array. This option completes instantly but only removes the pointers to the old volume data. The old data remains on the drives until a new volume is configured and new data is written to the drives.

If you have security enabled on secure-capable drives, you can securely erase them, which destroys the drives' security key and makes their data undecipherable; but again, the drives must be secure-capable AND have the security feature enabled. The secure erase function also completes instantly, because it simply discards the security key. See the Erase secure-enabled drives page for more information.


You mentioned doing a volume initialization, but this will only delete "the block indices, which causes unwritten blocks to be read as if [emphasis added] they are zero-filled (the volume appears to be completely empty)." The key term is "as if" because the volume initialization does not truly write all zeroes to the volume's space on the drives. See the Initialize volumes page for more information.

Thank you for pointing out the other forum post. I will respond to that thread as well.


Team NetApp