Talk and ask questions about NetApp FAS and AFF series unified storage systems. Talk with other members about how to optimize these powerful data storage systems.
I was wondering: is there an advantage to having 4 shelves full of 1.8 TB SAS disks compared to 1 shelf of 15.3 TB SSD disks? I can imagine the advantages of the 15.3 TB SSD shelf: lower power costs, fewer broken disks, more space left in the cabinet, and I assume faster data throughput. So I am wondering, are there also disadvantages other than investing in new disks again? Regards, Maurice
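For context, here is my rough back-of-envelope math, assuming 24-drive shelves and counting raw capacity before RAID, spares and right-sizing (adjust for your actual shelf models and layout):

4 shelves x 24 drives x 1.8 TB SAS  = 172.8 TB raw across 96 spindles
1 shelf   x 24 drives x 15.3 TB SSD = 367.2 TB raw across 24 drives

On those assumptions the single SSD shelf roughly doubles raw capacity while cutting rack space, power and drive count, and latency should be far lower. The usual counterpoints, beyond purchase price, are that all of that capacity then sits in a single shelf (one failure domain) and far fewer drives, so it is worth weighing resiliency and RAID group layout against the savings.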
Hi everyone, My folder backup snapshot copies are suddenly being lost. I monitor them almost every day, and I notice that sometimes my snapshot count can drop from 45 copies yesterday to only 1 copy today. There is enough storage; the volume is only about 60% full. I have already changed the snapshot reserve to 10% for all folders.
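For reference, these are the checks I am running from the cluster shell (SVM and volume names below are placeholders): whether the snapshot policy retention counts match what I expect, and whether snapshot autodelete is removing copies when space or reserve thresholds are reached.

::> volume show -vserver <svm> -volume <vol> -fields snapshot-policy, percent-snapshot-space
::> volume snapshot policy show -policy <policy_name>
::> volume snapshot autodelete show -vserver <svm> -volume <vol>
::> volume snapshot show -vserver <svm> -volume <vol>

If an external backup product manages these snapshot copies, its own retention settings could also be rotating them, so that is worth checking as well.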
Hi All,
We have a FAS8020 (2 controllers) with one DS2246 disk shelf, one DS4246 disk shelf, and 4 SSDs.
Can ADP be configured on this platform?
Please give your valuable feedback on an urgent basis.
Thanks,
Siddaraju
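While waiting for feedback, this is a sketch of what I plan to check from the cluster shell (aggregate name is a placeholder; this assumes ONTAP 9). As far as I understand, root-data partitioning is normally set up when a node is initialized via the boot menu rather than on a running system, so for now I am only verifying whether any disks are already partitioned:

::> storage disk show -partition-ownership
::> storage aggregate show-status -aggregate <aggr_name>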
I'm looking for the matrix or HCL that will tell me the highest ONTAP version the FAS8040 supports. I know I saw it once before, but for the life of me I can't find it.
Hello, I'm trying to do a DoD 'disk sanitize start' command on a number of old disk shelves we have, to ready them to be sold. We have a company policy that requires those drives to be DoD wiped before leaving our possession. I have a FAS8200 lab controller I'm using that's been upgraded to 9.7P10, and I'm trying to wipe in maintenance mode. Drives and shelf models are different among the stack, but that hasn't been a problem in the past. I have been able to wipe several shelves already (I'm doing 4x 24-drive shelves at a time), but for some reason the current stack I'm trying to wipe is not wanting to cooperate.

Every time I run a 'disk sanitize start -p 0x55 -p 0xaa -r -c 3 disk_name' command, it starts the wipe (as verified by a 'disk sanitize status' command), but when I queue up a 2nd drive, it gives me a string like the one below...

*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 1d.41.19
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.22 Shelf 56 Bay 22 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U58A] UID [500605BA:0068A5E4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.16 Shelf 56 Bay 16 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6JT8A] UID [500605BA:0067B070:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.10 Shelf 56 Bay 10 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6VU5A] UID [500605BA:00687CC4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.15 Shelf 56 Bay 15 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U0PA] UID [500605BA:0067AFEC:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.4 Shelf 56 Bay 4 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6UAAA] UID [500605BA:0067B004:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.3 Shelf 56 Bay 3 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U6LA] UID [500605BA:00686F48:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.23 Shelf 56 Bay 23 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T7ZA] UID [500605BA:0068BB8C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.11 Shelf 56 Bay 11 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6XG6A] UID [500605BA:0067AB54:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.12 Shelf 56 Bay 12 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6VREA] UID [500605BA:0068B0A4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.20 Shelf 56 Bay 20 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG59K9A] UID [500605BA:0067AE9C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.7 Shelf 56 Bay 7 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8AA] UID [500605BA:0067C624:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.2 Shelf 56 Bay 2 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8HA] UID [500605BA:00687D04:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.13 Shelf 56 Bay 13 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U8KA] UID [500605BA:0068AC4C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.21 Shelf 56 Bay 21 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6TZRA] UID [500605BA:0068AF80:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.8 Shelf 56 Bay 8 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6XE1A] UID [500605BA:0067AAF0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.9 Shelf 56 Bay 9 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8DA] UID [500605BA:0068B098:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.

I run a 'disk sanitize status' command and nothing comes up after attempting to wipe the 2nd drive. The 1st drive then shows as FAILED when running a 'disk show -v' command. Running a 'disk unfail' command doesn't unfail it. In looking at different KBs and discussions, I tried to do a label wipe, wipev1 and makespare of the drive, but that doesn't fix it. Same thing: I'm able to start the sanitize process on 1 drive, but then all fail when trying to sanitize a 2nd drive. Running an 'aggr status -r' also shows the same output as above. Please help!
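To summarize, the per-drive maintenance-mode sequence I'm attempting looks like this (disk names are placeholders; the label commands are the ones I tried from the KBs):

*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 <1st_disk>   (starts fine on the 1st drive)
*> disk sanitize status                                      (confirms the wipe is running)
*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 <2nd_disk>    (triggers the 'no valid labels' flood above)
*> disk show -v                                              (1st drive now shows FAILED)
*> disk unfail <1st_disk>                                    (does not unfail it)
*> label wipe <1st_disk>                                     (no change; same for wipev1 and makespare)
*> aggr status -r                                            (shows the same output as above)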