Hello, I'm going to migrate from an 8.2.1 cluster-mode 4-node controller to new equipment. What are some ways to migrate from 8.2.1 to ONTAP 9.15? The 7-Mode-to-cDOT transition SnapMirror only applies to 7-Mode sources, and there is no 8.2.1 entry in the cDOT-to-cDOT SnapMirror compatibility matrix.
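For context, the usual cDOT-to-cDOT path is cluster peering plus volume SnapMirror, and if a direct 8.2.1-to-9.15 relationship is not supported (the missing matrix entry suggests it isn't), each hop through an intermediate cluster looks the same. A minimal sketch, assuming placeholder cluster, SVM, volume names, and LIF IPs; verify every hop against the Interoperability Matrix before relying on it, and fall back to a host-based copy (robocopy/rsync/XCP) if no supported hop exists:

# Peer the clusters using each side's intercluster LIF IPs (placeholders):
old821::> cluster peer create -peer-addrs 10.0.0.21,10.0.0.22
new915::> cluster peer create -peer-addrs 10.0.0.11,10.0.0.12
# Peer the SVMs for SnapMirror (names are placeholders):
new915::> vserver peer create -vserver svm_new -peer-vserver svm_old -applications snapmirror -peer-cluster old821
# Create and initialize the mirror from the destination side:
new915::> snapmirror create -source-path svm_old:vol1 -destination-path svm_new:vol1_dst
new915::> snapmirror initialize -destination-path svm_new:vol1_dst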
Hello,
I have a source volume 5 TB in size; 3 TB is data space and 2 TB is snapshot space. I would like to create a destination volume, and let's assume there are no snapshots on the destination. Can the destination volume be 3 TB, matching the source data space, or must it be the same full size as the source, 5 TB?
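For reference, here is a sketch of how to check the data/snapshot split on the source and create a DP-type destination; vserver, volume, and aggregate names are placeholders, and the minimum allowed destination size depends on the SnapMirror type in use, so verify the sizing rule in the docs for your release:

::> volume show -vserver svm1 -volume src_vol -fields size,used,percent-snapshot-space
::> volume show-space -vserver svm1 -volume src_vol
::> volume create -vserver svm1 -volume dst_vol -aggregate aggr1 -size 5TB -type DP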
Many thanks!
Joel
Does anyone have steps on how to prep a previously used (NetApp) disk under ONTAP 7.2.4?
I am looking for the steps to relabel replacement "used" disks, under ONTAP 7.2.4. (I know, I know!)
Sure enough, I lost two disks in two separate arrays (all within a 4-day period!)
The Controller shut itself down before I could fly out to replace the disks.
I swapped out the disks with "refurbished" ones from ServerSupply, and went into Maintenance Mode.
Performed "disk assign all" and restarted.
I ran "aggr status -f" because all of the new disks complain about not having any valid labels!
*> aggr status -f
Thu Apr 4 20:31:14 GMT [raid.assim.disk.nolabels:error]: Disk 0c.35 Shelf 2 Bay 3 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WM0PEC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:14 GMT [raid.assim.disk.nolabels:error]: Disk 0b.59 Shelf 3 Bay 11 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WEGPHC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.assim.disk.nolabels:error]: Disk 0b.54 Shelf 3 Bay 6 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WHSSPC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.assim.disk.nolabels:error]: Disk 0b.42 Shelf 2 Bay 10 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WKVY2C] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0c.35 Shelf 2 Bay 3 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WM0PEC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.59 Shelf 3 Bay 11 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WEGPHC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.54 Shelf 3 Bay 6 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WHSSPC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.42 Shelf 2 Bay 10 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WKVY2C] has bad label.
Broken disks

RAID Disk  Device  HA  SHELF  BAY  CHAN  Pool  Type  RPM    Used (MB/blks)    Phys (MB/blks)
---------  ------  --  -----  ---  ----  ----  ----  -----  ----------------  ----------------
bad label  0b.42   0b  2      10   FC:B  -     FCAL  15000  272000/557056000  274845/562884296
bad label  0b.54   0b  3      6    FC:B  -     FCAL  15000  272000/557056000  274845/562884296
bad label  0b.59   0b  3      11   FC:B  -     FCAL  15000  272000/557056000  274845/562884296
bad label  0c.35   0c  2      3    FC:A  -     FCAL  15000  272000/557056000  274845/562884296

No root aggregate or root traditional volume found. You must specify a root aggregate or traditional volume with "aggr options <name> root" before rebooting the system.
I don't know what commands I should use to force a new label; my pool is Pool0 and my owner is file01.
I'm not sure what else it is looking for.
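For anyone searching later, a sketch of the maintenance-mode approach commonly cited for "bad label" disks on 7.x releases, using the disk names from the output above; confirm that "label makespare" exists and behaves this way on your exact 7.2.4 build before running it, since it rewrites the old RAID labels, and the root aggregate name (aggr0 here) is an assumption:

*> disk show -v                 # confirm the replacements are owned by file01
*> label makespare 0c.35        # wipe the foreign RAID label; disk becomes a spare
*> label makespare 0b.59
*> label makespare 0b.54
*> label makespare 0b.42
*> aggr status -f               # the failed list should now be empty
*> aggr options aggr0 root      # only if the root-aggregate complaint persists; use your real root aggr name
*> halt                         # then boot normally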
IMPORTANT - I AM IN MAINTENANCE MODE, and unable to work, so a quick response would be very welcome!
Thanks everyone!
We are on ONTAP 9.14.1P8. After we enabled anti-ransomware on our CIFS volumes, we started receiving a long list of suspect files every day, which we know should not be ransomware-related. My question is how we can safely determine that they can be skipped. Are there any general rules or utilities we can use for this purpose? Otherwise, nobody wants to mark the files as false positives. Please share your experience if any. Thanks in advance!
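For reference, ONTAP includes commands for reviewing an ARP attack report and clearing suspects by extension once you've judged them benign. A sketch, with vserver, volume, report path, and extension as placeholders; check the 9.14 command reference for the exact options on your release:

::> security anti-ransomware volume show -vserver svm1 -volume cifs_vol
::> security anti-ransomware volume attack generate-report -volume cifs_vol -dest-path svm1:report_vol
::> security anti-ransomware volume clear-suspect -vserver svm1 -volume cifs_vol -extension tmp -false-positive true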
Apologies if I get the terminology all wrong. I've got two NetApp filers configured as a cluster. It's ancient hardware; I migrated most of the servers hosted on there to Azure, so we didn't bother renewing the support contract.

One of the disks died a while ago, but we had a spare which kicked in, and it ran without redundancy. As luck would have it, just before the migration project was finished, another disk died, and that brought down everything. I'm assuming the disk dying brought it down and that no other hardware has failed.

I would have thought the second node would have taken over, but it didn't. The status on the node says: "The takeover cannot be initiated because storage failover is disabled." I'm guessing that because the disk died in the first node, that brought it down. So I thought if I replaced the disk and then assigned it to the downed node, it would come back to life. However, I can only assign the unassigned disk to the node it can see; I can't get to node 1 at all. So I think my only option is to try and force failover to the second node in the cluster.

FASCLUS1::> cf status
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
FASCLUS1-01    ARBFASCLUS1-02 -        Unknown
FASCLUS1-02    ARBFASCLUS1-01 false    Waiting for FASCLUS1-01, Takeover
                                       is not possible: Partner node
                                       halted after disabling takeover

I think my only option would be to run cf forcetakeover from the 02 node. Is this a good idea? Anything else I can try?
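For reference, on clustered ONTAP the cf commands are aliases for the storage failover family, so the forced takeover would look something like the sketch below. This is a last resort: a forced takeover can discard writes the partner never synced, and with node 1 halted it may still refuse if its mailbox disks are unreadable, so check the docs (or support, if possible) first:

FASCLUS1::> storage failover show                # cf status is an alias for this
FASCLUS1::> set -privilege advanced
FASCLUS1::*> storage failover takeover -ofnode FASCLUS1-01 -option force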