Hello everyone: I would like to ask how VLANs are configured on Brocade switches, or whether it is the case that not all Brocade switches support VLANs. Thank you.
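For context, Brocade's Ethernet switches (the FastIron/ICX family) do support port-based VLANs, while Brocade Fibre Channel SAN switches use zoning and Virtual Fabrics rather than Ethernet VLANs. A minimal sketch of a port-based VLAN on a FastIron-family switch, assuming hypothetical VLAN ID, name, and port numbers:

```shell
# Assumed FastIron/ICX CLI; VLAN 20, its name, and the ports are examples.
configure terminal
vlan 20 name Servers by port
 tagged ethernet 1/1/1 to 1/1/2    # trunk ports carrying VLAN 20 tagged
 untagged ethernet 1/1/5           # access port in VLAN 20
 exit
write memory
```

The exact port notation (stack/slot/port) varies by model, so check the syntax against your switch's configuration guide.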
Hello, we have a FAS2650 and a FAS8300 (the latter joined the cluster several days ago). 28 of 32 volumes have been moved to the new unit. The professional services person told us he would return next Friday to do some cleanup of the old FAS2650. I noticed a non-compliant warning regarding the licenses this morning. Would this warning cause any issues to services (so far I haven't witnessed any)? I have been trying to get hold of professional services but couldn't reach him (he is probably flying). Also, running "system license show" only returns licensing for our FAS2650. TT
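The license warning described above can be inspected from the clustershell. A minimal sketch, assuming both nodes have joined the cluster; a new node typically needs its own node-locked licenses added before the cluster reports compliant:

```shell
# List all licenses installed cluster-wide (per-node entitlements included)
system license show

# Show overall compliance state per licensed feature
system license show-status

# Confirm both controllers are actually members of the cluster
cluster show
```

If the FAS8300's licenses are missing from the output, they would normally be added with "system license add"; the non-compliant state by itself does not usually interrupt running services, but confirm with support for your specific features.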
I am working on adding a bunch of new disks to my current cluster. I have gotten to the point where all of my disks are in the spare pool and ready to be added to my aggregates. The general plan is to change the max RAID group size from 8 to 14. The aggregates in question each have two RAID groups of 8 disks:

aggr1_03: "/aggr1_03/plex0/rg0 (block)", "/aggr1_03/plex0/rg1 (block)"
aggr1_04: "/aggr1_04/plex0/rg0 (block)", "/aggr1_04/plex0/rg1 (block)"

For example, this is aggr1_03 (edited so it would not be so long):

rg0: 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.1, 2.1.2, 2.1.3, 2.1.4
rg1: 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.1.5, 2.1.6, 2.1.7, 2.1.8

So I went to simulate adding a bunch of disks to one of the aggregates and got this:

First Plex

  RAID Group rg1, 14 disks (block checksum, raid_dp)
                                              Usable   Physical
    Position   Disk          Type               Size       Size
    ---------- ------------- ----------     -------- --------
    data       2.3.6         FSAS             3.63TB   3.64TB
    data       2.3.7         FSAS             3.63TB   3.64TB
    data       2.3.8         FSAS             3.63TB   3.64TB
    data       2.3.9         FSAS             3.63TB   3.64TB
    data       2.3.10        FSAS             3.63TB   3.64TB
    data       2.3.11        FSAS             3.63TB   3.64TB

  RAID Group rg2, 14 disks (block checksum, raid_dp)
                                              Usable   Physical
    Position   Disk          Type               Size       Size
    ---------- ------------- ----------     -------- --------
    dparity    2.4.0         FSAS                  -        -
    parity     2.4.1         FSAS                  -        -
    data       2.4.2         FSAS             3.63TB   3.64TB
    data       2.4.3         FSAS             3.63TB   3.64TB
    data       2.4.4         FSAS             3.63TB   3.64TB
    data       2.4.5         FSAS             3.63TB   3.64TB
    data       2.4.6         FSAS             3.63TB   3.64TB
    data       2.4.7         FSAS             3.63TB   3.64TB
    data       2.6.0         FSAS             3.63TB   3.64TB
    data       2.6.1         FSAS             3.63TB   3.64TB
    data       2.6.2         FSAS             3.63TB   3.64TB
    data       2.6.3         FSAS             3.63TB   3.64TB
    data       2.6.4         FSAS             3.63TB   3.64TB
    data       2.6.5         FSAS             3.63TB   3.64TB

  RAID Group rg3, 6 disks (block checksum, raid_dp)
                                              Usable   Physical
    Position   Disk          Type               Size       Size
    ---------- ------------- ----------     -------- --------
    dparity    2.6.6         FSAS                  -        -
    parity     2.6.7         FSAS                  -        -
    data       2.6.8         FSAS             3.63TB   3.64TB
    data       2.6.9         FSAS             3.63TB   3.64TB
    data       2.6.10        FSAS             3.63TB   3.64TB
    data       2.6.11        FSAS             3.63TB   3.64TB

As you can see, it adds 6 disks to rg1, creates another group called rg2 and fills it with 14 disks, then creates another group called rg3 and adds 6. The result would be rg0, rg1, rg2, and rg3 with 8, 14, 14, and 6 disks respectively. It was my understanding that ONTAP would add 6 disks to rg0, add 6 disks to rg1, and then create another group called rg2 using my last 14 spares, so I would end up with rg0, rg1, and rg2 each having 14 disks. Why did I get the results I did from the simulation?
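For reference, the workflow described above can be sketched with standard ONTAP commands. By default, "storage aggregate add-disks" fills the most recently created RAID group and then creates new groups; it does not go back and grow earlier groups like rg0, which matches the simulated layout shown. Growing a specific existing group requires targeting it explicitly. Aggregate name and disk counts below are taken from the post; treat this as a sketch, not a verified procedure:

```shell
# Raise the maximum RAID group size on the aggregate first
storage aggregate modify -aggregate aggr1_03 -maxraidsize 14

# Default behavior: fills the last RAID group (rg1), then creates rg2, rg3...
storage aggregate add-disks -aggregate aggr1_03 -diskcount 26 -simulate true

# To grow a specific existing RAID group, name it explicitly:
storage aggregate add-disks -aggregate aggr1_03 -raidgroup rg0 -diskcount 6 -simulate true
storage aggregate add-disks -aggregate aggr1_03 -raidgroup rg1 -diskcount 6 -simulate true
```

Running the per-raidgroup form for rg0 and rg1, then a plain add for the remaining spares, would produce the 14/14/14 layout the post expected.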
Hello, everyone. I ran into some problems while upgrading the storage firmware. During the upgrade, after I initiated the takeover, the storage did not restart as expected. After restarting and loading, the controller stopped at the LOADER prompt, and I then checked the takeover status on the partner controller. In this situation, can I use "boot_ontap" to start this controller? What could be the cause of this problem?
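A minimal sketch of the recovery path being asked about, assuming the partner has successfully taken over and the halted node's boot image is healthy (the node name is a placeholder):

```shell
# On the partner node: confirm the takeover actually completed
storage failover show

# At the LOADER prompt of the halted node: boot ONTAP normally
boot_ontap

# The node should come up in "Waiting for giveback" state; then, from the partner:
storage failover giveback -ofnode <node-name>
```

If boot_ontap drops back to LOADER again, the boot device or boot image may be the underlying problem, and that is worth raising with support before retrying.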
Hello everyone, I would like to know whether it is possible to update ONTAP on one of the nodes via the command line in Maintenance Mode.
One of the cluster nodes is running version 9.10.1P11 and the other is running version 9.13.1P9, and I need both to be running the same version.
Thank you
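For what it's worth, ONTAP image updates are normally performed from the regular clustershell rather than from Maintenance Mode. A minimal sketch of a manual per-node update, with a hypothetical web server URL and placeholder node name:

```shell
# Check the ONTAP version currently running on each node
cluster image show

# Install the target package on the out-of-date node
# (URL is an example; host the 9.13.1P9 image on a reachable web server)
system node image update -node <node-name> -package http://webserver/913P9_q_image.tgz -replace-package true

# Verify which image is installed and which is the default boot image
system node image show
```

Whether a direct jump between these two versions is supported should be confirmed against the ONTAP upgrade path documentation before rebooting the node.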