admin@30.30.30.72's password:
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02> system console
Type Ctrl-D to exit.

SP-login: admin
Password:
*****************************************************
*   This is an SP console session. Output from the  *
*  serial console is also mirrored on this session.  *
*****************************************************

testcls::>
testcls::>
testcls::>
testcls::> storage f
    failover    firmware
testcls::> storage f
    failover    firmware
testcls::> storage failover show
    show            show-giveback   show-takeover
testcls::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
testcls-01     testcls-02     true     Connected to testcls-02
testcls-02     testcls-01     true     Connected to testcls-01
2 entries were displayed.

testcls::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_testcls_01
           953.8GB   46.24GB   95% online       1 testcls-01       raid_dp,
                                                                    normal
aggr0_testcls_02
           953.8GB   46.24GB   95% online       1 testcls-02       raid_dp,
                                                                    normal
2 entries were displayed.

testcls::> run -node testcls-0
    testcls-01    testcls-02
testcls::> run -node testcls-01
Too many users logged in! Please try again later.

testcls::> run -node testcls-02
Type 'exit' or 'Ctrl-D' to return to the CLI

testcls-02> options disk
disk.abort_threshold.enable       on      (value might be overwritten in takeover)
disk.asup_on_mp_loss              on      (value might be overwritten in takeover)
disk.auto_assign                  on      (value might be overwritten in takeover)
disk.auto_assign_policy           default (value might be overwritten in takeover)
disk.latency_check.enable         on      (value might be overwritten in takeover)
disk.latency_check.enter_maint    on      (value might be overwritten in takeover)
disk.maint_center.allowed_entries 1       (value might be overwritten in takeover)
disk.maint_center.enable          on      (value might be overwritten in takeover)
disk.maint_center.max_disks       84      (value might be overwritten in takeover)
disk.maint_center.spares_check    on      (value might be overwritten in takeover)
disk.powercycle.enable            on      (value might be overwritten in takeover)
disk.reassign_ssd.enable          off     (value might be overwritten in takeover)
testcls-02>

SP gecici-2552-02> system console
Type Ctrl-D to exit.

testcls::>
testcls::>
testcls::> node
    autosupport      coredump         environment      external-cache
    halt             hardware         internal-switch  modify
    power            reboot           rename           root-mount
    run              run-console      show             show-discovered
    virtual-machine
testcls::> node run
    run    run-console
testcls::> node run
    run    run-console
testcls::> node run -node testcls-0
    testcls-01    testcls-02
testcls::> node run -node testcls-0* -
    -command    -reset
testcls::> node run -node testcls-0* -command options disk.auto_assign off
2 entries were acted on.

Node: testcls-01
You are changing option disk.auto_assign, which applies to both members of
the HA configuration in takeover mode. This value must be the same on both
HA members to ensure correct takeover and giveback operation.

Node: testcls-02
You are changing option disk.auto_assign, which applies to both members of
the HA configuration in takeover mode. This value must be the same on both
HA members to ensure correct takeover and giveback operation.
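
Disk auto-assignment is turned off on both controllers here so that ownership of the spares does not move around during the upcoming aggregate and relocation work. The setting can be checked, and later turned back on, from either shell; the following is a sketch rather than output from this session, and the clustershell form should be verified against your ONTAP release:

    node run -node testcls-0* -command options disk.auto_assign
    storage disk option show
    storage disk option modify -node testcls-0* -autoassign on
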
testcls::> system service
    service-processor    services
testcls::> system service-processor show
                                IP         Firmware
Node          Type  Status     Configured Version   IP Address
------------- ----  ---------- ---------- --------- -------------------------
testcls-01    SP    online     true       2.11      -
testcls-02    SP    online     true       2.11      30.30.30.72
2 entries were displayed.

testcls::> system service
    service-processor    services
testcls::> system service
    service-processor    services
testcls::> system servicep

Error: "servicep" is not a recognized command

testcls::> system service-processor reboot-sp -node testcls-0
    testcls-01    testcls-02
testcls::> system service-processor reboot-sp -node testcls-0
    testcls-01    testcls-02
testcls::> system service-processor reboot-sp -node testcls-0*

Note: If your console connection is through the SP, it will be disconnected.
Do you want to reboot the SP ? {y|n}: y

login as: admin
admin@30.30.30.72's password:
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02> system console
Type Ctrl-D to exit.

testcls::>
testcls::>
testcls::> system service
    service-processor    services
testcls::> system service-processor show
                                IP         Firmware
Node          Type  Status     Configured Version   IP Address
------------- ----  ---------- ---------- --------- -------------------------
testcls-01    SP    online     true       2.11      -
testcls-02    SP    online     true       2.11      30.30.30.72
2 entries were displayed.

testcls::> run -node testcls-0
    testcls-01    testcls-02
testcls::> run -node testcls-0
    testcls-01    testcls-02
testcls::> run -node testcls-02
Type 'exit' or 'Ctrl-D' to return to the CLI

testcls-02> disk show -n
disk show: No unassigned disks

testcls-02> aggr status -s

Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk Device   HA  SHELF BAY CHAN Pool Type RPM   Used (MB/blks)     Phys (MB/blks)
--------- ------   --  ----- --- ---- ---- ---- ---   --------------     --------------
Spare disks for block checksum
spare     0a.01.16 0a  1     16  SA:B 0    SAS  10000 1142352/2339537408 1144641/2344225968
spare     0a.01.19 0a  1     19  SA:B 0    SAS  10000 1142352/2339537408 1144641/2344225968
spare     0a.01.21 0a  1     21  SA:B 0    SAS  10000 1142352/2339537408 1144641/2344225968
spare     0a.01.22 0a  1     22  SA:B 0    SAS  10000 1142352/2339537408 1144641/2344225968
spare     0a.01.23 0a  1     23  SA:B 0    SAS  10000 1142352/2339537408 1144641/2344225968
spare     0a.01.4  0a  1     4   SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.6  0a  1     6   SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.9  0a  1     9   SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.10 0a  1     10  SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.11 0a  1     11  SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.15 0a  1     15  SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.17 0a  1     17  SA:B 0    SSD  N/A   190532/390209536   190782/390721968
spare     0a.01.18 0a  1     18  SA:B 0    SSD  N/A   190532/390209536   190782/390721968
testcls-02>

SP gecici-2552-02> system console
Type Ctrl-D to exit.
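
The nodeshell `aggr status -s` output above is what confirms testcls-02 has five SAS and eight SSD spares available before any new aggregate is created. The same inventory can be read from the clustershell; a sketch along these lines (not captured in this session) should show the equivalent view:

    storage aggregate show-spare-disks
    storage disk show -container-type spare
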
testcls::>
testcls::>
testcls::> aggr create -aggregate
    Aggregate
testcls::> aggr create -aggregate sas -diskcount
    Number Of Disks
testcls::> aggr create -aggregate sas -diskcount 5 -
    -chksumstyle   -diskrpm        -disksize   -disktype   -diskclass
    -mirror        -pool           -node       -maxraidsize
    -raidtype      -simulate       -snaplock-type
    -encrypt-with-aggr-key
testcls::> aggr create -aggregate sas -diskcount 5 -m
    -mirror    -maxraidsize
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -
    -chksumstyle   -diskrpm        -disksize   -disktype   -diskclass
    -mirror        -pool           -node       -raidtype   -simulate
    -snaplock-type -encrypt-with-aggr-key
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 --no

Error: "--no" was not expected.  Please specify -fieldname first.

testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 --no

Error: "--no" was not expected.  Please specify -fieldname first.

testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-0
    testcls-01    testcls-02
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02 -
    -chksumstyle   -diskrpm        -disksize   -disktype   -diskclass
    -mirror        -pool           -raidtype   -simulate   -snaplock-type
    -encrypt-with-aggr-key
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02

Info: The layout for aggregate "sas" on node "testcls-02" would be:

      First Plex

        RAID Group rg0, 5 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    1.1.4                     SSD               -        -
          parity     1.1.6                     SSD               -        -
          data       1.1.9                     SSD         186.0GB  186.3GB
          data       1.1.10                    SSD         186.0GB  186.3GB
          data       1.1.11                    SSD         186.0GB  186.3GB

      Aggregate capacity available for volume use would be 502.3GB.

Do you want to continue? {y|n}:

testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02 -
    -chksumstyle   -diskrpm        -disksize   -disktype   -diskclass
    -mirror        -pool           -raidtype   -simulate   -snaplock-type
    -encrypt-with-aggr-key
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02 -disk
    -diskrpm    -disksize    -disktype    -diskclass
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02 -diskrpm
    5400    7200    10000    15000
testcls::> aggr create -aggregate sas -diskcount 5 -maxraidsize 5 -node testcls-02 -diskrpm 10000

Info: The layout for aggregate "sas" on node "testcls-02" would be:

      First Plex

        RAID Group rg0, 5 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    1.1.16                    SAS               -        -
          parity     1.1.19                    SAS               -        -
          data       1.1.21                    SAS          1.09TB   1.09TB
          data       1.1.22                    SAS          1.09TB   1.09TB
          data       1.1.23                    SAS          1.09TB   1.09TB

      Aggregate capacity available for volume use would be 2.94TB.

Do you want to continue? {y|n}: t
Do you want to continue? {y|n}: y

[Job 27] Job succeeded: DONE

testcls::> aggr create -aggregate sas -diskcount 8 -maxraidsize 8 -node testcls-02

Error: command failed: An aggregate already uses sas as name

testcls::> aggr create -aggregate ssd -diskcount 8 -maxraidsize 8 -node testcls-02

Info: The layout for aggregate "ssd" on node "testcls-02" would be:

      First Plex

        RAID Group rg0, 8 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    1.1.4                     SSD               -        -
          parity     1.1.6                     SSD               -        -
          data       1.1.9                     SSD         186.0GB  186.3GB
          data       1.1.10                    SSD         186.0GB  186.3GB
          data       1.1.11                    SSD         186.0GB  186.3GB
          data       1.1.15                    SSD         186.0GB  186.3GB
          data       1.1.17                    SSD         186.0GB  186.3GB
          data       1.1.18                    SSD         186.0GB  186.3GB

      Aggregate capacity available for volume use would be 1004GB.

Warning: No suitable spare disks are available for a potential future coredump operation.
Press <space> to page down, <return> for next line, or 'q' to quit...

Do you want to continue? {y|n}: y

[Job 28] Job succeeded: DONE.

Warning: No suitable spare disks are available for a potential future coredump operation.

testcls::> aggr show
    show                          show-auto-provision-progress
    show-cumulated-efficiency     show-efficiency
    show-resync-status            show-scrub-status
    show-space                    show-spare-disks
    show-status
testcls::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_testcls_01
           953.8GB   46.24GB   95% online       1 testcls-01       raid_dp,
                                                                    normal
aggr0_testcls_02
           953.8GB   46.24GB   95% online       1 testcls-02       raid_dp,
                                                                    normal
sas         2.94TB    2.94TB    0% online       0 testcls-02       raid_dp,
                                                                    normal
ssd         1004GB    1004GB    0% online       0 testcls-02       raid_dp,
                                                                    normal
4 entries were displayed.

testcls::> Jun 14 07:45:00 [testcls-02:monitor.globalStatus.critical:EMERGENCY]: There are not enough spare disks. Disk shelf fault.

testcls::>
testcls::>
testcls::> aggr show
    show                          show-auto-provision-progress
    show-cumulated-efficiency     show-efficiency
    show-resync-status            show-scrub-status
    show-space                    show-spare-disks
    show-status
testcls::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_testcls_01
           953.8GB   46.24GB   95% online       1 testcls-01       raid_dp,
                                                                    normal
aggr0_testcls_02
           953.8GB   46.24GB   95% online       1 testcls-02       raid_dp,
                                                                    normal
sas         2.94TB    2.94TB    0% online       0 testcls-02       raid_dp,
                                                                    normal
ssd         1004GB    1004GB    0% online       0 testcls-02       raid_dp,
                                                                    normal
4 entries were displayed.

testcls::> storage aggregate relocation s
    show    start
testcls::> storage aggregate relocation start -node testcls-0
    testcls-01    testcls-02
testcls::> storage aggregate relocation start -node testcls-02

testcls::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            testcls-01_clus1
                         up/up    169.254.97.97/16   testcls-01    e0e     true
            testcls-01_clus2
                         up/up    169.254.52.227/16  testcls-01    e0f     true
            testcls-02_clus1
                         up/up    169.254.129.70/16  testcls-02    e0e     true
            testcls-02_clus2
                         up/up    169.254.182.108/16 testcls-02    e0f     true
testcls
            cluster_mgmt up/up    30.30.30.120/24    testcls-01    e0a     false
            testcls-01_mgmt1
                         up/up    30.30.30.119/24    testcls-01    e0a     false
            testcls-02_mgmt1
                         up/up    30.30.30.118/24    testcls-02    e0M     true
7 entries were displayed.
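
Both layout previews above were produced by letting the command run up to its confirmation prompt; the -simulate parameter visible in the completion lists performs the same dry run without committing anything, which is handy for checking which spares a given disk count would claim. The EMERGENCY message after the second create is also expected: the two new aggregates consumed every spare on testcls-02, so the node now has nothing left for reconstruction or a future coredump. A quick way to confirm both points from the clustershell might look like this (a sketch, not captured in this session; verify the exact parameters and severity names on your release):

    aggr create -aggregate ssd -diskcount 8 -maxraidsize 8 -node testcls-02 -simulate true
    storage aggregate show-spare-disks
    event log show -severity EMERGENCY
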
testcls::> network port show
    show    show-address-filter-info
testcls::> network port show

Node: testcls-01
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          down 1500 auto/-      -
e0a       Default      Default          up   1500 auto/100    healthy
e0b       Default      Default          up   1500 auto/100    healthy
e0e       Cluster      Cluster          up   9000 auto/10000  healthy
e0f       Cluster      Cluster          up   9000 auto/10000  healthy

Node: testcls-02
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M       Default      Default          up   1500 auto/1000   healthy
e0a       Default      Default          down 1500 auto/-      -
e0b       Default      Default          down 1500 auto/-      -
e0e       Cluster      Cluster          up   9000 auto/10000  healthy
e0f       Cluster      Cluster          up   9000 auto/10000  healthy
10 entries were displayed.

testcls::>
testcls::>
testcls::>
testcls::> storage aggregate relocation s
    show    start
testcls::> storage aggregate relocation s

Error: Ambiguous command.  Possible matches include:
  storage aggregate relocation show
  storage aggregate relocation start

testcls::> set d

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

testcls::*> storage aggregate relocation s
    show    start
testcls::*> storage aggregate relocation start -node testcls-0
    testcls-01    testcls-02
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-0
    testcls-01    testcls-02
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-0
    testcls-01    testcls-02
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-01 -aggregate-list s
    sas    ssd
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-01 -aggregate-list sas,s
    sas    ssd
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-01 -aggregate-list sas,ssd -ndo-controller-upgrade
    true    false
testcls::*> storage aggregate relocation start -node testcls-02 -destination testcls-01 -aggregate-list sas,ssd -ndo-controller-upgrade true

Warning: Aggregate relocation will not change home ownership of an aggregate
         which is owned by the source node during an NDO controller upgrade.
         This parameter should be used only while performing NDO controller
         upgrade. Are you performing NDO controller upgrade? {y|n}: y

Info: Run the storage aggregate relocation show command to check relocation
      status.

testcls::*> storage
    storage            storage-service
testcls::*> storage aggregate re
    reallocation           relocation             remove-stale-record
    rename                 restrict               resynchronization
testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  In progress
3 entries were displayed.

testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  In progress
3 entries were displayed.

testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  In progress
3 entries were displayed.
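
With -ndo-controller-upgrade true the relocation moves current ownership of sas and ssd to testcls-01 but deliberately leaves their home ownership on testcls-02, so the repeated show output is the main progress indicator. Repeating the two commands below until both aggregates report "Done" is all the monitoring this step needs (a sketch of the check, not additional output from the session):

    storage aggregate relocation show
    storage failover show
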
testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  In progress
3 entries were displayed.

testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  In progress
3 entries were displayed.

testcls::*> storage aggregate relocation show
Source         Aggregate  Destination Relocation Status
-------------- ---------- ----------- -----------------
testcls-01     -          -           Not attempted yet
testcls-02     sas        testcls-01  Done
               ssd        testcls-01  Done
3 entries were displayed.

testcls::*> cluster ha show
High Availability Configured: true
High Availability Backend Configured (MBX): true

testcls::*> cluster ha modify -configured false

Warning: This operation will unconfigure cluster HA. Cluster HA must be
         configured on a two-node cluster to ensure data access availability
         in the event of storage failover. Do you want to continue? {y|n}: y

Notice: HA is disabled.

testcls::*> storage
    storage            storage-service
testcls::*> storage failover modify -node testcls-0
    testcls-01    testcls-02
testcls::*> storage failover modify -node testcls-0* -enabled false
2 entries were modified.

testcls::*> Jun 14 08:32:00 [testcls-02:monitor.globalStatus.critical:EMERGENCY]: Controller failover of testcls-01 is not possible: Controller Failover takeover disabled. There are not enough spare disks. Disk shelf fault.

login as: admin
admin@30.30.30.72's password:
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02>
SP gecici-2552-02> system console
Type Ctrl-D to exit.

testcls::*>
testcls::*>
testcls::*>
testcls::*> storage
    storage            storage-service
testcls::*> storage failover show
    show            show-giveback   show-takeover
testcls::*> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
testcls-01     testcls-02     false    Node owns partner's aggregates as part
                                       of the nondisruptive controller
                                       upgrade procedure. Takeover is not
                                       possible: Storage failover is disabled
testcls-02     testcls-01     false    Connected to testcls-01, Takeover is
                                       not possible: Storage failover is
                                       disabled
2 entries were displayed.

testcls::*> system node ha
    halt        hardware
testcls::*> system node halt -node testcls-0
    testcls-01    testcls-02
testcls::*> system node halt -node testcls-02

testcls::*> system sh
    show      shutdown
testcls::*> system show fi

Error: the value "fi" is invalid for type

testcls::*> system show -fields systemid
node       systemid
---------- ----------
testcls-01 0537062551
testcls-02 0537062842
2 entries were displayed.

testcls::*>
testcls::*>
testcls::*>
testcls::*> system node ha
    halt        hardware
testcls::*> system node halt -node testcls-0
    testcls-01    testcls-02
testcls::*> system node halt -node testcls-02 -
    -reason                          -inhibit-takeover
    -dump                            -skip-lif-migration-before-shutdown
    -ignore-quorum-warnings          -skip-epsilon-transition-before-shutdown
    -ignore-strict-sync-warnings     -power-off
testcls::*> system node halt -node testcls-02 -
    -reason                          -inhibit-takeover
    -dump                            -skip-lif-migration-before-shutdown
    -ignore-quorum-warnings          -skip-epsilon-transition-before-shutdown
    -ignore-strict-sync-warnings     -power-off
testcls::*> system node halt -node testcls-02

Warning: This operation will cause node "testcls-02" to be marked as unhealthy.
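
Disabling cluster HA and storage failover is what the two-node NDO controller-upgrade procedure calls for before halting a node, and it also explains the second EMERGENCY message: the surviving node can no longer take over. Once the hardware work is finished, the same settings are put back; roughly as follows (a sketch, not captured here, so verify the order against the upgrade guide for your release):

    storage failover modify -node testcls-0* -enabled true
    cluster ha modify -configured true
    storage failover show
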
         Unhealthy nodes do not participate in quorum voting. If the node
         goes out of service and one more node goes out of service there
         will be a data serving failure for the entire cluster. This will
         cause a client disruption. Use "cluster show" to verify cluster
         state. If possible bring other nodes online to improve the
         resiliency of this cluster.
Do you want to continue? {y|n}: y

SP-login:
Terminated.
Uptime: 2h1m5s
System halting...

Phoenix SecureCore(tm) Server
Copyright 1985-2008 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 8.3.0
Portions Copyright (c) 2008-2014 NetApp, Inc. All Rights Reserved

CPU = 1 Processors Detected, Cores per Processor = 2
Intel(R) Xeon(R) CPU C3528 @ 1.73GHz
Testing RAM
512MB RAM tested
18432MB RAM installed
256 KB L2 Cache per Processor Core
4096K L3 Cache Detected
System BIOS shadowed
USB 2.0: MICRON eUSB DISK
BIOS is scanning PCI Option ROMs, this may take a few seconds...
...................

Boot Loader version 4.3
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2014 NetApp, Inc. All Rights Reserved.

CPU Type: Intel(R) Xeon(R) CPU C3528 @ 1.73GHz

LOADER-B>
LOADER-B>
LOADER-B>
LOADER-B>
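
testcls-02 is now sitting at the boot loader, which is the point in this procedure where the controller or other hardware work would be carried out. From the LOADER prompt the usual next steps are to review the environment variables and then boot ONTAP again once that work is done; for example (standard LOADER commands shown as a sketch, not output captured from this session):

    printenv
    boot_ontap
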