ONTAP Hardware
I have a FAS2040 and a DS4243. The FAS2040 has 12 1 TB SATA drives in it and the DS4243 has 24 450 GB SAS drives. I plan to connect the DS4243 to a physical switch for my vSphere 5 environment. However, the SATA drives will host CIFS shares for video archiving, users' My Documents, and various files, so I don't want to connect them to the physical switch that the DS4243 shelf is connected to. What is the best way to do this? Can I connect two NICs from the FAS2040 into my core and run it like that? What is best practice?
Thanks
Hi Robert
You can't connect the DS4243 to a switch; it needs to be connected to the FAS2040 filer, which then presents its disks to the network. If the 2040 is an HA pair you will need to run a SAS cable from the DS4243 to each controller board in the 2040 - have a look for the NetApp Universal SAS and ACP Cabling Guide document or the FAS2040 System Installation and Setup poster for more info.
Good practice is to create Virtual Interfaces (also called Interface Groups) consisting of two or more physical NICs on your FAS2040. You could create one VIF for CIFS and another for vSphere (NFS protocol or iSCSI) and attach them to separate physical switches, or create a single four-port VIF attached to a single switch stack and use VLAN tagging to separate the traffic into vSphere and CIFS. It just depends on your environment. VLANs give you some flexibility because if you want to add more services on different networks later you don't necessarily need more physical interfaces. Check out the Network Management Guide in the Data ONTAP documentation bundle for more info.
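As a rough illustration of the VLAN-tagged approach (the interface names, VLAN IDs, and addresses below are placeholders, and lacp assumes the switch ports are configured for dynamic LACP):
vif create lacp vif0 -b ip e0a e0b e0c e0d
vlan create vif0 10 20
ifconfig vif0-10 192.168.10.5 netmask 255.255.255.0
ifconfig vif0-20 192.168.20.5 netmask 255.255.255.0
Here VLAN 10 could carry the CIFS traffic and VLAN 20 the vSphere storage traffic, all over the one four-port VIF.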
Hope this helps, let me know if you have any further questions.
Cheers
Adam
Hi,
If I read your question correctly, a very similar scenario has been already discussed here:
https://communities.netapp.com/message/47474#47474
Regards,
Radek
Thanks all for the great advice. I now have both the FAS2040 and the DS4243 powered up. I created a vif on the FAS2040 for e0a and e0b and created an EtherChannel port on my Cisco switch. I can access the FAS2040 through the web interface and System Manager. I then connected to the console of the other controller and went through setup. In System Manager I can only see one controller, the first one, which has the SATA drives. I cannot see the 24 SAS drives in the DS4243. There is a root aggregate on the 2040 which has 3 drives and one spare, however I have 12 SATA drives in the 2040. How do I go about creating an aggregate and utilizing my SATA drives in the best way possible? Should the root aggregate be separate, and what size should it be? I want to use RAID-DP. Would 11 drives and 1 spare be a good configuration for the SATA drives? How do I get the SAS drives configured to where I can see them in System Manager and configure the shelf? Any help would be great, thanks again. I am going to use the SAS drives for a vSphere 5 environment.
Hi Robert
Have you physically connected both controllers to the DS4243?
Would 11 drives and 1 spare be a good configuration for the SATA drives?
You will need at least 3 drives assigned to controller B to host the root aggregate and root volume. If you run aggr status -r on both controllers it will show which disks are currently assigned to the controller and the aggregate. You will probably see that each controller is currently using 3 of the internal disks. If you want to assign all your SAS disks to one controller and all your SATA disks to the other you will need to:
1. Ensure both controllers can see all disks
2. Assign all SAS disks to controller B
3. Create a new aggregate on SAS on controller B
4. Create a new root volume on new aggregate on controller B - the procedure to do this is documented in various places including communities.netapp.com
5. Unassign all SATA disks from controller B and assign to controller A
At this point you can expand your aggregate on controller A to use 11 SATA disks.
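As a loose sketch of the ownership commands involved in steps 1, 2 and 5 (the disk names here are examples; your SAS disks will show up on whatever port and shelf ID you cabled):
disk show -n                    (list unowned disks, run on either head)
disk assign 0d.02.0 -o S2       (assign a SAS disk to controller B)
disk assign 0c.00.4 -s unowned  (on controller B, release a SATA disk)
disk assign 0c.00.4             (on controller A, claim the released disk)
Repeat per disk, or use a wildcard such as 0d.02.* if your ONTAP version accepts it.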
Hi akw_white,
Thanks for the reply. I have both controllers connected to the DS4243. Here is where I am. I am using OnCommand System Manager and the console. I named one controller S1 (which has the SATA drives) and the other controller S2 (which will have the SAS drives). In OnCommand System Manager I have a tab for S1 but no tab for S2. I think it shows up this way because it's in an HA configuration(?). Eight of the drives are owned by S1 and the rest are owned by S2; that is the way I received the unit from NetApp. All of the SAS drives are unassigned at this point. I tried to assign all of the remaining SATA drives owned by S2 to S1 so that all the SATA drives are on S1. However, when I try to do this via the console it tells me that S2 owns the drives and they cannot be changed. Do I need two root aggregates and root volumes, one for the SATA and the other for SAS? For the SAS shelf, using your instructions above, that would give me 21 drives available for use? I want to use RAID-DP on each controller. I am new to NetApp but I am trying really hard to learn, and I appreciate your time in answering my questions.
Here is my volume status on S1:
S1> vol status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.00.1 0c 0 1 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
parity 0c.00.3 0c 0 3 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
data 0c.00.5 0c 0 5 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
data 0c.00.9 0c 0 9 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
data 0c.00.8 0c 0 8 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
data 0c.00.7 0c 0 7 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
Pool1 spare disks (empty)
Pool0 spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0c.00.10 0c 0 10 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
spare 0c.00.11 0c 0 11 SA:A 0 SATA 7200 847555/1735794176 847884/1736466816
Partner disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 0c.00.2 0c 0 2 SA:A 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.0 0c 0 0 SA:A 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.4 0c 0 4 SA:A 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.6 0c 0 6 SA:A 0 SATA 7200 0/0 847884/1736466816
Volume status from S2:
S2> vol status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.00.0 0c 0 0 SA:B 0 SATA 7200 847555/1735794176 847884/1736466816
parity 0c.00.2 0c 0 2 SA:B 0 SATA 7200 847555/1735794176 847884/1736466816
data 0c.00.4 0c 0 4 SA:B 0 SATA 7200 847555/1735794176 847884/1736466816
Pool1 spare disks (empty)
Pool0 spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0c.00.6 0c 0 6 SA:B 0 SATA 7200 847555/1735794176 847884/1736466816
Partner disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 0c.00.11 0c 0 11 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.10 0c 0 10 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.9 0c 0 9 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.8 0c 0 8 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.5 0c 0 5 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.7 0c 0 7 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.3 0c 0 3 SA:B 0 SATA 7200 0/0 847884/1736466816
partner 0c.00.1 0c 0 1 SA:B 0 SATA 7200 0/0 847884/1736466816
Hi Robert
It looks like you have three distinct issues here:
1) Only one controller showing up in System Manager/is HA working?
2) Accessing SAS disks in DS4243 shelf
3) Assigning all SATA disks to controller S1 and all SAS disks to controller S2
My thoughts...
1) I don't have much experience with HA issues, normally It Just Works, so here are a few things to check:
Has setup been run on S2? Does it have a VIF configured?
Command line tools you can use:
Run license on both and verify you have a cluster license. If not, add it and reboot.
Run cf status and see what the result is
Have you enabled "partner" interfaces on the VIFs on both controllers?
Normally I would use the same VIF name on both controllers (vif0) to keep things simple
Run ifconfig <vifname> and the result should include partner <vif-name-on-other-controller> (not in use)
If it doesn't, run ifconfig <vifname> partner <vif-name-on-other-controller> to fix this
Run cf enable to enable HA
Did you run setup on S2 before discovering S1 in System Manager? If not, try removing S1 from system manager and adding it back in.
You can download the HA Configuration Checker tool from http://support.netapp.com/NOW/download/tools/cf_config_check/
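Putting those checks together, a typical console session on each head might look something like this (vif0 is assumed to be the VIF name on both controllers, and the license code is a placeholder):
license add <cluster-license-code>
ifconfig vif0 partner vif0
cf enable
cf status
Note that a partner setting made with ifconfig at the console does not survive a reboot unless you also add it to /etc/rc.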
2) What shelf ID have you set on the DS4243? The internal disks will be using shelf "0".
Use these commands to verify the SAS disks are visible from both controllers
storage show shelf - you should see two shelves
storage show disk -x - you should see disks on 0c.00.x (internal) and 0c.yy.x (DS4243) with different sizes (847 GB SATA and 418 GB SAS IIRC).
3) Once 1 and 2 are sorted out you can move on to getting the SAS disks working on S2
The trick here is to migrate the root volume off the SATA drives so you can assign them (SATA) to S1. You need one root volume per controller, the disk type itself doesn't matter. In your case a single aggregate on each controller containing the root and data volumes will be fine.
First, assign all the SAS disks to S2
Then create a new aggregate on S2 including 23 SAS disks using something like this:
aggr create aggr1 -r 23 23@418
Which breaks down as:
aggregate name = aggr1
RAID group size (-r) = 23, giving 21 data disks and 2 parity disks
number of disks = 23, each 450 GB raw (418 GB as seen by ONTAP)
(Note: using -r 23 means that if you want to add new disks to this aggr later on you will need a new RAID group and therefore two more parity disks.)
Once you've done that run aggr status -r aggr1 to see the new aggregate and its disks
Then create a new volume on aggr1 of at least 16 GB and space guarantee=volume. Make this your root volume - there are plenty of tutorials on the net on how to do this. Destroy the old vol0 and rename the new vol to vol0 once this is complete.
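For reference, a minimal sketch of that root volume move (newroot is a placeholder name, and 16g assumes it meets the minimum root volume size for your platform - check the Storage Management Guide):
vol create newroot -s volume aggr1 16g
vol options newroot root
reboot
vol offline vol0
vol destroy vol0
vol rename newroot vol0
The reboot is needed so the controller starts using the new root volume before you remove the old one.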
After creating the new root volume you can destroy aggr0 on S2 and assign all the SATA disks to S1. Use disk assign <disk> -s unowned on S2 to remove the disk so it can be assigned to S1, for example:
On S2: disk assign 0c.00.6 -s unowned
On S1: disk assign 0c.00.6
On S1: aggr add aggr0 -d 0c.00.6 to add the new disk to the existing aggregate
Hope this helps. What method are you going to use to present your data to vSphere?
Cheers
Adam
Hi Adam,
I did run setup on S2. I have one vif on S1 for e0a and e0b. I don't believe that I enabled partner on them, but I will. I did run setup on S2 before discovering S1 in System Manager. The shelf ID for the DS4243 is 02. I will follow your instructions and sort out the disks by assigning all SAS to S2 and all SATA to S1.
I have S1's vif connected to LAN switch ports that are EtherChanneled. These serve the SATA drives and will be used for user home directories, file shares, etc. S2 is not connected to anything yet, but the plan is to connect it to 2 Cisco switches and 2 R710s for vSphere. The R710s have 8 NICs each. I plan to use iSCSI.
I won't be onsite to try all this until Monday but I will post my results as soon as I do next week. Thank you very much for all of your help and taking the time to answer my questions.
Hi Robert
Please run the commands shown above and the ones aborzenkov mentioned and let us know what sort of result you get.
Paste output of
cf status
disk show -v
from both heads.
Robert Reynolds wrote:
I tried to assign all of the remaining SATA drives owned by S2 to S1 so that all the SATA drives are on S1. However, when I try to do this via the console it tells me that S2 owns the drives and they cannot be changed. Do I need two root aggregates and root volumes, one for the SATA and the other for SAS?
To achieve what you want, you need a root aggregate and root volume on each controller - one will end up on SATA (S1) and the other on SAS (S2), as Adam described above.
If these filers are new and do not have any user data yet, another possibility would be to reassign the disks the way you want them in maintenance mode and simply reinstall both heads. It would be valuable experience as well.
Hi Robert
Getting back to your original question about networking, for the HA pair to work you will need to configure two VIFs on each controller:
vif0 on S1 - connected to the EtherChanneled LAN switch ports serving user home directories, file shares, etc. on the SATA drives; assign the IP address for these services to this VIF and set its partner to vif0 (on S2)
vif1 on S1 - connected to the two Cisco switches and two R710s for vSphere; do not assign an IP address to this VIF, and set its partner to vif1 (on S2)
vif0 on S2 - connected to the same LAN switch setup as vif0 on S1; do not assign an IP address to this VIF, and set its partner to vif0 (on S1)
vif1 on S2 - connected to the vSphere switches; assign the IP address for the iSCSI target to this VIF and set its partner to vif1 (on S1)
This way if S1 goes down, the services it provides will be available via vif0 on S2, and if S2 goes down, the services it provides will be available via vif1 on S1. You will need to cable and configure the VIFs on both controllers in order to enable service continuity. The sketch below shows the matching ifconfig commands.
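A hedged sketch of those ifconfig lines (the IP addresses are placeholders; put the same commands in /etc/rc on each head so they persist across reboots):
On S1:
ifconfig vif0 10.0.1.10 netmask 255.255.255.0 partner vif0
ifconfig vif1 partner vif1
On S2:
ifconfig vif0 partner vif0
ifconfig vif1 10.0.2.10 netmask 255.255.255.0 partner vif1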
There was a recent thread on the best way to set up iSCSI, I will see if I can find it.
Hi All,
Here are the results of the commands you recommended I run. I did notice that I did not set the partner on the VIF and that I did not name it vif0, which I want to do. Is there a way to rename it, or should I just destroy it and create a new one? Also, what is the command to assign the disks as per your post above? Thanks for the help.
S1> storage show shelf
Shelf name: 0c.shelf0
Channel: 0c
Module: A
Shelf id: 0
Shelf UID: 50:0c:0f:f0:0d:bd:1a:3c
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 4
[IN1 ] OK 7 3.0 0 0 0 0 0 4
[IN2 ] OK 7 3.0 0 0 0 0 0 4
[IN3 ] OK 7 3.0 0 0 0 0 0 4
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 8
[ 1 ] OK 7 3.0 0 0 0 0 0 8
[ 2 ] OK 7 3.0 0 0 0 0 0 8
[ 3 ] OK 7 3.0 0 0 0 0 0 8
[ 4 ] OK 7 3.0 0 0 0 0 0 8
[ 5 ] OK 7 3.0 0 0 0 0 0 8
[ 6 ] OK 7 3.0 0 0 0 0 0 8
[ 7 ] OK 7 3.0 0 0 0 0 0 8
[ 8 ] OK 7 3.0 0 0 0 0 0 8
[ 9 ] OK 7 3.0 0 0 0 0 0 8
[ 10 ] OK 7 3.0 0 0 0 0 0 8
[ 11 ] OK 7 3.0 0 0 0 0 0 8
Shelf name: PARTNER.shelf0
Channel: PARTNER
Module: B
Shelf id: 0
Shelf UID: 50:0c:0f:f0:0d:bd:1a:3c
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 2
[IN1 ] OK 7 3.0 0 0 0 0 0 2
[IN2 ] OK 7 3.0 0 0 0 0 0 2
[IN3 ] OK 7 3.0 0 0 0 0 0 2
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 9
[ 1 ] OK 7 3.0 0 0 0 0 0 9
[ 2 ] OK 7 3.0 0 0 0 0 0 9
[ 3 ] OK 7 3.0 0 0 0 0 0 9
[ 4 ] OK 7 3.0 0 0 0 0 0 9
[ 5 ] OK 7 3.0 0 0 0 0 0 9
[ 6 ] OK 7 3.0 0 0 0 0 0 9
[ 7 ] OK 7 3.0 0 0 0 0 0 9
[ 8 ] OK 7 3.0 0 0 0 0 0 9
[ 9 ] OK 7 3.0 0 0 0 0 0 9
[ 10 ] OK 7 3.0 0 0 0 0 0 9
[ 11 ] OK 7 3.0 0 0 0 0 0 9
Shelf name: 0d.shelf2
Channel: 0d
Module: A
Shelf id: 2
Shelf UID: 50:05:0c:c1:02:02:a9:1a
Shelf S/N: SHJ0000000065CE
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] OK 7 3.0 0 0 0 0 0 2
[SQR1] OK 7 3.0 0 0 0 0 0 2
[SQR2] OK 7 3.0 0 0 0 0 0 2
[SQR3] OK 7 3.0 0 0 0 0 0 1
[CIR4] EMPTY 7 NA 0 0 0 0 0 0
[CIR5] EMPTY 7 NA 0 0 0 0 0 0
[CIR6] EMPTY 7 NA 0 0 0 0 0 0
[CIR7] EMPTY 7 NA 0 0 0 0 0 0
[ 0 ] OK 7 3.0 0 0 0 0 0 2
[ 1 ] OK 7 3.0 0 0 0 0 0 2
[ 2 ] OK 7 3.0 0 0 0 0 0 2
[ 3 ] OK 7 3.0 0 0 0 0 0 2
[ 4 ] OK 7 3.0 0 0 0 0 0 2
[ 5 ] OK 7 3.0 0 0 0 0 0 2
[ 6 ] OK 7 3.0 0 0 0 0 0 2
[ 7 ] OK 7 3.0 0 0 0 0 0 2
[ 8 ] OK 7 3.0 0 0 0 0 0 2
[ 9 ] OK 7 3.0 0 0 0 0 0 2
[ 10 ] OK 7 3.0 0 0 0 0 0 2
[ 11 ] OK 7 3.0 0 0 0 0 0 2
[ 12 ] OK 7 3.0 0 0 0 0 0 2
[ 13 ] OK 7 3.0 0 0 0 0 0 2
[ 14 ] OK 7 3.0 0 0 0 0 0 2
[ 15 ] OK 7 3.0 0 0 0 0 0 2
[ 16 ] OK 7 3.0 0 0 0 0 0 2
[ 17 ] OK 7 3.0 0 0 0 0 0 2
[ 18 ] OK 7 3.0 0 0 0 0 0 2
[ 19 ] OK 7 3.0 0 0 0 0 0 2
[ 20 ] OK 7 3.0 0 0 0 0 0 2
[ 21 ] OK 7 3.0 0 0 0 0 0 2
[ 22 ] OK 7 3.0 0 0 0 0 0 2
[ 23 ] OK 7 3.0 0 0 0 0 0 2
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
Shelf name: PARTNER.shelf2
Channel: PARTNER
Module: B
Shelf id: 2
Shelf UID: 50:05:0c:c1:02:02:a9:1a
Shelf S/N: SHJ0000000065CE
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] OK 7 3.0 0 0 0 0 0 2
[SQR1] OK 7 3.0 0 0 0 0 0 2
[SQR2] OK 7 3.0 0 0 0 0 0 2
[SQR3] OK 7 3.0 0 0 0 0 0 2
[CIR4] EMPTY 7 NA 0 0 0 0 0 0
[CIR5] EMPTY 7 NA 0 0 0 0 0 0
[CIR6] EMPTY 7 NA 0 0 0 0 0 0
[CIR7] EMPTY 7 NA 0 0 0 0 0 0
[ 0 ] OK 7 3.0 0 0 0 0 0 2
[ 1 ] OK 7 3.0 0 0 0 0 0 2
[ 2 ] OK 7 3.0 0 0 0 0 0 2
[ 3 ] OK 7 3.0 0 0 0 0 0 2
[ 4 ] OK 7 3.0 0 0 0 0 0 2
[ 5 ] OK 7 3.0 0 0 0 0 0 2
[ 6 ] OK 7 3.0 0 0 0 0 0 2
[ 7 ] OK 7 3.0 0 0 0 0 0 2
[ 8 ] OK 7 3.0 0 0 0 0 0 2
[ 9 ] OK 7 3.0 0 0 0 0 0 2
[ 10 ] OK 7 3.0 0 0 0 0 0 2
[ 11 ] OK 7 3.0 0 0 0 0 0 2
[ 12 ] OK 7 3.0 0 0 0 0 0 2
[ 13 ] OK 7 3.0 0 0 0 0 0 2
[ 14 ] OK 7 3.0 0 0 0 0 0 2
[ 15 ] OK 7 3.0 0 0 0 0 0 2
[ 16 ] OK 7 3.0 0 0 0 0 0 2
[ 17 ] OK 7 3.0 0 0 0 0 0 2
[ 18 ] OK 7 3.0 0 0 0 0 0 2
[ 19 ] OK 7 3.0 0 0 0 0 0 2
[ 20 ] OK 7 3.0 0 0 0 0 0 2
[ 21 ] OK 7 3.0 0 0 0 0 0 2
[ 22 ] OK 7 3.0 0 0 0 0 0 2
[ 23 ] OK 7 3.0 0 0 0 0 0 2
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
S1>
S1> storage show disk -x
DISK SHELF BAY SERIAL VENDOR MODEL REV
-------- --------- --------------- -------- ---------------- ----
0c.00.0 0 0 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.1 0 1 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.2 0 2 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.3 0 3 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.4 0 4 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.5 0 5 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.6 0 6 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.7 0 7 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.8 0 8 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.9 0 9 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.10 0 10 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0c.00.11 0 11 WD-xxxxxxxxxxxx NETAPP X298_WVULC01TSSS NA00
0d.02.0 2 0 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.1 2 1 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.2 2 2 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.3 2 3 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.4 2 4 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.5 2 5 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.6 2 6 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.7 2 7 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.8 2 8 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.9 2 9 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.10 2 10 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.11 2 11 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.12 2 12 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.13 2 13 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.14 2 14 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.15 2 15 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.16 2 16 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.17 2 17 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.18 2 18 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.19 2 19 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.20 2 20 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.21 2 21 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.22 2 22 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
0d.02.23 2 23 J1Xxxxxx NETAPP X411_HVIPC420A15 NA01
S1> cf status
Cluster disabled.
S1> disk show -v
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.5 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.3 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.9 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.1 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.4 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.7 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.6 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.10 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.0 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.11 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.2 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0d.02.4 Not Owned NONE J1Xxxxxx
0d.02.14 Not Owned NONE J1Xxxxxx
0d.02.23 Not Owned NONE J1Xxxxxx
0d.02.17 Not Owned NONE J1Xxxxxx
0d.02.10 Not Owned NONE J1Xxxxxx
0d.02.8 Not Owned NONE J1Xxxxxx
0d.02.2 Not Owned NONE J1Xxxxxx
0d.02.9 Not Owned NONE J1Xxxxxx
0d.02.1 Not Owned NONE J1Xxxxxx
0d.02.12 Not Owned NONE J1Xxxxxx
0d.02.15 Not Owned NONE J1Xxxxxx
0d.02.16 Not Owned NONE J1Xxxxxx
0d.02.11 Not Owned NONE J1Xxxxxx
0d.02.0 Not Owned NONE J1Xxxxxx
0d.02.6 Not Owned NONE J1Xxxxxx
0d.02.3 Not Owned NONE J1Xxxxxx
0d.02.20 Not Owned NONE J1Xxxxxx
0d.02.19 Not Owned NONE J1Xxxxxx
0d.02.7 Not Owned NONE J1Xxxxxx
0d.02.5 Not Owned NONE J1Xxxxxx
0d.02.18 Not Owned NONE J1Xxxxxx
0d.02.22 Not Owned NONE J1Xxxxxx
0d.02.13 Not Owned NONE J1Xxxxxx
0d.02.21 Not Owned NONE J1Xxxxxx
0c.00.8 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
S2> Tue Apr 3 15:16:53 GMT [S2: console_login_mgr:info]: root logged in from console
S2> storage show shelf
Shelf name: PARTNER.shelf0
Channel: PARTNER
Module: A
Shelf id: 0
Shelf UID: 50:0c:0f:f0:0d:bd:1a:3c
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 4
[IN1 ] OK 7 3.0 0 0 0 0 0 4
[IN2 ] OK 7 3.0 0 0 0 0 0 4
[IN3 ] OK 7 3.0 0 0 0 0 0 4
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 8
[ 1 ] OK 7 3.0 0 0 0 0 0 8
[ 2 ] OK 7 3.0 0 0 0 0 0 8
[ 3 ] OK 7 3.0 0 0 0 0 0 8
[ 4 ] OK 7 3.0 0 0 0 0 0 8
[ 5 ] OK 7 3.0 0 0 0 0 0 8
[ 6 ] OK 7 3.0 0 0 0 0 0 8
[ 7 ] OK 7 3.0 0 0 0 0 0 8
[ 8 ] OK 7 3.0 0 0 0 0 0 8
[ 9 ] OK 7 3.0 0 0 0 0 0 8
[ 10 ] OK 7 3.0 0 0 0 0 0 8
[ 11 ] OK 7 3.0 0 0 0 0 0 8
Shelf name: 0c.shelf0
Channel: 0c
Module: B
Shelf id: 0
Shelf UID: 50:0c:0f:f0:0d:bd:1a:3c
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 2
[IN1 ] OK 7 3.0 0 0 0 0 0 2
[IN2 ] OK 7 3.0 0 0 0 0 0 2
[IN3 ] OK 7 3.0 0 0 0 0 0 2
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 9
[ 1 ] OK 7 3.0 0 0 0 0 0 9
[ 2 ] OK 7 3.0 0 0 0 0 0 9
[ 3 ] OK 7 3.0 0 0 0 0 0 9
[ 4 ] OK 7 3.0 0 0 0 0 0 9
[ 5 ] OK 7 3.0 0 0 0 0 0 9
[ 6 ] OK 7 3.0 0 0 0 0 0 9
[ 7 ] OK 7 3.0 0 0 0 0 0 9
[ 8 ] OK 7 3.0 0 0 0 0 0 9
[ 9 ] OK 7 3.0 0 0 0 0 0 9
[ 10 ] OK 7 3.0 0 0 0 0 0 9
[ 11 ] OK 7 3.0 0 0 0 0 0 9
Shelf name: PARTNER.shelf2
Channel: PARTNER
Module: A
Shelf id: 2
Shelf UID: 50:05:0c:c1:02:02:a9:1a
Shelf S/N: SHJ0000000065CE
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] OK 7 3.0 0 0 0 0 0 2
[SQR1] OK 7 3.0 0 0 0 0 0 2
[SQR2] OK 7 3.0 0 0 0 0 0 2
[SQR3] OK 7 3.0 0 0 0 0 0 1
[CIR4] EMPTY 7 NA 0 0 0 0 0 0
[CIR5] EMPTY 7 NA 0 0 0 0 0 0
[CIR6] EMPTY 7 NA 0 0 0 0 0 0
[CIR7] EMPTY 7 NA 0 0 0 0 0 0
[ 0 ] OK 7 3.0 0 0 0 0 0 2
[ 1 ] OK 7 3.0 0 0 0 0 0 2
[ 2 ] OK 7 3.0 0 0 0 0 0 2
[ 3 ] OK 7 3.0 0 0 0 0 0 2
[ 4 ] OK 7 3.0 0 0 0 0 0 2
[ 5 ] OK 7 3.0 0 0 0 0 0 2
[ 6 ] OK 7 3.0 0 0 0 0 0 2
[ 7 ] OK 7 3.0 0 0 0 0 0 2
[ 8 ] OK 7 3.0 0 0 0 0 0 2
[ 9 ] OK 7 3.0 0 0 0 0 0 2
[ 10 ] OK 7 3.0 0 0 0 0 0 2
[ 11 ] OK 7 3.0 0 0 0 0 0 2
[ 12 ] OK 7 3.0 0 0 0 0 0 2
[ 13 ] OK 7 3.0 0 0 0 0 0 2
[ 14 ] OK 7 3.0 0 0 0 0 0 2
[ 15 ] OK 7 3.0 0 0 0 0 0 2
[ 16 ] OK 7 3.0 0 0 0 0 0 2
[ 17 ] OK 7 3.0 0 0 0 0 0 2
[ 18 ] OK 7 3.0 0 0 0 0 0 2
[ 19 ] OK 7 3.0 0 0 0 0 0 2
[ 20 ] OK 7 3.0 0 0 0 0 0 2
[ 21 ] OK 7 3.0 0 0 0 0 0 2
[ 22 ] OK 7 3.0 0 0 0 0 0 2
[ 23 ] OK 7 3.0 0 0 0 0 0 2
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
Shelf name: 0d.shelf2
Channel: 0d
Module: B
Shelf id: 2
Shelf UID: 50:05:0c:c1:02:02:a9:1a
Shelf S/N: SHJ0000000065CE
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] OK 7 3.0 0 0 0 0 0 2
[SQR1] OK 7 3.0 0 0 0 0 0 2
[SQR2] OK 7 3.0 0 0 0 0 0 2
[SQR3] OK 7 3.0 0 0 0 0 0 2
[CIR4] EMPTY 7 NA 0 0 0 0 0 0
[CIR5] EMPTY 7 NA 0 0 0 0 0 0
[CIR6] EMPTY 7 NA 0 0 0 0 0 0
[CIR7] EMPTY 7 NA 0 0 0 0 0 0
[ 0 ] OK 7 3.0 0 0 0 0 0 2
[ 1 ] OK 7 3.0 0 0 0 0 0 2
[ 2 ] OK 7 3.0 0 0 0 0 0 2
[ 3 ] OK 7 3.0 0 0 0 0 0 2
[ 4 ] OK 7 3.0 0 0 0 0 0 2
[ 5 ] OK 7 3.0 0 0 0 0 0 2
[ 6 ] OK 7 3.0 0 0 0 0 0 2
[ 7 ] OK 7 3.0 0 0 0 0 0 2
[ 8 ] OK 7 3.0 0 0 0 0 0 2
[ 9 ] OK 7 3.0 0 0 0 0 0 2
[ 10 ] OK 7 3.0 0 0 0 0 0 2
[ 11 ] OK 7 3.0 0 0 0 0 0 2
[ 12 ] OK 7 3.0 0 0 0 0 0 2
[ 13 ] OK 7 3.0 0 0 0 0 0 2
[ 14 ] OK 7 3.0 0 0 0 0 0 2
[ 15 ] OK 7 3.0 0 0 0 0 0 2
[ 16 ] OK 7 3.0 0 0 0 0 0 2
[ 17 ] OK 7 3.0 0 0 0 0 0 2
[ 18 ] OK 7 3.0 0 0 0 0 0 2
[ 19 ] OK 7 3.0 0 0 0 0 0 2
[ 20 ] OK 7 3.0 0 0 0 0 0 2
[ 21 ] OK 7 3.0 0 0 0 0 0 2
[ 22 ] OK 7 3.0 0 0 0 0 0 2
[ 23 ] OK 7 3.0 0 0 0 0 0 2
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
S2>
S2> storage show disk -x
DISK SHELF BAY SERIAL VENDOR MODEL REV
-------- --------- --------------- -------- ---------------- ----
0c.00.0 0 0 WD-WMAW30046366 NETAPP X298_WVULC01TSSS NA00
0c.00.1 0 1 WD-WMAW30015832 NETAPP X298_WVULC01TSSS NA00
0c.00.2 0 2 WD-WMAW30076410 NETAPP X298_WVULC01TSSS NA00
0c.00.3 0 3 WD-WMAW30013290 NETAPP X298_WVULC01TSSS NA00
0c.00.4 0 4 WD-WMAW30014663 NETAPP X298_WVULC01TSSS NA00
0c.00.5 0 5 WD-WMAW30015444 NETAPP X298_WVULC01TSSS NA00
0c.00.6 0 6 WD-WMAW30015189 NETAPP X298_WVULC01TSSS NA00
0c.00.7 0 7 WD-WMAW30020138 NETAPP X298_WVULC01TSSS NA00
0c.00.8 0 8 WD-WMAW30015240 NETAPP X298_WVULC01TSSS NA00
0c.00.9 0 9 WD-WMAW30015460 NETAPP X298_WVULC01TSSS NA00
0c.00.10 0 10 WD-WMAW30083252 NETAPP X298_WVULC01TSSS NA00
0c.00.11 0 11 WD-WMAW30083281 NETAPP X298_WVULC01TSSS NA00
0d.02.0 2 0 J1XS83MN NETAPP X411_HVIPC420A15 NA01
0d.02.1 2 1 J1XT80VN NETAPP X411_HVIPC420A15 NA01
0d.02.2 2 2 J1XLTJBN NETAPP X411_HVIPC420A15 NA01
0d.02.3 2 3 J1XSP9KN NETAPP X411_HVIPC420A15 NA01
0d.02.4 2 4 J1XLSEPN NETAPP X411_HVIPC420A15 NA01
0d.02.5 2 5 J1XLS5SN NETAPP X411_HVIPC420A15 NA01
0d.02.6 2 6 J1XMTAJN NETAPP X411_HVIPC420A15 NA01
0d.02.7 2 7 J1XRMUJN NETAPP X411_HVIPC420A15 NA01
0d.02.8 2 8 J1XS80DN NETAPP X411_HVIPC420A15 NA01
0d.02.9 2 9 J1XRLVHN NETAPP X411_HVIPC420A15 NA01
0d.02.10 2 10 J1XEYT4N NETAPP X411_HVIPC420A15 NA01
0d.02.11 2 11 J1XP5KSN NETAPP X411_HVIPC420A15 NA01
0d.02.12 2 12 J1XPB1XN NETAPP X411_HVIPC420A15 NA01
0d.02.13 2 13 J1XP58JN NETAPP X411_HVIPC420A15 NA01
0d.02.14 2 14 J1XEYN9N NETAPP X411_HVIPC420A15 NA01
0d.02.15 2 15 J1XPB4LN NETAPP X411_HVIPC420A15 NA01
0d.02.16 2 16 J1XLT8AN NETAPP X411_HVIPC420A15 NA01
0d.02.17 2 17 J1XLSBJN NETAPP X411_HVIPC420A15 NA01
0d.02.18 2 18 J1XLSUJN NETAPP X411_HVIPC420A15 NA01
0d.02.19 2 19 J1XLSWWN NETAPP X411_HVIPC420A15 NA01
0d.02.20 2 20 J1XLSV5N NETAPP X411_HVIPC420A15 NA01
0d.02.21 2 21 J1XLS60N NETAPP X411_HVIPC420A15 NA01
0d.02.22 2 22 J1XEYN4N NETAPP X411_HVIPC420A15 NA01
0d.02.23 2 23 J1XPAYAN NETAPP X411_HVIPC420A15 NA01
S2> cf status
Cluster disabled.
S2> disk show -v
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.1 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.3 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.11 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.0 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.2 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.4 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.6 S2 (142255951) Pool0 WD-xxxxxxxxxxxx
0c.00.7 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.8 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.9 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.10 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0c.00.5 S1 (142256056) Pool0 WD-xxxxxxxxxxxx
0d.02.9 Not Owned NONE J1Xxxxxx
0d.02.15 Not Owned NONE J1Xxxxxx
0d.02.6 Not Owned NONE J1Xxxxxx
0d.02.11 Not Owned NONE J1Xxxxxx
0d.02.0 Not Owned NONE J1Xxxxxx
0d.02.19 Not Owned NONE J1Xxxxxx
0d.02.18 Not Owned NONE J1Xxxxxx
0d.02.7 Not Owned NONE J1Xxxxxx
0d.02.22 Not Owned NONE J1Xxxxxx
0d.02.5 Not Owned NONE J1Xxxxxx
0d.02.4 Not Owned NONE J1Xxxxxx
0d.02.14 Not Owned NONE J1Xxxxxx
0d.02.13 Not Owned NONE J1Xxxxxx
0d.02.21 Not Owned NONE J1Xxxxxx
0d.02.17 Not Owned NONE J1Xxxxxx
0d.02.8 Not Owned NONE J1Xxxxxx
0d.02.2 Not Owned NONE J1Xxxxxx
0d.02.23 Not Owned NONE J1Xxxxxx
0d.02.16 Not Owned NONE J1Xxxxxx
0d.02.10 Not Owned NONE J1Xxxxxx
0d.02.1 Not Owned NONE J1Xxxxxx
0d.02.12 Not Owned NONE J1Xxxxxx
0d.02.20 Not Owned NONE J1Xxxxxx
0d.02.3 Not Owned NONE J1Xxxxxx
Hi Robert
The good news is that all your disks are visible, so you can proceed to assign the SAS disks to S2 as described above.
The bad news is that CF is currently disabled. You could try running cf enable on both controllers and see what happens. I recommend setting up the ifgrps as described above with the "partner" settings first.
The best option at this stage may be to download and run the HA Configuration Checker.
Cheers
Adam
Adam,
I assigned the SAS drives to S2. Now I am on the following step from your recommendation:
------------------------------------------------
Then create a new volume on aggr1 of at least 16 GB and space guarantee=volume. Make this your root volume - there are plenty of tutorials on the net on how to do this. Destroy the old vol0 and rename the new vol to vol0 once this is complete.
After creating the new root volume you can destroy aggr0 on S2 and assign all the SATA disks to S1. Use disk assign <disk> -s unowned on S2 to remove the disk so it can be assigned to S1
------------------------------------------------
I could not locate the proper documentation for creating the new root volume. How big should it be - is 16 GB big enough? I did create aggr1, and all the SAS drives are in that aggregate. What are the steps for creating the root volume on aggr1 and destroying aggr0 on S2? Thanks
The minimum root volume size for each platform is listed in the Storage Management Guide. Additionally, if you try to declare a volume as root (vol options root) it will complain if the volume is too small. A FlexVol can then easily be extended.
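For instance (newroot is a placeholder volume name and +4g is illustrative):
vol options newroot root     (complains if newroot is below the platform minimum)
vol size newroot +4g         (grow the FlexVol by 4 GB if needed)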