ONTAP Hardware

Best practice for making SATA drives available on the network as CIFS shares

_NETWORX_

I have a FAS2040 and a DS4243. The FAS2040 has 12 x 1 TB SATA drives in it and the DS4243 has 24 x 450 GB SAS drives. I plan to connect the DS4243 to a physical switch for my vSphere 5 environment. However, the SATA drives will hold CIFS shares for video archiving, users' My Documents folders and various files, so I don't want to connect that to the physical switch that the DS4243 shelf is connected to. What is the best way to do this? Can I connect two NICs from the FAS2040 into my core and run it like that? What is the best practice?

Thanks

1 ACCEPTED SOLUTION

akw_white

Hi Robert

It looks like you have three distinct issues here:

1) Only one controller showing up in System Manager (is HA working?)

2) Accessing SAS disks in DS4243 shelf

3) Assigning all SATA disks to controller S1 and all SAS disks to controller S2

My thoughts...

1) I don't have much experience with HA issues; normally It Just Works. Here are a few things to check:

Has setup been run on S2? Does it have a VIF configured?

Command line tools you can use (a combined example follows these checks):

Run license on both controllers and verify you have a cluster license. If not, add it and reboot.

Run cf status on both controllers and see what it reports

Have you enabled "partner" interfaces on the VIFs on both controllers?

     Normally I would use the same VIF name on both controllers (vif0) to keep things simple

     Run ifconfig <vifname> and the result should include partner <vif-name-on-other-controller> (not in use)

     If it doesn't, run ifconfig <vifname> partner <vif-name-on-other-controller> to fix this

Run cf enable to enable HA
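
Pulling those together, a typical check on each controller would look something like this (just a rough sketch - the VIF name vif0 and the license code are placeholders, so adjust to your setup):

     license
     license add <cluster-license-code>
     cf status
     ifconfig vif0
     ifconfig vif0 partner vif0
     cf enable

Also remember that ifconfig settings made at the command line don't survive a reboot, so make sure the partner option is in /etc/rc on both controllers as well.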

Did you run setup on S2 before discovering S1 in System Manager? If not, try removing S1 from System Manager and adding it back in.

You can download the HA Configuration Checker tool from http://support.netapp.com/NOW/download/tools/cf_config_check/

2) What shelf ID have you set on the DS4243? The internal disks will be using shelf "0".

Use these commands to verify the SAS disks are visible from both controllers

storage show shelf - you should see two shelves

storage show disk -x - you should see disks on 0c.00.x (internal) and 0c.yy.x (DS4243) with different sizes (847 GB SATA and 418 GB SAS IIRC).
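
If you want to double-check ownership at the same time, disk show is handy (just a quick sketch, nothing destructive):

     disk show -v     - lists every disk with its current owner
     disk show -n     - lists any unowned disks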

3) Once 1 and 2 are sorted out you can go on to getting the SAS disks working on S2

The trick here is to migrate the root volume off the SATA drives so you can assign them all to S1. You need one root volume per controller; the disk type itself doesn't matter. In your case a single aggregate on each controller containing the root and data volumes will be fine.

First, assign all the SAS disks to S2
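
If the SAS disks are unowned at this point, something like this on S2 will pick them all up (note that disk assign all takes every unowned disk the controller can see, so only run it before you start unowning SATA disks for S1):

     disk show -n
     disk assign all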

Then create a new aggregate on S2 using 23 of the SAS disks, with something like this:

aggr create aggr1 -r 23  23@418

Which breaks down as:

aggregate name = aggr1

RAID group size (-r) = 23, for 21 data disks and 2 parity disks (RAID-DP)

Number of disks = 23, each 450 GB raw (418 GB right-sized in ONTAP)

(Note: using -r 23 means if you want to add new disks to this aggr later on you will need a new RAID group and therefore two more parity disks).

Once you've done that, run aggr status -r aggr1 to see the new aggregate and its disks

Then create a new volume on aggr1 of at least 16 GB with space guarantee = volume and make it your root volume - there are plenty of tutorials on the net covering this. Once that's done, destroy the old vol0 and rename the new volume to vol0.
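
As a rough sketch of that sequence (newroot is just a placeholder name, and copy /etc from the old root to the new one - ndmpcopy works for this - before rebooting so you keep your configuration):

     vol create newroot aggr1 16g
     ndmpcopy /vol/vol0/etc /vol/newroot/etc
     vol options newroot root
     (reboot - the controller now boots from newroot)
     vol offline vol0
     vol destroy vol0
     vol rename newroot vol0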

After creating the new root volume you can destroy aggr0 on S2 and assign all the SATA disks to S1. Use disk assign <disk> -s unowned on S2 to release ownership of a disk so it can be assigned to S1, for example:

On S2: disk assign 0c.00.6 -s unowned

On S1: disk assign 0c.00.6

On S1: aggr add aggr0 -d 0c.00.6 to add the new disk to the existing aggregate
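
One gotcha to watch for: if disk auto-assignment is on, S2 can grab a disk back the moment you unown it. I'd check it and, if necessary, turn it off on both controllers before you start moving disks (you can turn it back on afterwards):

     options disk.auto_assign
     options disk.auto_assign off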

Hope this helps. What method are you going to use to present your data to vSphere?

Cheers

Adam
