Best practice way to make SATA drives with CIFS shares available on the network

I have a FAS2040 and a DS4243. The FAS2040 has twelve 1 TB SATA drives in it and the DS4243 has twenty-four 450 GB SAS drives. I plan to connect the DS4243 to a physical switch for my vSphere 5 environment. The SATA drives, however, will hold CIFS shares for video archiving, users' My Documents folders and various files, so I don't want to connect them to the physical switch that the DS4243 shelf uses. What is the best way to do this? Can I connect two NICs from the FAS2040 into my core switch and run it like that? What is best practice?

Thanks

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi Robert

You can't connect the DS4243 to a switch; it needs to be connected to the FAS2040 filer, which will then present its disks to the network. If the 2040 is an HA pair you will need to run a SAS cable from the DS4243 to each controller in the 2040. Have a look for the NetApp Universal SAS and ACP Cabling Guide document or the FAS2040 System Installation and Setup poster for more info.

Good practice is to create interface groups (also called Virtual Interfaces, or VIFs) consisting of two or more physical NICs on your FAS2040. You could create one VIF for CIFS and another for vSphere (NFS or iSCSI) and attach them to separate physical switches, or create a single four-port VIF attached to a single switch stack and use VLAN tagging to separate the vSphere and CIFS traffic. It depends on your environment, really. VLANs give you some flexibility: if you want to add more services on different networks later you don't necessarily need more physical interfaces. Check out the Network Management Guide in the Data ONTAP documentation bundle for more info.
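As a rough sketch, a two-port interface group with VLAN tagging looks something like this on a Data ONTAP 7-mode console (interface names, VLAN IDs and addresses here are examples only; your switch ports must be configured to match):

```
S1> vif create lacp vif0 -b ip e0a e0b     # two-port LACP interface group
S1> vlan create vif0 10 20                 # tagged VLANs: 10 for CIFS, 20 for vSphere
S1> ifconfig vif0-10 192.168.10.5 netmask 255.255.255.0
S1> ifconfig vif0-20 192.168.20.5 netmask 255.255.255.0
```

Put the equivalent lines in /etc/rc so the configuration persists across reboots.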

Hope this helps, let me know if you have any further questions.

Cheers

Adam

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi,

If I read your question correctly, a very similar scenario has been already discussed here:

https://communities.netapp.com/message/47474#47474

Regards,

Radek

Best practice way to make SATA drives with CIFS shares available on the network

Thanks all for the great advice. I now have both the FAS2040 and the DS4243 powered up. I created a VIF on the FAS2040 for e0a and e0b and created an EtherChannel port on my Cisco switch. I can access the FAS2040 through the web interface and System Manager. I then connected to the console of the other controller and went through setup.

In System Manager I can only see one controller, the first one, which has the SATA drives. I cannot see the 24 SAS drives in the DS4243. There is a root aggregate on the 2040 which has 3 drives and one spare, but I have 12 SATA drives in the 2040.

How do I go about creating an aggregate and utilizing my SATA drives in the best way possible? Should the root aggregate be separate, and what size should it be? I want to use RAID-DP. Would 11 drives and 1 spare be a good configuration for the SATA drives? How do I get the SAS drives configured so I can see them in System Manager and configure the shelf? I am going to use the SAS drives for a vSphere 5 environment. Any help would be great; thanks again.

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi Robert

Have you physically connected both controllers to the DS4243?

Would 11 drives and 1 spare be a good configuration for the SATA drives?  

You will need at least 3 drives assigned to controller B to host the root aggregate and root volume. If you run aggr status -r on both controllers it will show which disks are currently assigned to each controller and aggregate. You will probably see that each controller is currently using 3 of the internal disks. If you want to assign all your SAS disks to one controller and all your SATA disks to the other you will need to:

1. Ensure both controllers can see all disks

2. Assign all SAS disks to controller B

3. Create a new aggregate on SAS on controller B

4. Create a new root volume on new aggregate on controller B - the procedure to do this is documented in various places including communities.netapp.com

5. Unassign all SATA disks from controller B and assign to controller A

At this point you can expand your aggregate on controller A to use 11 SATA disks.
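A rough console sketch of the five steps (disk and aggregate names here are examples only; the DS4243 disk names will depend on your shelf ID and cabling):

```
S2> disk show -n                     # step 1: list unowned disks visible to this head
S2> disk assign all                  # step 2: claim the unowned SAS disks (or assign one by one)
S2> aggr create aggr1 23@418         # step 3: new SAS aggregate on controller B
                                     # step 4: create and promote a new root volume on aggr1
S2> disk assign 0c.00.6 -s unowned   # step 5: release a SATA disk on B...
S1> disk assign 0c.00.6              # ...and claim it on A (repeat per disk)
```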

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi akw_white,

Thanks for the reply. I have both controllers connected to the DS4243. Here is where I am. I am using OnCommand System Manager and the console. I named one controller S1 (which has the SATA drives) and the other S2 (which has the SAS drives). In OnCommand System Manager I have a tab for S1 but no tab for S2; I think it shows up this way because it's in an HA configuration(?).

Eight of the drives are owned by S1 and the rest are owned by S2; that is the way I received the unit from NetApp. All of the SAS drives are unassigned at this point. I tried to assign the remaining SATA drives owned by S2 to S1, so that all the SATA drives would be on S1. However, when I try to do this via the console it tells me that S2 owns the drives and they cannot be changed.

Do I need two root aggregates and root volumes, one for SATA and the other for SAS? For the SAS shelf, using your instructions above, would that give me 21 drives available for use? I want to use RAID-DP on each controller. I am new to NetApp but I am trying really hard to learn, and I appreciate your time in answering my questions.

Here is my volume status on S1:

S1> vol status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)
      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   0c.00.1         0c    0   1   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
      parity    0c.00.3         0c    0   3   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
      data      0c.00.5         0c    0   5   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
      data      0c.00.9         0c    0   9   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
      data      0c.00.8         0c    0   8   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
      data      0c.00.7         0c    0   7   SA:A   0  SATA  7200 847555/1735794176 847884/1736466816

Pool1 spare disks (empty)

Pool0 spare disks
RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------          ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           0c.00.10        0c    0   10  SA:A   0  SATA  7200 847555/1735794176 847884/1736466816
spare           0c.00.11        0c    0   11  SA:A   0  SATA  7200 847555/1735794176 847884/1736466816

Partner disks
RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------          ------------- ---- ---- ---- ----- --------------    --------------
partner         0c.00.2         0c    0   2   SA:A   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.0         0c    0   0   SA:A   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.4         0c    0   4   SA:A   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.6         0c    0   6   SA:A   0  SATA  7200 0/0               847884/1736466816

Volume status from S2:

S2> vol status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)
      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   0c.00.0         0c    0   0   SA:B   0  SATA  7200 847555/1735794176 847884/1736466816
      parity    0c.00.2         0c    0   2   SA:B   0  SATA  7200 847555/1735794176 847884/1736466816
      data      0c.00.4         0c    0   4   SA:B   0  SATA  7200 847555/1735794176 847884/1736466816

Pool1 spare disks (empty)

Pool0 spare disks
RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------          ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           0c.00.6         0c    0   6   SA:B   0  SATA  7200 847555/1735794176 847884/1736466816

Partner disks
RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------          ------------- ---- ---- ---- ----- --------------    --------------
partner         0c.00.11        0c    0   11  SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.10        0c    0   10  SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.9         0c    0   9   SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.8         0c    0   8   SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.5         0c    0   5   SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.7         0c    0   7   SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.3         0c    0   3   SA:B   0  SATA  7200 0/0               847884/1736466816
partner         0c.00.1         0c    0   1   SA:B   0  SATA  7200 0/0               847884/1736466816

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi Robert

It looks like you have three distinct issues here:

1) Only one controller showing up in System Manager/is HA working?

2) Accessing SAS disks in DS4243 shelf

3) Assigning all SATA disks to controller S1 and all SAS disks to controller S2

My thoughts...

1) I don't have much experience with HA issues; normally it just works. Here are a few things to check:

Has setup been run on S2? Does it have a VIF configured?

Command line tools you can use:

Run license on both controllers and verify you have a cluster license. If not, add it and reboot.

Run cf status and see what the result is

Have you enabled "partner" interfaces on the VIFs on both controllers?

     Normally I would use the same VIF name on both controllers (vif0) to keep things simple

     Run ifconfig <vifname> and the result should include partner <vif-name-on-other-controller> (not in use)

     If it doesn't, run ifconfig <vifname> partner <vif-name-on-other-controller> to fix this

Run cf enable to enable HA

Did you run setup on S2 before discovering S1 in System Manager? If not, try removing S1 from system manager and adding it back in.

You can download the HA Configuration Checker tool from http://support.netapp.com/NOW/download/tools/cf_config_check/
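Pulled together, the checks above look something like this at the S1 console (vif0 is an assumed interface group name; substitute your own):

```
S1> license                      # verify the cluster license is present on both heads
S1> cf status                    # check the current takeover/failover state
S1> ifconfig vif0                # look for "partner vif0 (not in use)" in the output
S1> ifconfig vif0 partner vif0   # set the partner interface if it is missing
S1> cf enable                    # enable HA once both controllers are configured
```

Repeat the license and partner checks on S2, and add the partner setting to /etc/rc so it survives a reboot.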

2) What shelf ID have you set on the DS4243? The internal disks will be using shelf "0".

Use these commands to verify the SAS disks are visible from both controllers:

storage show shelf - you should see two shelves

storage show disk -x - you should see disks on 0c.00.x (internal) and 0c.yy.x (DS4243) with different sizes (847 GB SATA and 418 GB SAS IIRC).

3) Once 1 and 2 are sorted out you can go onto getting the SAS disks working on S2

The trick here is to migrate the root volume off the SATA drives so you can assign them to S1. You need one root volume per controller; the disk type itself doesn't matter. In your case a single aggregate on each controller, containing the root and data volumes, will be fine.

First, assign all the SAS disks to S2

Then create a new aggregate on S2 including 23 SAS disks using something like this:

aggr create aggr1 -r 23 23@418

Which breaks down as:

aggregate name = aggr1

RAID group size (-r) = 23, for 21 data disks and 2 parity disks

Number of disks = 23, each 450 GB (418 GB as reported by ONTAP)

(Note: with -r 23, if you want to add new disks to this aggregate later on you will need a new RAID group and therefore two more parity disks.)
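For a rough sense of the resulting capacity, here is a back-of-the-envelope calculation (illustrative only: 418 GB is the right-sized figure ONTAP reports for a 450 GB SAS disk, and WAFL and aggregate snapshot reserves are ignored, so real usable space will be lower):

```python
def raid_dp_usable_gb(total_disks: int, disk_gb: int) -> int:
    """Raw data capacity of one RAID-DP group: total minus the 2 parity disks."""
    data_disks = total_disks - 2  # RAID-DP always uses 2 parity disks per RAID group
    return data_disks * disk_gb

# 23-disk SAS aggregate on the DS4243 (the 24th disk kept as a spare)
print(raid_dp_usable_gb(23, 418))  # 21 data disks -> 8778 GB raw

# 11-disk SATA aggregate on the FAS2040 internal shelf (1 spare kept back)
print(raid_dp_usable_gb(11, 847))  # 9 data disks -> 7623 GB raw
```

Note the SATA figure assumes all 11 disks end up in one aggregate on one controller, which is the end state described above.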

Once you've done that run aggr status -r aggr1 to see the new aggregate and its disks

Then create a new volume on aggr1 of at least 16 GB with a volume space guarantee. Make this your root volume; there are plenty of tutorials on the net covering how to do this. Once complete, destroy the old vol0 and rename the new volume to vol0.
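A sketch of that root-volume swap (volume names here are examples; verify against a current walkthrough before running, since promoting a new root requires a reboot):

```
S2> vol create newroot -s volume aggr1 20g   # new root candidate with a volume space guarantee
S2> ndmpcopy /vol/vol0 /vol/newroot          # optionally copy /etc and other config across
S2> vol options newroot root                 # mark it as root; takes effect at next boot
S2> reboot
S2> vol offline vol0                         # after boot, retire the old root volume
S2> vol destroy vol0
S2> vol rename newroot vol0
```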

After creating the new root volume you can destroy aggr0 on S2 and assign all the SATA disks to S1. Use disk assign <disk> -s unowned on S2 to release a disk so it can be assigned to S1, for example:

On S2: disk assign 0c.00.6 -s unowned

On S1: disk assign 0c.00.6

On S1: aggr add aggr0 -d 0c.00.6 (adds the new disk to the existing aggregate)

Hope this helps. What method are you going to use to present your data to vSphere?

Cheers

Adam

Re: Best practice way to make SATA drives with CIFS shares available on the network

Paste the output of

cf status

disk show -v

from both heads.

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi Adam,

I did run setup on S2. I have one VIF on S1, for e0a and e0b. I don't believe I enabled partner on them, but I will. I did run setup on S2 before discovering S1 in System Manager. The shelf ID for the DS4243 is 02. I will follow your instructions and sort out the disks by assigning all SAS to S2 and all SATA to S1.

I have S1's VIF connected to LAN switch ports that are EtherChanneled. This is for the SATA drives and will be used for user home directories, file shares, etc. S2 is not connected to anything yet, but the plan is to connect it to two Cisco switches and two R710s for vSphere. The R710s have 8 NICs each. I plan to use iSCSI.

I won't be onsite to try all this until Monday but I will post my results as soon as I do next week. Thank you very much for all of your help and taking the time to answer my questions.

Re: Best practice way to make SATA drives with CIFS shares available on the network

Hi Robert

Please run the commands shown above, plus the ones aborzenkov mentioned, and let us know what results you get.