VMware Solutions Discussions

Question about Clustered FAS3140 and gaining access to all disks in a DS4243 Shelf

remingtonpark1

Hello,

I am currently assisting a client with a new DS4243 shelf / FAS3140 installation.  The NetApp installer did the rack/stack, but didn't have all the required information to configure the SAN at the time of installation.  I apologize in advance: I am a networking guy, familiar with EMC/Compellent solutions, but I have yet to work with NetApp... so I am trying to help complete what is left of this installation.

I have since completed the CLI-based config, assigned IPs, and configured the cluster failover/failback settings, all of which appear to be working fine without any system errors or warnings... all lights green.

My question/problem is:

They have 24 total disks @ 450GB (408GB usable).  When I try to create an aggregate I am only presented with the remaining 8 spares + 1 parity (non-double-parity mode).. I am not given any option to utilize the other 12 "partner" disks.  I have read on these forums that it is possible to utilize ALL disks in the shelf, but beyond talking about the possibility, I cannot find any actual instructions or guidance.

If the answer is indeed splitting disk ownership, how do I accomplish that?

This thread discusses assigning the entire disk shelf to one controller.. if that's the right approach, how does one accomplish such a task?

http://communities.netapp.com/thread/13659

Along with another post asking the same thing: why are there 12 partner disks on the same disk shelf?

http://communities.netapp.com/thread/4195

Can anyone help me on this?  I am at a loss.. and aside from simply creating a regular aggregate with the remaining 8 disks and "losing" the other 12 just to finish the setup, I don't know what else to do.  I have also tried support a number of times and they have provided no help at all on this question.

Thanks guys

10 REPLIES

rmharwood

May need some more info here.

In a cluster you have two controllers, and each can "own" - that is, be assigned exclusive access to - any disks it can see. In your screenshot, your 12 "partner" disks are assigned to the other controller; that is why you cannot build an aggregate out of them on the controller you're viewing them from.

A NetApp cluster is usually an active/active pair where each controller has its own storage, as well as the ability to access the other controller's storage in the event of a failover.

Does your second controller have any aggregates?

Un-owned disks can be assigned to either controller, and you can unassign disks from a controller, although this is considered an "advanced" operation.
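
For reference, here is roughly how you inspect ownership from the Data ONTAP 7-mode CLI (a sketch; run it on either controller):

    disk show -v         # every disk visible to this controller, with its current owner
    disk show -n         # any unowned disks
    priv set advanced    # the unassign commands live under advanced privilege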

Richard

remingtonpark1

Thanks for your reply;

Ok, on each controller there are 3 disks in aggr0 - I guess the default config is 1 data + 2 parity for the system/default aggregate, and both controllers show having their own.. is this correct?

Aside from that, each controller shows 9 additional "spares" (8+1), and each controller shows the other's 9 as partner disks.

I looked and there are no 'unowned disks' on either controller...

rmharwood

Yes, each controller has to have its own root volume and that has to be on an aggregate owned by the controller.

So.. each controller is assigned 12 disks out of the 24 available.
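
If you want to confirm that from the command line on each controller (a sketch, assuming the 7-mode CLI):

    sysconfig -r      # per-controller RAID layout: aggregates, parity/data disks, spares
    aggr status -s    # this controller's spare disks
    vol status        # the root volume is flagged "root" in the options column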

What do you want to achieve?

remingtonpark1

This is what I posted in my initial request:

I am not familiar with NetApp products.. to me, a shelf of 24 disks should be shown as available in the GUI, and I should be able to create a volume and LUN accordingly, selecting either SOME or ALL disks to make storage available for use.

I want to create 1 LARGE aggregate, and then carve out the volumes/LUNs for the destination applications.

The way I see it now, I would have to go to EACH controller, create an aggregate, create the volumes/LUNs, then share them out and manage them separately.

My question was, is this how it is supposed to be done?  If so, why?  It seems that each controller should see the shelf in its entirety, not half of it... I am not understanding the conceptual model of this setup.

I have read where customers are making "all" disks visible on one controller... is that not optimal?  I don't understand the split management view of the disk shelf... that is my problem.

radek_kubka

Ok, on each controller, there are 3 disks in aggr0

So usually you just add spares to these default aggregates & that's it - you end up with two equally sized aggregates, one on each controller. In some cases you may prefer to change disk ownership, so that, say, controller A owns more drives & a bigger aggregate.

You can add all but two spares per controller via the GUI (keeping two is recommended for the so-called Disk Maintenance feature), or you can use the CLI to leave just one spare per controller.
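
From the CLI, that would look something like this (a sketch - adjust the count to whatever "aggr status -s" reports on each controller):

    aggr status -s      # list this controller's spare disks
    aggr add aggr0 8    # add 8 of the 9 spares to aggr0, leaving 1 spare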

Regards,
Radek

remingtonpark1

Understood, however I want to build one BIG aggregate.. I don't want to have to manage two aggregates.. so if I'm on CNTRL-A and I add disks, I am only able to add the disks owned by that controller.. I don't understand why I don't see the disks for the entire shelf.. In any normal SAN there are two controllers and one or two shelves.. but each controller can see ALL of the disks..

I'm just not understanding the process and procedures behind this.

If the end result is that I HAVE to build two aggregates (one large one per controller), then I guess so be it.

My next question -- (and I confirmed this with NetApp) with 2 controllers we have 8 FC ports.. the customer is not using a fiber switch, so we are plugging the ESX hosts directly into the FC HBAs on the controllers.  If my servers are plugged into controller A, they will see items on controller B, yes?  My understanding of the product thus far is muddling what I think is correct... I just need someone to say: this is how the product works with 2 controllers.. you either CAN or CANNOT see ALL the disks of one shelf in one controller's UI.. and you either CAN or CANNOT add ALL disks from both controllers to any one aggregate.

I am trying to explain this the best I can... sorry if it's not clear.

Thanks again for everyone's help, I appreciate it.

remingtonpark1

System Status, showing the total number of disks available on that one controller... and this is all I see as 'available' when I want to add disks to a new aggregate..

supinder

Hi

You have to divide up the disks between the controllers (disk ownership); this does NOT have to be divvied up equally.  Each controller does, however, need a minimum number of disks for its config and to boot from (the number depends on whether you are using RAID4 or RAID-DP, plus 1 spare disk).  Let's say, for example, controller A has 4 disks (1 data, 2 parity and 1 spare) - the remaining 20 disks will be owned by controller B.

You can now create a larger aggregate on controller B with 19 disks (remember you have to keep at least 1 spare disk), but you are still limited by the following...

- IF you are using a version of ONTAP 7.x.x, then you can only create a maximum aggregate size of 16TB (and a max of only 8TB for the FAS2020) - no matter how many disks you have available.

- IF you are using ONTAP 8 (7-Mode) (I don't think this is available for the FAS20XX), then you are limited by the hardware platform.

So if you have 1TB disks you can see that not many disks can be added to an aggregate (at roughly 800GB usable per 1TB disk, the 16TB cap works out to only about 20 data disks)...
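
Putting that together, the move and the aggregate creation would look roughly like this from the 7-mode CLI (a sketch - the disk name is a made-up example, and check which disks are actually spares before releasing anything):

    # on controller A: release the spares you want to give to controller B
    priv set advanced
    disk remove_ownership 0a.20        # repeat for each disk being moved
    priv set

    # on controller B: claim the unowned disks and build the big aggregate
    disk assign all                    # assigns every unowned disk to this controller
    aggr create aggr1 -t raid_dp 19    # 17 data + 2 parity; note that with the default
                                       # RAID group size of 16 this creates two RAID
                                       # groups - use -r if you want to tune that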

Hope this helps

johnbeckner

This is an old thread, but hopefully someone finds this useful:

1) To reassign disk ownership, you have to first unassign the disks.

This goes over that, but make sure you know exactly what you are doing first, have backups, etc.

Make sure you are not touching the base ONTAP aggregates (the ones holding each controller's root volume).

https://communities.netapp.com/docs/DOC-5030
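
One caveat to add here (based on default 7-mode behaviour, so treat it as an assumption to verify for your version): if automatic ownership assignment is left on, a freshly unassigned disk can be grabbed back by its original controller before you get a chance to assign it to the other one.

    options disk.auto_assign off    # run on BOTH controllers before unassigning disks
    # ...unassign and reassign the disks...
    options disk.auto_assign on     # re-enable afterwards if you want auto-assignment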

2) Yes, in your setup you have three disks per controller for the ONTAP install; that's the initial hit on usage.

After that, you could easily put them all in one aggregate; you just lose the full active/active controller speed. Only one controller will be doing the work, which may be OK depending on your workload. Splitting it up gives you more potential performance that is only lost during a controller failover, but then you lose some space to parity disks for each aggregate, and you have more management to do to maintain free space in each aggregate.

If you are going for a simple setup with less demanding I/O, one aggregate of all the free disks (with a spare reserved, of course) is the simplest choice. Switching to a two-aggregate setup later would be a very big hassle.
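
To then carve that big aggregate into a volume and LUN for ESX, the steps look roughly like this (a sketch; the names and sizes are made-up examples):

    vol create vol_vmware aggr1 500g                      # flexible volume on the big aggregate
    lun create -s 400g -t vmware /vol/vol_vmware/lun0     # LUN with the VMware ostype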

3) Consider getting the NetApp System Manager, or try the command line. Either one makes visualizing some of these things easier; the web FilerView will work but is a little less intuitive. Especially if you are a networking guy, the semi-guided command line is a good way to drill into and organize the information quickly. Just be careful when you issue commands - some of them execute immediately with no confirmation step.

4) You said two heads, no fiber switches, direct connect. You did not list the number of servers you are connecting, but you are right: you have 4 fiber connections per controller for a total of 8. And yes, something plugged into controller "2" will be able to see an aggregate currently controlled by controller "1". BUT - the performance is substandard that way. The two heads talk to each other over a less-than-full-speed link, so a host connected only to controller "2" has to go through 2, over this link, then through 1 to get to the disk. This is not as fast as going through a switch and talking to the owning controller directly. I can't say how much slower, as in how much bandwidth you lose or latency you add, but it's a degraded setup.

Best practice would be to have 4 ESX hosts in your scenario and hook each of them up to each controller, i.e. host A has two HBA ports, one to controller 1 and one to controller 2. That way you have direct access to each controller. From there, use a multipathing driver on the ESX hosts and ALUA on the NetApp (you set it per "initiator", i.e. per ESX host, when setting them up) so the ESX boxes can figure out which controller is the fastest path to the disk. Or, if you made one big aggregate, you could just plug all your controller 1 connections in first, note them in your ESX servers, and tell the multipathing driver to prefer that.
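
The ALUA setting is applied per initiator group; on the 7-mode CLI that looks roughly like this (a sketch - the igroup name, WWPN, and LUN path are placeholders):

    igroup create -f -t vmware esx_hostA 50:01:02:03:04:05:06:07   # FCP igroup with the VMware ostype
    igroup set esx_hostA alua yes                                  # enable ALUA for this host's paths
    lun map /vol/vol_vmware/lun0 esx_hostA                         # present the LUN to the host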

Don't know your ESX version but this is a good overview - notice the "Array Preference" uses ALUA.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011340

If you are going over 4 ESX hosts and relying on that cross-controller connection to access the disks, I would seriously consider at least a single, if not two, smaller fiber switches like a Brocade 300. If you have a bunch of hosts stacked on an ESX platform, you are concentrating a lot of I/O on each fiber connection. A four-way mesh from a pair of switches gives two paths to each controller from each host and gives the traffic some options and chances to load balance.

You also get no-outage redundancy if you do it right - a single cable to storage from an ESX box will work, but if that cable or HBA goes flaky, all those stacked virtuals are in trouble.

radek_kubka

If the end result is that I HAVE to build two aggregates (one large one per controller), then I guess so be it.

Nope - you can reassign disk ownership, so that one of the controllers owns the majority of them. See a discussion over here (about 12 drives though):

http://communities.netapp.com/message/32353#32353
