Legacy Product Discussions

New FAS 2050 Install

__SBROUSSE2008_15459

Hello All,

 

I just ordered a FAS2050 with 12 TB of SATA drives and 2 controllers. I am looking for configuration documentation or anything else that can help me get things up and running. I am replacing an S550. We are running 2 ESX servers with NFS connections back to the S550.

 

Is there a serial port where I could assign an IP address instead of using the Easy FAS wizard, and then use FilerView to finish the config? I would rather do things manually to get a better understanding of what is going on.

 

Any help that could be provided would be greatly appreciated.

 

Scott


39 REPLIES

amiller_1

Yep -- there's a serial port on the back just as you'd like, along with an adapter and a nice long serial cable that should come in the box.

On first boot, the filer will go directly into the "setup" script -- a quite nice text-driven setup script that covers all the major stuff. Even better, you can run it from the console later if you want just by typing "setup" (you can adjust everything you do in "setup" manually but sometimes it's just nicer to walk back through it).
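If you'd rather skip the wizard, you can do the basics by hand from the console too -- something like this (the interface name and addresses here are just examples; persist any changes in /etc/rc so they survive a reboot):

    filer> ifconfig e0a 10.0.0.50 netmask 255.255.255.0
    filer> rdfile /etc/rc                (view the startup config)

Once the IP is up, FilerView is at http://<filer-ip>/na_admin.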

Be sure to look in TR-3428 for optimal settings on both the NetApp & VMware sides.

__SBROUSSE2008_15459

Andrew,

Thanks a lot, this should make things easier for me. I have a couple of additional questions listed below:

  1. I see 2 controllers, each with its own console port. Does each of these have to be configured separately, or can I configure both from 1 port?
  2. Also, I received 3 SCSI cables but only have 4 SCSI ports, listed as a and b. Do I connect SCSI ports a-a together and likewise b-b together for the active-active failover? Any idea what the 3rd cable is for?
  3. Any recommendations for port teaming? Should I put CIFS and iSCSI on controller 1 with teamed NICs, and put NFS for VMware on the 2nd controller and team those NICs?
  4. Also, previously I had mentioned 12x1TB drives. I was planning on creating 1 aggregate named aggr0 with 12 disks in a RAID-DP RAID group with 10 volumes. NetApp suggests not adding more than 16 disks to a RAID group. Does the aggregate/RAID configuration I am going with sound like a good, stable configuration?

I really appreciate your time and effort helping me with this, also please include any other suggestions you may have.

Thanks,

Scott  

amiller_1

Phew....those are some pretty big questions that deserve quite a bit of discussion.

In brief....

  1. A 2050A is actually 2 separate filer heads in an active/active cluster arrangement. In a failover scenario, one head brings up the other head's identity. This gives the benefit of having the power of both heads but does mean that you have to configure both heads individually and each head needs its own separate set of disks.
  2. Failover is built into the 2050 chassis....cabling doesn't affect it hugely. If you don't have any disks, I'm not sure what you'd want/need the cables for (they're not SCSI cables I'm pretty sure....like fiber jumpers?).
  3. When I'm doing CIFS & iSCSI on a 2050, I usually put in the 2-port add-on card and do a 2-NIC team for CIFS and a 2-NIC team for iSCSI/NFS (if using NFS for VMware). In your scenario, splitting CIFS and iSCSI/NFS between the two heads could make sense.
  4. You'll need to dedicate disks to each head -- 6 disks per head if you split them evenly. When you subtract a single hot spare (again, per head), you're down to 5 disks that you can use in a RAID-DP arrangement -- meaning you'll have about 2.05 TB usable on each head after parity disks, WAFL overhead, etc. (rough math just below the list).
    • Note: you could dedicate all the disks to one head to get more space (6.14 TB usable total with one hot spare)....you'd lose HA at that point of course.
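For what it's worth, the rough math behind that 2.05 TB figure (right-sized capacities are approximate and vary a bit by ONTAP version):

    6 disks - 1 hot spare - 2 parity (RAID-DP)  =  3 data disks
    3 x ~0.81 TiB right-sized                   ~= 2.4 TiB
    minus ~10% WAFL reserve                     ~= 2.2 TiB
    minus 5% aggregate snap reserve             ~= 2.05-2.1 TiB usable per head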

Now, those are very short answers....this could be a MUCH longer discussion, and to be brutally honest, it's one that probably should have happened before the sale. What I'd recommend is seeing if you can have some sit-down time with the partner who sold you the gear to discuss recommendations for setup.

storage_ergo

1. Yes, each controller needs to be configured separately through its own console port.

2. Not sure about this, I have not worked with SCSI cables before.

3. You can set up a multimode VIF for better bandwidth, or a single-mode VIF for an active/passive configuration, and there is also LACP, which requires some configuration on the switch side. Here is a good link to read about the various VIFs and LACP (see the command sketch below as well): http://blog.scottlowe.org/2007/06/13/cisco-link-aggregation-and-netapp-vifs/

4. 16 disks is the sweet spot for RAID-DP as far as I understand. You can start with 12 disks, but I would recommend that you choose a RAID group size of 16 so that in the event you do need to add more disks to the aggregate, you will have the recommended RG size.
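A minimal sketch of both from the console (the vif name, interfaces, address, and disk count are placeholders; adjust for your switch config):

    filer> vif create multi vif0 e0a e0b            (multimode VIF across two NICs)
    filer> ifconfig vif0 10.0.0.50 netmask 255.255.255.0
    filer> aggr create aggr1 -r 16 5                (5-disk aggregate with RAID group size 16)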

__SBROUSSE2008_15459

I appreciate the great answers, and I am starting to understand the complexity of HA storage arrays and NetApp. Here are a few more questions, and some clarifications on my previous statements.

  1. The SCSI cables are for the add-on SCSI cards for NDMP backup to an Exabyte tape library. So now I understand what these are for.
  2. I believe we want to do the active-active cluster for data protection reasons.
  3. Does the setup script handle the configuration for both controllers, or is each controller configured separately with its own run of the setup script?
  4. So when I split the 12 disks between the 2 heads, meaning 6 disks per head, I am creating 2 aggregates, let's call them aggr0 and aggr1, then creating a RAID-DP RAID group in each aggregate, keeping at least 1 spare.
  5. I imagine the RAID group size should be 16 in both aggr0 and aggr1 so that I can add disks at a later date, finally adding my volumes under each RAID group.
  6. Also, can I team the 2 NICs on each controller?

Thanks,

Scott

naveenrk

Hi Scott,

Here are answers to your questions:

The setup script needs to be run on each controller to configure each FAS2050 head.

Don't use the aggregate aggr0 for general storage purposes, since this aggregate contains the filer root volume (configuration files, licenses, registry files, log files, etc.).

NIC teaming is supported.

__SBROUSSE2008_15459

Naveen,

Thanks for the clarification on running the setup script for each controller. Does the script walk through creating the active/active cluster? I'm just trying to understand when or where this takes place.

Also thanks for letting me know about aggregate aggr0, I will use another name.

How many aggregates can ONTAP support on a FAS2050?

Thanks,

Scott

lrhvidsten

From the storage management guide:

You can have up to 100 aggregates (including traditional volumes) on a single storage system.

The maximum number of FlexVol volumes you can have on a single storage system depends on your storage system model:
  • For FAS2020 and FAS2050 models, the limit is 200 FlexVol volumes.
    Note: If you bring an aggregate online that causes you to have more than 200 FlexVol volumes for these models, the operation succeeds, but you cannot create more FlexVol volumes, and error messages are sent to the console regularly until you reduce the number of FlexVol volumes to 200 or fewer.
  • For all other models, the limit is 500 FlexVol volumes.
  • For active/active configurations, these limits apply to each node individually, so the overall limits for the pair are doubled.

http://now.netapp.com/NOW/knowledge/docs/ontap/rel7261/html/ontap/smg/provisioning/concept/c_oc_prov_vols-limits.html

__SBROUSSE2008_15459

Alright

We've booted up the FAS2050 and run the setup scripts on both controllers. At present the cluster is disabled on the primary and secondary controllers, and we're not sure how to proceed. Do we create the aggregate first, followed by the RG and FlexVol? We tried creating an aggregate, and it said the number of spares is one and at least two spares must exist to create a new aggregate. How do we have the controllers take ownership of their respective disks?

Is there any documentation for us to read that might help?

Scott

We're getting closer:

We went in and assigned ownership of the unowned disks to their respective hosts (commands below for reference). We then realized that 6 of the 12 disks are allocated to aggregate aggr0, and a previous comment said not to use aggr0 as primary storage because it contains the root filesystem, licensing, logs, and many other critical items. It came this way from NetApp. We were planning to create 2 aggregates, aggr1 and aggr2, with six 1TB disks apiece. Am I missing something? What do we need to do here? We're not exactly sure.
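The ownership commands we used were something like this (the disk name and owner name are just examples):

    filer> disk show -n                      (list unowned disks)
    filer> disk assign 0b.21 -o filer1       (assign one disk to a head)
    filer> disk assign all                   (or grab everything unowned)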

Thanks,

Scott

chriskranz

I think the advice of not using aggr0 for production is poor advice (sorry, no offence meant). On a 2050 you haven't exactly got buckets of disks to play with, and partitioning off 3 of them for a 20g volume is a total waste. On your system, you simply don't have enough disks for 2 aggregates on each head anyway! Expand the existing aggr0 and make full use of all those spindles. Be careful if you do this on the command line, as it doesn't warn you about hot spares! Do it from FilerView and it'll force you to keep 2 hot spares. On each controller you can keep the default name of aggr0, as it'll make it easier to reference, and it'll reinforce that they are 2 independent storage arrays. Or perhaps rename aggr0 to include the hostname (aggr rename aggr0 hostname_aggr0) to make things clearer for you.

I'd also recommend reducing the vol0 size; it's always way too big (vol size vol0 20g). You can do this hot, and 20g is plenty big enough for a 2050.

Create as few aggregates as possible, and leverage as many spindles as possible. This will give you the best performance possible, and on a 2050, the spindles are more likely to be your bottleneck on smaller workloads. Much better to make the head unit do all the hard work.
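Put together, that cleanup looks something like this on each head (the disk count is just an example; check your spares first):

    filer> vol size vol0 20g          (shrink the root volume, can be done hot)
    filer> aggr status -s             (see how many spares you have)
    filer> aggr add aggr0 4           (fold 4 spare disks into the existing aggregate)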

__SBROUSSE2008_15459

Hi Chris,

I really appreciate your clarification; I keep forgetting that both heads are like two separate filers. I don't think we need to rename aggr0 on each box, as you clarified it quite nicely.

So we plan to shrink vol0 (as you suggested) on each head to 20 GB and add the other disks on each head to their respective aggr0 aggregates.

I imagine we would have to adjust the RG size for the additional disks as well as any disks we plan on adding in the future for aggr0 on each head. Is this correct?

NetApp suggests an RG size of 16, as you receive no performance increase from additional drives added beyond that. What's your opinion on this?

Although I will not be able to get to a 16-disk RAID group on each head without an additional shelf, I thought it might be easier to set up an RG size of 16 now so it's ready when I do have to add a shelf. Any opinion on this?

Thanks,

Scott

chriskranz

No problem Scott,

Just remember that if you've got 12 disks in total, that's 6 disks per controller. 2 hot spares, 2 parity (if you stick with the defaults and recommended layout).

I think there was a little miscommunication in some of the earlier posts. You have SATA disks, and the NetApp recommended RAID group size for those is 14 disks. You don't need to tweak this; the defaults will already be on. You also don't really need to give it too much consideration at the moment either. When you add more disks, the RAID group will simply grow until you reach 14 disks, then it'll create a new RAID group for you. This is all behind the scenes and automated, so don't worry too much about it, just stick with the defaults.

If you think that having only 2 data disks per controller is a bit slim on usable storage (around 600-700 GB), you can tweak the overheads. You can drop down to 1 hot spare, or single parity if you want. Although this will potentially give you less protection, it will give you more immediate usable storage. Arguably, if you have 4-hour parts replacement, then 1 hot spare and 1 parity is more than enough on a smaller system. As you grow, you can convert this later. So if you add more disks in 6 months, you could give yourself an extra hot spare, and then convert the aggregate to RAID-DP (2 parity) at that time.

I'm a big fan of RAID-DP, so I wouldn't drop that unless I was really desperate, but you could quite happily drop one of the hot spares. The only thing you lose there is the disk garage feature. This is quite a cool feature that checks a failed disk for soft errors, reformats it, and re-labels the disk as a spare if it is recoverable. It's very useful on a big system with lots of spindles and potentially more frequent failures (purely by greater chance), but on a smaller system it's less valuable. I'd personally go for 1 hot spare and keep the RAID-DP. This will give you 3 data disks on each controller. Nice and easy to grow into in the future as well. The only catch is that you'll need to add one of the spares on the command line, as FilerView will force you to have 2 hot spares.
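Adding that last spare into the aggregate from the command line is a one-liner (the disk name here is just a placeholder; check your actual spares first):

    filer> aggr status -s             (list the hot spares)
    filer> aggr add aggr0 -d 0a.23    (pull a specific spare into the aggregate)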

Not meaning to confuse you with too much info and too many options, but I want to make sure you're doing the right thing. Now is the time to make sure you get the configuration right, not in 6 months when it's already in production! Remember you can always add new storage and build on a good architecture. But changing the fundamentals down the road or shrinking storage can be quite tricky. NetApp have made the systems very easy to grow into, just make sure you're comfortable with the architecture from the start.

Give me a shout if you need any more pointers or have any more questions. I work quite a lot with the smaller systems, and I work closely with customers to get the most out of them; it always depends on exactly what you want from the system and how you want the storage to be provisioned and laid out. The systems are nice and flexible like that!

Hopefully the above info is helpful to you though...

sbrousse2008

Chris,

Thanks for the offer of help.

We've got our CIFS configuration complete along with our LUN setup. It seems like NFS is more complicated than the CIFS setup. Can you give me some basics on NFS config? ONTAP says we need to edit the /etc/exports file with a text editor, but it's not available from the command line or FilerView. How do I get access to it?

Scott

chriskranz

No problem Scott,

The easiest way to edit the exports file (since you have CIFS set up) is to browse to the C$ share from Windows and open it using WordPad. A general tip: don't use Notepad, as it doesn't interpret Unix-style line breaks correctly.

Easier still, it might be best to use FilerView to change the NFS exports. If you've never done it before, it's easier as it prompts you for the options. Just point your web browser at whatever IP you set, then go to the "na_admin" page. Don't forget to re-export after you've made any changes.

Let me know how you get along; I have some example exports files if you need them.
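To give you a flavour, a typical VMware-style entry in /etc/exports looks something like this (the path and IPs are placeholders for your volume and VMkernel addresses):

    /vol/exports    -sec=sys,rw=10.0.0.11:10.0.0.12,root=10.0.0.11:10.0.0.12

After editing, re-export with:

    filer> exportfs -a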

sbrousse2008

Thanks, those example export files would be great.

Scott

ASWIERZEWSKI

Do you know what the default username and password are for the FAS2050?

thanks,

Al

sbrousse2008

We created a volume called /vol/exports and an NFS export called /vol/exports/datastorefas1. We have UNIX security for NFS. When I go to add the NFS export into VMware, it says that the server is denying the connection. We've added the four IP addresses of the VMkernel ports into the host section. Our VMware environment is already working off our other NFS server.

The error we get when we try to add the NFS into vmware is.

Error during configuration, unable to open the volume vmfs/volumes

Any ideas?

Scott

chriskranz

http://communities.netapp.com/message/7794

You need to check the security style of the volume and the NFS export options. See the thread above for more details...
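In practice the fix is usually the volume security style plus root access for the VMkernel IPs, something along these lines (the IPs are placeholders):

    filer> qtree security /vol/exports unix
    filer> exportfs -p rw=10.0.0.11:10.0.0.12,root=10.0.0.11:10.0.0.12 /vol/exports
    filer> exportfs -a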

amiller_1

Also see the NFS section in TR-3428 (actually, see all of it ;-)....it gives detailed instructions.

http://www.netapp.com/us/library/technical-reports/tr-3428.html
