Legacy Product Discussions
Hello All,
I just ordered a FAS2050 12TB with SATA drives and 2 controllers. I am looking for configuration documentation or anything else that can help me get things up and running. I am replacing an S550. We are running 2 ESX servers with NFS connections back to the S550.
Is there a serial port where I could assign an IP address instead of using the Easy FAS wizard? Then use FilerView to finish the config. I would rather do things manually to get a better understanding of what is going on.
Any help that could be provided would be greatly appreciated.
Scott
No problem Scott,
Just remember that if you've got 12 disks in total, that's 6 disks per controller. 2 hot spares, 2 parity (if you stick with the defaults and recommended layout).
I think there was a little miscommunication in some of the posts earlier. You have SATA disks, and the NetApp recommended RAID group size for that is 14 disks. You don't need to tweak this; the defaults will already be on. You don't really need to give this much consideration at the moment, either. When you add more disks, the RAID group will simply grow until you reach 14 disks, then it'll create a new RAID group for you. This is all behind the scenes and automated, so don't worry too much about it; just stick with the defaults.
If you think that only having 2 data disks per controller is a bit slim on the storage availability (giving you around 600-700 GB usable), you can tweak the overheads. You can drop down to 1 hot spare, or single parity if you want. Although this will potentially give you less protection, it will give you more immediate usable storage. Arguably, if you have 4-hour parts replacement, then 1 hot spare and 1 parity is more than enough on a smaller system. As you grow, you can convert this later. So if you add more disks in 6 months, you could give yourself an extra hot spare, and then convert the aggregate to RAID-DP (2 parity) at that time.
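The layout arithmetic above is easy to sanity-check. Here's a tiny sketch (plain Python, nothing NetApp-specific) of how the spare/parity choices trade off against data disks on a 6-disk head:

```python
def data_disks(total, spares, parity):
    """Disks left for data after hot spares and parity are set aside."""
    return total - spares - parity

# Default recommended layout per FAS2050 head: 6 disks, 2 hot spares,
# RAID-DP (2 parity) -> only 2 data disks.
print(data_disks(6, spares=2, parity=2))   # 2

# Trimmed layout: 1 hot spare, single parity -> 4 data disks.
print(data_disks(6, spares=1, parity=1))   # 4
```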
I'm a big fan of RAID-DP, so I wouldn't drop that unless I was really desperate, but you could quite happily drop one of the hot spares. The only thing you lose there is the disk garage feature. This is quite a cool feature that checks a failed disk for soft errors, reformats it, and re-labels the disk as a spare if it is recoverable. It's very useful on a big system with lots of spindles and, purely by greater chance, more frequent failures, but on a smaller system it's less compelling. I'd personally go for 1 hot spare and keep the RAID-DP. This will give you 3 data disks on each controller. Nice and easy to grow into in the future as well. The only thing here is that you'll need to add one of the spares on the command line, as FilerView will force you to have 2 hot spares.
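For reference, the console commands involved in that grow-later plan look roughly like this (7-mode syntax; aggr0 is a placeholder name, so double-check against your ONTAP release first):

```
# Press one of the hot spares into service as a data disk
# (this is the bit FilerView refuses, since it insists on 2 spares):
aggr add aggr0 1

# Later, once more disks are in, convert RAID4 to RAID-DP in place:
aggr options aggr0 raidtype raid_dp
```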
Not meaning to confuse you with too much info and too many options, but I want to make sure you're doing the right thing. Now is the time to make sure you get the configuration right, not in 6 months when it's already in production! Remember you can always add new storage and build on a good architecture. But changing the fundamentals down the road or shrinking storage can be quite tricky. NetApp have made the systems very easy to grow into, just make sure you're comfortable with the architecture from the start.
Give me a shout if you need any more pointers or you have any more questions. I work quite a lot with the smaller systems, and I work closely with customers to get the most out of them; it's always dependent on exactly what you want from the system and how you want the storage to be provisioned and laid out. The systems are nice and flexible like that!
Hopefully the above info is helpful to you though...
Yep -- there's a serial port on the back just as you'd like along with an adapter and nice long serial cable that should come in the box.
On first boot, the filer will go directly into the "setup" script -- a quite nice text-driven setup script that covers all the major stuff. Even better, you can run it from the console later if you want just by typing "setup" (you can adjust everything you do in "setup" manually but sometimes it's just nicer to walk back through it).
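For what it's worth, the manual route over the serial console looks something like this (the interface name and address are placeholders; note that an ifconfig done by hand doesn't survive a reboot unless it also ends up in /etc/rc):

```
# Re-run the guided setup at any time:
setup

# Or assign an address by hand instead of using the wizard:
ifconfig e0a 192.168.1.10 netmask 255.255.255.0
```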
Be sure to look in TR-3428 for optimal settings on both the NetApp & VMware sides.
Andrew,
Thanks a lot, this should make things easier for me. I have a couple of additional questions listed below:
I really appreciate your time and effort helping me with this, also please include any other suggestions you may have.
Thanks,
Scott
Phew....those are some pretty big questions that deserve quite a bit of discussion.
In brief....
Now, those are very short answers....this could be a MUCH longer discussion and to be horribly honest is one that probably should have happened before the sale. What I'd recommend is seeing if you could have some sit-down time with the partner who sold you the gear to discuss recommendations for setup.
1. Yes, both ports need to be configured on each controller separately.
2. Not sure about this; I have not worked with SCSI cables before.
3. You can set up the MVIF in multimode for better bandwidth, or single mode for an active/passive configuration; there is also LACP, which requires some configuration on the switch side. Here is a good link to read about the various VIFs and LACP: http://blog.scottlowe.org/2007/06/13/cisco-link-aggregation-and-netapp-vifs/
4. 16 disks is the sweet spot for RAID-DP as far as I understand. You can start with 12 disks, but I would recommend that you choose a RAID group size of 16 so that in the event you do need to add more disks to the aggregate, you will have the recommended RG size.
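To make points 3 and 4 concrete, the console commands look roughly like this (7-mode syntax; the vif name, interface names, and address are all placeholders, and the LACP variant needs a matching port-channel on the switch side):

```
# 3. Static multimode vif across two NICs:
vif create multi vif0 e0a e0b
ifconfig vif0 192.168.1.10 netmask 255.255.255.0

# ...or single-mode (active/passive), or LACP balancing on IP:
vif create single vif0 e0a e0b
vif create lacp vif0 -b ip e0a e0b

# 4. Set the raid group size on an aggregate up front:
aggr options aggr0 raidsize 16
```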
I appreciate the great answers, and I am getting to understand the complexity of HA storage arrays and NetApp. Here are a few more questions, and some clarifications on my previous statements.
Thanks,
Scott
Hi Scott,
Here are answers to your questions:
The setup script needs to be run on each controller to configure each FAS2050 head.
Don't use the aggregate aggr0 for general storage purposes, since this aggregate contains the filer root volume (configuration files, licenses, registry files, log files, etc.).
NIC teaming is supported.
Naveen,
Thanks for the clarification on running the setup script for each controller. Does the script walk through creating the active/active cluster? I'm just trying to understand when or where this takes place.
Also thanks for letting me know about aggregate aggr0, I will use another name.
How many aggregates can ONTAP support on a FAS2050?
Thanks,
Scott
From the storage management guide:
You can have up to 100 aggregates (including traditional volumes) on a single storage system.
Alright
we've booted up the FAS2050 and ran the setup scripts on both controllers. At present the cluster is disabled on the primary and secondary controllers, and we're not sure how to proceed. Do we create the aggregate first, followed by the RG and FlexVol? We tried creating an aggregate, and it said the number of spares is one and at least two spares must exist to create a new aggregate. How do we have the controllers take ownership of their respective disks?
Is there any documentation for us to read that might help?
Scott
We're getting closer:
We went in and assigned the unowned disks to their respective hosts, and then realized that 6 of the 12 disks are allocated to aggregate aggr0. A previous comment said not to use aggr0 as primary storage because it contains the root filesystem, licensing, logs and many other critical items. It came this way from NetApp. We were planning to create 2 aggregates, aggr1 and aggr2, with 6 1TB disks apiece. Am I missing something? What do we need to do here? We're not exactly sure.
Thanks,
Scott
I think the comment about not using aggr0 for production is a poor one (sorry, no offence meant). On a 2050 you haven't got exactly buckets of disks to play with, and partitioning off 3 of them for a 20g volume is a total waste. On your system, you simply don't have enough disks for 2 aggregates on each head anyway! Expand the existing aggr0 and make full use of all those spindles. Be careful if you do this on the command line, as it doesn't warn you about hot spares! Do it from FilerView and it'll force you to keep 2 hot spares. On each controller, keep the default name of aggr0 as it'll make it easier to reference, and it'll make it clearer that they are 2 independent storage arrays. Perhaps rename aggr0 to include the hostname (aggr rename aggr0 hostname_aggr0) to make it easier for you to follow.
I'd also recommend reducing the vol0 size, it's always way too big (vol size vol0 20g). You can do this hot, and 20g is plenty big enough for a 2050.
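Put together, that's only a couple of console commands per head (the disk count below is a placeholder; mind the hot-spare caveat above if you run these by hand):

```
# Shrink the root volume -- safe to do hot:
vol size vol0 20g

# Grow aggr0 with spare disks, e.g. two of them:
aggr add aggr0 2
```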
Create as few aggregates as possible, and leverage as many spindles as possible. This will give you the best performance possible, and on a 2050, the spindles are more likely to be your bottleneck on smaller workloads. Much better to make the head unit do all the hard work.
Hi Chris,
I really appreciate your clarification, I keep forgetting that both heads are like two separate filers. I don't think we need to rename aggr0 on each box, as you clarified it quite nicely.
So we plan to shrink vol0 (as you suggested) on each head to 20 GB and add the other disks to each head on their respective aggr0 aggregates.
I imagine we would have to adjust the RG size for the additional disks, as well as for any disks we plan on adding in the future to aggr0 on each head. Is this correct?
NetApp suggests a RG size of 16, as you receive no performance increase from additional drives added after that. What's your opinion of this?
Although I will not be able to add 16TB to each head without an additional shelf, I thought it might be easier to set up a RG size of 16 now so it's ready when I do have to add a shelf. Any opinion on this?
Thanks,
Scott
Chris,
Thanks for the offer of help.
We've got our CIFS configuration complete along with our LUN setup. It seems like NFS is more complicated than CIFS to set up. Can you give me some basics on NFS config? ONTAP says we need to edit the /etc/exports file with a text editor, but it's not available from the command line or FilerView; how do I get access to it?
Scott
No problem Scott,
The easiest way to edit the exports file (and since you have CIFS set up) is to browse to the c$ share from Windows and open it using WordPad. A general tip: don't use Notepad, as Unix-style line breaks get interpreted wrongly by Windows.
Easier still, it might be best to use FilerView to change the NFS exports. If you've never done it before, it's easier as it prompts you for the options. Just point your web browser at whatever IP you set, then go to the "na_admin" page. Don't forget to re-export after you've made any changes.
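If you do end up editing the file by hand, an /etc/exports entry looks something like this (the path, network, and host below are made-up placeholders, not your actual values):

```
# /etc/exports -- one export rule per line
/vol/vol1  -sec=sys,rw=192.168.1.0/24,root=192.168.1.20

# Push the changes live from the console:
exportfs -a
```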
Let me know how you get along, I have some example exports files if you need.
Thanks, those example export files would be great.
Scott
Do you know what the default username and password is for the FAS2050?
thanks,
Al
We created a volume called /vol/exports and created an NFS export called /vol/exports/datastorefas1. We have UNIX security for NFS. When I go to add the NFS export into VMware, it says that the server is denying the connection. We've added the four IP addresses of the VMware kernel into the VMware host section. Our VMware is already working off our other NFS server.
The error we get when we try to add the NFS into VMware is:
Error during configuration, unable to open the volume vmfs/volumes
Any ideas?
Scott
http://communities.netapp.com/message/7794
You need to check the security of the volume and the NFS exports. See above thread for more details...
Also see the NFS section in TR-3428 (actually see all of it ;-)....gives detailed instructions.
http://www.netapp.com/us/library/technical-reports/tr-3428.html
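One common gotcha with ESX and NFS is the root= export option: the VMkernel mounts as root, so the datastore export needs the VMkernel addresses in both rw= and root=, and the volume should be unix security style. A rough sketch (the addresses below are placeholders for your four VMkernel IPs):

```
# /etc/exports entry for the datastore:
/vol/exports/datastorefas1  -sec=sys,rw=192.168.1.21:192.168.1.22,root=192.168.1.21:192.168.1.22

# Re-export, and make sure the security style is unix:
exportfs -a
qtree security /vol/exports unix
```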