2009-01-04 05:27 AM
I am currently designing a virtualized infrastructure for business critical applications. In researching storage systems I found many people recommending NetApp filers and NFS.
I have looked at product information on netapp.com, read the VMware & NetApp best practices guide and browsed NOW. However, there seems to be a bit of a gap between the very high-level overview of the netapp.com product pages and the technical detail on NOW. I am meeting with NetApp in the next week, but was hoping to clarify a few questions and get some experience from NetApp + VMware users.
Current storage requirement is only about 2.5TB, and I understand de-duplication would likely save me some space. Given the data growth rates of my applications it's very unlikely I will need more than 10TB in the next five years. Looking at the range of NetApp filers, it seems something like a 2050 would easily do the job capacity-wise. Does a 3100 series filer offer any functionality beyond greater performance and expansion capability?
I envisage having a two controller filer in our local data centre and another filer at our DR site. What should I use to synchronise the local and DR site?
What level of integration is there between VMware Infrastructure and NetApp filers? I have seen a NetApp blog post about VMware tools.
Is an NFS licence included with the purchase of current NetApp filers? The technical specification on the 2050s and 3140s I've looked at mention support for NFS, making it sound like it's included, but I have seen several posts on the Net saying you need to pay $13,000 per filer for the NFS licence.
Finally I'd like to suggest that NetApp add more technical information to netapp.com. The product pages don't really explain what the different software does in any detail nor make comparisons between filer models easy. Reading the page on virtualization gives you very little idea of what NetApp & VMWare can do. While NOW appears to be an excellent resource, NetApp could really do with exposing more technical information on its public-facing web site to help newcomers.
2009-01-05 06:56 AM
Thanks for the suggestions and for looking at NetApp! First off, I expect dedupe will save you more than some space; I expect a bunch! The 3100 series of filer is just a newer platform than the 2000 series, and one perk is that you can choose to add a Performance Acceleration Module (PAM), which can provide a performance boost in highly deduped environments. I don't believe you can add one in a 2050.
To replicate from the Prod site to the DR site you would need our SnapMirror software. You license this on both the source filer and the target filer. Important to note here that the two filers do NOT have to be the same: you could easily replicate from a 3100 to a 2000 series box. For that matter, you could use SATA at the DR site if you wanted. No extra hardware or software is needed, just SnapMirror and an IP connection.
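As a rough sketch of what that involves (the filer names prodfiler/drfiler and volume names are hypothetical, and the exact syntax should be checked against the Data ONTAP documentation), a 7-mode SnapMirror relationship is set up from the destination filer something like this:

```
drfiler> license add <snapmirror_code>        # SnapMirror must be licensed on both filers
drfiler> vol create vmdata_mirror aggr0 500g  # destination volume, at least the source's size
drfiler> vol restrict vmdata_mirror           # destination must be restricted before initializing
drfiler> snapmirror initialize -S prodfiler:vmdata drfiler:vmdata_mirror
```

After the initial baseline transfer completes, updates only ship the changed blocks over IP.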
I'm not sure about your integration question. We have several points of integration. We have a host attach kit which loads on the ESX server and assists with configuring it to ensure the settings are correct for connecting to a NetApp array. We have SnapManager for Virtual Infrastructure (SMVI), which coordinates backups and restores of VMware virtual machines with Virtual Center. If you use Fibre Channel we also have SANscreen, which will show you detailed performance of the storage layer and what sort of load each virtual machine is placing on the storage. We also have a rapid deployment tool for provisioning hundreds or thousands of virtual machines on demand or in a repeatable manner. There is more on the way, and I have likely forgotten something. Was there something in particular you were looking for?
The NFS license is not included in the base NetApp controller; you do, however, get iSCSI, Snapshots and Deduplication included!
As for the website, great feedback. I know the web team is always looking to get more of the information that people want out. Did you find these pages?
That along with the technical library which is available to the public is quite a bit of info.
Thanks again for reaching out and let me know if you have any other questions.
2009-01-05 09:16 AM
Thanks for your reply.
SnapMirror sounds like a useful feature for DR, and it's definitely good to be able to use a smaller device at the DR site (we only need DR capability for some applications). Does it work with VMware Site Recovery Manager?
On integration I was looking for general information on links between features in VMware and NetApp. For example, if I snapshot a VM I want it to be consistent, and SMVI seems to cover that. Is there any integration between FlexClones and VMware?
NFS seems like the best option for VMware on NetApp, so it's a bit of a shame that iSCSI is the default. From my research, NFS seems to take best advantage of the filer features and is simpler to manage.
I see that ONTAP 7.3 has been released. Does this replace 7.2? Would new Filers use 7.3 or is it too new for production usage?
I have looked at the pages you mention, but they didn't have a lot of information on them. I had missed the library, but I have now found it on the menu and will have a look through it. There's quite a lot of useful information in the blogs and in Tech OnTap, but it's hard to navigate: most of the best content I've found via Google, rather than browsing the NetApp site directly.
The VMware page you refer to talks about the high-level benefits of NetApp and VMware, but it's all marketing without technical detail. It says things like "Slash your backup and restore times by half", which sounds good, but all storage vendors claim things like that; without detail it just sounds like another empty claim. Another example is the functionality active-active controllers provide: I couldn't find anything on the main web site, but finally managed to dig the information out of a setup guide on NOW.
I get the impression NetApp is a real technical company* with innovative technology and staff who are passionate about storage, but most of this is hidden from those who aren't already using NetApp products. This community is an excellent idea, but other parts of the web site don't seem nearly as useful as they could be.
I seem to have drifted off a long way from trying to learn about NetApp products, but hopefully my experience is useful.
*As opposed to a giant box shifter for whom storage is just something they need to supply in order to sell expensive solutions.
2009-01-05 09:45 AM
SnapMirror works great with SRM, but for full functionality with SRM you will want FlexClone licensed as well. Here is a great doc on SRM with NetApp.
You are correct about SMVI integrating with VMware to ensure the snapshot is consistent. I blogged about that a while back, which you might find interesting.
There is no direct integration between FlexClone and VMware yet; that is, you cannot create a FlexClone from within Virtual Center. However, our Rapid Deployment Utility does create FlexClones, then registers the cloned VMs within Virtual Center and leverages VC to customize them. You can also use the GUI in SMVI to create FlexClones from existing snapshots and mount those cloned volumes to the ESX server of your choice. Likely more integration here in the future.
ONTAP 7.3 does replace 7.2, and I would place it into production as it has been out for several months. 7.3.1 isn't out yet but will be quite exciting, as it handles dedupe a little differently (in a very good way). If you start on 7.3 then your move to 7.3.1 will be very easy. It is quite easy either way, but easier on a minor release change.
I agree with you on the NFS front. I too wish it was included instead of iSCSI. Alas.
One other thing: I think I noticed a post from you asking about MetroCluster. I know it is a little confusing between MetroCluster and SnapMirror, but at a very high level, SnapMirror is (usually) asynchronous while MetroCluster is synchronous mirroring. MetroCluster isn't quite supported with SRM yet, while SnapMirror has been since SRM was released.
I am glad that your impression of NetApp folks is what it is. You are right on with it. It's one of the reasons this is a great company to work for and work with.
I would be happy to review your design for you via a WebEx if you like. I can help you make sure you have all the bits and pieces you need to build a highly available environment.
2009-01-05 10:52 AM
Thanks for your quick and comprehensive reply. I'll take a look at the documents you recommend.
It made me smile that you found my MetroCluster post (on the very similarly designed VMware forums; both using Clearspace Community at a guess). I should perhaps explain that I'm looking for two different sorts of resiliency. The applications we host don't have much data (~2.5TB at present), and most (with the possible exception of some MySQL) aren't I/O intensive. So, why do we want enterprise storage? Resilience and management.
We need a very high level of resiliency; even a few minutes' downtime could cost us tens of thousands of dollars. At present this is achieved by having at least two servers (with DAS) for every function. As you can imagine, this scales badly and is a nightmare to manage. Therefore we've been looking at consolidating our infrastructure using VMware and shared storage.
In our local data centre we don't want any single points of failure: we want to be able to lose a controller and all its disks and have applications continue. Therefore I imagined we'd need an active-active configuration with all the data mirrored on disks in separate racks. I was trying to work out if this was possible with NetApp filers, hence asking on the VMware forums (I hadn't found the NetApp Community site at that point). The two racks would be close together, so could be directly connected by fibre. It's quite possible I haven't understood the documentation correctly and that a MetroCluster is not needed for this.
In addition to that we want to replicate data to a remote site for the purposes of disaster recovery (and possibly off-site backups). The remote site is many miles away and we only have IP connectivity to it. So we need a solution that allows us to replicate some data live (probably via DB replication), some data frequently (e.g. hourly) and some data on a daily or weekly basis. From what you've said SnapMirror (and possibly FlexClone) are needed to do this with NetApp Filers.
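For what it's worth, tiered replication frequencies like those map naturally onto per-volume schedule entries in /etc/snapmirror.conf on the destination filer. A sketch with hypothetical filer and volume names (fields are source, destination, options, then a cron-style minute/hour/day-of-month/day-of-week schedule; exact syntax should be verified against the snapmirror.conf documentation):

```
# /etc/snapmirror.conf on the DR filer -- one line per volume, each with its own schedule
prodfiler:appdata  drfiler:appdata_mirror  -  0 * * *    # hourly, on the hour
prodfiler:archive  drfiler:archive_mirror  -  0 2 * *    # daily at 02:00
prodfiler:backups  drfiler:backups_mirror  -  0 3 * 6    # weekly (day-of-week 6) at 03:00
```

Live database replication would still sit above this at the application layer, as you suggest.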
Thanks for your offer to look at our design. I'll see how we get on with the NetApp account manager who is coming to see us this week.
We're also looking at solutions from other storage vendors. Another team in our data centre are using HP fibre channel and are very happy with it, so it'll be interesting to see how a NetApp solution measures up against that.
2009-03-10 05:20 PM
One important note I'd like to mention about replicating from a 3100 to a 2000 series: all of the public documents I could find indicate that you cannot replicate deduped volumes that are above the volume limits of the 2000 series controller. The deduplicated volume limit for a 3140 is 3TB I believe, and for a 2050 it is 1TB. That means you will be unable to replicate your 2TB deduped volume. I wish I were wrong about this.
2009-03-11 07:53 AM
FYI -- with ONTAP 7.3.1 the 2050 now has a 2 TB deduplicated volume limit, while the 3140 goes to 4 TB.
So, while there's still a disparity and for SnapMirror you'd want to limit yourself to the 2050's maximum, a 2 TB maximum is hopefully much less painful than 1 TB. I still personally really like using 2050s as SnapMirror partners at a remote site for 31xx boxes.
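To make the sizing rule concrete, here's a small Python sketch (the helper function and model table are my own illustration, using the 7.3.1 limits quoted above) that checks whether a deduped volume will fit a SnapMirror pairing:

```python
# Hypothetical helper: a deduped volume being SnapMirrored must sit within the
# deduplicated-volume limit of BOTH controllers, so size to the smaller of the two.
# Limits below are the ONTAP 7.3.1 figures quoted in this thread; check the
# current release notes before relying on them.

DEDUPE_LIMIT_TB = {
    "FAS2050": 2.0,  # 2 TB deduplicated volume limit under 7.3.1
    "FAS3140": 4.0,  # 4 TB under 7.3.1
}

def can_mirror_deduped(volume_size_tb, source_model, dest_model):
    """Return True if a deduped volume of this size fits both controllers' limits."""
    limit = min(DEDUPE_LIMIT_TB[source_model], DEDUPE_LIMIT_TB[dest_model])
    return volume_size_tb <= limit

# A 2 TB deduped volume on a 3140 now fits a 2050 mirror target...
print(can_mirror_deduped(2.0, "FAS3140", "FAS2050"))  # True
# ...but a 3 TB volume still would not.
print(can_mirror_deduped(3.0, "FAS3140", "FAS2050"))  # False
```

The point is simply that with a 3140/2050 pair you size the deduped volumes to the 2050's 2 TB ceiling, not the 3140's.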
With 7.3.1 you can also shrink down a volume to the max dedup volume size and then enable dedup.
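From the filer console, that resize-then-enable sequence might look roughly like this (volume name is hypothetical; check the `vol` and `sis` command references for exact syntax):

```
filer> vol size vmdata 2t        # shrink the flexible volume to within the dedupe limit
filer> sis on /vol/vmdata        # enable deduplication on the volume
filer> sis start -s /vol/vmdata  # scan and deduplicate the existing data
```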
2009-03-11 11:56 AM
Excellent! That's good to know. Now if the 2100 series, whenever that comes out, can do 3TB, I won't have to resize any of my volumes. Thanks for the good news, since I'm already on 7.3.1.