Network and Storage Protocols
Taking into account a variety of factors, and bundling a few questions together...
Sorry, quite a few areas and questions, but I'm very keen on being able to size VDI accurately and making it hugely scalable. With all the new features, plugins and filer functionality, there is a lot to consider and a lot of different routes that we could potentially go down.
Wow Chris, that is quite the batch of questions. Sizing is actually very easy: Step 1, you call Abhinav. Step 2, you're done!
Actually you are right on, this is a very complex topic and one that is still evolving from project to project as we learn and get new technologies in our toolbox. I will leave many of the sizing questions to Abhinav and Trey, as I suspect they have sized many more environments than I have, but I will take a stab at some of your other questions.
You can in fact multipath with NFS. It just doesn't happen automatically, and you have to do a little planning up front. How to create redundant, load-balanced paths with NFS datastores is covered in the NetApp best practice guide TR-3428. You do have to size your datastores so that you don't exceed what you can do across a single link, but I think you might be surprised how many VMs can run across a 1 Gb link! Certainly pNFS and 10GbE are going to help here.
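To put a rough number on how many VMs a single 1 Gb link can carry, here is a minimal back-of-the-envelope sketch in Python. The per-desktop IOPS, I/O size, and headroom figures are assumptions for illustration only; substitute measurements from your own assessment.

```python
# Rough estimate of how many steady-state VDI desktops a single 1 GbE NFS path can carry.
# All workload figures below are illustrative assumptions, not measured values.

LINK_MB_PER_S = 1000 / 8      # ~125 MB/s theoretical for a 1 GbE link
USABLE_FRACTION = 0.7         # leave headroom for protocol overhead and bursts

IOPS_PER_DESKTOP = 10         # assumed steady-state IOPS per desktop
IO_SIZE_KB = 8                # assumed average I/O size

mb_per_s_per_desktop = IOPS_PER_DESKTOP * IO_SIZE_KB / 1024
desktops_per_link = int(LINK_MB_PER_S * USABLE_FRACTION / mb_per_s_per_desktop)

print(f"~{mb_per_s_per_desktop:.2f} MB/s per desktop -> roughly {desktops_per_link} desktops per 1 GbE path")
```

Even with conservative headroom the steady-state number is large; it is burst events such as boot and login storms, not the steady state, that usually drive the per-link sizing.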
By sizing do you mean how many you need? If so, the very cool VDI sizer that Abhinav and Chris Gebhardt put together does factor in how many PAM cards you should plan for based on the working memory set of the images you are using. I am going to have to defer to them to provide the link as I can't find the external link right now.
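As a rough illustration of how the working memory set translates into PAM cache demand, here is a hedged sketch; the per-desktop working set, sharing factor, and usable cache per card are placeholders I'm assuming for illustration (the VDI sizer mentioned above is the authoritative tool).

```python
import math

# Back-of-the-envelope PAM cache estimate from the desktops' working set.
# Every figure here is an assumed placeholder; use the NetApp VDI sizer for real sizing.

NUM_DESKTOPS = 2000
WORKING_SET_MB = 300        # assumed hot data per desktop image
SHARED_FRACTION = 0.8       # assumed portion common across clones/deduplicated blocks
PAM_USABLE_GB = 16          # assumed usable read cache per PAM card

unique_mb = NUM_DESKTOPS * WORKING_SET_MB * (1 - SHARED_FRACTION)
shared_mb = WORKING_SET_MB * SHARED_FRACTION
hot_data_gb = (unique_mb + shared_mb) / 1024

cards = math.ceil(hot_data_gb / PAM_USABLE_GB)
print(f"~{hot_data_gb:.0f} GB of hot data -> roughly {cards} PAM card(s)")
```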
This is certainly something I can see happening, although I haven't made the move on one of my projects yet. Once the PAM card grows in size (and it will!), this will certainly become more of a potential solution.
One of the best kept secrets at NetApp helps here. FlexScale allows you to adjust the priority of data volumes and ensures your high-priority applications receive the disk performance they need. When adding a VDI environment to an existing controller, though, you must be very thorough and understand what the impact of that environment will be on the storage controller. On the last project I worked on, the customer knew that although they were only starting with 2,000 users the environment would grow, so they opted for separate, dedicated controllers that they can tune for VDI and grow as needed.
True, but the good news is that events like boot storms are when the PAM card really shines. You do still have to size for events like these, but part of that sizing is ensuring you have sufficient CPU on the controllers and sufficient fabric bandwidth from the servers to the storage (and enough CPU cycles on the virtualization servers too!). So I do tend to size for the worst-case scenarios, but the difference between worst case and normal state isn't as much as you might expect.
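To make the worst-case vs. normal-state comparison concrete, here is a small hedged sketch; the boot-read volume per desktop, boot window, and cache hit rate are invented placeholders, not NetApp figures.

```python
# Compare steady-state load with a boot storm to see where the sizing pressure lands.
# All numbers are illustrative assumptions; substitute figures from your own assessment.

NUM_DESKTOPS = 2000
STEADY_IOPS_PER_DESKTOP = 10     # assumed steady-state IOPS per desktop
BOOT_READ_MB_PER_DESKTOP = 300   # assumed data read while a desktop boots
BOOT_WINDOW_MIN = 30             # assumed window in which all desktops boot
CACHE_HIT_RATE = 0.90            # assumed PAM/read-cache hit rate during the storm
IO_SIZE_KB = 8

steady_iops = NUM_DESKTOPS * STEADY_IOPS_PER_DESKTOP
boot_mb_per_s = NUM_DESKTOPS * BOOT_READ_MB_PER_DESKTOP / (BOOT_WINDOW_MIN * 60)
boot_iops = boot_mb_per_s * 1024 / IO_SIZE_KB
boot_disk_iops = boot_iops * (1 - CACHE_HIT_RATE)   # what actually reaches the spindles

print(f"steady state: {steady_iops:,.0f} IOPS")
print(f"boot storm:   {boot_iops:,.0f} IOPS total, ~{boot_disk_iops:,.0f} IOPS from disk, "
      f"~{boot_mb_per_s:,.0f} MB/s of fabric bandwidth")
```

Under these assumptions most of the storm is absorbed by the read cache, and the remaining pressure shows up as controller CPU and fabric bandwidth, which matches the point above.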
Yes, you can dedicate the PAM card to a particular volume or exclude a volume from being cached on the card.
I'm not sure I understand this question, and I think it might be out of my realm. What I will say is that technologies such as ASIS and file-level FlexClone can really help with the business case for the more expensive tiers and improve the performance of the lower-cost tiers. I know that likely doesn't help, but it's true!
Again, this can be handled by FlexScale. I told you it was a great secret!
Whew. That was a lot for a Sunday afternoon.
Keith
Keith has covered it all. I will touch on a few points here. Sizing VDI is a complex process and easily a 2-3 hour discussion.
The VDI sizer is intelligent caching/PAM aware and factors that into sizing. It has been developed with lessons learnt from real-world deployments, internal scalability testing, continuous feedback, etc. Please contact your NetApp SE or partner to help you size the customer environment correctly and factor in the savings and performance acceleration achieved as a result of FlexClone, dedupe, intelligent caching, and PAM. I did a brief whiteboard about the VDI sizer here:
http://www.youtube.com/watch?v=NUNYWXCc_GQ&feature=channel_page
Also, check this blog post by Chris Gebhardt on Intelligent caching/PAM:
Once you know the customer's IOPS requirement and read/write ratio (typically 70-75% reads), the VDI sizer will determine what percentage of the IOPS requires disk drives. Next it will give options for selecting the type of disk drive and output the number of spindles you need for that disk type. From a footprint perspective, since the NetApp storage efficiency capabilities significantly reduce the capacity requirements, performance becomes the critical factor for sizing. With the cost of disks going down, and because one 15K RPM FC disk can serve far more IOPS than a 7,200 RPM SATA disk, I currently still see FC disks used for VDI. This also keeps the footprint smaller with fewer spindles.
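To illustrate the flow described above (IOPS requirement, read/write ratio, cache offload, then spindles per disk type), here is a hedged Python sketch; the per-spindle IOPS values and the cache offload fraction are rough rules of thumb I'm assuming for illustration, not the figures the official VDI sizer uses.

```python
import math

# Rough spindle-count estimate following the sizing flow described above.
# Per-spindle IOPS and the cache offload fraction are assumed rules of thumb,
# not the values used by the NetApp VDI sizer.

REQUIRED_IOPS = 20000     # customer IOPS requirement
READ_RATIO = 0.75         # typical 70-75% reads for VDI
CACHE_OFFLOAD = 0.70      # assumed fraction of reads served from PAM/intelligent caching

DISK_IOPS = {             # assumed usable IOPS per spindle
    "15K RPM FC": 175,
    "10K RPM FC/SAS": 125,
    "7.2K RPM SATA": 75,
}

reads = REQUIRED_IOPS * READ_RATIO
writes = REQUIRED_IOPS - reads
disk_iops_needed = reads * (1 - CACHE_OFFLOAD) + writes

for disk_type, per_spindle in DISK_IOPS.items():
    spindles = math.ceil(disk_iops_needed / per_spindle)
    print(f"{disk_type:>15}: ~{spindles} data spindles for {disk_iops_needed:,.0f} disk IOPS")
```

With the same IOPS target the FC option needs far fewer spindles, which is the footprint argument made above.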
NetApp has custom sizers for mixed workloads. NetApp SEs or partners can definitely help you size scenarios where there are multiple workloads.
Ultimately it comes down to the cost of the solution and providing a best-in-class end-user experience. The best thing to do is present the solution design, associated cost, and pros and cons for both the worst-case and not-so-worst-case scenarios to the customer and let them decide which way they want to go. The decision will vary from customer to customer.
Feel free to email me (abhinavj@netapp.com). I will be happy to discuss this in a lot more detail.
Hope this helps.
Regards,
Abhinav
Brilliant, cheers guys! Sorry, I got a bit carried away as it was my first question and put in a lot. Sizing is always tricky, and you are completely right, it can be quite time consuming, but it really pays off to get it right.
I've been to visit customers who have had deployments that weren't sized properly at the start. The overall result is that the end-user and application administrators get the impression that it is the new system that has caused a performance drop, and so they lose faith in the technology. In the future, as soon as there's a problem, the new system instantly gets the blame before any proper troubleshooting is done.
I'm always keen to set expectations and make sure things are sized thoroughly and properly from the start. This makes sure there is no performance or functional drop for users; if anything, they get more!