I've been using the VDI sizing xls which was fantastic - easy to use and great output.
My question is around which FAS series to use. The sizer came back with 14 disks per head to handle an environment of 1200 VDI seats over NFS at 8 IOPS per VM (300GB 15k disks - sized for performance, not capacity). I understand the amazing benefits of the PAM cards, especially during the 'boot storm' (cool name!) periods in VDI environments, and that these cards are only supported on the FAS3140 series and up. Does this make the FAS2000 series a poor choice for VDI environments of this size? Or could a FAS2050 safely handle an environment like this during heavy boot times, or would it be a cost vs. functionality trade-off?
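For context, here's the back-of-the-envelope arithmetic behind those inputs (the two-head split is my assumption for an active/active pair, not something from the sizer):

```python
# Rough aggregate IOPS arithmetic for the environment described above
# (the head count is an assumption for an active/active controller pair).
SEATS = 1200
IOPS_PER_VM = 8      # 'heavy' user profile figure from the sizing xls
HEADS = 2            # assumed active/active pair

total_iops = SEATS * IOPS_PER_VM      # front-end IOPS across the cluster
iops_per_head = total_iops // HEADS   # steady-state load per controller

print(total_iops, iops_per_head)      # 9600 total, 4800 per head
```

Boot storms will push well past that steady-state number, which is exactly where the caching question comes in.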
To cut to the chase, are PAM cards required to keep the end-user experience a happy one? We all know the last thing you want is to migrate users from their PC workstations to a badly performing VDI environment, even if it's only the first 'everyone booting at 9am' experience that is slow.
<edit> I imagine the number of Ethernet ports available on the different systems would also be something to consider in NFS environments?
btw - I'm pretty familiar with NetApp technologies and have been sizing other legacy workloads for a while now (Exchange, CIFS, server virt), but VDI sizing is quite new to me...
Personally I'd rather steer clear of FAS20x0 for VDI deployment.
Sizing is always some form of estimation, and if you miscalculate something, a bigger controller gives you more options to rectify it (e.g. a PAM card can be added to a FAS31x0 when and if you need it, not necessarily from day 0).
8 IOPS per desktop sounds fairly low to me, though. I tried some measurements on my own laptop while doing regular tasks - Internet browsing, Word, Excel, etc. - and the numbers I observed ranged from 20 to 30 IOPS. On top of that there can be other spikes, like virus scanning, boot storms, etc.
Re PAM cards - they are now 'dedupe aware', which means that if you de-duplicate your desktop images, a single set of blocks kept in cache can serve requests from multiple clients.
Thanks for the great feedback on the VDI sizer. You got it right: the end-user experience dictates the success of VDI.
For correctly architecting any VDI solution, it is very important to understand the customer's performance and capacity requirements upfront, before starting the sizing exercise. I blogged about this topic a few days back here and here. Also check out the following whiteboard sessions on this topic:
Out of curiosity, how did you arrive at the 8 IOPS per VM requirement? This number varies from environment to environment and also by user profile. VMware also provides some generic recommendations for heavy and light users in this white paper:
The FAS2000 series is definitely a viable choice for small and mid-size VDI deployments. The good news is that intelligent caching is now natively available in Data ONTAP 7.3.1 or higher on every FAS, V-Series, and IBM N series system. This will definitely help in "boot storm" situations. Data ONTAP's native intelligent caching is further extended with the use of PAM cards.
The decision on whether to use PAM in addition to Data ONTAP intelligent caching is based on the amount of deduplicated data and the percentage of reads within the environment. Since the working set of an XP virtual machine is somewhere in the range of 200MB, a NetApp solution would require roughly 2GB of cache to serve the working set for all 1000 VMs, especially to help out in boot storm situations. But as users create more data, the amount of deduplicated data will change, affecting the cache hit rate. More cache might therefore be needed as the data becomes more unique (even after running regular deduplication operations on the new data). Again, good knowledge of the capacity and performance requirements and the user profiles in your customer's environment will help determine whether the solution architecture warrants PAM cards.
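A rough sketch of that cache arithmetic (the 2GB shared figure mirrors the estimate above; the unique-data fraction is an illustrative knob, not a NetApp number):

```python
# Illustrative cache-sizing model for deduplicated desktops.
# SHARED_CACHE_MB matches the rough ~2GB figure quoted above; the
# unique_fraction parameter is a made-up assumption for illustration.
N_VMS = 1000
WORKING_SET_MB = 200      # per-XP-desktop working set, as quoted above
SHARED_CACHE_MB = 2000    # ~2GB of deduped blocks serving all clones

def cache_needed_mb(unique_fraction):
    """Shared deduped blocks plus each VM's unique (non-deduped) blocks."""
    return SHARED_CACHE_MB + N_VMS * WORKING_SET_MB * unique_fraction

print(cache_needed_mb(0.0))   # 2000 MB while images stay fully deduped
print(cache_needed_mb(0.05))  # 12000 MB once 5% of each working set is unique
```

The point of the model is just that cache demand grows linearly with how far user data diverges from the golden image, which is why the dedupe rate matters so much to the PAM decision.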
We recommend always using Data ONTAP 7.3.1 or later for VMware View environments. For environments with more than 500 virtual desktops per NetApp storage controller, where there is no clear understanding of the user work profile and the customer doesn't want any degradation of the end-user experience during boot storms, we recommend using both Data ONTAP caching and at least one PAM module per storage controller.
Check out this excellent blog post by Chris Gebhardt on Intelligent Caching and PAM use case in VDI deployments.
Thanks for the great reply Abhinav, that's awesome news that intelligent caching is available in 7.3.1. I'm also a big fan of the increased dedupe volume sizes in that release across the various FAS systems - it has helped out with lots of VMware volumes that were just over 1TB on a few 3020s around the place. Brilliant!
So the intelligent caching feature will utilise the system memory of the FAS system, and a PAM card will simply increase the amount of intelligent cache available to the FAS system?
I took the 8 IOPS from the recommended 'heavy' VDI user profile under section 2.2 of the VDI sizing tool (light 2-4, medium 5-7, heavy 8-12, extreme 13+) - is this information not a good guide? I haven't received the perf stats from the VMware side of things yet, so I was using the guides just to get a basic output, really. We might be going with a mix of NetApp storage and streamed XenDesktop for the 1200-user base, and I've heard we'd be looking at a worst case of 30 IOPS per connection with the streaming feature. I haven't yet had the chance to search the interweb for more info on NetApp and XenDesktop - I'll get into that soon. It would be great if you have any tips around this scenario too; I'm pretty sure I'll find some TRs about the place.
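Quick arithmetic on what that worst-case streaming figure would mean for aggregate load, assuming the numbers I've quoted above are in the right ballpark:

```python
# Comparing the 'heavy' profile estimate against the streaming worst case
# (both per-VM figures are the estimates quoted above, not measured values).
SEATS = 1200

heavy_profile_iops = SEATS * 8      # sizer 'heavy' profile: 9,600 IOPS
streaming_worst_iops = SEATS * 30   # worst-case streaming figure: 36,000 IOPS

print(streaming_worst_iops / heavy_profile_iops)  # 3.75x the load
```

A near-4x swing between the two estimates is exactly why I want real perf stats before locking in the design.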
Just quickly: if the streamed image is <10GB and I have 16GB PAM cards in each head of the cluster, will this mean most, if not all, requests will be served directly from the intelligent cache living in the PAM, reducing disk I/O significantly?
I'll start making my way through all the info you provided now!
You are correct about the role of intelligent caching in system memory and PAM.
The IOPS numbers in the sizer are what we typically see across several customer deployments. Our recommendation is always to use the data collection tools mentioned in the NetApp collateral to get the exact customer IOPS requirement. If running these tools is not possible and the customer has limited understanding of what to consider, the sizer's numbers can be used for estimation. For every unknown, make sure your customer agrees on the assumptions being used in the sizing.
Good to hear that the NetApp/Citrix solution will be able to meet all the customer requirements.
Out of curiosity, can you elaborate on the use cases for both the NetApp and the streaming solution, so that other customers might be able to correlate this to their environments?
How did you arrive at the 30 IOPS number?
The following XenDesktop on NetApp solution guide should help you with the streaming scenario.
I would also point out that just because you have a 10GB image, you do not need the whole image cached. You likely noticed that the sizing xls factors in the working set of the desktop image. For example, an XP image has a 200-300MB working set, so that is all we have to cache in memory or on the PAM to leverage the dedupe-aware cache.
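As a sanity check on those numbers (using raw module capacity here; usable cache after overhead will be somewhat less):

```python
# How many XP working sets fit in one 16GB PAM module?
# Raw capacity is used; real usable cache will be somewhat smaller.
PAM_MB = 16 * 1024
WORKING_SET_MB = 300   # upper end of the XP working-set range above

multiples = PAM_MB // WORKING_SET_MB
print(multiples)       # ~54 working sets' worth of raw capacity
```

So even at the pessimistic end of the working-set range, a single deduped working set fits in the module many times over, which is why the whole image never needs to be cached.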