Great question. Unfortunately, if an application was coded badly in the physical world, it doesn't work much better in the virtual world. As Rachel mentioned, tools like Citrix XenApp help with the delivery and management of these types of applications. From a resource standpoint, though, a few things can help. If you are using VMware ESX as the hypervisor, its memory page sharing can reduce how much memory is needed on the ESX host: if the application always loads the same pages into memory, those identical pages get shared across all the VDI images and require less physical memory.
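To put a rough number on that, here's a toy sketch (hypothetical page contents, with SHA-256 hashing standing in for ESX's actual page-comparison mechanism) of how identical pages across desktop clones collapse onto far fewer physical pages:

```python
import hashlib

def shared_memory_pages(vm_page_contents):
    """Estimate physical pages needed when identical pages are shared.

    vm_page_contents: list of lists; each inner list holds the byte
    content of every memory page a VM has resident.
    Returns (total_pages, unique_pages_backed_physically).
    """
    total = 0
    unique = set()
    for pages in vm_page_contents:
        for page in pages:
            total += 1
            unique.add(hashlib.sha256(page).hexdigest())
    return total, len(unique)

# 50 VDI clones, each loading the same 4 application pages
# plus 1 page of private data (illustrative values only)
clones = [[b"app-code-%d" % i for i in range(4)] + [b"private-%d" % vm]
          for vm in range(50)]
total, unique = shared_memory_pages(clones)
print(total, unique)  # 250 pages requested, only 54 distinct pages to back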
On the storage front, our PAM cards cache commonly used blocks, so if an inefficient application calls the same data blocks repeatedly, our cache will keep that load from hitting the disks themselves.
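As a minimal illustration, assuming a simple LRU policy (the real PAM caching logic is more sophisticated than this), here's why a repetitive IO pattern is almost entirely absorbed before it reaches disk:

```python
from collections import OrderedDict

class BlockCache:
    """Tiny LRU block cache: repeated reads of the same blocks
    become cache hits instead of disk IOs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark most recently used
            self.hits += 1
        else:
            self.misses += 1                   # this read goes to disk
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used

cache = BlockCache(capacity=8)
# An "inefficient" app re-reading the same 4 blocks in a loop
for _ in range(100):
    for block in ("boot", "dll_a", "dll_b", "config"):
        cache.read(block)
print(cache.hits, cache.misses)  # 396 hits, only 4 reads ever touch disk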
Outside of that, you just have to measure the load of these applications and design the environment accordingly, ensuring that you provide sufficient IO capability to satisfy the needs of the applications in the VDI images.
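As a back-of-envelope example of that sizing exercise (all inputs here are illustrative assumptions, not vendor numbers, and real sizing should use measured workloads):

```python
import math

def spindles_needed(desktops, iops_per_desktop, read_pct,
                    raid_write_penalty, iops_per_spindle):
    """Rough spindle count for a VDI IO load.

    Assumed model: front-end IOPS split into reads and writes,
    with each write costing `raid_write_penalty` back-end IOs.
    """
    frontend = desktops * iops_per_desktop
    reads = frontend * read_pct
    writes = frontend * (1 - read_pct)
    backend = reads + writes * raid_write_penalty
    return math.ceil(backend / iops_per_spindle)

# 500 desktops at 10 IOPS each, 70% reads, write penalty of 2,
# 175 IOPS per spindle -- every one of these numbers is an assumption
print(spindles_needed(500, 10, 0.70, 2, 175))  # 38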
Mitchell, you have highlighted a well-known issue that every organization will face as they plan a VDI rollout. Some customers may have hundreds of such applications. For any VDI deployment, correctly sizing the solution to achieve a best-in-class end-user experience is critical: the solution has to meet both the performance and the capacity requirements.
As Keith mentioned, measuring the load of these applications is very important. There are several ways you can achieve this:
1. Collect performance data from the existing environment. There are several methods to do this; I blogged about it a few days back here:
2. In addition, you could also conduct a POC to get an understanding of the workload requirements in a virtual world.
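Whichever way you collect the data, you'll want to reduce it to numbers a design can use. Here's a small sketch (the sample values are made up) that pulls average, 95th percentile, and peak from raw counter samples, since peak-hour load rather than the mean should drive the sizing:

```python
import statistics

def summarize_samples(samples):
    """Turn raw per-interval counter samples (e.g. exported from
    perfmon or esxtop) into sizing inputs: average, 95th percentile,
    and peak."""
    ordered = sorted(samples)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {"avg": statistics.mean(ordered),
            "p95": ordered[p95_index],
            "max": ordered[-1]}

# Illustrative per-desktop IOPS samples at 5-minute intervals;
# note the boot/login spikes (40, 55) hiding behind a modest average
iops_samples = [12, 9, 11, 40, 13, 10, 55, 12, 11, 14]
print(summarize_samples(iops_samples))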
We have custom VDI Sizers that take the workload requirements as inputs and generate solution configurations: the type of storage controllers, number of spindles, logical solution layout, etc. The Sizers factor in the storage savings and performance acceleration achieved through Intelligent Caching/PAM, Deduplication, FlexClone, and Thin Provisioning.
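To give a feel for the kind of arithmetic such a sizer does on the capacity side (this is my greatly simplified model with hypothetical inputs, not the actual Sizer logic):

```python
def usable_capacity_needed(desktops, image_gb, dedup_ratio, thin_utilization):
    """Rough usable-capacity estimate once dedup and thin provisioning
    are factored in.

    dedup_ratio and thin_utilization are illustrative assumptions;
    real values come from measuring the actual desktop images.
    """
    raw_gb = desktops * image_gb
    written_gb = raw_gb * thin_utilization   # thin provisioning: only written data consumes space
    return written_gb / dedup_ratio          # deduplication collapses identical blocks

# 500 desktops, 20 GB images, assumed 5:1 dedup, 40% of each image actually written
print(usable_capacity_needed(500, 20, 5, 0.4))  # 800.0 GB instead of 10,000 GB raw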
Check out this excellent blog post by Chris Gebhardt (fellow VDI expert at NetApp) on how Intelligent Caching and Deduplication can help enhance the solution performance and at the same time help you achieve the desired storage efficiency.
Just by using VDI, you get good isolation between users (unlike XenApp, for instance, where a "nasty app" could slow down the XenApp server for everyone, i.e., all applications served from that server). On a VM, a "nasty app" can still consume up to one vCPU, but it can't go beyond that or directly affect other users.
Resource Pools -- for users that need "nasty apps", you could isolate them or their machines in resource pools (likely using Shares or possibly Limits) to mitigate the impact of said app.
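Here's a simplified model of how Shares and Limits carve up CPU under contention (my own sketch, not the actual vSphere scheduler, which among other things redistributes unused entitlement):

```python
def cpu_allocation(pools, total_mhz):
    """Entitlement under full contention: each pool gets a slice of
    CPU proportional to its Shares, capped by any Limit it carries."""
    total_shares = sum(p["shares"] for p in pools)
    alloc = {}
    for p in pools:
        slice_mhz = total_mhz * p["shares"] / total_shares
        limit = p.get("limit")  # a Limit is a hard MHz ceiling
        alloc[p["name"]] = min(slice_mhz, limit) if limit else slice_mhz
    return alloc

# Hypothetical pools: the "nasty app" users get fewer shares
# plus a hard 3000 MHz limit
pools = [{"name": "standard-users", "shares": 4000},
         {"name": "nasty-app-users", "shares": 1000, "limit": 3000}]
print(cpu_allocation(pools, 20000))
# {'standard-users': 16000.0, 'nasty-app-users': 3000}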
Expansion -- if said "nasty app" is absolutely necessary and VDI does make business sense, you can scale out with VDI very easily -- just add more ESX servers into the cluster and let DRS do its magic.
Side note: for "nasty app" VDI scenarios, you could get by with ESX hosts that have more processors at lower clock speeds -- RAM, of course, depends on whatever the app needs (TPS being exceedingly helpful here, as already noted).