Microsoft Windows Server 2012 Hyper-V includes dynamic and differencing VHDX drive types.
Dynamic disks start as sparse files and are not populated until the first writes occur.
Differencing disks use a fixed disk as the parent; the differencing disk itself starts at only 4KB in size and
grows as writes occur. We tested with dynamic and differencing VHDX drives and found
the performance to be equivalent to fixed VHDX drives.
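For reference, all three VHDX types described above can be created with the Hyper-V `New-VHD` cmdlet. The paths and sizes below are placeholders, not values from the test setup:

```powershell
# Fixed VHDX: the full 40 GB is allocated at creation time
New-VHD -Path D:\VHDs\fixed.vhdx -SizeBytes 40GB -Fixed

# Dynamic VHDX: starts as a small sparse file and grows as writes occur
New-VHD -Path D:\VHDs\dynamic.vhdx -SizeBytes 40GB -Dynamic

# Differencing VHDX: a small child file that records changes against a parent disk
New-VHD -Path D:\VHDs\diff.vhdx -ParentPath D:\VHDs\parent.vhdx -Differencing
```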
I thought I heard someone explain that NTFS allocation on the parent volume, when many dynamic VHDXs grow simultaneously, was a bottleneck in real-world environments. Has anyone heard of this, or did I dream it up?
Also, if performance is now comparable, are there any real reasons to strongly recommend fixed anymore? There is always the "where to deploy thin provisioning" discussion (at the VHDX layer and/or the LUN / FlexVol layers), but assuming there is a preference for dynamic VHDX, is there any reason not to do so?
I also checked, and the New-NaVirtualDisk cmdlet even allows creation of dynamic VHDX.
Yes, there is a minuscule amount of overhead with dynamic VHDX; however, there are no longer any alignment issues. For this reason, the decision to run one over the other is an operational concern. If the organization doesn't monitor and is slow to respond to change, then fixed is still the preferred deployment model, as it carries no risk. If using fixed, the toolkit can still be used to thin-provision at the storage layer. However, if the organization does monitor, then dynamic will return the greatest long-term ROI. Going forward, all FlexPod reference architectures will use dynamic VHDX, as monitoring is part of the architecture as built.
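As a sketch of the kind of monitoring that makes dynamic VHDX safe to run, the Hyper-V `Get-VHD` cmdlet exposes both the provisioned maximum (`Size`) and the current on-disk footprint (`FileSize`), so growth can be tracked over time. The directory path here is a placeholder:

```powershell
# Compare the current on-disk size to the provisioned maximum for each VHDX
Get-ChildItem D:\VHDs\*.vhdx | ForEach-Object {
    $vhd = Get-VHD -Path $_.FullName
    [pscustomobject]@{
        Path          = $vhd.Path
        Type          = $vhd.VhdType
        ProvisionedGB = [math]::Round($vhd.Size / 1GB, 1)
        OnDiskGB      = [math]::Round($vhd.FileSize / 1GB, 1)
    }
}
```

Scheduling a report like this (and alerting when on-disk size approaches the provisioned size) is the operational discipline the recommendation above depends on.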