VMware Solutions Discussions

Best Practices for SnapManager for SQL and SnapDrive 6.3 with VMDKs

Hi all,

With the introduction of SnapDrive 6.3 and VMDK support, how does this affect the way I deploy SQL in virtual environments?

I am familiar with SQL data layout using RDMs and LUNs through the Microsoft iSCSI initiator in the VM, but how does this affect SQL data layout using VMDKs? Do I just substitute VMDKs on NFS volumes for LUNs, using the same logs/database separation? Is the sizing process the same? Is throughput on a VMDK in an NFS volume the same as on a LUN?

Any help in this area would be useful, as we will be deploying more and more of these environments using SnapDrive 6.3 and this would really simplify things.


David Brown Senior Virtualisation and Storage Consultant for EACS Limited - NCDA, NCIE




This is a question that is being looked at internally, and our SQL experts should address it in a TR or BPG soon. I will see if we can get some info posted on this thread.



Is there already a best practice regarding SQL / VMDK / NFS?

Currently I'm with a customer who has 33 SQL servers. If I use the traditional LUN layout, I'll get 132 NFS exports for SQL alone.

vSphere 4.1 accepts only 64 NFS mounts on an ESX host.
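A quick back-of-the-envelope sketch of that mount math (the server count, volumes-per-server layout, and mount limit are the numbers from this post; this is just the arithmetic, not official sizing guidance):

```python
servers = 33             # SQL servers at this customer
volumes_per_server = 4   # traditional layout: sysdb, data, logs, SnapInfo
max_nfs_mounts = 64      # vSphere 4.1 per-host NFS mount limit

# Each volume is a separate NFS export/datastore under the LUN-style layout.
exports_needed = servers * volumes_per_server
print(exports_needed)    # 132 exports for SQL alone, double the host limit

# How many SQL servers actually fit on one host under that layout:
print(max_nfs_mounts // volumes_per_server)  # 16
```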

Currently the database VMDKs are mixed with other VMs in a datastore.

From my point of view a strict vmdk=LUN layout isn't needed, because snapshotting on NFS is much more efficient. A few extra snapshots on a volume shouldn't be a big problem, I'd guess.




Although there is no BPG yet, I did want to highlight a couple of things for you that I discovered with a client. Regardless of how you configure SQL in VMware (RDMs, VMDKs, direct to VM), the backup operates very similarly. That is, you can do SQL backups in a few seconds.

The restores, however, are rather different depending on the method you use. In an RDM or direct-to-VM model, restores are usually volume-level SnapRestores, which are very fast. In a VMDK model, however, a volume-level SnapRestore is obviously not possible, so with NFS a file-level SnapRestore is used, which is much slower. Important for admins to know.

In a LUN environment, or if you place more than one DB in a VMDK, a file copy occurs instead. Again, this is much slower than a volume-level SnapRestore, and you now also have to watch your snapshot growth, because the restore will grow your snapshot size significantly.

Don't get me wrong, it's still a great option to have but you really have to discuss the different restore scenarios and implications with the customer and see which model is the best fit for them.

Choice is a good thing!



Why would a file-level SnapRestore be much slower? WAFL is just moving pointers around, not copying data.


try it and you'll see it...

You are right, WAFL is just managing pointers and blocks (over-simplified, sorry WAFL engineers), but in the case of an SFSR (Single File SnapRestore) it has to juggle many more pointers and blocks in the background. In a very large volume with lots of files, even an SFSR can take quite a while... I personally have seen it run for an hour or more.

But if you have a volume with just a few files, it is (can be) very fast!


So, maybe the new NetApp tagline should be, "SnapManager products provide fast backups and not-so-fast restores" :-). If we promise our customers that they should be investing in SnapManager products because they can back up and, more importantly, restore their important data in minutes, then this needs to be guaranteed - or else we all have egg on our faces.

This is why it's important to accompany new features (e.g. VMDKs within NFS datastores) with best practices. It sounds like you might be saying that to achieve fast restores using SFSR, one should place a single database VMDK within a volume. Additional VMDKs within the same volume will slow the restore. The problem lies where you have many databases, which would mean many volumes/datastores to manage. Also, if you're doing SnapMirror, you start running up against replication limits (depending on the controller). You could stagger the replications, but then you might be breaking your consistency points.


That restore scenario makes most of our customers stick to RDMs. Although it should be possible to clone the datastore holding the VMDKs, attach the disks to the VM, and then do a Storage vMotion. But not every deployment has Storage vMotion licensed 😞


Hi JJ,

Unfortunately we don't have anything available now, but the next SMSQL BPG is in the works and should cover this. The following isn't the SMSQL BPG, but it's definitely worth taking a look:

Accelerating Development of Microsoft SQL Applications in Heterogeneous Environments


Hi Watan,

Is there any update on the BPG?

We're using SMSQL with .vmdk's on NFS (and will shortly be using SMSP, too). I'm specifically interested in:

- Layout in an NFS environment (.vmdk's sharing volumes; location of C:\ drive vs data vs logs .vmdk)

- .vmdk type (should data disks be independent .vmdk's? - VSC/SMVI will also snapshot the parent volume otherwise; is this OK?)

- Storage vMotion implications (if any)

- General best practice guidance

The technology is great; we just need some BPG advice / guidance, please!




Hi Barney ,

We are currently in the process of refreshing the BPG. However, we are also going to publish an SMSQL activation guide,

which will contain more information about configuring SQL in a VMDK environment.

You need to architect your environment keeping VM files and SQL database files on separate datastores. You would require separate VMDKs for system databases, user databases, log files, and tempdb and its log files.

Keep in mind that you have 8 NFS datastores by default and can reach a maximum of 64 datastores.




Any updates? The latest SMSQL guide I saw mentions VMDK support, but does not address restore scenarios or best practices. Ideally, I'd like to see sample configurations and the restore implications of different layouts (same vs. separate datastores and VMDKs, RDMs vs. VMDKs, etc.).


Please have a look at TR 3941 and TR 3785. Also, the new BPG to be released this month will include content on VMDK implementation.


Hello. Is the new BPG available now? My first attempts to locate it have not been successful. Thank you.


Hi Abhishek, and thanks for the reply. Please do keep us posted on the BPG.

Kind Regards,



Hi Barney ,

Please look for TR 4003 , TR 3941 and TR 3785.




Here's another thread: http://communities.netapp.com/message/55895#55895

Abhishek is our resident expert for SMSQL and will be updating the BPG. 


Thanks Watan; all assistance / guidance is much appreciated.

I guess the BPG will still hold the answers I need, though, as the link above mainly refers to "traditional" LUN-based storage rather than .vmdk's. If we apply the same rationale to .vmdk's/datastores as to LUNs/volumes, then best practice would be:

Sysdb in its own volume

Data in its own volume

Logs in their own volume

SnapInfo in its own volume

Since each volume is an NFS export to ESX, we are therefore consuming 4 NFS mounts per SQL server. Given that there is a limitation of 64 NFS mounts on ESX, it would be good to reduce this.

- Is it therefore possible to place sysdb and data in a single volume?

- What about placing Logs and SnapInfo in a single volume?

- Can we sensibly place vmdk's from multiple SQL servers in the same volumes (as long as SMSQL jobs don't run concurrently)?

- If we accept the risk of SQL system loss in the event of losing a volume, can we place all four vmdk's in one datastore? (we have snapmirror as a recovery point in this scenario).
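To make the trade-off in those questions concrete, here is a minimal Python sketch of how many SQL servers fit under the 64-mount ESX limit for each consolidation option asked about above. The layout groupings are the hypothetical ones from the questions, not official NetApp guidance:

```python
# NFS datastores consumed per SQL server under each hypothetical layout
layouts = {
    "four separate volumes (sysdb/data/logs/SnapInfo)": 4,
    "sysdb+data shared, logs+SnapInfo shared": 2,
    "all four vmdk's in one datastore": 1,
}
max_mounts = 64  # ESX per-host NFS mount limit

for name, mounts_per_server in layouts.items():
    # Upper bound, ignoring mounts consumed by non-SQL datastores
    print(f"{name}: up to {max_mounts // mounts_per_server} SQL servers per host")
```

So consolidating from four volumes per server to two doubles the ceiling from 16 to 32 servers per host, at the cost of the restore-granularity implications discussed earlier in the thread.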

It seems that the major benefits of vmdk's over block storage for SMSQL are space savings and flexibility/mobility of storage. It would be great to have some advice on what's good/sensible practice here.

Please keep us posted on BPG progress.

Thanks again,



Hi Barney,

I'll check with my team to see if they can chime in on your questions, and see how progress is going on the BPG.




Excellent, that would be most handy.

