ONTAP Discussions

VMware and NetApp Best Practice Question FlexVol/Datastore

stuengland

VMware and NetApp best practice states that every datastore should have its own FlexVol.

I am struggling with this idea

Deduplication occurs at the volume level.

SnapVault operates at the qtree level.

Let's say, for argument's sake, that your data structure looked like this: to maximise deduplication you keep all your operating systems in one FlexVol, and to manage different SnapVault schedules for data of varying importance you keep Production and Development in different qtrees. Additionally, because your machines in dev are likely to have the same names as machines in production, you want logical separation of the hostnames within the VMware datastore (a rough CLI sketch of the same layout follows below):

/ aggr1 / Operating Systems (FlexVol) / Production (Qtree)

/ aggr1 / Operating Systems (FlexVol) / Development (Qtree)

/ aggr1 / Operating Systems (FlexVol) / Test (Qtree)
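
For illustration, here is a rough sketch of that layout in Data ONTAP 7-Mode CLI terms (the volume size, filer name and SnapVault destination volume are just made-up placeholders):

vol create os_vol aggr1 500g          # one FlexVol holding all the operating system data
sis on /vol/os_vol                    # deduplication is enabled per volume, so it spans every qtree below
qtree create /vol/os_vol/production
qtree create /vol/os_vol/development
qtree create /vol/os_vol/test

# on the SnapVault secondary, each qtree gets its own relationship and therefore its own schedule, e.g.:
snapvault start -S filer1:/vol/os_vol/production /vol/sv_os_vol/production

The point being that deduplication covers the whole volume while each qtree can be vaulted on its own schedule, which is exactly the combination I am after.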

Now, given the above layout, if you had one datastore in VMware you would probably name it "Operating Systems". The problem arises when you try to create a new virtual machine: you are asked which datastore to store the data in, and here you could choose "Operating Systems", but what you cannot do is choose a subfolder within the datastore into which to place the VM files. They simply end up in the root of the datastore.

This means that you cannot separate your Production and Development data within the same FlexVol, because VMware/NetApp best practice suggests that you have one FlexVol per datastore.

I struggle to see how people abide by this best practice while maximising data efficiency and manageability.

Stuart

4 REPLIES

stuengland

Maybe I should change my position on this and ask others in the community for their advice on how I should configure my bottom-level FlexVols for VMware data over NFS?

Do you put ALL your VMware data into a single FlexVol?

Do you separate your data into multiple FlexVols? If so, how, and why?

bsti

Separating your Dev/Test environment from Production into separate volumes gives you no performance benefit unless you also put them in separate aggregates. Multiple volumes in the same aggregate all use the same disks, so traffic in one volume affects the others at the disk level. There isn't much point in separating them unless you need, or just like, the logical separation for management reasons, OR if you leverage SnapMirror, compression, or some other volume-level feature that you want to affect only Dev/Test and not Production.

If you're not splitting them into different aggregates, I'd personally keep them in the same volume to maximize the deduplication benefit.

stuengland

"There isn't much point in separating them unless you need or just like the logical separation for management reasons, OR if you leverage snapmirror, compression, or some other volume level feature that you want to only affect Dev, Test and not production"

This is exactly why I am spending so much time pondering this. I really can't decide whether separating the data for management purposes (mirrors, snapshot schedules, etc.) is worth the extra headache, which is why I am wondering what other people do, and what they would do differently if they had the chance to start again.

There are thousands of people doing this; I just guess not many of them read these discussions.

nisarsayed

To be honest, I had the same dilemma. It is quite hard to find a silver-bullet recommendation or rule-of-thumb solution; every IT environment is different. However, for my project I finally decided to go with TR-3785 (a NetApp solution guide) published in June 2011. That report covers an Exchange 2010, SQL Server 2008, and SharePoint 2010 mixed workload on VMware vSphere 4.1/SRM, NetApp unified storage (FC), and Cisco Nexus unified fabric, which, interestingly, matched the project I was working on. You do not have to follow it word for word, but it gives you a better understanding, and I use it both as a reference and as evidence if/when someone challenges my approach and design.

Thanks.
