ONTAP Discussions

Best Practice For VMware Storage on NetApp

sanman2304

I am a best-practice kind of guy, but I'm also willing to try new things if the technical theory proves beneficial, so I'm reaching out to the community on this one.  I follow TR-3749 when setting up VMware storage on NetApp, implementing the three standard NFS volumes for the VMs: one for the operating system/application directories, one for the ESXi swap files, and one for the VM page files.

I recently had a customer mention creating a fourth volume to separate out the application directories.  He heard somewhere that this allows for more efficient dedup of the OS volume, since the application directories and data (outside of SQL and Exchange) are no longer in it.  I told him the deduplication percentage only goes up because there are fewer total blocks in the denominator of the savings calculation, not because more duplicates are found.  I'm not buying into this.  As long as the OS is aligned, dedup should find all duplicate blocks within the volume no matter how much other data is in there, correct?  I see the added volume (Windows partition) as unnecessary, just an extra item to manage and a possible point of failure.  Am I correct in thinking this way, or will separating out the application binaries allow for better dedup?
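To put rough numbers on what I mean (the block counts below are purely illustrative, not from any real system), here is a quick Python sketch of the reported percentage versus the blocks actually reclaimed:

# Toy arithmetic only -- the block counts are made up for illustration.
# Dedup savings are reported as saved_blocks / total_blocks in the volume.

def dedup_percent(saved_blocks, total_blocks):
    return 100.0 * saved_blocks / total_blocks

os_blocks = 1_000_000     # OS data, highly duplicated across VMs
os_saved = 700_000        # duplicate OS blocks reclaimed
app_blocks = 1_000_000    # application binaries/data, mostly unique
app_saved = 50_000        # little duplication in the app data

combined = dedup_percent(os_saved + app_saved, os_blocks + app_blocks)
os_only = dedup_percent(os_saved, os_blocks)

print(f"Combined OS+app volume: {combined:.1f}% saved")  # ~37.5%
print(f"OS-only volume:         {os_only:.1f}% saved")   # ~70.0%

# The OS-only volume reports a higher percentage, but the absolute number
# of duplicate OS blocks found is the same either way.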

5 REPLIES

ekashpureff

Sanman -

My motivation for keeping application data on separate volumes is to segregate it by rate of change and snapshot policy.

Think of Oracle running on VMs. It's best practice to split out logs, binaries, and datafiles onto separate volumes.
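As a sketch of what I mean (the volume names and schedules here are just an example I made up, not a prescription), something like this per Oracle VM:

# Illustrative only -- volume names and schedules are invented to show
# the idea of segregating data by change rate and snapshot requirements.

oracle_vm_layout = {
    "ora_binaries":  {"contents": "Oracle home / binaries",
                      "change_rate": "low",    "snapshots": "weekly"},
    "ora_datafiles": {"contents": "database datafiles",
                      "change_rate": "medium", "snapshots": "hourly, app-consistent"},
    "ora_logs":      {"contents": "redo / archive logs",
                      "change_rate": "high",   "snapshots": "frequent, short retention"},
}

for vol, props in oracle_vm_layout.items():
    print(f"{vol:15} {props['contents']:24} "
          f"change={props['change_rate']:7} snaps={props['snapshots']}")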


I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
Fastlane NetApp Instructor and Independent Consultant
http://www.fastlaneus.com/ http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

thomas_glodde

san,

Your assumption is correct, but I agree with Eugene. Splitting out application data improves snapshot behavior and change rates, and an incremental dedup run has less overall data to crawl.
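As a rough back-of-the-envelope (the sizes and change rates below are just illustrative assumptions), here is why mixing change rates on one volume makes snapshot retention awkward:

# Toy numbers only -- sizes and daily change rates are assumptions chosen
# to illustrate the effect, not measurements from a real environment.

os_gb, app_gb = 500, 500            # logical data per category (GB)
os_rate, app_rate = 0.01, 0.10      # assumed daily change rates

combined_7d = 7 * (os_gb * os_rate + app_gb * app_rate)   # one volume, one policy
split_os_7d = 7 * os_gb * os_rate                         # OS volume, 7 days retained
split_app_2d = 2 * app_gb * app_rate                      # app volume, e.g. 2 days retained

print(f"Combined volume, 7 days of snapshots: ~{combined_7d:.0f} GB of deltas")
print(f"Split volumes: OS ~{split_os_7d:.0f} GB + app ~{split_app_2d:.0f} GB")

# Separate volumes let each dataset get its own schedule and retention,
# instead of the faster-changing data driving snapshot space for both.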

Kind regards

Thomas

aronk

Agreed.  The few times I recommend consolidated data are usually very large infrastructures where they approach the datastore limits, or SRM deployments where you have more to consider about VM storage locations and layouts.

sanman2304

Hey all, thanks for the discussion.  I see Eugene's point, and I understand separating the data and logs for large enterprise applications like Oracle, SQL, Exchange, and the catalog files for a Backup Exec server, as I do that now (we run SQL and Exchange).  I'm talking more about the Program Files folder, say for an antivirus management server, MS OCS, or a client-server application that uses a separate SQL server for data storage.  Creating a 5 GB D:\ drive in the VM just to install a 700 MB program seems like a bit much.  I hear you too, Thomas, about snapshotting and change rates, but it seems all you're doing is splitting the snapshot from one volume into two so the snapshot job runs a little faster.  I assume you would still be snapshotting the volume housing the VM OS, correct?  Now Aron, I would think in a large infrastructure you would separate data out more, to reduce snapshot size and runtime, or so SRM has more, smaller volumes replicating instead of a few huge ones.  So what it comes down to is separating the data out to reduce the time snapshot and deduplication jobs take.

ekashpureff

Sanman -

I think you've summed it up.

Enterprise apps data - use dedicated storage (LUNs/Volumes)

Small apps - throw it on your C: drive.


I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
Fastlane NetApp Instructor and Independent Consultant
http://www.fastlaneus.com/ http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)
