ONTAP Discussions

To share datastores or to not...

ccolht

Looking at the various reference designs for FlexPod, it seems that I should use one (or a few) NFS datastore(s) for all customers' C drives and separate vfilers and datastores for each customer's data. That would also seem to imply one shared vfiler for the OS disks and another vfiler for each customer's data. One of the benefits of NFS is the high number of VMs it supports, right? But this puts all the customers together, at least at the C-drive level, and often that's all there is. Then there's DR. Replicating the data vfiler and its volumes is straightforward, but I can't fail over the shared NFS volume without moving everybody.

Here's what I'm thinking:

Each customer has 1 vfiler on the primary filer (using as many interfaces as necessary)

OS drives served off one or more NFS datastores as required for performance and capacity (some may be dev and not need DR support, for example)

Data drives served off LUNs and/or NFS as required

At the DR site, this is replicated as needed.

Management of all of a customer's assets is centralized, and it is easier to keep track of disk, processor, and network usage. SnapMirror relationships from primary to DR are set up at the vfiler0 level, since the vfilers can't see their DR complements.
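To make that concrete, here is a minimal 7-Mode sketch of the per-customer layout and the vfiler0-level mirroring. The names (fasA, fasA-dr, cust1, aggr1, the IP, the sizes) are made up and the syntax is quoted from memory, so treat it as an assumption to verify rather than a recipe:

    # On the primary controller (run from vfiler0): volumes and a vfiler for one customer
    vol create cust1_root aggr1 20g
    vol create cust1_data aggr1 500g
    vfiler create cust1 -i 192.168.10.5 /vol/cust1_root /vol/cust1_data

    # On the DR controller: matching restricted volumes, then baseline transfers,
    # all done from vfiler0 because the vfilers never see their DR complements
    vol create cust1_root aggr1 20g
    vol restrict cust1_root
    vol create cust1_data aggr1 500g
    vol restrict cust1_data
    snapmirror initialize -S fasA:cust1_root fasA-dr:cust1_root
    snapmirror initialize -S fasA:cust1_data fasA-dr:cust1_data

    # /etc/snapmirror.conf on the DR controller: hourly updates on the hour
    fasA:cust1_root fasA-dr:cust1_root - 0 * * *
    fasA:cust1_data fasA-dr:cust1_data - 0 * * *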

Does this make sense? What is best practice?


ccolht

I was hoping for some feedback on this, so I'm popping it back up the stack. I've been doing more thinking on it as well.

Large NFS datastore with lots of customers:

Pros

  1. Easy storage management

  2. Efficient use of disk space

  3. Able to use SMVI for VM backups

  4. Fewer logical storage networks

  5. Able to use RCU (maybe not on vfilers?)

Cons

  1. Lumps all customer OS drives together
  2. Makes DR more complicated, requiring dedicated volumes/datastores per customer (so much for Pro #2)

Individual NFS vfilers:

Pros

  1. Complete separation and consolidation of customers' assets
  2. Allows vfiler DR or migration of everything in one process (see the sketch after these lists)
  3. Easier management of customer disk usage

Cons

  1. Less efficient use of disk space
  2. Limited to 64 NFS datastores per ESX host, so I might as well use LUNs since they are smaller
  3. Can't use SMVI unless the vfilers are exposed to the SMVI server
  4. Can't use RCU? Does it work with vfilers anyway?
  5. More logical storage networks (VLAN sprawl)
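For Pro #2 of the per-customer vfiler option, MultiStore's vfiler dr commands are the obvious fit. A rough sketch, again with placeholder names and syntax from memory; it assumes matching restricted volumes already exist on the DR controller, so check the MultiStore docs before relying on it:

    # Run from vfiler0 on the DR controller: sets up SnapMirror relationships for
    # the volumes the vfiler owns and replicates the vfiler configuration
    vfiler dr configure cust1@fasA

    # Check the replication state of the vfiler's volumes
    vfiler dr status cust1@fasA

    # In a disaster (or a planned migration), bring the whole vfiler up on the DR side
    vfiler dr activate cust1@fasA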

Can anybody think of more attributes for either?

There is a hybrid approach that uses a single NFS vfiler and individual volumes/datastores for each customer. This allows different SnapMirror/DR specs to match each customer's requirements, but DR is more complicated because two vfilers are involved. It has most of the advantages of the first option and only the first two cons of the second. I could also just use vfiler0 for this and use FC. Since there aren't many VMs in each datastore, LUNs make more sense than NFS, no?
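Roughly what the shared-vfiler side of the hybrid would look like, with one volume (and therefore one datastore and one SnapMirror schedule) per customer; the names, sizes, and schedules below are only illustrative:

    # On the shared OS vfiler (or vfiler0): one OS-datastore volume per customer
    vol create cust1_os aggr1 500g
    vol create cust2_os aggr1 500g

    # /etc/snapmirror.conf on the DR controller: per-customer DR specs,
    # e.g. cust1 mirrors hourly while cust2 is dev and gets no mirror at all
    fasA:cust1_os fasA-dr:cust1_os - 0 * * *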

i3_nheusel

I don't know how many tenants or VMs you expect to host, but the hybrid solution sounds best to me.  At a minimum I would want to keep customer VMDK storage separated out to manage snapshots and replication more effectively.  Also, if you will be doing storage chargeback, keeping customer data segmented makes it easier to track.

I would avoid FC LUNs in a VMware environment.  They won't be as space-efficient for snapshots or dedupe as NFS, and NFS makes it easier to grow/shrink capacity on demand and supports more guests per datastore.
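For what it's worth, the grow/shrink and dedupe points come down to one-liners on a flexible volume. A quick sketch with a made-up volume name, syntax from memory:

    # Grow or shrink an NFS datastore volume on demand
    vol size cust1_os +100g
    vol size cust1_os -50g

    # Or let it grow itself up to a cap in fixed increments
    vol autosize cust1_os -m 750g -i 25g on

    # Turn on deduplication (A-SIS) and scan the existing data
    sis on /vol/cust1_os
    sis start -s /vol/cust1_os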

In a true SMT environment, wouldn't you have separate vSphere clusters per tenant?  That would alleviate pressure on the ESX limit of 64 datastores per host (the NetApp volume limit per controller is 500, last I checked).

ccolht

Right now I am taking the hybrid approach: each customer has a volume dedicated to their OS VMDKs served from one NFS vfiler, and each customer has a vfiler serving their iSCSI LUNs and CIFS/NFS shares for data. It's a compromise, but one I can live with for now. Only the ESX hosts can talk to the OS vfiler, so no customer can see another customer's VMDKs. There's more potential exposure for the data, since all the iSCSI networks have to pass through the ESX hosts to the customers' VMs.
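The export rules on the OS vfiler are what keep the VMDKs isolated; roughly like this, where the vfiler, host, and volume names are placeholders and the syntax is from memory:

    # Run the export inside the OS vfiler's context: each customer's OS-datastore
    # volume is exported to the ESX hosts only, never to a customer network
    vfiler run vmos exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/cust1_os
    vfiler run vmos exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/cust2_os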

We don't have a lot of client VMs yet, but we are hoping for hundreds! We are following the reference design for FlexPod, which calls for one vCenter server for them all. Using vShield and the Cisco Nexus 1000V, we can isolate them. In fact, if you get the config wrong, the VMs can't talk to anybody.

I can add each customer's volumes to a group in Operations Manager and get reports on space usage even when they are scattered across vfilers, but right now we are not charging specifically for the OS space. The standard VM includes 40 GB, and the odd conversions and migrations have averaged around that. We sell space in 1 TB increments for data, which comes from the customer's vfiler, so that is easy to manage... sort of. I wish there were a way to set up quotas on a group of volumes. I could give every customer their own aggregate... now where did I put all those 9 GB drives?
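The grouping I mentioned is just Operations Manager groups driven from the DFM CLI; something along these lines, where the group name, the volume identifiers, and the exact command forms are assumptions to double-check against your DFM version:

    # One group per customer, containing their volumes wherever they live
    dfm group create cust1
    dfm group add cust1 fasA:/cust1_os
    dfm group add cust1 fasA:/cust1_data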
