A great way to audit your configuration is to use the VMware vSphere plug-in, the NetApp VSC. It will audit and correct your current settings and ensure they are set to best practices (as defined in TR-3749). We make the tools to make your life easier. Vaughn Stewart Director, NetApp
Eric, It's easiest to start with what everyone is familiar with: cloning from a VMware template within vCenter using native VMware functionality. With this method the VM home directory is created, the VMX & VMDK template files are copied, and the result is registered as a VM. The copy offload VAAI capability assists by offloading the copy from the host to the array (which might actually be array to host to array).

When deploying a VM with the provisioning and cloning capability of the VSC2 (aka the RCU), NFS & VMFS are a bit different. With VMFS the VM is cloned using VMware's tools and the procedure is followed by a NetApp dedupe operation. This process for a single VM results in automated storage savings. It's a tad better when deploying VMs in bulk, like with VDI. In this situation the datastore is built and then cloned, resulting in mass VM cloning. VAAI applies here as it does without the NetApp plug-in.

With NFS the VM is cloned using NetApp's single file clone. The operation takes about 2 seconds to complete, the I/O is offloaded from the host, and the VM is storage efficient as the cloning results in a pre-deduplicated VM. I should mention that if the template or source VM is not on the NFS datastore, there is a process where the source VM is first cloned to the datastore, so the first one may take longer to complete than subsequent VMs.

There's more coming down the pipe relative to vStorage APIs for both VMFS & NFS which will continue to refine these processes. If you need more info, see these blog posts:

http://blogs.netapp.com/virtualstorageguy/2010/03/vmware-admins-are-storage-admins---vstorage-integration-part-2.html
http://blogs.netapp.com/virtualstorageguy/2009/08/vmworld-2009-storage-integration-sneak-peek.html

I hope this info helped. Vaughn
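For the curious, the NFS single file clone described above can be seen at the Data ONTAP 7-Mode CLI. This is only a hedged sketch: the volume and file paths are hypothetical, and in practice the VSC/RCU drives this for you rather than you running it by hand.

```shell
# On the controller (Data ONTAP 7-Mode) -- paths are examples only.
# Clone a template VMDK into a new VM directory within the same volume;
# the clone shares blocks with the source, so it lands pre-deduplicated.
clone start /vol/nfs_ds1/templates/win2k8-flat.vmdk /vol/nfs_ds1/vm01/vm01-flat.vmdk

# Check progress of running clone operations (typically completes in seconds)
clone status
```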
Yes - please use VMs deployed via RCU (FlexClone) for production use cases. You should understand that Data ONTAP is a pointer-based storage array. Snapshot backups, file clones, & data deduplication are all based on the same technology: pointers for backup, zero-cost provisioning, and storage efficiencies over the life cycle of a data set. If you have any additional thoughts around the use of RCU, you will find loads of discussion and comments here: http://blogs.netapp.com/virtualstorageguy/ Cheers, Vaughn
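As a sketch of the deduplication piece of that pointer-based picture, here is roughly what enabling dedupe on a datastore volume looks like in Data ONTAP 7-Mode (the volume name is an example, not from the thread):

```shell
# Data ONTAP 7-Mode -- volume name is an example.
# Enable deduplication (A-SIS) on the datastore volume
sis on /vol/vmware_ds1

# Start a dedupe pass; -s scans existing data, not just new writes
sis start -s /vol/vmware_ds1

# Report space savings on the volume
df -s /vol/vmware_ds1
```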
Dan, Below are my replies to your questions...

1. I believe you may be reading into the document a bit. Unless explicitly stated, one should be able to proceed using default array configuration settings. Please understand it is easier to write a document that covers what to change than one that covers every option which we do not change.
2. Use the default volume settings unless the provisioning tool, in your case SnapDrive, states to modify the settings.
3. This question is best answered with consideration of the goal of the implementation. In the physical realm many like a single LUN in a single FlexVol. With RDMs in VMware there are advantages and disadvantages to deploying in this manner.
4. vswap is a single datastore connected to by all of the nodes in a cluster; it is a central cluster resource.
5. Please do not deploy VMs across both nodes of the controller. As for pagefile datastores, one is recommended for every production datastore. See TR-3428 or TR-3749 for these details.
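To make point 3 concrete, the single-LUN-in-a-single-FlexVol layout looks something like the following at the Data ONTAP 7-Mode CLI. Treat this as an illustrative sketch only: the aggregate, volume, igroup names, sizes, and IQN are all made-up examples.

```shell
# Data ONTAP 7-Mode -- names, sizes, and IQN are examples.
# One FlexVol per LUN, as described above
vol create rdm_vol1 aggr0 120g

# Create a VMware-type LUN inside that volume
lun create -s 100g -t vmware /vol/rdm_vol1/rdm1

# Group the ESX initiator(s) and map the LUN to them
igroup create -i -t vmware esx_cluster iqn.1998-01.com.vmware:esx01
lun map /vol/rdm_vol1/rdm1 esx_cluster
```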
Daniel, TR-3749v2 is released: http://media.netapp.com/documents/tr-3749.pdf Many of your questions are answered within. If you deploy a thin LUN, I'd suggest it go into a thin FlexVol, and VMFS LUNs should have 0 fractional reserve (per the TR). Snapshot reserves should be set to 0. No other volume changes are required, outside of no_atime_update with NFS, autogrow, and auto snapshot delete. All of these data points are in the TR; as for your other questions, they are not relevant, and thus they are not addressed in the TR. BTW - I'm not a fan of thin FlexVols for NFS datastores, but that's your call. Vaughn
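The volume options listed above map to a handful of 7-Mode commands. A hedged sketch follows; the volume name and autogrow sizes are examples, and you should confirm the values against the TR for your environment:

```shell
# Data ONTAP 7-Mode -- volume name and sizes are examples.
vol options nfs_ds1 no_atime_update on     # skip atime updates (NFS datastores)
vol options nfs_ds1 fractional_reserve 0   # 0% fractional reserve (VMFS LUN volumes)
snap reserve nfs_ds1 0                     # snapshot reserve set to 0
vol autosize nfs_ds1 -m 600g -i 10g on     # volume autogrow, max 600g in 10g steps
snap autodelete nfs_ds1 on                 # automatic snapshot deletion
```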
"In the configuration you're suggesting I would have two isolated NICs on the FAS node, each with its own IP in the same subnet as the one for the 2 VMkernels, and I would then use those 2 targets on the iSCSI vSphere initiator?"

-- YES --

"But in that case, wouldn't I lose any kind of failover on each FAS node? I mean, if one of the FAS node NICs fails, I would lose all access to that target IP since the surviving node NIC wouldn't serve the failed IP..."

-- NO -- Controller failover is in Data ONTAP, and each single link has a failover partner defined within the array. Path availability in ESX is handled by the Round Robin Path Selection Policy.

"But here's the catch, in the VMware "iSCSI SAN Configuration Guide" there's the following statement: "The NetApp storage system only permits one connection for each target and each initiator. Attempts to make additional connections cause the first connection to drop. Therefore, a single HBA should not attempt to connect to multiple IP addresses associated with the same NetApp target.""

-- This doc needs clarification -- Each NetApp IP address is a target, and as such this is how we support multiple TCP sessions with iSCSI. See more at:

http://blogs.netapp.com/virtualstorageguy/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html
http://media.netapp.com/documents/tr-3749.pdf

Cheers, Vaughn Stewart NetApp & vExpert
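Setting the Round Robin Path Selection Policy mentioned above can be done from the ESX/ESXi 4.x command line. The device identifier below is a made-up example; substitute the NAA ID of your own NetApp LUN:

```shell
# ESX/ESXi 4.x service console / vMA -- the naa. device ID is an example.
# Set the Round Robin path selection policy on a NetApp LUN
esxcli nmp device setpolicy --device naa.60a98000486e2f34 --psp VMW_PSP_RR

# Verify the active policy and the paths behind the device
esxcli nmp device list --device naa.60a98000486e2f34
```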
I need a bit more info... If you are only doing vSphere iSCSI on these arrays there is a simple solution: enable multiple TCP sessions for iSCSI in your ESX/ESXi hosts. The NetApp links are just standard, non-EtherChanneled links (aka no VIFs). The vSphere NMP will handle I/O load balancing and path resiliency via the Round Robin PSP. The key here is whether you need access to the FAS from your public (non-storage) network. If so, you will need to either add ports for management or other access, or allow your production network to route into your storage network. Cheers, Vaughn
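Enabling multiple TCP sessions on the host side is done by binding more than one VMkernel port to the software iSCSI initiator. A sketch for ESX/ESXi 4.x follows; the vmk and vmhba names are examples and will differ on your hosts:

```shell
# ESX/ESXi 4.x -- vmk/vmhba names are examples.
# Bind two VMkernel ports to the software iSCSI adapter so each
# target IP gets its own TCP session
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Confirm both VMkernel ports are bound to the adapter
esxcli swiscsi nic list -d vmhba33
```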
If you plan to use iSCSI with vSphere I would recommend you enable support for multiple TCP sessions. In this mode all links from the ESX/ESXi hosts will have path resiliency handled by the Round Robin PSP. The storage side will be dictated by the capabilities of your switches. First determine whether your switch provides a means of 'Multi-Switch Link Aggregation', such as the Nexus Virtual Port Channels, the Catalyst 3750 Cross-Stack EtherChannel, or the Catalyst 6500 w/ VSS 1440 Multi-Chassis EtherChannel, or whether you have traditional (aka 'dumb') Ethernet switches. Once you have identified your switching capabilities you can implement EtherChannel (or VIFs) on the NetApp. I'd suggest LACP if your switch supports it. With 'Multi-Switch Link Aggregation' you create a single LACP channel from storage to multiple ports across the switches. With traditional Ethernet switches you will create single-mode (active/passive) VIFs across the switches. More info is available in TR-3749 (vSphere) and TR-3428 (VI3). Note: we are in the middle of a major rewrite of TR-3749 which will be available on January 26th, 2010 (as a part of our press release). Cheers, Vaughn Stewart
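On the NetApp side, the VIF choices above look roughly like this in Data ONTAP 7-Mode. The interface names, VIF name, and IP address are illustrative examples only; use what fits your environment:

```shell
# Data ONTAP 7-Mode -- interface names, VIF name, and IP are examples.
# LACP VIF across two ports; for a cross-switch channel the switches
# must support multi-switch link aggregation (vPC, Cross-Stack, VSS)
vif create lacp vif0 -b ip e0a e0b
ifconfig vif0 192.168.10.10 netmask 255.255.255.0 up

# With traditional switches, use a single-mode (active/passive) VIF instead:
# vif create single vif0 e0a e0b
```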