Uncovering NetApp SnapManager for Hyper-V 2.1 for Windows

by vinith (Former NetApp Employee) on 2014-12-02 06:59 AM

 

This blog post provides insight into NetApp’s SnapManager for Hyper-V and highlights key features in our newly released SnapManager for Hyper-V 2.1.

 

SnapManager for Hyper-V provides a solution for data protection, backup and recovery for Microsoft® Hyper-V virtual machines (VMs) running on Data ONTAP® with NetApp storage systems. You can perform application-consistent and crash-consistent dataset backups according to protection policies set by your backup administrator. You can also restore VMs from these backups. Reporting features enable you to monitor the status of and get detailed information about your backup and restore jobs.
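
To make the dataset and policy concepts concrete, the sketch below models, in plain Python, how a dataset, its protection policy, and the two backup consistency levels described above might relate. It is only an illustration under assumed names; it is not SMHV’s actual API or object model.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional


    class ConsistencyLevel(Enum):
        """The two backup types described above (names are hypothetical)."""
        APPLICATION_CONSISTENT = "application-consistent"  # guest is quiesced before the Snapshot copy
        CRASH_CONSISTENT = "crash-consistent"               # point-in-time copy without guest quiesce


    @dataclass
    class ProtectionPolicy:
        """A backup policy that a backup administrator attaches to a dataset."""
        name: str
        consistency: ConsistencyLevel
        retention_count: int   # how many backups to keep
        schedule: str          # e.g. "daily at 02:00"


    @dataclass
    class Dataset:
        """A group of Hyper-V VMs that are protected together under one policy."""
        name: str
        vms: List[str] = field(default_factory=list)
        policy: Optional[ProtectionPolicy] = None

        def backup(self) -> str:
            if self.policy is None:
                raise ValueError(f"Dataset '{self.name}' has no protection policy")
            return (f"Creating {self.policy.consistency.value} backup of "
                    f"{len(self.vms)} VM(s) in dataset '{self.name}'")


    # Example usage with made-up VM names
    nightly = ProtectionPolicy("nightly", ConsistencyLevel.APPLICATION_CONSISTENT,
                               retention_count=7, schedule="daily at 02:00")
    finance = Dataset("finance-vms", vms=["sql-vm-01", "exchange-vm-02"], policy=nightly)
    print(finance.backup())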

 

NetApp SnapManager for Hyper-V (SMHV) addresses the resource utilization problem typically found in virtual environments by leveraging NetApp Snapshot technology. Using this technology reduces the CPU and network load on the host platforms and drastically reduces the time required for backups to complete. SMHV can be quickly installed and configured for use in Hyper-V environments, saving valuable time during backups, allowing quick and efficient restorations, and reducing administrative overhead.

 

Backups, restores, and disaster recovery (DR) can place a demanding overhead on Hyper-V virtual infrastructure. NetApp SMHV simplifies and automates the backup process by leveraging the underlying NetApp Snapshot and SnapRestore® technologies, enabling rapid, granular restore and recovery of VMs and their associated datasets.

 

 

New features in SMHV 2.1  

  • Support for clustered Data ONTAP 8.3 & MetroCluster

Clustered Data ONTAP 8.3 enhances availability and performance for business-critical applications; simplifies upgrades, deployments, and transitions; increases efficiency for entry systems; provides more granular data management for server virtualization; delivers greater SAN scaling and manageability; offers CIFS and NFS enhancements; and improves the performance of flash storage.

 

MetroCluster is the solution that provides continuous availability for critical applications that cannot afford planned or unplanned downtime.

 

 

[Image: mcc.png]

 

 

With highly virtualized infrastructures running hundreds of mission-critical applications, the enterprise would likely be severely affected if all of those applications became unavailable simultaneously. In that case, it is the infrastructure that is mission critical, requiring zero data loss and recovery within minutes rather than hours.

 

MetroCluster addresses the need for continuous data availability beyond the data center (or beyond the cluster), enabling business continuity and protecting you from events that are beyond the control of the IT organization, such as natural disasters (fires, floods, hurricanes) and site-impacting failures (network outages, power loss, unrecoverable corruption).

 

With MetroCluster, your organization remains up and running by leveraging the synchronously replicated copy at the secondary site. MetroCluster consists of two Data ONTAP clusters that synchronously replicate to each other.

 

The minimum configuration for MetroCluster is a disaster recovery group that consists of one HA pair at each site, for a total of four nodes (controllers). Each cluster is an active-active HA pair, so all nodes serve clients at all times. This solution can stretch across city-wide or metro-wide deployments up to a maximum distance of 200 km. This capability enables a level of availability that goes beyond the high-availability features of a local cluster, which makes MetroCluster a highly versatile solution.

 

MetroCluster is an active-active solution, meaning that all nodes in each cluster actively serve data to applications, and data can be read from both the primary and secondary clusters, a feature that can also improve read performance.
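
As a rough illustration of the configuration rules quoted above (one HA pair per site, four nodes in total, and a maximum site separation of 200 km), the following Python sketch checks a proposed minimum DR-group layout. The names and checks are hypothetical and only restate the figures from the text; this is not a NetApp tool.

    from dataclasses import dataclass
    from typing import List

    MAX_SITE_SEPARATION_KM = 200   # maximum supported distance between sites (per the text)
    NODES_PER_HA_PAIR = 2          # each site hosts one active-active HA pair


    @dataclass
    class Site:
        name: str
        nodes: List[str]           # controller (node) names at this site


    def validate_minimum_dr_group(site_a: Site, site_b: Site, distance_km: float) -> List[str]:
        """Return a list of problems with a proposed minimum MetroCluster DR group."""
        problems = []
        for site in (site_a, site_b):
            if len(site.nodes) != NODES_PER_HA_PAIR:
                problems.append(f"Site '{site.name}' must contain exactly one HA pair "
                                f"({NODES_PER_HA_PAIR} nodes); found {len(site.nodes)}")
        if distance_km > MAX_SITE_SEPARATION_KM:
            problems.append(f"Site separation of {distance_km} km exceeds the "
                            f"{MAX_SITE_SEPARATION_KM} km maximum")
        return problems


    # Example: a valid four-node configuration spanning 80 km
    issues = validate_minimum_dr_group(Site("site-east", ["node-a1", "node-a2"]),
                                       Site("site-west", ["node-b1", "node-b2"]),
                                       distance_km=80)
    print(issues or "Configuration meets the minimum MetroCluster requirements")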

 

 

Some of the advantages of using a MetroCluster solution:

 

  • Zero data loss—never lose a transaction.
  • Zero planned and unplanned downtime—whether caused by an IT event or by an external event, such as a hurricane, flood, or loss of communications.
  • Set-it-once simplicity—with no external devices or host-based configuration.
  • Zero change management—once it’s set up, all changes on one side are automatically replicated on the other side.
  • 50% lower cost and complexity compared to other solutions—including host-based ones. This includes lower software acquisition cost and cost of ownership of the solution due to its easy-to-manage architecture—again, there are no external devices, capacity-based licenses, or ongoing configuration management. What makes the cost-efficiency story even stronger is the added benefit of storage efficiency and the integration with server virtualization.
  • Seamless integration with storage efficiency, backup (SnapVault), DR (SnapMirror), NDO, and non-FAS storage (via FlexArray storage virtualization software)—since they are all built in to the Data ONTAP operating system.
  • Supports both SAN and NAS—simultaneously, which is noteworthy since most competitive solutions support only SAN protocols.
  • Hypervisor and application integration—seamless integration with multiple hypervisor solutions such as Hyper-V and VMware.

 

 

  • Support for the validation of virtual machines during the creation or modification of a dataset

 

 

The Validate Dataset check box is selected by default. SnapManager for Hyper-V checks for configuration errors in all VMs during the creation or modification of a dataset. Once the VMs in the dataset are validated, the following message appears on the validation page:

 

[Image: dataset validation success message]

 

If an invalid configuration is detected for a virtual machine that is part of the dataset, the following error is displayed:

 

[Image: dataset validation error message]
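
To show the shape of that workflow, the sketch below loosely mimics dataset validation in plain Python: each VM in a proposed dataset is checked and any configuration problems are reported. The individual checks and field names are hypothetical examples, not SMHV’s actual validation rules.

    from typing import Dict, List


    def validate_dataset(vm_configs: Dict[str, dict]) -> List[str]:
        """Check each VM's configuration and return a list of error messages.

        vm_configs maps a VM name to a simplified, hypothetical configuration dict.
        """
        errors = []
        for vm_name, config in vm_configs.items():
            # Illustrative checks only; the real rules are covered in the SMHV 2.1
            # documentation referenced below.
            if not config.get("config_file_on_netapp_storage", False):
                errors.append(f"{vm_name}: VM configuration file is not on NetApp storage")
            if not config.get("disks_on_netapp_storage", False):
                errors.append(f"{vm_name}: one or more virtual disks are not on NetApp storage")
        return errors


    # Example usage with made-up VM names
    dataset_vms = {
        "sql-vm-01": {"config_file_on_netapp_storage": True, "disks_on_netapp_storage": True},
        "web-vm-02": {"config_file_on_netapp_storage": True, "disks_on_netapp_storage": False},
    }
    problems = validate_dataset(dataset_vms)
    print(problems or "All VMs in the dataset passed validation")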

 

Refer to the Best Practices Guide for SMHV 2.1 for the integrated architecture and implementation of NetApp® SnapManager for Hyper-V.

 

I hope that you have enjoyed this blog entry and have found this information helpful. 

 
