Tech ONTAP Articles

Virtualizing Business-Critical Apps with Data ONTAP 8 Cluster-Mode


Vaughn Stewart
Director and Virtualization Evangelist

It’s always informative to watch trends emerge within the NetApp customer base. Over the past 18 months or so we have witnessed a significant increase in the number of initiatives to virtualize business-critical applications. An overarching goal of these initiatives is to extend the benefits of the private cloud to these applications in order to improve business agility while enhancing application availability. Often the lynchpin of the overall success of these efforts is the choice one makes at the infrastructure layer.

One of the most overlooked advancements made possible by the virtual infrastructure is its ability to help you standardize data center operations. By abstracting the consumption of compute resources from the hardware layer, you receive the benefit of a software-driven data center, including the ability to dynamically assign more computing resources to business-critical applications during peak workloads and the ability to perform workflow automation and infrastructure orchestration.

Traditional storage infrastructures are unable to deliver this level of resource agility. With the introduction of the Data ONTAP® 8 architecture operating in Cluster-Mode, NetApp has made a tremendous advance in bringing a software-defined storage infrastructure to reality. This allows you to standardize server and storage operations and dynamically allocate resources, capabilities that truly benefit virtualized business-critical applications.

What Is a Business-Critical Application?

So what constitutes a business-critical application? Most would likely include the usual suspects: Microsoft® Exchange, Microsoft® SQL Server, Microsoft SharePoint®, Oracle® Database, Oracle applications, and SAP®. More broadly, a business-critical application is any application you rely on to the extent that a loss of service has a measurable impact on productivity, customer satisfaction, and revenue. By this definition, business-critical applications exist in many forms. Some are shrink-wrapped, while others are less well known. Many are multitier, some are industry or market specific, and many are heavily customized.

Although we can’t track the rate at which every business-critical application is virtualized, we do have some data points for some of the "off-the-shelf" enterprise apps.

Figure 1) Percent of business-critical application instances already running on VMware® (Source: VMware customer survey, January 2010 and June 2011).

Requirements for Business-Critical Apps

When it comes to virtualizing business-critical applications, there are three "must-have" requirements:

  • Availability must be equal to or greater than that of the same application running in a physical environment.
  • Performance must also be greater than or equal to that seen in the physical space. A common concern is that the virtualization layer may add performance overhead; if it does, you need to know how to compensate.
  • Manageability of operational functions such as application backup and restore, disaster recovery, and data migration must scale to operate on massive amounts of data.

These requirements pertain equally to both servers and storage. Today’s hypervisor platforms are more than capable of meeting the resource requirements of the most demanding of application workloads. They also provide high availability for applications that either don’t or can’t adequately provide it natively.

Hypervisors also provide data management mechanisms to nondisruptively migrate datasets in the event of resource constraints or infrastructure refreshes, but this approach to data management is rather reactive in nature. When a nonoptimal condition occurs—like a storage performance issue—you may need to "weather the storm" until the data migration completes and resources are available to address the change in workload. These mechanisms may also introduce a number of other "downstream" issues that impact important areas like replication, data restores, storage savings, and more.

Cluster-Mode Addresses Needs of Business-Critical Apps

With the release of Data ONTAP 8 Cluster-Mode, NetApp introduced a revolutionary storage platform, one that is designed to support the requirements of any virtualized workload deployed in a private cloud. Cluster-Mode transforms the traditional two-node NetApp storage cluster into a storage infrastructure of up to 24 nodes. This clustering capability provides a single point of storage management while enabling massive capacity and performance scaling.

Data ONTAP is the first storage platform that abstracts data access and management capabilities from the hardware, creating a storage platform with hypervisor-like capabilities that can be controlled through software interfaces without disruption. NetApp accomplishes this through a storage profile mechanism we call a Vserver. This lets you dynamically assign storage resources on demand, without downtime, reconfiguration, or any of the negative effects that result when you rely on "brute force" copy mechanisms for data management.
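
To make the idea concrete, here is a minimal conceptual sketch in Python. It is not the Data ONTAP API; the class and method names are invented for illustration. The point it demonstrates is that clients address a software profile (the Vserver) and a volume's client-facing path, so the data can be rehosted onto different physical hardware without the access point changing.

    # Conceptual sketch only: models a storage profile (Vserver) that decouples
    # a volume's access point from its physical location. Names are illustrative,
    # not the Data ONTAP API.

    class Aggregate:
        """A physical pool of disks owned by one cluster node."""
        def __init__(self, name, node):
            self.name = name
            self.node = node

    class Volume:
        """A logical container of data; clients see only its junction path."""
        def __init__(self, name, junction_path, aggregate):
            self.name = name
            self.junction_path = junction_path   # what clients mount
            self.aggregate = aggregate           # where the data physically lives

    class Vserver:
        """The software access point: clients address the Vserver, never a node."""
        def __init__(self, name):
            self.name = name
            self.volumes = {}

        def add_volume(self, volume):
            self.volumes[volume.junction_path] = volume

        def move_volume(self, junction_path, new_aggregate):
            # The physical location changes; the client-visible path does not,
            # which is why the move can be transparent to applications.
            self.volumes[junction_path].aggregate = new_aggregate

    if __name__ == "__main__":
        aggr1 = Aggregate("aggr1", node="node-01")
        aggr2 = Aggregate("aggr2", node="node-07")
        vs = Vserver("vs_oracle")
        vs.add_volume(Volume("ora_data", "/ora_data", aggr1))

        before = vs.volumes["/ora_data"].aggregate.node
        vs.move_volume("/ora_data", aggr2)          # rebalance onto another node
        after = vs.volumes["/ora_data"].aggregate.node
        print(f"Clients still mount vs_oracle:/ora_data; data moved {before} -> {after}")

The same decoupling is what makes the nondisruptive operations described below possible: the thing applications connect to never changes, even as the hardware underneath it does.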

Cluster-Mode takes all the capabilities that NetApp is well known for—application awareness; Snapshot-based backups and replicas; the industry’s broadest set of storage efficiency technologies, including deduplication, thin provisioning, compression, and space-efficient clones; and proven availability and reliability—and extends them into a new paradigm for delivering storage services.

Cluster-Mode transforms your storage infrastructure into one that is:

  • Immortal. Data remains available and accessible through hardware events such as maintenance, data migration, upgrades, and technology refreshes.
  • Infinite. Cluster storage resources can scale from a few terabytes to 50 petabytes and from thousands to well over a million IOPS, organized as multiple namespaces or a single logical namespace.
  • Intelligent. Advanced data management, designed especially for managing data at scale, delivers a new set of nondisruptive capabilities to streamline operations.

These capabilities work in conjunction with the capabilities of leading hypervisors to support your efforts to virtualize business-critical applications.

Immortal Infrastructure

Business-critical applications naturally have a 24/7 service-level requirement. All the off-the-shelf applications I mentioned above have high-availability options and mechanisms built in, in large part because high availability has been missing from the infrastructure they run on. As a first line of defense against downtime, investigate when it's appropriate to deploy these built-in capabilities and use them where they are cost effective.

In addition, your infrastructure needs to be highly available. For data storage this means more than just redundant I/O paths and redundant hardware components. NetApp provides a number of key technologies that protect and extend availability. Building on a foundation based on the proven reliability of NetApp® HA pairs and RAID-DP® technology, these technologies protect your data and allow you to keep important datasets online and available virtually forever.

Nondisruptive operations. Because a storage cluster is composed of multiple nodes and the access point is a software profile, you can nondisruptively move massive, multi-VM workloads between storage controllers and/or disk drive types within the cluster. This means no more operational outages for hardware maintenance, asset retirement, hardware refreshes, and so on.

Replication integrated with business-critical apps. NetApp replication technology provides deep integration with Exchange, SQL Server, SharePoint, Oracle, and SAP, so that data replicated to a disaster recovery site is in an application-consistent state and services can be restored rapidly.

Infinite Capacity and Performance

The capacity needs of business-critical applications can increase rapidly while performance requirements may vary dramatically between peak and nonpeak periods. Cluster-Mode gives you the tools you need to meet your capacity and performance requirements without wasting resources or leaving expensive hardware sitting idle.

Dynamic scaling. Because of the flexibility created by the storage profile abstraction, you can dynamically assign resources to meet the requirements of each particular workload and reallocate those resources elsewhere when they are no longer needed.

Being able to dynamically change the storage resources (capacity and IOPS) allocated to a business-critical application makes it easier to grow with an application through its lifecycle. You can move an application from development and test into production, through peak periods, and ultimately into retirement.

The ability to burst and contract storage resources on demand makes new ways of working possible. Reassigning compute and storage resources in unison, without reconfiguring the environment, creates a more dynamic and more efficient cloud infrastructure.
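
As a rough illustration of the burst-and-contract idea, the following Python sketch models a shared pool that lends capacity and IOPS headroom to a workload during a peak and reclaims it afterward. The numbers, names, and pool model are hypothetical, chosen only to show the pattern.

    # Illustrative only: a toy model of lending storage resources to a workload
    # for a peak period and returning them to the shared pool afterward.

    class ResourcePool:
        def __init__(self, capacity_tb, iops):
            self.free_capacity_tb = capacity_tb
            self.free_iops = iops

        def grant(self, capacity_tb, iops):
            """Reserve resources for a workload; fail if the pool can't cover it."""
            if capacity_tb > self.free_capacity_tb or iops > self.free_iops:
                raise RuntimeError("pool exhausted; add a node or rebalance")
            self.free_capacity_tb -= capacity_tb
            self.free_iops -= iops

        def reclaim(self, capacity_tb, iops):
            """Return resources to the pool when the peak passes."""
            self.free_capacity_tb += capacity_tb
            self.free_iops += iops

    if __name__ == "__main__":
        pool = ResourcePool(capacity_tb=500, iops=200_000)

        # Quarter-end close: the ERP database needs extra headroom.
        pool.grant(capacity_tb=20, iops=50_000)
        print(f"During peak: {pool.free_capacity_tb} TB, {pool.free_iops} IOPS free")

        # Peak over: hand the headroom back for other tenants to use.
        pool.reclaim(capacity_tb=20, iops=50_000)
        print(f"After peak:  {pool.free_capacity_tb} TB, {pool.free_iops} IOPS free")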

Instant adaptation to workload changes. Cloud environments need to adapt to unforeseen changes in application workloads. NetApp developed Virtual Storage Tiering (VST) to deliver better dynamic response for such events. Flash Cache is a modularly expandable controller cache for hot, random read data. Flash Pool combines SSDs with spinning disks, resulting in a hybrid FAS array well suited to random reads and writes. Flash Accel extends the value of VST into the vSphere® hypervisor to deliver the fastest I/O for latency-sensitive applications.

Together these technologies create an on-demand performance tier that protects the responsiveness of business-critical applications in the face of unexpected spikes in activity.
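
The caching behavior can be illustrated with a toy model: hot, randomly read blocks are promoted into a small flash tier so that repeat reads avoid spinning disk. This is a simplified sketch of the general idea only, not how Flash Cache or Flash Pool is actually implemented.

    # Simplified illustration of a hot-data read cache in front of slower disk.
    # Not an implementation of Flash Cache or Flash Pool; just the general idea
    # that repeated random reads of hot blocks are served from flash.

    from collections import OrderedDict
    import random

    class FlashReadCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.cache = OrderedDict()   # block_id -> data, in LRU order
            self.hits = 0
            self.misses = 0

        def read(self, block_id, read_from_disk):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)     # refresh LRU position
                self.hits += 1
                return self.cache[block_id]
            self.misses += 1
            data = read_from_disk(block_id)          # slow path: spinning disk
            self.cache[block_id] = data              # promote the hot block
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)       # evict the coldest block
            return data

    if __name__ == "__main__":
        disk = lambda block_id: f"data-{block_id}"   # stand-in for a disk read
        cache = FlashReadCache(capacity_blocks=100)

        # Skewed workload: most reads hit a small hot set, as in many databases.
        hot, cold = range(100), range(100, 10_000)
        for _ in range(10_000):
            block = random.choice(hot) if random.random() < 0.8 else random.choice(cold)
            cache.read(block, disk)

        print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")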

Intelligent Management

The unparalleled information mobility provided by Cluster-Mode makes your storage infrastructure transparent. Data is free to move about the cluster based on user demands—business critical or not. As the resources consumed by an application increase, critical management capabilities such as migration, backup, and replication scale with it.

In a hypervisor data migration (unassisted by storage), data movement happens at the server level. Data is read block by block from the original location to the server and then written to the new location. By comparison, data movement in a NetApp cluster proceeds at storage speeds across a dedicated, high-speed cluster interconnect. As your cluster grows, the available bandwidth to support data movement activities grows with it. The results speak for themselves.
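
A back-of-envelope calculation shows why this matters. The figures below are purely hypothetical: the assumption is that a host-mediated migration pulls every block up to the server and pushes it back out, so read and write traffic share the host's link, while a storage-side move crosses a dedicated cluster interconnect once.

    # Back-of-envelope comparison; all figures below are hypothetical examples.

    def hours(dataset_tb, effective_gbits_per_sec):
        """Time to move a dataset at a given effective throughput."""
        bits = dataset_tb * 8 * 10**12
        return bits / (effective_gbits_per_sec * 10**9) / 3600

    dataset_tb = 10

    # Host-mediated migration: blocks are read up to the server and written back
    # out, so a single 10GbE host link is shared between the read and write legs,
    # roughly halving effective throughput (assumption for illustration).
    host_migration_h = hours(dataset_tb, effective_gbits_per_sec=5)

    # Storage-side move: data crosses a dedicated 10GbE cluster interconnect once,
    # without competing with application traffic on the host.
    cluster_move_h = hours(dataset_tb, effective_gbits_per_sec=10)

    print(f"Host-mediated migration: ~{host_migration_h:.1f} h")
    print(f"Storage-side move:       ~{cluster_move_h:.1f} h")

And because each node added to the cluster brings its own interconnect bandwidth, the storage-side number improves as the cluster grows rather than degrading as more workloads share a host link.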

At VMworld 2012, NetApp founder Dave Hitz was joined by Luke Norris of PeakColo, who explained how Cluster-Mode gives the company the agility to move tenant VMs en masse. In one case, a PeakColo customer with a 30-VM Oracle environment suspected it was having a storage performance problem.

PeakColo transparently and nearly instantly migrated the workload from SATA to SSD. As a result, the customer was able to determine that the performance issue was in its code, not in the storage layer. Ruling out storage so quickly allowed the customer to shift its focus back to its application developers.

On-disk recovery points. With the impact that massive data growth is having on business-critical applications, just moving to faster and faster methods of data transfer to meet backup windows is no longer the answer. The game is to have recovery points local to your primary disk and to let the storage array automatically replicate data to another storage target. Cluster-Mode provides this using NetApp Snapshot™ copies and SnapMirror® replication technology, providing an integrated data protection approach built for scale and based on proven storage efficiencies. Today’s 1TB VM can grow to become a 10TB VM without changing the backup window or extending the time needed for replication. The NetApp SnapManager® suite of products integrates these technologies closely with Exchange, SQL Server, SharePoint, Oracle, and SAP so backup and replication are not only fast, they're fully application consistent.
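
The reason the backup window doesn't stretch with dataset size is that a Snapshot copy preserves pointers to existing blocks rather than copying data, and replication then ships only the blocks that changed between two Snapshot copies. The Python sketch below is a deliberately simplified model of that idea, not of how WAFL or SnapMirror works internally.

    # Simplified model of pointer-based snapshots and changed-block replication.
    # An illustration of the concept, not the actual on-disk implementation.

    class Volume:
        def __init__(self):
            self.active = {}        # block_id -> content of the live file system
            self.snapshots = {}     # snapshot name -> frozen view (pointers)

        def write(self, block_id, content):
            self.active[block_id] = content

        def snapshot(self, name):
            # Preserving the current block map is cheap and near-instant: no data
            # is copied, so the "backup window" doesn't grow with volume size.
            self.snapshots[name] = dict(self.active)

        def changed_blocks(self, old_snap, new_snap):
            """Blocks to replicate: only those added or modified between snapshots."""
            old, new = self.snapshots[old_snap], self.snapshots[new_snap]
            return {b: c for b, c in new.items() if old.get(b) != c}

    if __name__ == "__main__":
        vol = Volume()
        for block in range(1_000):
            vol.write(block, f"v1-{block}")
        vol.snapshot("hourly.0")

        # Overwrite a small fraction of the volume, as a typical workload would.
        for block in range(10):
            vol.write(block, f"v2-{block}")
        vol.snapshot("hourly.1")

        delta = vol.changed_blocks("hourly.0", "hourly.1")
        print(f"blocks to replicate: {len(delta)} of {len(vol.active)}")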

Transparent growth. The Cluster-Mode storage platform expands, without any changes to hosts, by simply adding additional nodes to the cluster. Expansion does not require the new nodes to be the same hardware model as existing nodes. You can mix the latest NetApp platforms with the hardware already in your cluster and retire older platforms without ever taking data offline. The ability to make these changes without disruption is built into Cluster-Mode.

Identify and correct misaligned virtual machines. A common issue that plagues all storage platforms is the misalignment of partitions within VMs. The NetApp Virtual Storage Console (VSC) plug-in for VMware vCenter™ provides an optimize and migrate capability that identifies and corrects alignment problems nondisruptively.
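
Misalignment comes down to simple arithmetic: if a guest partition starts at a byte offset that is not a multiple of the storage system's block size, guest I/Os can straddle two back-end blocks and force extra work on every read and write. The quick check below is illustrative only; VSC performs the real detection and correction.

    # Check whether a guest partition's starting offset lines up with the storage
    # block size. Illustrative only; the VSC tooling does the real detection and fix.

    SECTOR_BYTES = 512
    STORAGE_BLOCK_BYTES = 4096   # typical back-end block size

    def is_aligned(start_sector):
        """A partition is aligned if its byte offset is a multiple of the block size."""
        return (start_sector * SECTOR_BYTES) % STORAGE_BLOCK_BYTES == 0

    if __name__ == "__main__":
        for start_sector in (63, 2048):   # 63: legacy default; 2048: modern default
            offset = start_sector * SECTOR_BYTES
            status = "aligned" if is_aligned(start_sector) else "MISALIGNED"
            print(f"partition at sector {start_sector} (byte offset {offset}): {status}")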

Delegate control to application owners. Cluster-Mode gives you the ability to delegate control over some or all of the functions within each storage profile to application owners. Application owners gain agility and are in a better position to address day-to-day demands quickly.

Conclusion

A virtual infrastructure for business-critical applications requires a higher level of availability, performance, and manageability. The combination of modern hypervisors and Data ONTAP 8 Cluster-Mode allows you to standardize your approach to operations and create a software-defined data center that is reliable, dynamically scalable, easier to manage, and more efficient than a traditional infrastructure. As a result, an application environment virtualized with this technology exceeds the availability of physical environments while providing the same or better performance (without leaving valuable resources sitting idle in off-peak periods) and a level of manageability that is simply not possible in nonvirtualized environments.

The revolutionary design of NetApp Data ONTAP 8 Cluster-Mode provides benefits that parallel those of a server hypervisor to address the needs of advanced virtualization and cloud computing. The ability to abstract data management and access from the hardware provides unparalleled performance, capacity, and agility for your business-critical applications.

Vaughn is a director of Cloud Computing and the "virtualization evangelist" at NetApp. He represents NetApp on the Open Virtualization Alliance, publishes "The Virtual Storage Guy" blog, and is coauthor of the recently published book Virtualization Changes Everything. He has a patent pending, is recognized by VMware as a vExpert, and holds several industry certifications.

Replies

Great read! I love how Vaughn is able to take something technical and explain it in a way that is easy to understand and highlights the business benefits. Every NetApp customer should consider Clustered ONTAP as a way to support their critical applications.
