For MSBs, Eliminating Downtime is Now More Critical Than Ever

Posted by Mike McNamara


Across businesses and organizations of all types and sizes, mitigating the disruption of downtime has become an increasingly urgent imperative. But for MSBs and smaller enterprise organizations, downtime can be particularly painful. Tasked with meeting needs similar to those of larger organizations, but with smaller staffs and tighter budgets, MSBs are finding that recovering from downtime is more challenging, more time-consuming and more costly than ever before.


Planned downtime accounts for nearly 90 percent of all outages, while unplanned downtime is responsible for only 10 percent. Although the 10 percent of downtime that results from a natural disaster or unforeseen catastrophe can be more expensive per incident, the 90 percent of planned downtime is far more disruptive, with greater impact on business cycles and operations.


So what is the alternative if a component or storage controller has failed, a network interface card or chip needs to be replaced, an operating system needs to be updated, or an older system no longer has the horsepower to contend with the increased operational requirements of a growing business?


At NetApp, we utilize clustering technology and a feature called DataMotion for Volumes. This is how it works: Say you have a cluster of a FAS3140, a FAS2220 and a FAS3240. That cluster would contain three systems, each with two controllers, for a total of six nodes. Across those six nodes, data can be moved between systems based on what each application requires.
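In clustered Data ONTAP, a move like this is driven from the cluster shell with the volume move command. The sketch below is illustrative only: the cluster, Vserver, volume, and aggregate names are made up, and exact syntax varies by ONTAP version.

```
cluster1::> volume move start -vserver vs1 -volume app_data -destination-aggregate aggr_fas3240
```

The volume stays online and serving data while its contents are copied to the destination aggregate and cut over.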


Perhaps the FAS3140 in that cluster needs to be taken offline and upgraded. Because the FAS3140 is part of the cluster, we are able to efficiently conduct a volume move, evacuating the data that was on the FAS3140 and distributing some to the FAS2220 and some to the FAS3240. We could then take the FAS3140 offline, bring in a new system and move the data back onto it. We can also take advantage of the ability to do load balancing across the systems in the cluster, moving some data non-disruptively from the FAS2220 to the FAS3240 for faster performance.
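The evacuation step above can be sketched as a pair of volume moves followed by a progress check. Again, all names here are hypothetical and the syntax is the clustered Data ONTAP cluster shell, which may differ by version:

```
cluster1::> volume move start -vserver vs1 -volume db_vol  -destination-aggregate aggr_fas2220
cluster1::> volume move start -vserver vs1 -volume web_vol -destination-aggregate aggr_fas3240
cluster1::> volume move show
```

Once volume move show reports the moves complete, the original controller's aggregates hold no active data and the system can be taken offline without interrupting clients.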


It doesn’t matter what the protocol is, what the workload is or even what storage system is being used—whether it is NetApp’s FAS2000 series up through the FAS3000 to the FAS6000. If customers have storage from other companies, we can use our V-Series in a cluster, allowing them to reuse the investment they have already made and take advantage of NetApp’s storage efficiency benefits, with the added flexibility and resiliency that a cluster brings to bear.


To the users and the organization, the entire process would be undetectable. Throughout it, they would still have full, uninterrupted access to the data—making the inefficiency and frustration of even the most carefully planned downtime a thing of the past.

Earlier this week, NetApp announced new models in its FAS3200 series with the release of the FAS3220 and FAS3250. To learn more about the products and how they help eliminate downtime, you can go to the formal announcement.