By Mike McNamara, NetApp
“Clustered storage” may not be the hottest topic in the magazines, blogs, and sites that make up today’s business technology press. But over the past four or five years, clustered, or scale-out, storage has become steadily more essential to meeting the unrelenting challenges of data growth.
As daily personal and business data usage has swelled to near-tsunami levels, clustered storage has expanded from primarily technical applications such as engineering and CAD/CAM simulation to common business applications such as Oracle® databases, SAP® business management software, and Microsoft® Exchange. Over the past 12 months, industry analysts have even begun ranking vendors of clustered, or scale-out, storage. And yet clustered storage is still not a ubiquitous part of the data growth conversation.
Maybe that’s the case because it works.
It is hard to generate a lot of heat without hotly contested debate, and there is little debate that scale-out storage is an ever-more-critical, even integral, component for organizations that rely on data to drive business results. That pretty much means all organizations these days, doesn’t it?
For those new to the topic, clustered storage is a large, highly adjustable pool of storage that can be expanded or shrunk as needed, all while appearing to end users as a single system. The pool can be divided and allocated among different users, who share the same body of storage while remaining securely partitioned from one another. A pool of this size also allows data to be moved nondisruptively within the cluster, whether to balance performance, take a system offline, or simply perform routine maintenance and upgrades. More than ever before, nondisruptive operation is making the transition from nice-to-have to need-to-have.
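For readers who think in code, the idea can be sketched as a toy model. Every name here is hypothetical, and none of the mechanics of a real storage operating system are implied; the sketch only illustrates the three properties described above: capacity from many nodes presented as one pool, tenants securely partitioned from one another, and a volume that can be relocated between nodes without changing what its tenant sees.

```python
# Illustrative toy model of a scale-out storage pool (hypothetical names,
# not a real vendor API). Sizes are in GB and purely for demonstration.

class StorageCluster:
    def __init__(self):
        self.nodes = {}      # node name -> raw capacity in GB
        self.volumes = {}    # volume name -> (tenant, node, size_gb)

    def add_node(self, name, capacity_gb):
        """Scale out: grow the single pool by adding a node."""
        self.nodes[name] = capacity_gb

    def total_capacity(self):
        """The whole cluster appears as one pool of capacity."""
        return sum(self.nodes.values())

    def used_on(self, node):
        return sum(size for (_, n, size) in self.volumes.values() if n == node)

    def allocate(self, volume, tenant, size_gb):
        """Carve a tenant volume out of whichever node has room."""
        for node, cap in self.nodes.items():
            if cap - self.used_on(node) >= size_gb:
                self.volumes[volume] = (tenant, node, size_gb)
                return node
        raise RuntimeError("pool exhausted -- time to scale out")

    def tenant_volumes(self, tenant):
        """Each tenant sees only its own volumes (secure partitioning)."""
        return [v for v, (t, _, _) in self.volumes.items() if t == tenant]

    def move_volume(self, volume, dest_node):
        """Rebalance: relocate data; the tenant-facing name never changes."""
        tenant, _, size = self.volumes[volume]
        if self.nodes[dest_node] - self.used_on(dest_node) < size:
            raise RuntimeError("destination node too full")
        self.volumes[volume] = (tenant, dest_node, size)


cluster = StorageCluster()
cluster.add_node("node1", 100)
cluster.add_node("node2", 100)
cluster.allocate("erp_vol", "finance", 40)   # placed on node1
cluster.allocate("mail_vol", "it", 30)       # node1 still has room
cluster.move_volume("erp_vol", "node2")      # rebalanced; tenant view unchanged
```

In a real cluster the move would copy data between nodes behind the scenes; the point of the sketch is that the tenant keeps addressing "erp_vol" throughout, which is what makes the operation nondisruptive.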
As year-over-year data growth reaches unprecedented levels, the proliferation of ever-larger datasets leaves IT departments struggling to keep up, much less proactively innovating to better meet the organization’s needs. Datasets are not simply growing; they are growing rapidly and consuming storage voraciously. Against a backdrop of such torrential data downpours, even simply adding new storage or upgrading existing storage can have a dramatic impact on a company’s business.
Systems taken offline for even a few hours can cut employees off from everything from e-mail to sales figures. Half a day without access to critical information can easily cause an organization to miss weekly or even quarterly business goals. And with more organizations operating globally, there are few, if any, true off-hours anymore; midnight at a home office often falls during critical operating hours in another part of the world. Because planned downtime is far more common than unplanned outages, it is increasingly the greater source of disruption.
As most CIOs readily concede, dialing back is no longer an option. With voluminous datasets the new normal, companies have no choice but to keep pace with the velocity of today’s information. Without the agility and scalability of clustered storage, businesses and consumers alike can expect longer delays in service, more time and effort spent expanding storage infrastructure, and less access to information when it is needed, all of which can and will be reflected in a company’s bottom line.
As the data-deluge juggernaut rolls on, many organizations will soon discover that clustered storage can mean the difference between controlling data and being controlled by it. Clustering continues to offer organizations a highly efficient and effective way to manage the new—and intensifying—realities of this data-deluge era.