Guest blog post by Bryan R. Cote, Sr. Product Manager, Terascala
Gordon E. Moore, co-founder of Intel, predicted in 1965 that the number of transistors on an integrated circuit would double every year; he later revised the pace to roughly every two years. This became known as "Moore's Law" and has held true for almost five decades. What this has meant for you and me is that the computational power of computers has continued to grow at an astonishing pace; from desktops to smartphones, from laptops to supercomputing clusters, an individual processor can crunch an amazing amount of data, enabling previously unimaginable breakthroughs in medicine, science, and industry.
Today, individual processors are made up of several cores, each of those cores a computing powerhouse unto itself. When many individual processors are combined into clusters, you can bring the computing power of tens of thousands of cores to bear on a single computing problem. It sounds like a perfect situation, if not for the other half of the equation that Moore's Law didn't address: how do you feed all those data-hungry cores with information fast enough to get the full benefit of all that computing power? While today's information storage solutions haven't evolved quite as quickly as their processor brethren, solutions have begun to emerge that help address the storage bandwidth shortfall.
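To see why storage bandwidth becomes the bottleneck, a back-of-the-envelope calculation helps. The cluster size and per-core data rate below are illustrative assumptions, not figures from any particular system:

```python
# Back-of-the-envelope sketch of aggregate storage bandwidth demand.
# Both numbers below are assumed for illustration only.

cores = 10_000             # assumed cluster size (tens of thousands of cores)
mb_per_core_per_s = 10     # assumed per-core streaming demand, in MB/s

# Even a modest per-core rate multiplies into an enormous aggregate demand.
aggregate_gb_per_s = cores * mb_per_core_per_s / 1000
print(f"Aggregate demand: {aggregate_gb_per_s:.0f} GB/s")
```

At these assumed rates the storage system must sustain on the order of 100 GB/s, far beyond what a single conventional storage server can deliver.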
One approach, which originated in universities, research labs, and at NASA, is known as parallel file system technology. A parallel file system puts dynamically expandable pipelines of bandwidth in front of modern storage, letting those hungry cores drink from a sea of data through multiple straws at once. By utilizing a cluster of servers to manage the traffic to and from storage, administrators can dynamically expand the bandwidth and ensure uninterrupted access to critical data.
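The core idea can be sketched in a few lines. Real parallel file systems such as Lustre stripe data across object storage servers at the client and kernel level; the toy model below only imitates that pattern with in-memory byte buffers and threads, purely to illustrate how striping plus concurrent reads multiplies effective bandwidth:

```python
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 4  # bytes per stripe (tiny, for demonstration)

def stripe(data, num_servers):
    """Round-robin fixed-size stripes of the file across the servers."""
    servers = [[] for _ in range(num_servers)]
    for i in range(0, len(data), STRIPE_SIZE):
        servers[(i // STRIPE_SIZE) % num_servers].append(data[i:i + STRIPE_SIZE])
    return servers

def parallel_read(servers):
    """Fetch every server's stripes concurrently, then reassemble in order."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        # Each worker "reads" from one server; in a real system these
        # would be network transfers proceeding in parallel.
        chunks = list(pool.map(list, servers))
    n = len(chunks)
    total = sum(len(c) for c in chunks)
    # Stripe j lives on server j % n as that server's (j // n)-th stripe.
    return b"".join(chunks[j % n][j // n] for j in range(total))

data = b"feed all those data-hungry cores fast"
servers = stripe(data, 3)
assert parallel_read(servers) == data
```

The round-robin layout means a large sequential read touches every server at once, so aggregate throughput scales roughly with the number of storage servers rather than being capped by any single one.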
Until now, the downside of parallel file system solutions was that you had to build, manage, and tune them yourself, using not-so-friendly tools. In other words, unless you really enjoyed tinkering with these things, or could afford to pay the high costs of someone who did, the difficulty of maintaining these solutions often outweighed the short-term benefit.
That’s where solutions like NetApp's HPS Rack come in. HPS Rack is built on class-leading E-Series storage and TeraOS™, Terascala’s expertly tuned software stack, which combines Lustre, the leading parallel file system, with TeraOS’s comprehensive monitoring and storage analytics. The result is a turn-key storage solution for high-performance computing users, with single-vendor support. Now, demanding HPC users can achieve tens of gigabytes per second of highly available storage bandwidth with none of the hassle, cost, and uncertainty of building and maintaining their own solution.
No longer is it just universities and labs that need these capabilities. Modern applications such as manufacturing simulation and modeling, bio-tech gene sequencing, information intelligence, and trend analysis expect this kind of horsepower today. And as these hyper-scale information technologies move out of the lab and into the enterprise, their manageability requirements change. No longer is it a scientist or file system developer maintaining the solution; it’s an IT admin who expects that high-performance storage will coexist nicely with existing enterprise storage tools and training. That’s what you get with HPS Rack.
So, as the scale of computing needs changes, and what was considered supercomputing yesterday becomes office-scale computing tomorrow, remember that compute is only part of the equation. Taking full advantage of all that computing power will require high-performance storage solutions that are not only capable of supporting the unprecedented bandwidth needs of emerging science, energy, and manufacturing applications, but are also incredibly reliable and manageable by everyday IT staff.