Red8 Weighs in: New NetApp All-Flash is Impressive

We asked NetApp A-Team member Glenn Dekhayser, who’s been involved with NetApp technology for 20 years, for his views on the newest additions to the NetApp All Flash FAS portfolio, the company, and the flash market in general. Glenn is director of engineering for the east region and the national data management practice lead in the Office of the CTO at Red8, a leader in delivering innovative cloud and data center infrastructure platforms.


Q. You’ve been selling the NetApp All Flash FAS (AFF) array line since it debuted. What do your clients find most appealing about AFF systems?


Obviously performance, but it’s not just that the performance is great. Going from disk to all-flash, you can get amazing performance, sometimes much more than what clients are used to, up to 100X more. What’s great about NetApp is that you can get that performance, in a non-disruptive way, across all the protocols clients are used to: NAS, Windows file sharing, and block SAN protocols like Fibre Channel and iSCSI.

 

NetApp is the only one on the block who can do that. It just eliminates an entire IT problem domain, in that I don’t have to worry about performance anymore across any of my protocols. That’s what my clients have been most impressed with. Usually, if something went wrong, say an application was running slow, they would go to the storage first. Now, that’s gone. We know there is a bucket of performance that the NetApp AFF is providing on the back end, so you’re not going to need to look there first; the issue is further up the stack, at the server or the network. That makes things a lot easier, and it lets my clients sleep well at night.


Q. NetApp just announced the AFF A700s, which is a compact version of the AFF A700 announced last fall.  What makes these systems stand out competitively?


The performance of both is just ridiculous, and so are the core density and the RAM density. Beyond that, we’re able to put out a platform that no one can touch. Those who would say, “No way it can perform as well as X,” well, you can just throw that out.

 

What excites me is that the hardware architecture is definitely next generation. NetApp has gone beyond the nondisruptive capabilities of its ONTAP software now, because we’ll be able to do things like hot-swap component modules and hot-add cards. From a nondisruptive perspective, I won’t even have to fail over anymore to put a card in; that’s awesome. Not that failing over was disruptive, it wasn’t, but there was always the risk that something outside the NetApp system would not fail over correctly. This takes even that problem domain out.


I mean, NetApp is finding every IT problem domain and just eliminating them one by one, which is exactly what enterprises need: today’s IT has to be up all the time, and you don’t have a choice. When I first started selling NetApp, they were at four 9s and proud of it; then they got to five 9s; I’m not sure where we will be when this gets out. With this launch, NetApp is taking nondisruptive operations and enterprise-grade reliability and going two levels above that, which is a pretty cool concept. And it doesn’t hurt that the system performs like a Ferrari.
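
As a rough guide to what those nines mean in practice, the implied annual downtime works out roughly as follows (one year is about 525,600 minutes):

```latex
% Back-of-the-envelope downtime implied by "four 9s" vs. "five 9s" availability
\[
\begin{aligned}
\text{four 9s (99.99\%):}\quad  & (1 - 0.9999)\times 525{,}600\ \text{min} \approx 52.6\ \text{minutes of downtime per year} \\
\text{five 9s (99.999\%):}\quad & (1 - 0.99999)\times 525{,}600\ \text{min} \approx 5.3\ \text{minutes of downtime per year}
\end{aligned}
\]
```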


Q.  How would you rate our progress in all-flash over the past year?


More than a year ago, NetApp wasn’t in anyone’s conversation about all-flash. That’s changed -- we’ve been seeing a positive transition over the past year. NetApp shops tend to take for granted all the things NetApp can do, and when they try to replace the functionality NetApp has provided them over the years, they find it’s not that easy to do. They can’t find another vendor who can do all that, and do it better than NetApp, across all the different protocols.


Most of my clients don’t use NetApp storage for just NAS or just SAN; they use it as a unified storage device. When they think of replacing it with one of the all-flash up-and-comers, they find the competitors only do one thing, and they realize they’d have to bring in multiple solutions to replace it. So the transition to NetApp flash is the easy one.


And we’re seeing more than just a refresh to all-flash that brings in the workloads clients had before. The performance has been so good, and the management has been so good, that clients are now looking for other workloads, ones that may have been served by specialized devices, to bring into that larger NetApp cluster.


Over the last several months, our experience with AFF is that we’re starting to wipe the floor with other platforms, whether first-generation all-flash or hybrid, now that the price and density of all-flash are getting closer to those of the higher-performing disk arrays of the past.


One example of the new workloads: we had an Oracle Exadata takeout, which was unheard of in the past. We replaced that highly tuned, so-called engineered system by moving the client onto an AFF FlexPod solution. It delivered better performance in many areas, including things our client may not have considered, so we’re taking a fresh look at exception workloads and whether we can bring them in. Workloads like enterprise-level Oracle suites are usually siloed; businesses are reluctant to put them into a heterogeneous environment, so they get their own silo and their own administrator. But clients can see cost savings not just from pulling such a workload onto an existing AFF platform, but operationally: they can now manage it the same way, protect the data the same way, and have their Oracle administrators manage just the database instead of another set of hardware.

Q. You’ve been associated with NetApp for a long time. What do you see as our strengths and where do we have more work to do?


I believe that NetApp’s willingness to engage with OpenStack and that community is really going to put NetApp on top over the next couple of years. Customers are looking to provide a cloud-like application production and development experience on-premises. I think more and more customers are going to start doing that when they realize the true economics of cloud: it’s not always going to be cheaper in the cloud. In some cases, doing it on-premises is going to be more economical if you can provide the right service-like platform to do it on, from an automation perspective. NetApp’s combination of SolidFire, the newest NetApp APIs, and its embrace of the OpenStack community is really going to bear fruit. That’s where the largest enterprises are moving, and they are not going to use just white boxes; that’s not realistic.


On the other hand, NetApp was obviously late getting to flash. The old way NetApp used to engineer, with 2.5 years for new features to get to market, was really hurting you. I like the new internal approach to development, with faster release cycles. It has gotten you into a much more aggressive market position -- #2 in flash per IDC, for instance -- so I’m excited about where you guys are going to go over the next two years.

 

Q. How are your clients responding to SolidFire?


SolidFire allows us to discuss alternate consumption models with our clients, where they can offer storage as a service that’s simple and that guarantees performance across heterogeneous workloads. They like the separation of the hardware and software parts of the solution, which optimizes the whole economics of the storage-service model. Being able to say we can offer this new model gets the customer engaged; they like that we’re thinking along the same lines they are, even if they aren’t ready for that model yet.
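
Those performance guarantees are set per volume. As a rough illustration of what that kind of automation can look like, here is a minimal sketch of creating a SolidFire volume with QoS floors and ceilings through the Element JSON-RPC API; the cluster address, credentials, account ID, and API version below are placeholders, and this is not production code.

```python
# A minimal sketch (not production code) of provisioning a SolidFire volume with
# per-volume QoS guarantees via the Element JSON-RPC API. The cluster address,
# credentials, account ID, and API version are placeholders.
import requests

SF_MVIP = "https://solidfire.example.com"   # hypothetical management virtual IP
SF_AUTH = ("admin", "password")             # cluster admin credentials (placeholder)

def create_volume_with_qos(name, size_gb, account_id, min_iops, max_iops, burst_iops):
    """Create a volume whose performance floor and ceiling are enforced by the cluster."""
    payload = {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024**3,   # the Element API expects a size in bytes
            "enable512e": True,
            "qos": {
                "minIOPS": min_iops,      # guaranteed performance floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
        "id": 1,
    }
    # verify=False only because this sketch assumes a self-signed certificate
    resp = requests.post(f"{SF_MVIP}/json-rpc/9.0", json=payload,
                         auth=SF_AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["result"]

# Example: a 500 GB volume for a dev tenant with a guaranteed 1,000 IOPS floor.
# print(create_volume_with_qos("dev-tenant-01", 500, 1, 1000, 5000, 8000))
```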


We have seen a huge uptick in customers looking to leverage DevOps toolsets, and SolidFire is perfect for shops going that route if they need automation models different from what AFF offers. Having SolidFire in our portfolio allows us to have a broader business conversation and raise the topic of storage as a service, and it’s good to know that NetApp is thinking the right way about the future.

 

Q. What future developments in flash get you the most excited?


As the performance and density of flash increase, and we know they will, it’s going to allow for wholesale changes in how applications and operating systems fundamentally utilize storage. In the future, ideas we’ve gotten used to, such as saving a file or even starting an application, are going to be challenged; they may not even exist. In 10 years, imagine the next generation of tech users looking back. They are going to laugh at how we used to boot computers by loading everything from a hard drive.


If the memory is persistent, we no longer need the hard drive, and we’ll never have to save again, because every time we change a memory bit it’s saved by default. This won’t just change the speed of technology or the responsiveness of applications; it’s going to change the very nature of applications and operating systems themselves and all the things we use. I can’t wait for this new stuff to come out. There’ll be no need for the metaphors of a folder, a file, or a desktop when we have this new persistent layer of memory.