Parallel NFS Takes the Red Hat Carpet

It seems like only yesterday that I wrote an article about the joint development work done by NetApp and Red Hat around Parallel NFS (pNFS). Well, after 11 short months, there is so much more to say. Let’s start with the announcement Red Hat made in February of 2013: with the release of Red Hat Enterprise Linux (RHEL) 6.4, Red Hat now ships a generally available (GA) client for pNFS. You may ask, what does this mean and why do I care? Well, hear me now and believe me later. You’re going to get a learning in a few minutes (a little shout-out to Hans and Franz).

I won’t go into detail about what pNFS is. You can find that information in a prior blog post entitled pNFS: A Scalable Client for Scalable NAS. What I will tell you is that Red Hat and NetApp are the first technology vendors to bring an end-to-end, production-ready pNFS solution for NFS data access to market.


With the release of RHEL 6.4, you not only get a GA version of the pNFS client, but you also get Direct I/O support. This new capability in the NFS client bypasses the operating system I/O buffers and writes straight through to the networked storage, greatly improving write performance. This is a nice addition for heavy I/O transaction environments, like databases. pNFS plus Direct I/O is pretty sweet.
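To see both pieces in action on a RHEL 6.4 client, you mount with NFS version 4.1 (the protocol version that carries pNFS), and an application opts into Direct I/O by opening files with O_DIRECT. A quick sketch, with the server name, export, and paths made up for illustration:

```shell
# Mount the export with NFSv4.1 so the client can negotiate pNFS
mount -t nfs -o vers=4.1 filer.example.com:/vol/db /mnt/db

# Simulate an application using Direct I/O: oflag=direct opens the
# output file with O_DIRECT, bypassing the client page cache and
# writing straight through to the networked storage
dd if=/dev/zero of=/mnt/db/testfile bs=1M count=256 oflag=direct
```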


Now, we have described the pNFS technology in previous blog posts and have published a joint solution brief. I’d like to go a little further and describe some of the use cases and business value associated with pNFS, and why it matters to you if you are deploying a scale-out NAS environment. And I’m going to highlight the awesomeness that is clustered Data ONTAP to showcase its value.


NOTE: NetApp clustered Data ONTAP is a unified scale-out storage platform with support for both SAN and NAS storage protocols. However, NetApp has opted to deploy pNFS only for the NFS storage protocol, since data path optimization for SAN traffic is achieved with other standardized technologies common in the block space, such as multipath I/O and Asymmetric Logical Unit Access (ALUA).

Let’s first review some of the basics of clustered Data ONTAP that are pertinent to a discussion of pNFS. NetApp clustered Data ONTAP is the sexy name for the scale-out capability of Data ONTAP version 8.x. At the core of the technology is all of the NetApp goodness: a unified architecture; storage efficiency features such as thin provisioning, data deduplication, and compression; and integrated data protection. All the things for which NetApp is known. On top of that, advanced storage virtualization adds the ability to scale horizontally well beyond a pair of storage controllers, along with the benefit of nondisruptive operations.

Each cluster is configured with one or more Storage Virtual Machines (SVMs): logical containers that offer a single namespace and that look and act like a dedicated storage system to your applications. Each SVM is a virtual container composed of logical interfaces (LIFs), local security parameters allowing role-based access control (RBAC), and the data volumes associated with that SVM.
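On the storage side, standing up an SVM with a data LIF looks roughly like the following clustered Data ONTAP CLI session. All of the names, aggregates, ports, and addresses below are made up for illustration; check the ONTAP command reference for the full option set:

```shell
# Create an SVM with its own root volume (the top of its namespace)
vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 \
    -rootvolume-security-style unix

# Give the SVM a data LIF for NFS clients to mount through
network interface create -vserver svm1 -lif svm1_data1 -role data \
    -data-protocol nfs -home-node node1 -home-port e0c \
    -address 192.0.2.10 -netmask 255.255.255.0
```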


Now with that introduction to clustered ONTAP, let me share some of the benefits of pNFS in scale-out NAS environments, particularly in NetApp clustered storage environments.


Predictable Performance

One of the primary benefits of pNFS is that it keeps NFS clients up to date with the location of NFS volumes on the storage system. As you scale your cluster, pNFS helps achieve predictable and consistent performance by enabling parallel data paths to volumes located on different nodes in the cluster. pNFS enhances the inherent cluster benefits by letting you take advantage of all the nodes in the cluster rather than being bottlenecked by the node on which the client is mounted.
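If you want to confirm that a RHEL 6.4 client is actually using pNFS rather than plain NFSv4.1, a few quick checks on the client will tell you (the commands below are standard Linux NFS client tooling; whether they show activity depends on your mounts):

```shell
# The file-layout driver loads when a pNFS layout is negotiated
grep nfs_layout_nfsv41_files /proc/modules

# Show the negotiated NFS version and options for each mount
nfsstat -m

# LAYOUTGET counters climbing in mountstats means the client is
# fetching pNFS layouts and doing I/O directly to the data nodes
grep LAYOUTGET /proc/self/mountstats
```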


Improved Management

Moving volumes within the NetApp cluster can be done without taking applications offline. We call this nondisruptive operations. pNFS enhances the experience by automatically updating the NFS client with the new location of a volume, eliminating the management overhead of relocating logical interfaces (LIFs) to the new controller node to optimize data paths. With pNFS and clustered Data ONTAP, you don’t have to remount volumes when they move; you retain a single mount point with consistent permissions. This means that you don’t have to schedule downtime with application owners during load balancing or maintenance events. And the ease of operation continues even as you scale your environment.
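Here is what that looks like in practice on the cluster. A volume move is driven entirely from the ONTAP CLI while clients stay mounted; the SVM, volume, and aggregate names below are illustrative:

```shell
# Move a volume to an aggregate owned by a different node
volume move start -vserver svm1 -volume db_vol -destination-aggregate aggr2

# Watch the move; cutover happens without the NFS clients remounting
volume move show -vserver svm1 -volume db_vol
```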


“Together, Red Hat and NetApp offer industry-leading performance, scalability and efficiency to organizations looking to realize the full value of open source in their business. NetApp helped lead development for parallel NFS technology and has delivered the server component for the first standards-based, end-to-end pNFS solution for higher performing, more predictable storage environments. This latest version of clustered Data ONTAP paired with the Red Hat Enterprise Linux pNFS client with direct I/O exemplifies how our work in the open source community accelerates technology innovation.”

Jim Totton, vice president and general manager, Platform, Red Hat


Last week, NetApp announced our latest version of clustered Data ONTAP, version 8.2. It offers some awesome new functionality, which you can read about here. One thing I will mention is that we expanded our use of pNFS in clustered Data ONTAP to our Infinite Volume feature, along with some additional enhancements that make our solution with Red Hat that much more robust. Other OS vendors and solutions will enter the market with pNFS support, but this is the first NFS-based, end-to-end pNFS solution, and it is ready for the big screen. So, roll out the carpet.



Where is Mike Eisler? Haven't seen his name mentioned with pNFS in two years.


Hi Rick,

He's still here. Not sure of his focus right now.