Over 50 NetApp users from various Bay Area companies met in Sunnyvale last week for our first local Tech OnTap Live user group. The session started with three 15-20 minute presentations on enhancing performance and closed with an informal networking session over pizza and beer.
The first speaker, Dave Tanis from NetApp's Core Performance Management team, introduced our newly announced Performance Accelerator Module (PAM). Storage teams often size systems based on IOPS rather than capacity, which means they end up with excess disk. This PCI expansion card adds a read cache that improves performance and cuts latency for random-read-intensive storage workloads. The discussion was fairly technical and referenced concepts like "Victim Cache" and "Predictive Cache Statistics" (which simulates virtual caches so you can predict the benefit of PAM before installing it). Key questions involved write penalties (none), maintaining write data consistency (PAM is a read cache only; writes go to disk as usual), use in SATA environments, and sizing considerations (we're updating the sizers used by NetApp's Field and SE community for Exchange, database, and custom apps, with more to follow).
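For readers who haven't run into the "Victim Cache" idea before, here's a minimal Python sketch of the general concept (all names are hypothetical illustrations, not NetApp's actual implementation): blocks evicted from the primary cache land in a secondary read cache, while writes bypass both caches and go straight to disk, which is why there's no write penalty or consistency risk.

```python
from collections import OrderedDict

class VictimCache:
    """Illustrative sketch of a victim-cache scheme: a secondary read
    cache holds blocks recently evicted from the primary cache."""

    def __init__(self, primary_size, victim_size, disk):
        self.primary = OrderedDict()   # main buffer cache (LRU order)
        self.victim = OrderedDict()    # secondary read cache (LRU order)
        self.primary_size = primary_size
        self.victim_size = victim_size
        self.disk = disk               # backing store: block -> data

    def read(self, block):
        if block in self.primary:              # hit in main cache
            self.primary.move_to_end(block)
            return self.primary[block]
        if block in self.victim:               # hit in victim cache: promote
            data = self.victim.pop(block)
        else:                                  # miss everywhere: go to disk
            data = self.disk[block]
        self._insert_primary(block, data)
        return data

    def write(self, block, data):
        # Writes go to disk; the read caches are simply invalidated,
        # so cached data can never be stale.
        self.disk[block] = data
        self.primary.pop(block, None)
        self.victim.pop(block, None)

    def _insert_primary(self, block, data):
        if len(self.primary) >= self.primary_size:
            evicted, evicted_data = self.primary.popitem(last=False)
            self.victim[evicted] = evicted_data  # demote to victim cache
            if len(self.victim) > self.victim_size:
                self.victim.popitem(last=False)  # drop oldest victim
        self.primary[block] = data
```

The point of the sketch is the read path: a block pushed out of the primary cache gets a second chance in the victim cache, so a random-read workload that cycles through a working set slightly larger than main memory still hits cache instead of disk.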
The next speaker, solutions architect Bikash Choudhury, expanded on PAM with specific Perforce test results and benchmark comparisons across NFS, iSCSI, and FCP. He shared very detailed read and write workload comparison graphs with response times, commit rates, and so on. There were lots of questions and interest in a future white paper on Perforce qualified architectures.
This event was the first time most attendees had heard of PAM, and among the folks I talked to there was a huge amount of interest. In fact, this morning I heard that a user group attendee has already decided to purchase and implement the Perforce solution using the PAM card and NetApp over NFS. That said, several people were disappointed that the card is available only for certain newer systems.
The final presentation shifted from addressing performance within a single system (PAM) to accelerating performance throughout an entire storage environment (typically HPC) with FlexCache and the Storage Acceleration Appliance. It was presented by Marty Turner, the FlexCache Technical Marketing Engineer. Marty described how intelligent storage caching can be deployed in two primary use cases: (1) increasing IOPS to an NFS grid, and (2) decreasing WAN latency by installing a storage cache at a remote site. A quick audience poll suggested most users were on the NFS protocol and could therefore benefit from this technology in one or both use cases. There were lots of questions and discussion on both scenarios.
For folks who attended ... anything to add about the sessions? What did you think was most valuable about the evening, and how could it have been even better?