Tech ONTAP Blogs

Driving Product Enhancements Through UX Benchmarking: A Case Study on Iterative Improvement

Rosa_NetApp
NetApp

Introduction

In today's competitive landscape, a great user experience isn't just a nice-to-have; it's a business imperative. As a UX Research Manager, I've witnessed firsthand how systematically measuring the user experience through benchmarking studies can transform product development from subjective judgment into data-driven science. This article outlines our methodology for leveraging UX benchmarking to drive iterative improvements that produce measurable gains in user success and satisfaction.

 

What is UX Benchmarking?

UX benchmarking is a structured approach to measuring the performance and quality of a user experience against defined metrics. Unlike traditional usability testing, which typically focuses on identifying specific issues, benchmarking establishes quantitative baselines that allow teams to:

  • Measure the current state of the user experience
  • Set clear improvement targets
  • Track progress over time
  • Validate design changes with confidence
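
To make the idea of a quantitative baseline concrete, here is a minimal sketch in Python, using entirely hypothetical data rather than results from any NetApp study, of how a benchmark's headline metric, task success rate, can be reported with an adjusted-Wald confidence interval, an interval often recommended for the small samples typical of UX research:

    import math

    def adjusted_wald_interval(successes, trials, z=1.96):
        """Adjusted-Wald confidence interval for a task success rate;
        behaves well even with the small samples common in UX studies."""
        adj_n = trials + z ** 2
        adj_p = (successes + (z ** 2) / 2) / adj_n
        margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)
        return max(0.0, adj_p - margin), min(1.0, adj_p + margin)

    # Hypothetical baseline outcomes: 1 = task completed, 0 = failed or gave up.
    task_outcomes = {
        "provision_storage":        [1, 0, 0, 1, 0, 0, 1, 0],
        "schedule_backup":          [0, 0, 1, 0, 1, 0, 0, 0],
        "troubleshoot_connectivity": [1, 0, 0, 0, 1, 0, 1, 0],
    }

    for task, outcomes in task_outcomes.items():
        rate = sum(outcomes) / len(outcomes)
        low, high = adjusted_wald_interval(sum(outcomes), len(outcomes))
        print(f"{task}: {rate:.0%} success (95% CI {low:.0%} to {high:.0%})")

A per-task summary like this becomes the baseline that later rounds of measurement are compared against.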

 

Our Benchmarking Framework

Our benchmarking methodology follows a cyclical pattern of measurement, design, validation, and re-measurement:

  1. Establish Baseline Metrics: Conduct initial benchmark studies to measure key performance indicators
  2. Analyze Pain Points: Identify areas with the greatest opportunity for improvement
  3. Design Interventions: Create design solutions targeting identified issues
  4. Validate Designs: Test proposed solutions through formative usability studies
  5. Implement Changes: Roll out validated design improvements
  6. Re-benchmark: Measure the same metrics to quantify improvements
  7. Repeat: Continue the cycle for ongoing optimization

 

Case Study: Iterative Improvement Through Benchmarking

 

Phase 1: Baseline Measurement

In early 2024, NetApp Engineering and UX Research partnered to launch a benchmark study analyzing key SAN management activities in one of our management products. We recruited professionals with no prior NetApp experience and asked them to show us how they would complete everyday activities such as provisioning storage, scheduling snapshots and backups, analyzing latency, and troubleshooting connectivity issues.

We learned that people who were new to NetApp had a hard time figuring out how to take ad hoc snapshots, schedule them, and recover from them; our workflows for completing these tasks clearly had room for improvement. The metrics captured in the study gave us data showing that participants were not succeeding at these activities and were frustrated by them. Beyond the metrics, we also had direct feedback from these participants, so we knew, click by click, what they needed to feel more confident about managing their data.
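
Success rate is rarely the only metric captured in a study like this. As a purely illustrative sketch, with hypothetical numbers rather than our actual data, here is how per-participant observations for one workflow might be rolled up into the kinds of measures a benchmark typically tracks: task completion, time on task, and a post-task confidence rating. The geometric mean is a common choice for summarizing task times because they tend to be right-skewed.

    import statistics

    # Hypothetical observations for one workflow.
    # Each entry: (completed the task?, seconds spent, confidence rating 1-7)
    observations = [
        (False, 412, 2), (True, 288, 4), (False, 530, 1),
        (False, 467, 2), (True, 305, 5), (False, 499, 2),
    ]

    success_rate = sum(done for done, _, _ in observations) / len(observations)
    times = [secs for _, secs, _ in observations]
    ratings = [score for _, _, score in observations]

    print(f"Success rate:        {success_rate:.0%}")
    print(f"Geometric mean time: {statistics.geometric_mean(times):.0f} s")
    print(f"Median confidence:   {statistics.median(ratings)} / 7")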

 

Phase 2: Design Improvements

With this information, the UX Design teams began working on prototypes to improve the workflows. We ran workshops and collaborative design reviews to explore how the designs could be better. We knew we had to streamline the workflows, refine the language used in the UI, minimize the clicks needed to complete an activity, and offer users validation to reassure them that they were on the right track.

 

Phase 3: Validation Testing

Once we had prototypes in hand, we were ready to perform interim usability testing. We recruited more participants to review the proposed designs and provide feedback, testing the same workflows as the original benchmark so that the results could be directly compared. The results of that study were very promising: we were on the right track. We made a few more refinements and then returned to Engineering to implement the new designs.

 

Phase 4: Implementation and Re-benchmarking

By September 2024, we were ready to go live with our changes, which meant we could soon follow up with our second benchmark to see whether we had made a positive impact on the snapshot workflows. For this second benchmark, we used exactly the same methodology as the first: we tested all the same workflows and recruited professionals who had never seen the product before. The result? Participants averaged a 95% success rate in completing these workflows, compared with only one-third of participants who could complete these activities in the original study. Participants were also much faster in achieving success and, more importantly, more confident in their ability to complete the tasks.
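
As a purely illustrative aside, the participant counts below are assumptions rather than our study's actual numbers, but a two-proportion test is one common way to check that a jump of this size reflects a real improvement rather than small-sample noise:

    import math

    def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
        """Two-sided two-proportion z-test comparing a baseline success
        rate against a re-benchmark success rate."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the normal approximation.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Illustrative counts only: roughly one-third success at baseline
    # versus 95% after the redesign.
    z, p = two_proportion_z_test(successes_a=5, n_a=15, successes_b=19, n_b=20)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")

With the illustrative counts above, the difference is far too large to attribute to chance, which is part of why benchmark deltas are such an effective way to communicate impact.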

 

Benefits Beyond Metrics

While the quantitative improvements are compelling, our benchmarking approach delivers additional benefits:

  • Alignment of Teams: Shared metrics create a common language for product, design, engineering, and business teams.
  • Prioritization of Resources: Data-driven insights help focus efforts on changes with the highest potential impact.
  • Customer-Centricity: Regular benchmarking keeps the focus on user needs throughout the development process.

Even more satisfying to me personally is talking to our customers. The people participating in our UX Research studies recognize that they are being heard. They see the changes they told us about reflected in the product, and because of that, they are excited to keep participating. Being part of these improvements is deeply gratifying, and I am honored to help advocate for our customers.

 

Conclusion

UX benchmarking transforms the abstract goal of "improving user experience" into a concrete, measurable practice. By establishing clear metrics, testing systematically, and validating improvements, organizations can demonstrate the tangible value of UX investments and drive continuous product improvement.

In our experience, this methodical approach not only results in better products but also creates a more efficient development process where decisions are guided by data rather than opinion. The result is better experiences for users and better outcomes for the business—a true win-win.

 

Comment below if you are interested in participating in future studies.
