Measuring the Value of a UCD Process for an HCM and Payroll Company
In Part 1 of this series, JD and her team discussed their UX-measurement initiative for their human capital–management (HCM) company’s enterprise payroll-compliance application, which meant first identifying users’ highest-priority tasks. The team then established a baseline for the current user experience by benchmarking at regular intervals, measuring differences in users’ attitudes, behaviors, and processes as they attempted to accomplish those key tasks.
Collecting several metrics—both after each task and at the end of every benchmark study—has yielded informative insights over the course of several studies. Figure 2 depicts a sample of the metrics the team employed in this multimetric approach, which has allowed them to examine both task-level and overall metrics and gauge the impact of subsequent design iterations and releases on the user experience.
A common question JD has heard when presenting this approach—both internally within her organization and externally—is: Why collect so many UX metrics? Shouldn’t one or two be enough? We’ve come to think of these efforts to measure our software’s user experience as similar to measuring a virtual experience economy. Much like economic indicators, our indices of UX metrics can be leading, lagging, or coincident indicators of the user experience, as follows:
- leading indicators—Metrics such as success/failure or task ease/perceived difficulty can be early indicators. For example, they might provide a sense of how and why specific features do not deliver the intended improvement in supporting users’ most important tasks.
- lagging indicators—Metrics such as Net Promoter Score (NPS), overall satisfaction, the System Usability Scale (SUS), and the Standardized User Experience Percentile Rank Questionnaire (SUPR-Q)—which includes trust and credibility—may help us better understand users’ sense of increasing value in or frustration with the user experience over time.
- coincident indicators—Metrics such as task satisfaction and time on task provide immediate measures of our users’ actual versus perceived experience.
Just as with economic indicators, compiling several UX metrics into indices lets us minimize some of the volatility and confusion associated with individual indicators and provides a more reliable measure.
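As a rough illustration of this idea—not the team’s actual computation—you could standardize each metric’s scores and average them into a single index per benchmark study. The metric names, scales, and values below are hypothetical:

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a series so metrics on different scales are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def ux_index(metric_series):
    """Average the z-scores of several UX metrics into one index per study.

    metric_series: dict mapping metric name -> list of scores, one per
    benchmark study (higher = better for each metric).
    """
    standardized = [zscores(series) for series in metric_series.values()]
    # zip(*...) regroups the per-metric lists into per-study columns.
    return [mean(study_scores) for study_scores in zip(*standardized)]

# Hypothetical benchmark results across four studies:
metrics = {
    "task_success_rate": [0.62, 0.68, 0.75, 0.81],
    "task_ease":         [4.1, 4.3, 4.6, 4.8],   # 7-point scale
    "sus":               [61, 66, 72, 78],
}
index = ux_index(metrics)  # one composite value per study
```

Because each metric is standardized first, no single noisy indicator can dominate the composite trend line.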
Initially, we had hypothesized that measurable connections between the UX team’s efforts and the company’s key performance indicators (KPIs) would show up as correlations between similar metrics. We assumed that improvements in standard UX metrics such as task time and satisfaction would have the greatest influence on customer KPIs such as NPS, as well as on customer-support contacts. However, while improvements in the user experience do seem to impact the company’s performance metrics, JD’s team discovered that the connection is more complex than they had originally anticipated.
While the team was running a series of statistical analyses—including linear regression, logistic regression, and analysis of variance (ANOVA)—to uncover significant correlations among numerous metrics, a surprising model emerged. What did it reveal? There is a measurable correlation between a high-quality user experience and customer referrals. The team was excited to discover that task-level metrics such as task success and task ease had the strongest correlation with overall UX metrics such as SUS and overall satisfaction, as well as with the product’s NPS.
As the team continued to search the data for statistically significant connections between task-level and overall UX metrics and company KPIs, across several comparison studies, they started to think of these connections in terms of the model shown in Figure 3.
To summarize, a new release first had to better support users’ ability to complete their top tasks successfully, end to end. Second, if those top tasks felt easier to complete, users were more likely to rate the experience as more satisfying and learnable. Finally, if an experience met these first two conditions, users were more likely to give the product a higher NPS rating.
Research suggests a strong relationship between NPS, revenue, and profits. However, service quality can dramatically impact NPS scores—especially at enterprise organizations where the quality of customer service can play a significant role in the end-to-end user experience. While the UX team has discovered a statistically significant correlation between the task user experience and the NPS for the post-task product experience, it will be necessary to conduct additional research to validate suggested correlations between the NPS for the user experience and that for the company—and, in turn, revenue and profits.
Modeling the Relationship Between User Surveys and Business Metrics at athenahealth
At athenahealth, beginning in January 2016, Aaron’s team conducted monthly perception surveys with users. After collecting more than 50,000 survey responses over a two-year period, they combined the data with business metrics and established a clear, statistically significant, positive relationship between users’ perceptions of usability and two business metrics: retention and referrals.
The basic model focused on the same areas as the Service-Profit Chain. The goal was to see whether certain product characteristics, such as ease of use, impact the business. However, as the Service-Profit Chain shows, this is not a direct relationship. The product has many aspects, all of which together create users’ perception of the product.
The team asked questions about both the ease of use and the reliability of the product. Because users frequently comment on both, the team hypothesized that these were drivers of users’ overall satisfaction with the product. The statistical model supported this hypothesis: users’ perceptions of individual aspects of the product—such as ease of use and reliability—in combination predict their overall satisfaction with it.
The team validated this model statistically, using several regression models—such as multiple linear regression and logistic regression—in combination with mediation analysis. The results for all of these models were statistically significant, with p < .000001—in part because of the large size of the underlying datasets. In the model shown in Figure 4, the specific numbers from their analyses—including correlation coefficients, R², and odds ratios—have been replaced with an X.
To see the relationship between user experience and attrition versus retention, it is critical to reduce noise by modeling multiple steps. For example, individual users’ ratings of ease of use would not directly predict referrals in a statistical model because far more goes into someone’s decision to make a referral than ease of use alone. Breaking the model into multiple steps helps isolate and remove some of the noise arising from the factors the model doesn’t include—for example, customer support.
athenahealth has used this foundational model to help support strategic decision making regarding user experience—for example, to prioritize where the company should invest more effort, to validate related metrics, and to understand what users perceive as the company’s biggest opportunities.
Adapting This Model to Your Business
Many other companies could adapt the model that we’ve described in this article to their own needs—perhaps by generalizing it slightly or by simplifying it. Our four-step model, which we call the UX-Revenue Chain and show in Figure 5, breaks down the way users think and make decisions into measurements of four key areas, letting you use statistical modeling to explain how user experience impacts business metrics.
Both of the enterprise companies for which we work have found the insights that have surfaced encouraging—not only for our respective UX teams but also for the potential strategic opportunities they have revealed across the entire organization. We’ve derived this model from our collective discoveries. It has inspired discussions across organizational silos about the importance of establishing who a product’s primary users are—as well as their top tasks and workflows—as an essential design and business-strategy practice.
Given both the benefits and challenges inherent in developing a UX-measurement initiative and achieving the expected results, it is important to answer a few initial questions before attempting to employ this approach. The answers to the following questions can help you to determine whether your enterprise—or even a consumer organization—is ready to undertake this kind of initiative:
- What is your organization’s level of UX maturity? What is its data maturity?
- Does your executive leadership support a UX-measurement initiative? How strong is that support?
- Keeping in mind the longitudinal nature of this research, is your executive leadership willing to provide the time and budget for tools and resources to support this kind of strategic initiative?
- Does your UX team currently have the skills, tools, and resources for this kind of endeavor?
- Does your UX team have access to reliable, raw KPI data—both historical and current—to support analysis?
- Does leadership understand how to best utilize the insights from this kind of model in support of strategic decision making?
- What is the best approach for communicating results across your organization to support actionable implementation and strategic execution?
While considering some of these questions before you even embark upon your UX-measurement journey can seem overwhelming, it is essential that you gain alignment and support—both early and at regular intervals—to ensure the ongoing success of the effort.
One final note: Jeff Sauro, founding principal of the quantitative research firm MeasuringU, states:
“Making a case for ROI is a good thing to help justify methods that should help the user and ultimately the organization’s bottom line. But don’t overstate or oversell your case. Understand the limits of your data. Both the metrics and methods affect the strength of your case for a return on investment.”
Derfuss, Klaus, Jens Hogreve, Anja Iseke, and Tonnjes Eller. “The Service-Profit Chain: A Meta-Analytic Test of a Comprehensive Theoretical Framework.” Journal of Marketing, May 2017.
Heskett, James L., Thomas O. Jones, Gary W. Loveman, W. Earl Sasser, Jr., and Leonard A. Schlesinger. “Putting the Service-Profit Chain to Work.” Harvard Business Review, July–August 2008. Retrieved December 28, 2018.
Sauro, Jeff. “The One Number You Need to Grow (A Replication).” MeasuringU, December 2018. Retrieved December 28, 2018.
Sauro, Jeff. “10 Metrics to Track the ROI of UX Efforts.” MeasuringU, September 1, 2015. Retrieved December 29, 2018.