Making Sense of Tally Sheets

Analysis Methods: Grounded-Theory Techniques

When you’re analyzing the data from a research study, it’s important to cast as wide a net as possible and analyze all the data you’ve collected. You don’t want to miss anything. UX researchers typically borrow research techniques from grounded theory—whether knowingly or not—when analyzing data from studies.

Glaser and Strauss originally developed these grounded-theory techniques in 1967. As a research methodology, grounded theory requires adhering to a set of core principles. However, UX researchers typically use only those techniques they find helpful to their analysis.

For visual data collection (VDC), analysis techniques such as coding and comparative analysis are useful, while the principle of emergence may not fit as well. The principle of emergence states that the researcher should not bring any preconceived notions or predefined frameworks to the data. Instead, the theory and concepts should emerge from the data.

However, since VDC typically occurs at the design-concept phase of the product-development lifecycle, teams might come to the table with lots of assumptions that researchers need to validate or refute. So researchers need to balance an emergent approach toward what they’re seeing with a more brute-force, less time-intensive approach to generating theory. Because product teams face time-to-market pressures to deliver functionality, collect your stakeholders’ hypotheses—that is, their theories—up front, then use your research time to develop questions and apply research methods that validate or refute those hypotheses.

Analysis Method: Using Codes

Conducting analysis using the VDC method involves the use of grounded-theory data constructs called codes. These codes, or categories, can be predefined—what grounded-theory researchers call selective codes, or annotations—or defined once the research has begun. The latter are called open codes. In VDC, the selective codes tend to be less about interactions—for example, mouse clicks or dragging and dropping—and more about perceptions. Figure 1 shows some selective codes.

Figure 1—Common selective codes for studies using the VDC method

If you’ve read my earlier column “Sensemaking with Annotations,” you’ll recognize that annotations are essentially a set of selective codes for use in future analysis. The notetaker applies these codes during research sessions. Post-session open coding might also be warranted, especially if you’re trying to unravel a set of data that has multiple layers and derive insights from your findings.

Typically, UX researchers do post-session coding while listening to recordings. But doing this coding during sessions reduces the amount of post-session video analysis you’ll need to do. Of course, you still might need to go back and review a session—especially if something was unclear during the session and you didn’t have time to correct it.
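If you keep your session notes or tally sheets in a digital form, a little scripting can help with the bookkeeping of coding. The following Python sketch shows one way you might represent selective and open codes and record coded observations during or after a session. The code labels and participant entries are hypothetical examples, not the actual codes from Figure 1 or data from the study.

```python
from collections import defaultdict

# Selective codes defined before the sessions begin.
# These labels are hypothetical examples, not the codes from Figure 1.
SELECTIVE_CODES = {"likes", "dislikes", "confused", "expects", "suggests"}

# Open codes discovered once the research is under way.
open_codes = set()

# Coded observations, keyed by participant ID.
observations = defaultdict(list)

def record(participant_id, code, note):
    """Record a coded observation; a code outside the selective set becomes an open code."""
    if code not in SELECTIVE_CODES:
        open_codes.add(code)
    observations[participant_id].append((code, note))

# During or after a session, the notetaker might log entries like these:
record("P2", "likes", "Unique focus on running expertise")
record("P3", "expects", "Would use the site as long as it's free")
record("P5", "trust-concern", "Hesitant to save credit-card information")

print(sorted(open_codes))  # ['trust-concern']
```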

Analysis Method: Constant Comparative Analysis

Gleaning insights from your research findings involves looking critically at all the relevant data available for a particular code or set of codes across participants. This is where using aspects of a grounded-theory technique called constant comparative analysis comes in handy.

While this technique serves multiple goals, the key goal for VDC is what researchers call generating theory. But since we’re only borrowing the technique—not adhering to the full grounded-theory methodology—let’s call this generating relationships. The technique involves assessing various instances of data to identify their relationships to one another. Ultimately, the goal is to develop distinct categories, or codes, each with its own set of common properties.

Let’s look at an example of generating relationships using constant comparative analysis.

Example of Constant Comparative Analysis: Factors for Adopting a Web Site

One research goal for Middlepacker.com was to identify the factors that would motivate users to adopt the Web site. When we asked participants this question, they offered an array of opinions and perspectives. Answering was especially difficult for them because they were seeing the Web site for the first time and hadn’t been able to explore it over a longer period of time. Figure 2 shows their responses.

Figure 2—Tally-sheet data for participant responses on site usage

Welcome to the wonderful, gray world of UX research! The first thing you may notice is that there is no single, clear answer to the question: What are the driving factors for adopting this Web site? You first need to sort the responses into reasons why a participant would adopt the site and reasons why a participant would not. Then, within each set, look for the factors that motivated participants to form an opinion about the site.

Then, compare each data point with the others and ask: Are these reasons the same as or different from one another? Participants 2 and 3 indicated that they would use the site. In fact, one could interpret their responses as a wholehearted Yes. Participant 2 said he’d use the site because of its unique focus on running expertise. Participant 3 emphasized low cost: “As long as it’s free!” Are these reasons the same or different? They seem quite different from one another. Therefore, you could say that cost and useful content on running are two factors that would drive people to use Middlepacker.com. You could then use these comments to generate an insight about factors for adoption, as follows:

Participant responses indicate the following potential factors for adopting Middlepacker.com: cost and the usefulness of running content.

You would perform the same recursive analysis for each set of data points to help answer the question about adoption.

Going through all the responses and comparing them to one another gives you the opportunity to sift through all the data thoroughly, ensuring you won’t miss anything. Think of this process as similar to the thoroughness with which a Zamboni machine resurfaces the ice between the periods of a hockey game. The entire rink gets resurfaced! Every iota of data must get some analysis love!
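If you like to keep a digital trail of this comparison work, here is a minimal Python sketch of the bookkeeping behind it, loosely based on the Middlepacker.com example. The responses and factor labels are illustrative, and the qualitative judgment of whether two reasons are the same or different still belongs to you, the researcher; the script only collects the distinct factors once you have labeled them.

```python
# Responses sorted by whether the participant would adopt the site.
# These entries are illustrative, loosely based on the example above.
responses = [
    {"participant": "P2", "would_adopt": True,  "reason": "Unique focus on running expertise"},
    {"participant": "P3", "would_adopt": True,  "reason": "As long as it's free!"},
    {"participant": "P5", "would_adopt": False, "reason": "Already uses another race-finder site"},
]

# As you compare each reason against the others, assign it a factor label.
# Similar reasons share a label; different reasons get new labels.
factor_labels = {
    "Unique focus on running expertise": "useful running content",
    "As long as it's free!": "cost",
    "Already uses another race-finder site": "switching cost",
}

def factors_for(adopting: bool) -> set:
    """Collect the distinct factors within one set of responses."""
    return {
        factor_labels[r["reason"]]
        for r in responses
        if r["would_adopt"] == adopting
    }

print("Factors for adoption:", factors_for(True))       # cost, useful running content
print("Factors against adoption:", factors_for(False))  # switching cost
```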

Identifying Themes in the Data

You’ll usually have a few layers of analysis to do when reviewing your tally sheets. Themes include the obvious, the less obvious, the not-so-obvious, and what didn’t happen. Typically, these layers correlate with the frequency of occurrence in the data. But don’t get bogged down trying to be precise about this. You won’t be able to quantify your data statistically with any real degree of confidence because of the small sample of participants in front of you. Just use frequency as a loose guide.
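If your tally sheet lives in a spreadsheet or a simple script, a quick count of how many participants each code applies to can suggest which layer a theme probably belongs to. The following Python sketch is one way to do that; the codes, participant counts, and thresholds are all hypothetical and only a loose guide, not a statistical claim.

```python
# A rough sketch of using frequency as a loose guide to theme layers.
# The codes, hits, and thresholds below are hypothetical examples.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

# Which participants each code was observed for, per the tally sheet.
code_hits = {
    "unsure what a middle-packer is":   {"P1", "P2", "P4", "P5", "P6"},
    "search limited to ZIP codes":      {"P3", "P5"},
    "suggested searching by city name": {"P3"},
    "raised security concerns unprompted": set(),
}

def theme_layer(code: str) -> str:
    """Bucket a code by the fraction of participants it applied to."""
    fraction = len(code_hits.get(code, set())) / len(participants)
    if fraction == 0:
        return "what didn't happen"
    if fraction >= 0.5:
        return "obvious"
    if fraction >= 0.25:
        return "less obvious"
    return "not-so-obvious"

for code in code_hits:
    print(f"{code}: {theme_layer(code)}")
```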

Identifying Themes: The Obvious

Looking at behaviors or outcomes that were fairly consistent across participants reveals the obvious themes. A classic example might be the one thing that all or most participants did—or did not do.

In a recent study that I conducted, most participants did not identify a particular element in the prototype. The theme was so obvious that it ended up being a key finding in my final report. See Figure 3 for an example of an obvious theme on a tally sheet. Can you guess what the theme was? (See the end of this article for the answer.)

Figure 3—Tally sheets make obvious themes more obvious

Identifying Themes: The Less Obvious

The next layer of findings includes things that seem fine on the surface but, when you re-examine them, reveal less-obvious issues. These issues are more subtle, in part because only a small number of participants might have noticed them. Since your sample size is typically small when doing these kinds of studies, an issue might have been apparent in a few sessions, but not in others.

However, this doesn’t mean you can ignore these less-obvious findings altogether. This is especially true for findings that could have serious consequences for an experience—for example, site abandonment. You should view a feature that only a few prospective users did not understand or found unusable as an opportunity to improve the experience for everyone—even those who didn’t encounter the issue!

Figure 4 shows that a couple of participants noticed that the search box accepts only ZIP codes and indicated that, if they didn’t know the ZIP code, they wouldn’t be able to find a race. One suggested searching for the name of a city instead of a ZIP code. Even though only a couple of participants mentioned this limitation of the site search, that didn’t mean we could ignore the issue.

Figure 4—Do not ignore less-obvious findings

While one could argue that users could easily find the ZIP code by opening a new browser window and using Google to search for a city’s ZIP code, little inconveniences such as these can make or break a great design! A frustrated user might simply abandon your site for that of a competitor with a more effective site-search design that supports searching by city.

Do not ignore less-obvious findings that have serious consequences for your intended audience—consequences such as site abandonment. Instead, consider them key findings for your research report, especially if the opportunity cost is high.

Identifying Themes: The Not-So-Obvious

The not-so-obvious themes might emerge from only a small number of participants. The particular issues that arise could also vary greatly across participants, making them more challenging to analyze. A key question you should ask in these situations is: How important is this issue to the success of the product? If the answer is very important, the not-so-obvious issue is worth the time to report.

Let’s say the product stakeholders hypothesized that users would never want to save their credit-card information on Middlepacker.com because of the threat of hackers’ getting access to their valuable personal information. One of the tasks in our research sessions was for participants to go through the shopping-cart flow and, optionally, save their credit-card information. (You shouldn’t prompt the participants to save their credit-card information because you want to observe their behavior when they come across this option.) Some participants opted to save it, while others didn’t. So, when you’re conducting a comparative analysis, you’d be unable to find a consistent theme in the data regarding participants’ underlying motivations.

Perhaps you didn’t ask enough probing questions about why participants didn’t opt to save their credit-card information, so you never got at the reasons behind their decision-making. You can only guess based on your notes and what you observed. If you observed participant body language that suggested concerns about security—or the opposite, complete apathy—you could use this not-so-obvious data in your analysis, especially if the issue is important to the success of the product. Use these insights as an indicator that you need to do more research to see whether the not-so-obvious issues become more obvious.

Identifying Themes: What Didn’t Happen

Can you guess what the what-didn’t-happen theme is? Your stakeholders might have had a hypothesis about something that simply never emerged during the research sessions. Do we call this a failure? Definitely not. Is something that participants didn’t even mention during your research worth mentioning in your final report? Absolutely! It is important that the stakeholders know that the issue never occurred; they can then try to figure out why. You could recommend a course correction based on the research findings. There might be some hidden gold in the dataset that suggests a next step.

Conclusion

Data analysis can be intensive and, at times, tedious work. Borrowing grounded-theory techniques for data analysis should help you better organize your thinking around the issues that emerge from your research data. Recognizing the different levels of themes can help you frame your approach to analysis. Always dig a little deeper until you get down to the essence of the experience.

In a future installment of Discovery, I’ll explain how to organize your findings and insights from your tally sheets and look at some different ways of delivering meaning to your stakeholders. 

Answer to the question about Figure 3: Most participants were not sure what we meant by a middle-packer.

References

Goodman, Elizabeth, Mike Kuniavsky, and Andrea Moed. Observing the User Experience: A Practitioner’s Guide to User Research. Burlington, Massachusetts: Elsevier Science, 2012.

Matavire, Rangarirai, and Irwin Brown. “Investigating the Use of Grounded Theory in Information Systems Research.” Proceedings of the 2008 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT), October 6–8, 2008.
