In the scenario shown in Figure 2, the participant clicked the View more link first, then the Share button, then tried to click the album cover. The numbers denote the sequence of actions, while the caret symbol indicates the interaction type. Combining these two annotations saves space and time! Plus, having fewer symbols to remember helps you to focus on the participant’s behavior.
Often, during early-phase concept testing, stakeholders want to discover what types of tasks users need to perform with the system. They might have a rough but uncertain idea about how people would use a product. Using the language of business analysts and product owners, we came up with the use-case annotation.
According to usability.gov, a use case describes how a user performs tasks using a Web site or application. It also describes how a system responds to a user’s requests. The use case describes the context of use at a very basic level. When I was a business analyst, the use case formed the basis for the functional requirements that I provided to the engineers who were building my organization’s systems. Use cases comprise three main components:
- Preconditions for successfully completing the task
- The task the user is performing
- Expected feedback from the system
In the VDC method, the use-case annotation corresponds to the second of these components—the task the user is performing.
Let’s say your stakeholders are interested in learning the circumstances under which their customers—buyers and sellers of records—would want to explore a brand new band. Here is a description of the user’s context and a snippet of dialogue from a UX-research session during which this insight emerged:
Scenario: The participant is on the artist detail page for a favorite band, looking at the Discover similar artists feature.
Moderator: What would cause you to want to look for new artists?
Participant: As a DJ, I’m always looking for new artists—especially those in the same vein as some of my current faves. These serve as potential candidates for getting some playtime at upcoming gigs!
In this scenario, the annotation might look similar to that shown in Figure 3.
For a more detailed scenario, which typically includes a narrative that the participant provides, use an anecdote annotation. The biggest difference between the use-case and anecdote annotations is the level of detail they capture. Anecdotes typically include much more detail, while use cases provide only basic, high-level information. A use case is sufficiently broad that you can more easily recontextualize it, while an anecdote is already highly contextualized. Let’s look at a scenario snippet that depicts the use of an anecdote annotation:
Scenario: The participant is on the artist detail page for a favorite band, looking at the Discover similar artists feature.
Moderator: What would cause you to want to look for new artists?
Participant: DJ’ing is a part-time gig and doesn’t really pay the bills, so I have very limited time to look for new jams. In fact, my fans have been tweeting that I haven’t played much new music lately—just the same old tracks. This feature would save me time and help me to improve my reputation!
Figure 4 shows the use of an anecdote annotation.
Capturing anecdote annotations lets you easily identify stories within your research that could influence decisions that your team might potentially make during the design process. Keep in mind that, while this annotation puts a face on the user, it does not necessarily attempt to quantify users in any large-scale, statistically significant manner. But it helps you to provide a good answer when stakeholders question your insights: “How do you know? Prove it!” Easy. The proof is in the richly detailed stories of your participants—in the anecdotes!
Points of Confusion
When participants don’t understand a particular element in a design, it is useful to capture their reactions—especially if the same elements confuse many participants. For example, there might be a button label that participants find confusing. The confusion annotation consists of a question mark and a line that connects the question mark and the confusing element. Figure 5 shows an example of its use. In this scenario, the participant became confused about why a section with the heading You might also like… would include advertising. Such insights could help you to significantly improve the usability of the site’s content.
Elements Participants Understand and Don’t Understand
Another type of data that stakeholders find of interest is knowing whether participants understand specific aspects of a design. For example, you might be trying out an unconventional design pattern that your team is uncertain about. Would participants understand that specific aspect of the design or might you need to rethink it?
Let’s say your product team has been receiving reports from customer service about people uploading copyrighted content to the music site. The product team wants to encourage customers to be vigilant about reporting content that doesn’t belong on EarWorm.com. So they’ve decided to add a feature that lets customers report such incidents, and they want to learn whether participants understand what the Report button does. The understood / not understood annotation lets you capture such insights. In the scenario shown in Figure 6, the participant did not know exactly what he was reporting. This insight suggests that it might be necessary to add more information on the page to better convey how customers can report incidents.
Even if recording user-interface ideas that participants generate is not one of your specific research goals, you still might want to record them, especially those that come from paying customers! Particularly during early-phase concept testing, you might get an earful of ideas from participants. Fortunately, there is an annotation for these. No, it’s not a light bulb, but it looks kind of like one. Use an exclamation point to convey an idea that comes from a participant: an Aha! moment.
When “crazy” ideas come early in the product-development lifecycle, before development costs have materialized, stakeholders are more likely to consider them than the issues you identify later on during usability evaluations, when resolving them would cause development costs to escalate dramatically. Capture the idea data that you collect in a feed, then assess this data to determine whether the ideas represent a potential goldmine. The scenario shown in Figure 7 provides an example of how to use the idea annotation. In this case, a participant described wanting to see album reviews to help make purchase decisions.
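If you later digitize your handwritten notes, the feed can be as simple as a chronological list of structured records that you can filter during analysis. Here is a minimal sketch in Python; all of the names (Annotation, feed, the kind values) are hypothetical illustrations, as the method prescribes no particular tooling:

```python
from dataclasses import dataclass

# A hypothetical record for one annotation captured during a session.
@dataclass
class Annotation:
    participant: str  # participant identifier, for example "P3"
    kind: str         # annotation type: "idea", "anecdote", "confusion", ...
    element: str      # the design element the annotation refers to
    note: str         # the researcher's shorthand note

# The "feed" is just a chronological list of annotations.
feed: list[Annotation] = []
feed.append(Annotation("P3", "idea", "album detail page",
                       "wants album reviews to inform purchase decisions"))
feed.append(Annotation("P3", "confusion", "You might also like section",
                       "unclear why it includes advertising"))

# Later, filter the feed to assess all the ideas at once.
ideas = [a for a in feed if a.kind == "idea"]
print(len(ideas))  # 1
```

Keeping each record small makes it easy to tally how many participants generated ideas versus confusion around the same element.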
Here are some additional insights and considerations to help you get started using annotations in your research projects:
- Be economical in selecting annotations, and choose them wisely. Your first reaction to these strange symbols might be: Wow! How am I going to use all of these? Don’t let a flood of too many symbols overwhelm you. It’s important that you choose a limited set of annotations, neither too many nor too few. Think about your research questions and what annotations might be useful in answering them. For example, if stakeholders want to know whether something is missing from the prototype, it would probably make sense to include the idea annotation on your cheat sheet. On the other hand, if they want to know whether participants understand a particular aspect of the design, you should absolutely include the understood / not understood annotation. There is no need to boil the ocean and include every possible symbol you might conceivably use in your study.
- Practice. Practice. Practice. When you begin using this method, you might ask yourself: What if I apply the wrong symbol to a piece of data I’m collecting? You probably won’t get everything right the first time. The only way to become more consistent and disciplined in your notetaking is through practice. Integrate this notetaking approach into your upcoming studies, perhaps starting with a pilot study for which the risks of getting things wrong are lower. Commit to trying this method at least a few times to see how it works for you. Give this approach a fair shot. Don’t give up on using it after the first participant in your first study. Make another notetaker the official record keeper while you practice using this new method. Figure out what works and what doesn’t. Don’t be afraid to swap out certain annotations for other new or different ones. Use the annotations that work best for you. Print out your cheat sheet and always bring it with you to studies, keeping it next to you on the opposite side from the participant. It’s for your eyes only!
- With practice comes proficiency, and recalling the annotations becomes automatic. Don’t worry if you initially have difficulty remembering all of the annotations. Once you’ve used these conventions for a while, you’ll no longer need your cheat sheet because you’ll have committed them to memory. Ultimately, the cheat sheet becomes an artifact from your study. File it away for future reference, in case you do a similar study.
- You’ll likely discover many other useful types of annotations. After a few studies using the annotations, you might think there are no more to discover, but this is simply not the case. Your research questions change from study to study, so you’ll probably find the need to create new types of annotations. For example, the anecdote annotation came from hearing participants’ great stories describing circumstances of product need and use during exploratory research. The annotations we’d previously used did not really capture these rich details. So look for unique instances in your research data that might warrant your creating a new annotation.
Are you interested in sharing these annotations with your colleagues? What unique annotations are you using in your UX research? Please describe them in the comments and share them with me and the rest of the UX research community!
In a future Discovery column, I’ll discuss how to put all of these annotations to work during the analysis of research data. Stay tuned!