Based on these exploratory interviews and a competitive-analysis exercise, the Design team and I determined the must-have features for our MVP. The Design team put together some initial mockups, then we prepared for a round of concept testing to see whether we were on the right track. After the research, I compiled the key findings and recommendations and worked with the team to determine what changes we should make.
We seemed to be making real progress, so I wanted to share our process and success more widely. I put together a slide deck chronicling our research and design process, showing the designs before and after research and highlighting the key changes we had made based on user feedback. I presented this case study at a company-wide, Friday-lunch talk, and it was a great moment: I finally felt that people had begun to understand research and its value more clearly.
Pivoting from general, foundational research to targeted concept testing proved a better approach for demonstrating quick, concrete research wins. People needed a basic understanding of user research, which I could then build on over time. I needed to dispel some of their preliminary skepticism and capitalize on their early enthusiasm.
In addition to strategically shifting my research focus and projects, I revised my recruiting efforts based on what I observed. Initially, I had asked Sales, Support, and Product to recommend research participants for me. However, my requests often went unanswered. Since this method wasn’t working, I decided to try a scrappier approach. I had learned that associates logged internal call reports detailing any calls with clients, so I began scouring them. I created a spreadsheet and recorded promising contacts’ names, as well as the products they used. Then, whenever I needed to run a study, I could search for relevant prospects and ask internal stakeholders whether certain clients would be a good fit.
Additionally, I started running internal pilots for my research studies. These pilots let me collect valuable feedback, tweak my script and questions, and build relationships across teams, while giving colleagues full transparency into the research process and first-hand experience of its value. After participating in pilot studies, internal associates are much more willing to recommend clients to contact about participating in our research.
I also started creating email templates for each study, so internal associates can easily reach out to clients. These templates include the product we’ll focus on, details about what to expect during the research session, and the benefits of participating. This approach has proven much more successful because internal associates are happy to help once I’ve done most of the legwork, laid out the details for them, and reduced their barriers to action. Over time, with thoughtful reflection and tweaks to my process, I’ve earned my teammates’ trust and greatly improved my recruiting success.
Along with iterating on my research topics and recruiting strategies, I soon realized I needed to adapt how I presented my findings. In my previous role, we had compiled a research report and held formal readout sessions to discuss the findings and next steps. This approach proved too formal and slow for the rapid pace of my new company.
Instead, I decided to try compiling a summary document after each research study, including the key findings and recommendations. This worked much better than a formal report, but I wondered how I could spur more discussion around the findings. I was lucky to have engaged stakeholders, so I decided to hold a quick debrief with the team after each research session. These proved effective and let us discuss what we had heard from participants while it was still fresh in our minds. Then, after we had completed all of the sessions, I synthesized our debrief discussions and users’ overall feedback, compiling the key issues and action items that we needed to address in the summary document. Using Google Docs let me easily share the summary document and enabled real-time commenting and collaboration, which was necessary to sustain our fast pace.
I found that the post-interview debriefs and summary document worked well with highly engaged product teams, but I needed a different process for teams that were stretched thin and couldn’t attend all of my sessions. So I decided to try inputting comments directly into Figma, the design and prototyping tool we were using. This proved to be a great way to gather our notes, findings, and design suggestions in context. Design, Product, and Engineering all had full visibility into Figma and could read and respond to these comments. These in-context comments offered the added benefit of providing reminders, so my findings wouldn’t get lost or be ignored.
In moving from formal reports, to a key-findings summary document, to in-context insights and recommendations, I experimented with different ideas, gathered feedback on what worked, and revised my approach accordingly.
In UX research, we collaborate with product teams to iterate on product designs and implementations based on user feedback. So I decided to apply the investigative, iterative, and adaptive skills I have learned as a UX researcher to develop a brand-new UX-research practice. To do this, I needed to understand this particular company’s core needs and culture and develop responsive, effective processes and practices. Experimentation, observation, and openness to change have been crucial to my success. But I’m only just beginning. Learning what works best is a constant process, and I’m continually iterating on and improving my user-research practice.