What Is a Multimodal User Interface?
Multimodal user interfaces incorporate images, icons, videos, sounds, and interactive content. Voice-enabled multimodal interfaces include Lenovo’s Smart Display with Google Assistant, which you can see in Figure 3, and Amazon Echo Show, in Figure 4. For clarity, I’ll use the term multimodal throughout the remainder of this article when referring to user interfaces that incorporate both voice and visual elements.
The price points of these products are much higher, ranging from $130 to $250, but their capabilities are also significantly greater. The Lenovo Smart Display lets users watch YouTube cooking videos and view visualizations of their Google Calendar. Google also has plans to sync information from Smart Displays across platforms, including smartphones. For example, if a user asked for directions to a restaurant, he could immediately see the directions on both his Smart Display and his smartphone.
People can use the Amazon Echo Show in much the same way as the Echo Dot, but it also lets users visualize products and talk to family and friends over Skype. These products fill the space between smart speakers and more traditional tablets, notebooks, and desktop computers.
While the Lenovo Smart Display and Amazon Echo Show are currently among the most well-known products, Google has announced that it is partnering with Harman and LG on smart-display products that will be available to customers soon.
While users do not explicitly think of these products as multimodal user interfaces, most are already comfortable using the multimodal user interfaces of smartphones and wearables every day. Through these multimodal smart displays, users can make requests just as they would with a voice-first smart speaker. However, the response they receive will likely be multimodal—comprising both voice and visual responses—for example, showing the user the temperature on the thermostat she is adjusting or the artist performing the song she has requested. Multimodal products are more complex in their design and their content and may well be the future of user interfaces.
Example Use Cases
Today, we have a range of devices that could be voice enabled in the future. Some of these devices might be better suited to voice-first interfaces, with either a simple visual presence or none at all. However, there are many reasons why multimodal user interfaces might eventually lead the market.
Now, let’s consider some example use cases and the rationales behind them. Among these use cases, there is certainly room for voice-first interfaces to shape and diversify product offerings in these sectors. However, there are also many ways to integrate voice into the tools we already have, as well as to conceptualize new tools that would strongly benefit from being voice enabled and fully multimodal. We’ll all be researching, designing, and using these products in the coming decades.
Shopping Use Case
The user wants to buy a pair of running shoes. Most likely, the consumer would prefer a multimodal experience that would let him visualize various models of shoes and compare their prices. However, Tech Crunch reports that only around two percent of Amazon Alexa users currently use the product to purchase items. It will be interesting to see whether this trend changes as more and more voice technology becomes available.
Web-Search Use Case
The user wants to know about a local shop’s store hours. For such a simple, succinct question, using a voice-first tool might be the most efficient way to get the information. However, if the user wants to get follow-up information such as directions, visual content might be useful.
Medical-Device Use Case
The user wants to do a routine check on a prescription. For such habitual actions, it makes sense that the user might not need visuals. Again, if the user’s question is brief and asks for only small amounts of information, a voice-first user interface might be the right fit. However, if the interaction requires long strings of numbers such as insurance information, visual comparison of prescription tablets, or other detailed information, a multimodal user interface might prove more effective.
Transportation Use Case
The user wants to get driving directions to a new café. Multimodal information might be the best fit, allowing the user to both hear and see the name of the road on which to turn. Garmin Speak with Amazon Alexa, a voice-enabled GPS tool, shows basic icons and numbers to help direct drivers. Seeing the intersection on a GPS is important when driving on unfamiliar roads.
Social-Media Use Case
The user wants to talk to a family member or friend in a different city. While texting and phone calls are still commonplace, the rise of FaceTime, Skype, and Google Hangouts indicates that users want to see the faces of the people they care about. Facebook is also betting on multimodal displays with the recent release of Portal, a larger-screen video-calling device that pivots to follow user movement. Although Facebook’s continued catastrophic data breaches might make some users hesitate before bringing this product into their home, this product release demonstrates that the social-media company is investing heavily in multimodal smart displays.
Gaming Use Case
The user wants to play games and is already a fan of such popular games as Candy Crush and World of Warcraft. Some games provide wonderful voice-first experiences—for example, Adva Levin’s award-winning Kids Court Alexa skill—but all users have working-memory and attention-span limitations. Voice-first games must be well structured and paced to prevent the user from becoming lost. What level is the user on? What other cards, jewels, or tools can he use in this round? All of this information might be too much to keep in his head for long durations. Florian Hollandt’s 2018 Medium article discusses games with voice integration, which seems like a logical direction for voice and gaming. As the augmented-reality (AR), voice-enabled game in the 2013 movie Her illustrates, the incredibly enticing ability to talk to games and the characters in them will be essential as multimodal user interfaces advance.
Education Use Case
The user wants to brush up on her knowledge of Spanish, Python, and chemistry terms and theories. While the presentation of content and quizzing might help her to memorize specific facts, a voice-first interaction would make visualizing concepts and connecting them to other ideas somewhat trickier. Does that new word tecnología have an h in it? Which letter is the accent on? Did I name the variable in my code correctly? Which lines need debugging? What does the ring structure of methyl benzoate look like? While a voice assistant might be able to support the user adequately by providing verbal answers to some of these questions, the ability to see much of this information would be key to solidifying learning.
Research on Voice User Experiences
My recent UX research has focused on asking a young, affluent demographic about their experiences with multimodal and voice-first products. While most participants had heard of voice-first products by name, a whopping 20% reported never having used them at all. In this study, 134 participants tested a new multimodal product over the course of five weeks. Participants reported feeling 24% more comfortable using the multimodal product than using voice-first products. That huge difference in their comfort levels could have direct implications for their purchasing choices.
In interviews, participants reported that some voice-first products felt “creepy” and were frustrating because the products often had significant difficulty understanding their speech. While many users have touted recent advances in speech-recognition systems that better understand their speech, poor usability is still an often-cited user-experience issue, as 2018 reports from AnswerLab and the Nielsen Norman Group indicate.
While we must conduct more studies and more broadly disseminate what we’ve learned about users’ experiences with voice user interfaces, conclusions from my study point to concrete reasons why multimodal user interfaces might gain momentum and become dominant. The most successful, market-leading products will be those that customers feel comfortable bringing into their homes and using regularly.
Conclusions and the Future
In this article, I’ve made a case for why you might want to consider keeping your product’s visual features and why voice-enabled multimodal user interfaces might become users’ preferred user interfaces in the coming years. Of course, predictions have ample room for error, and many might completely disagree with the examples and arguments in this article.
This debate will help shape the future of human-computer interactions and how they will change in the coming decade. Over time, we’ll learn what users will adapt to as the transition to a voice-enabled world continues. There may be a few product blunders and comical missteps along the way. Think back to the case of the wooden horse head that was designed to be affixed to the front of a car to help smooth the transition between buggies and cars.
Whenever there are big advances in technology, the key is to keep a finger on the pulse of trends in user experience and how they affect the purchase, adoption, and retention of new tools.
1. Thompson, Graham, and Ashok Ganesan. “Communication System Architecture for Voice First Collaboration.” Google Patents, December 23, 1999. Retrieved October 16, 2018.
2. Constine, Josh. “Say ‘Aloha’: A Closer Look at Facebook’s Voice Ambitions: Code Reveals Unlaunched Speech Recognition Feature.” Tech Crunch, August 21, 2018. Retrieved August 22, 2018.
3. NPR and Edison Group. “The Smart Audio Report.” NPR Press Room, January 10, 2018. Retrieved March 10, 2018.
4. Kinsella, Bret. “Gartner Predicts 75% of US Households Will Have Smart Speakers by 2020.” Voicebot.ai, April 14, 2017. Retrieved September 4, 2018.
5. World Bank. “Colombia.” The World Bank Group, 2018. Retrieved August 22, 2018.
6. Snips. “Snips.” Snips, 2018. Retrieved August 22, 2018.
7. Google. “Platforms Displays.” Google, 2018. Retrieved August 2, 2018.
8. Coldewey, Devin. “Surprise, No One Buys Things via Alexa.” Tech Crunch, August 6, 2018. Retrieved August 6, 2018.
9. Garmin. “Garmin Speak with Amazon Alexa.” Garmin Ltd., 2018. Retrieved October 17, 2018.
10. Facebook. “Portal.” Facebook, October 8, 2018. Retrieved October 17, 2018.
11. Levin, Adva. “Kids Court.” Pretzel Labs, 2018. Retrieved October 17, 2018.
12. Hollandt, Florian. “Multi-modality in #VoiceGames.” Medium, August 12, 2018. Retrieved September 4, 2018.
13. Chowdhry, Amy Buckner, and Christopher Geison. “Embracing Voice UX: 7 Steps to Building Great Experiences.” UXmatters, April 9, 2018. Retrieved August 2, 2018.
14. Budiu, Raluca, and Page Laubheimer. “Intelligent Assistants Have Poor Usability: A User Study of Alexa, Google Assistant, and Siri.” Nielsen Norman Group, July 22, 2018. Retrieved September 4, 2018.
15. Davies, Alex. “Well, That Didn’t Work: The 1899 Car With a Full-Size Wooden Horse Head Stuck to the Front.” WIRED, February 10, 2015. Retrieved September 2, 2018.