The Bold Future of UX

Interactions

Our understanding of user interactions with AI systems is still developing. How should someone use AI? Is use even the right term when it comes to AI? Once AI becomes fully realized, we might see complex AI systems that tie together the systems of a home, car, office, appliances, and personal technology gadgets, all of them talking to one another and exchanging information without the user having to do anything.

Think ahead to the future when you’ll have your own personal AI. Our interactions with AI systems might consist of nothing more than offhand comments. We would essentially be interacting with the AI without even knowing that we’re doing so.

For example, if I were making breakfast and muttered to myself, “Almost out of milk,” a strong AI would remind me at an appropriate time and place to buy milk—or simply take the initiative to order a gallon of milk from the automated grocery service in my area and time its delivery for when I’ll be home from work. Or maybe I wouldn’t even need to state that I’m out of milk for the AI to act. Finishing a gallon of milk might be a passive interaction that prompts the AI to take the next logical step and order milk automatically.

In the future, the user would not need to proactively use the AI. Instead, the system would simply pass and parse data behind the scenes.
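To make this idea of passive interaction a little more concrete, here is a minimal sketch in Python of how such an event-driven agent might work. It is purely illustrative: the PantryEvent type, the PassiveAgent class, and the order_grocery stub are hypothetical names, not parts of any real system.

  from dataclasses import dataclass
  from typing import Callable, List

  # A hypothetical household event, emitted by a sensor or appliance rather
  # than by any deliberate action on the user's part.
  @dataclass
  class PantryEvent:
      item: str         # for example, "milk"
      remaining: float  # fraction of the container left, 0.0 to 1.0

  # Stub for an automated grocery service; a real system would call some API here.
  def order_grocery(item: str, quantity: str) -> None:
      print(f"Ordering {quantity} of {item}, timed for delivery after work.")

  # The "AI" in this sketch is just a registry of rules that map passive
  # events to actions, so the user never has to ask for anything explicitly.
  class PassiveAgent:
      def __init__(self) -> None:
          self._rules: List[Callable[[PantryEvent], None]] = []

      def add_rule(self, rule: Callable[[PantryEvent], None]) -> None:
          self._rules.append(rule)

      def observe(self, event: PantryEvent) -> None:
          # Parse incoming data behind the scenes and react if any rule matches.
          for rule in self._rules:
              rule(event)

  def reorder_when_low(event: PantryEvent) -> None:
      if event.item == "milk" and event.remaining <= 0.05:
          order_grocery("milk", "1 gallon")

  agent = PassiveAgent()
  agent.add_rule(reorder_when_low)
  # Finishing the last of the milk is itself the interaction.
  agent.observe(PantryEvent(item="milk", remaining=0.0))

The point of the sketch is that the user never issues a command: emptying the milk container is the event the system reacts to.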

Trust

Trust in AI has recently become a topic of discussion in the technology industry. For people to want to use AI on a regular basis, they need to trust it.

Everyone has probably tried Siri. People’s early, buggy interactions with Siri had a negative impact on trust. The question is: how many people deliberately use Siri today, and how many of the last few times Siri appeared were accidental activations? Our lack of trust in Siri has eroded our perception of the value of voice assistants. It has scared so many users away from voice assistants that most have not even tried Microsoft’s Cortana. Have you tried Cortana, or did you just think, “Eh, it’s like Siri”?

It took a device with an entirely new form factor to get people to try voice assistants again. Alexa sits on a table and has finally encouraged people to give voice assistants (that is, AI-enabled voice devices) a second chance. Luckily, by the time Alexa appeared on the market, the technology had evolved, so it has become more widely accepted and adopted.

Siri and Cortana have also advanced and evolved. But how many people have re-engaged and tried them again? And what determines whether they do? Trust. Trust in AI is created when we ask a question and receive the right answer, when we give the AI a task and it performs that task correctly, when we purchase a product and receive the correct product, and, perhaps most importantly, when an AI keeps our personal information safe.

Once an AI system has achieved these three components of success (context, interaction, and trust), it will be much more likely to hit the mainstream, and AI will become the runaway success that futurists predict it will be. Even if we never fully realize these components or truly deliver on the promise of AI to users, the developers of AI systems should always keep their users in mind. After all, we’re ultimately creating these AI systems for their benefit.

The Singularity and the Future It Will Bring

Futurist Ray Kurzweil believes that we are rapidly approaching the Singularity—the point at which the computing power of technology exceeds the computing power of people. A variety of emerging technologies will fuel this Singularity, including AI, robotics, and nanotechnology.

Once this Singularity arrives, Kurzweil and other like-minded theorists believe that life as we know it today will no longer prevail. Describing this post-Singularity society today, he says, would be as difficult as describing to a caveman how different life would be with bronze tools and agriculture. This future is likely to be radically different. What should we do to help shape our future rather than simply sit back and watch it happen?

If you buy into the whole notion of Kurzweil’s Singularity, how should you design for a future that is predicted to be wildly different from anything we’ve ever known or could now fathom? How would a UX designer apply traditional usability principles such as effectiveness, efficiency, and satisfaction? Or will these principles become relics that we leave by the wayside as radically different interaction models emerge?

Robotics

Now, let’s think about how AI and robotics have the potential to completely flip the paradigm of usability and user experience. The user should not have to learn how to use an AI system. AI is supposed to do the learning: learning our habits and routines and what actions to take in response to whatever happens. There will be a role reversal in which, to use UX research terminology, the user becomes the stimulus and the stimulus becomes the user. The human being would become the stimulus to which the technology learns to react and respond.

A robot is essentially an AI that has a corporeal form. The addition of a physical form creates further challenges, regardless of whether that form is vaguely humanoid. How would users properly interact with a fully autonomous mechanical being that can act on its own? The flip side of this question is just as important: how does a robot interact with the user?

Before we dive into answering these questions, let’s get on the same page about what a robot is. A robot must be able to perform tasks automatically, based on stimuli from either the surrounding environment or another agent (for example, a person, a pet, or another robot). When people think of robots, it’s often of something similar to Honda’s ASIMO or its more recent line of 3E robots. Our definition also includes less conventional robots such as autonomous vehicles and machines that can perform surgery.

A research team at the University of Salzburg has conducted extensive research on human-robot interaction, testing a human-sized robot in public in various situations. One particularly interesting finding is that people prefer robots to approach them from the left or right, not head on.

In San Francisco, a public-facing robot that works at a café knows to double-check how much coffee is left in the coffee machines and gives each cup of coffee a little swirl before handing it to a customer.

UX Design Principles for Robotics and AI

While a robot in Austria that approaches from the left and a robot in San Francisco that swirls a cup of coffee might not seem related, both point to UX design principles that we should keep in mind as public-facing robots become more ubiquitous:

  • A robot should be aware that it is a robot and endeavor to gain the trust of an untrusting public. People’s preferences for robots not to approach them head on and always to remain visible to the user are evidence of a lack of trust.
  • Design a robot knowing that people like to anthropomorphize objects. For example, people prefer the coffee-serving robot to do the same things a barista might do, even if they’re things the robot doesn’t need to do.

As with all design principles, these are likely to evolve. Once robots become more ubiquitous in our lives and people are accustomed to seeing them everywhere, different preferences for the ways in which humans and robots interact may become the norm.

This may already be the case in Japan, where robots have been working in public-facing roles for several years. While anthropomorphic robots are still the dominant type of bot in Japan, there is now a hotel in Tokyo that is staffed entirely by dinosaur robots. The future is now, and it is a weird and wild place. 
