A Design Language for Touch, Gesture, and Motion

Showing Areas of Interaction

Communicating which areas users can click or tap is often problematic. For some reason, many developers default to making only words or icons clickable rather than entire buttons, boxes, or rows. Tap targets always need all the space they can get, so the best way to make sure they get coded properly is to specify them visually.

Yes, my written specifications also say things such as, “Tapping anywhere on a row loads a details page…,” but developers often don’t read or follow those instructions. Box diagram overlays are clear, but they interfere with viewing a visual design, so I often add brackets to the side of or below an element to indicate that a whole area is clickable, as Figure 3 shows.

Figure 3—Brackets indicating two different clickable areas
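
To make the intent concrete, here is a minimal TypeScript sketch of what honoring such a specification might look like in code: the click handler goes on the whole row, not just its label. The .result-row class, the data-id attribute, and the /details/ route are hypothetical, for illustration only.

```typescript
// Minimal sketch: make the entire row the tap target, not just the label.
// The "result-row" class, data-id attribute, and /details/ route are
// assumptions for illustration, not part of any real design.

function wireRowTapTargets(container: HTMLElement): void {
  container.querySelectorAll<HTMLElement>(".result-row").forEach((row) => {
    // Attaching the handler to the row itself means a tap anywhere on the
    // row—label, icon, or surrounding padding—loads the details page.
    row.addEventListener("click", () => {
      const id = row.dataset.id;
      if (id) {
        window.location.href = `/details/${id}`; // hypothetical route
      }
    });
    // Signal the larger hit area to assistive technology and the cursor.
    row.setAttribute("role", "link");
    row.style.cursor = "pointer";
  });
}
```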

It’s often necessary to explain how a screen functions because only part of the user interface scrolls or moves. In fact, almost everything I design now has a fixed header and a fixed footer or chyron.

The old-school, Web-design viewpoint of pages scrolling within a viewport is so prevalent that I must often very carefully indicate what part of a page scrolls and how. At least once in every product-design system or project style guide, it’s important to provide a diagram similar to that shown in Figure 4 to make scrolling clear.

Figure 4—Indicating only part of a screen scrolls
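
As one possible web-side reading of such a diagram, the sketch below pins the header and footer and lets only the content region scroll. The class names are assumptions, not part of any real design.

```typescript
// Illustrative sketch: pin the header and footer and let only the content
// region scroll. The class names ("app-header", "app-content", "app-footer")
// are assumptions.

function applyScrollRegions(): void {
  const header = document.querySelector<HTMLElement>(".app-header");
  const content = document.querySelector<HTMLElement>(".app-content");
  const footer = document.querySelector<HTMLElement>(".app-footer");
  if (!header || !content || !footer) return;

  // Fixed chrome: header at the top, footer (chyron) at the bottom.
  Object.assign(header.style, { position: "fixed", top: "0", left: "0", right: "0" });
  Object.assign(footer.style, { position: "fixed", bottom: "0", left: "0", right: "0" });

  // Only the content area scrolls, inset by the height of the fixed chrome.
  Object.assign(content.style, {
    position: "absolute",
    top: `${header.offsetHeight}px`,
    bottom: `${footer.offsetHeight}px`,
    left: "0",
    right: "0",
    overflowY: "auto",
  });
}
```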

Indicating Gestural Interactions

Did you notice the little orange dot under the fingertip in Figure 2? It’s there because the finger isn’t the only thing the diagram represents; it also indicates what the interaction is.

Over time, I’ve developed a visual language for gestural interactions in specifications. Its notations might indicate, for example, the difference between a tap and a press, or the direction of a drag or rotation. The chart in Figure 5 shows the key components of this visual language.

Figure 5—Design language for gestures

For this gestural language, the hand is the key orienting element. Dots represent a touch; ringed dots, a press or touch-and-hold interaction. Arrows show the direction of movement—or the available directions of movement—for scrolling, dragging, rotating, or whatever other action is occurring.
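
These notations map fairly directly onto input handling. The rough sketch below distinguishes a tap (plain dot) from a press, or touch-and-hold (ringed dot), using pointer events; the 500 ms threshold is an assumption for illustration only.

```typescript
// Rough sketch of the tap-versus-press distinction the gestural language
// encodes: a plain dot (tap) fires on quick release; a ringed dot (press,
// or touch-and-hold) fires once the pointer stays down past a threshold.
// The 500 ms value is an assumption, not a specified number.

const PRESS_THRESHOLD_MS = 500;

function onTapOrPress(
  target: HTMLElement,
  onTap: () => void,
  onPress: () => void
): void {
  let pressTimer: number | undefined;
  let pressed = false;

  target.addEventListener("pointerdown", () => {
    pressed = false;
    pressTimer = window.setTimeout(() => {
      pressed = true;
      onPress(); // held long enough: treat as a press
    }, PRESS_THRESHOLD_MS);
  });

  target.addEventListener("pointerup", () => {
    window.clearTimeout(pressTimer);
    if (!pressed) onTap(); // released quickly: treat as a tap
  });

  target.addEventListener("pointercancel", () => window.clearTimeout(pressTimer));
}
```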

Depicting Motion

Another key thing you need to depict is the motion of elements within the user interface. If any elements move on their own, you must specify where they move and how.

For example, while carousels are annoying and ineffective, they are very common. Figure 6 shows an example of interactions with a carousel.

Figure 6—Annotations for selection and animation

Simply showing that an element animates by providing an annotation outside the design, as shown in Figure 6, is usually enough detail. But to ensure that nobody interprets the direction arrow as an action path, enclose it in a box, diagrammatically illustrating that one banner moves to display another.

Figure 7—Diagram detailing a banner’s animated movement

Note that Figure 7 also shows a selection diagram, with a bracketed area indicating that the user can click or tap the banner to take an action. If the user also has manual control of a carousel, you should show those tappable areas or swipe interactions as well.

Communicating additional details can sometimes be important. For example, you might need to detail the animation’s movement both in a diagram and in your written specifications, as Figure 7 shows. You can expand on the diagram by showing the movement and labeling each phase of it. The separate green line plots animation speed on the vertical axis.

While this diagram shows time on the horizontal axis, it is rarely worthwhile to represent time to scale because that would be hard to depict properly. The most critical phases of an interaction often have very short timeframes, so they could get lost.
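
For developers, a speed curve like the green line usually translates into an easing function. The sketch below is one possible interpretation: a banner that eases in and out over an assumed 400 ms rather than moving at constant speed. Both the duration and the curve are assumptions for illustration.

```typescript
// Sketch of the kind of movement a speed curve describes: the banner eases
// in and out rather than sliding at constant speed. The 400 ms duration and
// the ease-in-out shape are assumptions, not values from the specification.

function easeInOut(t: number): number {
  // Speed (the slope of this curve) is low at the start and end and highest
  // in the middle—the shape a "speed versus time" line conveys.
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

function slideBanner(banner: HTMLElement, distancePx: number, durationMs = 400): void {
  const start = performance.now();

  function frame(now: number): void {
    const t = Math.min((now - start) / durationMs, 1); // normalized time, 0..1
    banner.style.transform = `translateX(${-distancePx * easeInOut(t)}px)`;
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```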

Showing Off-Screen Interactions

I leverage the principle of off-screen design elements for all of my specifications documents. The UI layer—the screen design—is only part of the overall product design. It must function as part of an integrated, well-considered system.

A few of the iconic representations of elements outside an app or Web UI that I commonly use include the following:

  • share intents
  • email or SMS
  • cross link to another platform—for example, for an app, the Web
  • camera
  • settings
  • delay
  • process complete
  • sound
  • haptics, or vibration
  • LED annunciator

Figure 8 shows a few examples of these iconic representations.

Figure 8—Diagram showing off-screen behaviors

Employ best practices for the use of icons, reinforcing all icons with text labels. Specifications documents need as much design care as actual user interfaces.
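
A couple of the off-screen behaviors listed above, the share intent and haptic feedback, have direct web-platform counterparts. The sketch below shows one way they might be wired up; both calls are feature-detected because support varies by browser and device.

```typescript
// Hedged sketch: two off-screen behaviors expressed with standard web APIs.

async function shareResult(title: string, url: string): Promise<void> {
  if (navigator.share) {
    // Hands the content to the platform's native share sheet—the "share
    // intent" notation in the specification.
    await navigator.share({ title, url });
  } else {
    console.warn("Web Share API not available; fall back to a copy-link UI.");
  }
}

function confirmWithHaptics(): void {
  if ("vibrate" in navigator) {
    // A short pulse standing in for the "haptics, or vibration" notation.
    navigator.vibrate(50);
  }
}
```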

Designing for Hardware

Hardware is even harder to prototype than software—especially when integrating software prototypes with hardware. Therefore, design specifications are critical to joining the two sides of a product.

Because hardware can take so many different forms and perform so many functions, there is an almost infinite number of possible specifications for interactions with hardware. Let’s look at three interesting examples of specifications that I create regularly.

Buttons

Early in my career, I wrote a lot of specifications detailing how off-screen button pushes affect on-screen behaviors. However, after years of doing touchscreen work, I’ve observed that certain devices are now making a comeback, and with better designs, so buttons on hardware matter again.

Showing a device with a finger on a button is helpful in orienting the project team, but specifying the actions that hardware buttons initiate on a screen is hard. Figure 9 shows an example of a button-driven process flow.

Figure 9—Snippet from a diagram of a button-driven process

Note how, instead of showing the user interacting directly with the device, the diagram shows the button itself as initiating a process function. In the case shown in Figure 9, there were relatively few, very clear buttons, so I could show them individually and depict their direct interactions. But for some five-way pads that don’t have such clear labeling, I instead provide different representative diagrams of the entire direction pad, with the relevant portion highlighted.
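
In code, a button-driven process flow like this often reduces to a mapping from hardware key events to process functions. The sketch below is purely illustrative; the key names and step functions are assumptions, not anything from the actual project.

```typescript
// Illustrative sketch of a button-driven process: each hardware button
// initiates a process function rather than the user touching anything on
// screen. Key names and step functions are assumptions.

type ProcessStep = () => void;

const buttonActions: Record<string, ProcessStep> = {
  Enter: () => startMeasurement(),   // hypothetical "Start" button
  Escape: () => cancelMeasurement(), // hypothetical "Cancel" button
};

function startMeasurement(): void {
  console.log("Measurement started");
}

function cancelMeasurement(): void {
  console.log("Measurement cancelled");
}

window.addEventListener("keydown", (event: KeyboardEvent) => {
  const action = buttonActions[event.key];
  if (action) {
    event.preventDefault(); // the button drives the process, not the page
    action();
  }
});
```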

Blinking

In many industries, blinking is a very common signaling technique for warning and annunciator lights. However, a change in technology has undermined the logic behind using blinking lights as warnings.

Back when basically all annunciator lights were incandescent, they had significant start and stop times. The filament took a visible amount of time to power on, then to go dark after power was removed. For a simple blink circuit, applying and cutting power would not make the light snap on and off; instead, the light would slowly build to full power, then gradually drop off, so it pulsed between off and on.

LEDs, on the other hand, turn on and off almost instantly, and when the blink cycle is off, the light is completely off. A problem I have encountered many times is that people glance at a panel or the top of their phone between blink cycles, so they either miss a bright, blinking light entirely or catch a blink out of the corner of their eye, look at the light, and see only that it is off. To avoid this problem, you should specify blink behavior, as shown in Figure 10.

Figure 10—Diagram specifying the blink-rate behavior for an LED

Because power ramps up and down gradually, the light is never completely off during a blink, so users can see it no matter when in the cycle they look at it. This behavior is complex enough that written specifications alone never work, so a diagram similar to this one is necessary to orient everyone to the proper behavior.
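
If the LED’s brightness is controllable in software, one way to express the specified behavior is to compute brightness as a continuous function of time within the blink cycle rather than toggling the light. All the timings and the minimum brightness level in the sketch below are assumptions for illustration; the real values belong in the specification diagram.

```typescript
// Sketch of a ramped blink: brightness rises, holds, falls, and rests at a
// visible floor instead of switching hard between on and off. All numbers
// here are assumptions, not values from Figure 10.

interface BlinkProfile {
  rampUpMs: number;   // time to fade up to full brightness
  holdMs: number;     // time at full brightness
  rampDownMs: number; // time to fade back down
  restMs: number;     // time at the floor brightness
  minLevel: number;   // floor brightness, 0..1, so the light never disappears
}

// Brightness (0..1) at a given time within one blink cycle.
function brightnessAt(tMs: number, p: BlinkProfile): number {
  const cycle = p.rampUpMs + p.holdMs + p.rampDownMs + p.restMs;
  const t = tMs % cycle;
  if (t < p.rampUpMs) {
    return p.minLevel + (1 - p.minLevel) * (t / p.rampUpMs); // ramping up
  }
  if (t < p.rampUpMs + p.holdMs) {
    return 1; // full on
  }
  if (t < p.rampUpMs + p.holdMs + p.rampDownMs) {
    const dt = t - p.rampUpMs - p.holdMs;
    return 1 - (1 - p.minLevel) * (dt / p.rampDownMs); // ramping down
  }
  return p.minLevel; // resting, but still visible
}

// Example: a one-second cycle that dips to 20% rather than fully off.
const warning: BlinkProfile = { rampUpMs: 250, holdMs: 250, rampDownMs: 250, restMs: 250, minLevel: 0.2 };
console.log(brightnessAt(600, warning)); // partway through the ramp-down
```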

Kinesthetics

So far, I’ve assumed that a mobile device is relatively fixed in space or that any movement is irrelevant, but this is often not the case. In fact, movement can sometimes be critical to understanding what the user is doing because the system performs actions based on that movement. Figure 11 shows a series of device-movement behaviors and how I depict them by extending the hand-and-arrows gestural language that I use for on-screen interactions.

Figure 11—Gestural design language for mobile-device movements
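
When the system acts on device movement, the specification usually ends up describing something like the sketch below: a simple shake detector built on accelerometer data from the devicemotion event. The threshold and cooldown values are assumptions for illustration only.

```typescript
// Rough sketch of a system acting on device movement: a shake detector.
// The 20 m/s² threshold and 1 s cooldown are assumptions, not specified values.

const SHAKE_THRESHOLD = 20; // m/s², excluding gravity
const SHAKE_COOLDOWN_MS = 1000;
let lastShake = 0;

function onShake(handler: () => void): void {
  window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
    const a = event.acceleration;
    if (!a || a.x == null || a.y == null || a.z == null) return;

    const magnitude = Math.hypot(a.x, a.y, a.z);
    const now = Date.now();
    if (magnitude > SHAKE_THRESHOLD && now - lastShake > SHAKE_COOLDOWN_MS) {
      lastShake = now;
      handler(); // for example, undo the last action or refresh the view
    }
  });
}

onShake(() => console.log("Shake detected"));
```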

Conclusion

The creation of design artifacts such as prototypes and specifications should encompass common, easily understood methods that go beyond the traditional wireframe. Our design documents must communicate not just the look, but also the feel of the entire designed system. You can easily accomplish this by designing arrows, labels, lines, and icons that represent behaviors and placing them adjacent to the UI design layer.

Doing this makes creating design artifacts less about drawing user-interface comps or wireframes and places greater emphasis on written specifications that describe the user experience of the entire product or system. 

Resources

Hoober, Steven. “Adaptive Information Design and the Box Diagram.” UXmatters, January 7, 2019. Retrieved April 22, 2019.

—— “Paging, Scrolling, and Infinite Scroll.” UXmatters, November 5, 2018. Retrieved April 22, 2019.

—— “Cascading UX Specifications.” UXmatters, January 8, 2018. Retrieved April 22, 2019.

—— “Tools for Mobile UX Design.” UXmatters, June 17, 2013. Retrieved April 22, 2019.

Cook, Daniel. “Creating a System of Game Play Notation.” Lost Garden, January 15, 2006. Retrieved April 22, 2019.

Koster, Raph. “An Atomic Theory of Fun Game Design.” RaphKoster.com, January 24, 2012. Retrieved April 22, 2019.

Adams, Ernest, and Joris Dormans. “Machinations: A New Way to Design Game Mechanics.” Gamasutra, August 16, 2012. Retrieved April 22, 2019.
