Designing a Practical UI for a Gesture-Based Interface

Co-authored by: Alona Lerman, Shachar Oz, and Yaron Yanai

Part 1: The Evolution of the Arc Menu

In the first article of the series, Omek UX Studio’s Creative Director Yaron Yanai and Lead Designer Shachar Oz talk about designing the application’s menu system called the Arc Menu.

Jump directly to Part 2, which covers how we used Augmented Reality to provide “feedback” to users.

The final product: a virtual bookshelf you can interact with using your hands

It is the mission of Omek’s UX Studio to explore new ways of interacting with computers using gesture and motion control. UX Studio team members research user experiences as the technology is being developed in order to inspire developers and create better tools for using this exciting new means of control.

For CES 2013 we decided to create a demo that shows off a 3D content browser: essentially, being able to view your books, music and pictures as three-dimensional models. You can “pick them up”, look at them from all sides, open them and compare them – all using natural gestures, with no teaching required. The demo we created applies this to one’s own library of books, but it can easily be extended to an online retail application, giving customers the ability to “try out” products virtually, right in their own home: they can examine actual items, compare them with similar ones, and then purchase them.

The “Practical UI”, as we’ve called it, was built from the ground up with the intention of deploying gesture recognition to control every aspect of the app. The ease of use of the tool is the result of several months of development and user testing.

Making Sense of Gestures

At the start of designing the application, we defined several interactions we planned to create:

  1. Menu Navigation and Selection: navigating between different collections
  2. Collection Navigation: navigating in 3D once inside a collection, i.e., panning and zooming
  3. Object Selection and Manipulation: picking up an object, pulling it closer to see more details, rotating it in 3D to view it from every angle
  4. Object Comparison: picking up two objects, one in each hand, and comparing them visually by rotating each of them
  5. Player: looking inside an object, i.e., opening a book, playing a record

The Challenge

Create an application with the functionality to perform all of the gestures and interactions listed above in an intuitive and comfortable fashion. Ensure that we are always providing a responsive, engaging, convenient, and most of all, fun, experience. And finally, showcase the potential of gesture recognition to enable a more dynamic interface with better control of three dimensional objects in a virtual world.

When we began the design process, we leveraged the Studio’s extensive experience building gesture-based applications to avoid a few of the classic pitfalls of translating standard mouse + keyboard or touch paradigms to a 3D environment.

  • Boundaries: The user must always know whether or not she is being tracked, i.e., being “seen” by the camera
  • Location feedback: Provide constant feedback on where the user is located so she knows how and where to move in order to reach her desired selection.
    • One simple option is to place a cursor on the screen the way a regular mouse does, and have it follow the user’s finger. This method has a lot of disadvantages, however, because it requires the user to be very accurate, leading to increased fatigue and frustration.
  • Item selection: How does a user make a selection? There are several methods for selection using gestures, but a simple “click” is not one of them since there’s no button to click.
  • Fatigue, responsiveness and accuracy: We found that these are closely tied together, and fatigue, one of the major drawbacks of this technology, is probably the problem we spent the most time on.

This first article in our series touches on our approach to designing an engaging, easy-to-use menu system we’ve called the Arc Menu.

The Menu System

The first challenges we addressed were Boundaries and Location Feedback. Often, boundaries are defined by a small “live feed” frame at the bottom of the screen and an “out of boundaries” alert when the user approaches the edge of the camera’s field of view.

This time, however, we decided to try something new: we stretched the camera’s feed (the depth data) to fill the entire screen, so that it became the background of the entire application. This way the user receives real-time feedback on whether his hand is inside the camera’s field of view; the application acts as a mirror of the user’s hands. When we tested it out, people’s reactions were very positive. We ended up liking it so much that we let it shape many other aspects of the application.

Figure 1: Stretching the live feed over the entire screen
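To make the “mirror” idea concrete, here is a minimal sketch (in Python, using NumPy and OpenCV, neither of which was part of the original demo) of how a low-resolution depth frame might be normalized and stretched to fill the screen as a live background. The depth_frame input is assumed to come from whatever camera SDK is in use.

```python
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080  # target display resolution (assumed)

def depth_to_background(depth_frame: np.ndarray) -> np.ndarray:
    """Turn a raw depth frame into a full-screen background image.

    depth_frame: 2D array of per-pixel depth readings, 0 = no reading.
    Returns an 8-bit grayscale image stretched to the screen resolution,
    so the user's hands appear as a live "mirror" behind the UI.
    """
    norm = np.zeros(depth_frame.shape, dtype=np.uint8)
    valid = depth_frame > 0
    if valid.any():
        d = depth_frame.astype(np.float32)
        near, far = d[valid].min(), d[valid].max()
        # Closer surfaces (the hands) are drawn brighter than the background.
        norm[valid] = (255 * (1.0 - (d[valid] - near) / max(far - near, 1.0))).astype(np.uint8)

    # Stretch the (typically low-resolution) camera feed over the whole screen.
    return cv2.resize(norm, (SCREEN_W, SCREEN_H), interpolation=cv2.INTER_LINEAR)
```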

We decided to give the application an augmented reality feel, where the hand itself becomes the “pointer” instead of a traditional cursor. This required the creation of a specialized tracking system to “understand” what the user is pointing at. To keep the application intuitive, it needed to support most hand configurations; we designed for the following (a sketch of one possible pointing heuristic follows the list):

  • index finger pointing
  • full hand pointing
  • middle finger pointing
  • And more!
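The exact tracker used in the demo isn’t described here, but a rough sketch of the idea is below, assuming the camera SDK already provides a depth frame and a binary mask of the hand (both names are placeholders). Whichever part of the hand is closest to the camera is treated as the pointer, which is what lets index-finger, middle-finger and full-hand pointing all behave the same way.

```python
import numpy as np

def find_pointer(depth_frame: np.ndarray, hand_mask: np.ndarray):
    """Pick a single 'pointer' position from a segmented hand.

    Heuristic: the hand pixel closest to the camera (an extended index
    finger, a middle finger, or a whole palm pushed forward) becomes the
    point the UI hit-tests against.
    """
    valid = hand_mask & (depth_frame > 0)
    if not valid.any():
        return None  # hand not visible / not tracked
    hand_depth = np.where(valid, depth_frame.astype(np.float32), np.inf)
    y, x = np.unravel_index(np.argmin(hand_depth), hand_depth.shape)
    return int(x), int(y)  # camera-space pixel; scale to screen space before hit-testing

def hit_test(pointer_xy, buttons):
    """Return the first button whose rectangle contains the pointer, if any."""
    px, py = pointer_xy
    for b in buttons:
        if b["x"] <= px <= b["x"] + b["w"] and b["y"] <= py <= b["y"] + b["h"]:
            return b
    return None
```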

While we were testing the input system, we started designing the menu. We started out with a four-item menu system: 1) Books, 2) Music, 3) Photos and 4) Friends. We kept in mind a few key learnings from our previous experience building menu systems for long-range environments:

  • Buttons must be relatively large in order to be selected easily
  • Menu selections must be placed sufficiently far apart to avoid false selection
  • Menu buttons must enlarge on hover in order to avoid flickering at the edges (see the sketch after this list)
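The third point is essentially hysteresis: while a button is hovered, both its drawn size and its active area grow, so a pointer jittering around the original edge doesn’t flick the hover state on and off. A minimal sketch of that behavior follows; the class name and growth factor are illustrative, not taken from the original code.

```python
class HoverButton:
    """Button whose hit area grows while hovered, so a jittery pointer
    near the edge doesn't rapidly toggle between hovered and not hovered."""

    def __init__(self, x, y, w, h, grow=1.25):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.grow = grow      # how much the button enlarges while hovered
        self.hovered = False

    def update(self, px, py):
        # Hit-test against the enlarged rectangle while hovered (hysteresis).
        scale = self.grow if self.hovered else 1.0
        half_w, half_h = self.w * scale / 2, self.h * scale / 2
        cx, cy = self.x + self.w / 2, self.y + self.h / 2
        self.hovered = abs(px - cx) <= half_w and abs(py - cy) <= half_h
        return self.hovered
```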

We started off with a simple design: a linear horizontal distribution, with four buttons spread across the width of the screen.

Figure 2: Horizontal Menu

So, what did we find out from our initial user testing?

The good:

  • The Augmented Reality style selection worked great for all of the test subjects in terms of intuitiveness and responsiveness. There was no need to explain how to use the interface since a user could immediately see their hand and whether it was being tracked. There was almost zero latency since the hand itself was the cursor, and accuracy wasn’t an issue since the buttons were big and there was no tiny cursor that had to land on an exact point.

The not-so-good:

  • Fatigue was high, even though the movements were relatively small
  • When a user moved her hand in a horizontal line across the screen, her hand would obscure parts of the menu and the screen
  • Right-handed users found it difficult to select items on the far left side of the screen

Small, incremental changes sometimes aren’t enough.

We started off by making small changes to the design to tackle the issues we identified during user testing.  First, we limited the menu to the right half of the screen only.  User testing showed that the experience was slightly better but not good enough.

We realized that the best way to deal with fatigue was to enable users to rest their elbow on a table or the arm of their chair. Our tests showed that for the first two buttons on the right side, the users had great results: fatigue was minimal even after several minutes of interaction. To reach the two buttons on the left, however, the users had to raise their elbow, which brought back the problem of fatigue. This proved to be true even when we put all four buttons in a stack formation or a square formation. We kept coming up against the same issue: only some of the positions were convenient to select without bending the wrist or elbow in an uncomfortable way.

Build upon a user’s natural movements.

We took a step away from the computer and just observed the natural movements of our bodies. We had a few test users sit comfortably in front of their computer and had them move their hands horizontally and vertically, without lifting their elbow from their desk. Visualizing these movements as lines, it quickly became clear that our hands don’t naturally move in straight lines but in arcs, pivoting around the joints. So why not build the menu in the shape of an arc?

Figure 3: Visualization of users’ natural hand movements

We gathered a wide set of examples of hand movement “arcs” from people of all sizes in order to create a “standard” Arc Menu that would work across a wide range of users.
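As an illustration of the geometry (the radius and angular span in the actual demo came from the measured user data, not from the placeholder values here), the buttons of an arc menu can be laid out around a pivot point that sits roughly where the resting elbow projects onto the screen:

```python
import math

def arc_menu_layout(pivot_x, pivot_y, radius, n_buttons,
                    start_deg=20.0, end_deg=100.0):
    """Place n_buttons evenly along an arc around a pivot point.

    pivot_x, pivot_y : screen position roughly above the resting elbow
    radius           : comfortable forearm reach, in pixels
    start_deg/end_deg: angular span the hand sweeps without lifting the elbow
    Returns a list of (x, y) button centers (screen coordinates, y down).
    """
    positions = []
    for i in range(n_buttons):
        t = i / max(n_buttons - 1, 1)
        angle = math.radians(start_deg + t * (end_deg - start_deg))
        positions.append((pivot_x + radius * math.cos(angle),
                          pivot_y - radius * math.sin(angle)))
    return positions

# Example: four buttons fanned out above and to the left of a pivot near
# the bottom-right corner of a 1920x1080 screen (values are illustrative).
buttons = arc_menu_layout(1700, 1000, 450, 4)
```

A left-handed variant would simply mirror the pivot and angles to the other side of the screen.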

We tested the new Arc Menu and the results were especially positive. The interaction was intuitive and fun, while fatigue was low even after several minutes of continued use. All of the buttons on the menu were equally accessible and it worked perfectly with the input system.

Finishing Touches

To ensure an elegant experience, we designed the application so that the arc menu only appears when needed. We accomplished this by folding the arc into a single button on the top right that unfolds automatically when a user hovers over it.

Figure 4: Final design of the arc menu
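A minimal sketch of that fold/unfold behavior is below. The timing value and the contains(x, y) hit-test objects are assumptions for illustration; they are not taken from the original application.

```python
import time

class ArcMenuController:
    """Keep the arc folded into a single corner button; unfold it when the
    pointer hovers that button, and fold it back after the pointer has
    stayed outside the unfolded arc for a while."""

    FOLD_DELAY = 1.5  # seconds away from the arc before it folds again (assumed)

    def __init__(self, trigger_button, arc_region):
        self.trigger_button = trigger_button  # folded button, exposes contains(x, y)
        self.arc_region = arc_region          # area of the unfolded arc, exposes contains(x, y)
        self.unfolded = False
        self._last_inside = 0.0

    def update(self, px, py):
        now = time.monotonic()
        if not self.unfolded:
            # Unfold as soon as the pointer hovers the folded button.
            if self.trigger_button.contains(px, py):
                self.unfolded = True
                self._last_inside = now
        else:
            if self.arc_region.contains(px, py):
                self._last_inside = now
            elif now - self._last_inside > self.FOLD_DELAY:
                self.unfolded = False
        return self.unfolded
```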

What’s next for the Arc Menu?

  1. Extend the experience to more than 4 buttons
  2. Make the arc size adjustable according to the screen’s size or even user preferences
  3. Make the arc flip for left-handed people

Conclusion

Close-range interaction is very different from long-range gesture-based experiences. Although the tracking is much more precise and responsive, we still face similar issues, such as fatigue. And at close range, these issues are often felt immediately and get worse over time.

When the user rests his elbow on a table or the arm of a chair, fatigue is much lower and interaction can last for minutes or more. This, however, limits the hand’s movement to a pivot around the elbow. All of this led us to create an arc-shaped menu, which was not only an answer to a problem but actually proved to be a very useful and even fun experience, extending the amount of time the user can work in front of the computer.

And if you’re interested in signing up for our upcoming Grasp beta…just click on the link below.

Thanks and stay tuned for the next chapter.
