Imaginary Worlds and the Invisible Wall

A few weeks ago we published our thoughts on designing for the Effective Interaction Zone. A concept closely related to the EIZ is the idea of the Virtual Wall.

Yep, a Virtual Wall. It’s an apt name for a gesture concept since, as we said in our last post, gesture is essentially “invisible” to the end user. What does that mean?

When interacting with gesture, there are no (visible) physical constraints or guideposts. The camera doesn’t project out a soft, triangular light that notifies the user of how accurately her hand is being tracked or whether she’s in the field of view.

This idea of the “invisibility” of gesture – where there is not necessarily a physical representation of what you are doing – creates a unique and fun design challenge. Designing for the “invisible”, though, requires added consideration as the designer creates their app.

During our many internal design iterations, we’ve come up with the idea of the Virtual Wall as a means to tackle this design challenge. The Virtual Wall takes its inspiration from touch screens but enhances the experience by leveraging the 3rd dimension offered by a 3D camera. Essentially, you create a plane in space that, when “touched”, generates a corresponding reaction in your application.

The Virtual Wall offers a couple of benefits. On the one hand, it can reduce the learning curve for a user by simulating familiar experiences and interaction approaches. On the other hand, it can keep the user from getting so close to the camera that hand tracking and gesture recognition stop working!

How to: Create Your Own Virtual Wall

So you would like to design your own Virtual Wall into your application? No problem! Read on to learn which elements you need to incorporate to create your own.

To start, determine a plane in space that is convenient for the user. For example, if your application will be used during pauses while a user is typing at their computer, it makes sense to define the Virtual Wall a couple of centimeters above the keyboard and perhaps a couple of centimeters beyond where the top of the keyboard ends. This enables the user to simply lift his hand a minimal amount and immediately engage the wall.
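
To make this concrete, here is a minimal Python sketch of what a Virtual Wall check might look like. The `Fingertip` structure and all the numbers are illustrative stand-ins, not actual Grasp SDK types; we simply assume the tracker reports fingertip positions in centimeters, with z measured as distance from the camera.

```python
from dataclasses import dataclass

@dataclass
class Fingertip:
    name: str    # e.g. "index", "pinky"
    x: float     # cm, left/right of the camera axis
    y: float     # cm, above/below the camera axis
    z: float     # cm, distance from the camera

# Place the wall just above and beyond the keyboard (illustrative values).
WALL_Z_CM = 45.0      # distance from the camera to the wall plane
WALL_MIN_Y_CM = 5.0   # wall begins a couple of cm above the keyboard

def touches_wall(tip: Fingertip) -> bool:
    """A fingertip 'touches' the wall once it crosses the wall plane."""
    return tip.z <= WALL_Z_CM and tip.y >= WALL_MIN_Y_CM

tip = Fingertip("index", x=2.0, y=8.0, z=44.0)
print(touches_wall(tip))  # True: the finger has crossed the plane
```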

Grasp and the Virtual Wall

One of the major advantages of using Omek’s Grasp technology is that as a developer you are provided with a full model of the hand with accurate finger naming. Why does that matter?

Well, this way you can set up your application to recognize the difference between a user’s palm (or pinky finger) hitting the wall and their index finger. The aim here is to prevent false positives – to ensure that users don’t trigger the wall when they don’t intend to. If it’s a palm or a few fingers, it won’t activate the wall. If it’s a single pointed finger, you can be fairly certain the user intended to activate the selection item on the wall.
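
As a rough sketch of that filtering logic, reusing the hypothetical `Fingertip` type from the earlier example (the real Grasp API may expose finger names differently):

```python
def intended_touch(tips_at_wall: list[Fingertip]) -> bool:
    """Treat only a single extended index finger as a deliberate touch.

    A palm or several fingers crossing the plane at once is most likely
    accidental, so we ignore it to avoid false positives.
    """
    return len(tips_at_wall) == 1 and tips_at_wall[0].name == "index"
```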

Leveraging the 3rd Dimension

We’ve highlighted a couple of reasons above for employing the Virtual Wall. There are also interesting ways to leverage the 3rd dimension afforded by 3D depth cameras.

Take, for example, a drawing application like the one built by our UX Studio: you can simulate different degrees of physical force based on how close a user’s finger is to the Virtual Wall. In this use case, the line thickness changes so that the closer you move to the wall, the heavier the impact and the thicker the resulting line. As you move your finger further away, the line becomes thinner.
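
A sketch of that depth-to-thickness mapping might look like the following, building on the wall constant from the earlier sketch; the pressure range and pixel values are purely illustrative:

```python
MAX_THICKNESS_PX = 12.0
PRESSURE_RANGE_CM = 6.0  # depth band in front of the wall that modulates force

def stroke_thickness(tip_z_cm: float) -> float:
    """Thicker lines the closer the fingertip gets to the wall plane."""
    distance_cm = max(0.0, tip_z_cm - WALL_Z_CM)  # 0 means touching the wall
    pressure = max(0.0, 1.0 - distance_cm / PRESSURE_RANGE_CM)
    return 1.0 + (MAX_THICKNESS_PX - 1.0) * pressure
```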

Another effect you can implement with the Virtual Wall is grabbing and rotating an object. Imagine a user reaches the wall with multiple fingers that then close around an object. Once they have “grabbed” the object, they can pull their hand back and “bring” the object with them. Here you are building upon the familiarity of touch but extending it into an entirely new dimension using your 3D camera!
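
Here is one way such a grab might be detected, again using the illustrative types and wall constants from the sketches above; the thresholds are assumptions, not Grasp defaults:

```python
GRAB_FINGER_COUNT = 3   # fingers that must converge to count as a grab
GRAB_RADIUS_CM = 3.0    # how tightly they must cluster around the object

def is_grab(tips_at_wall: list[Fingertip], obj_x: float, obj_y: float) -> bool:
    """A grab: several fingertips at the wall, clustered around the object."""
    near = [t for t in tips_at_wall
            if (t.x - obj_x) ** 2 + (t.y - obj_y) ** 2 <= GRAB_RADIUS_CM ** 2]
    return len(near) >= GRAB_FINGER_COUNT

def pull_distance(hand_z_cm: float) -> float:
    """Once grabbed, how far the hand has pulled the object off the wall."""
    return max(0.0, hand_z_cm - WALL_Z_CM)
```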

Stay tuned for more posts coming soon!

Compal Electronics and Omek Partner to Develop First Gesture-Based “Touchless” Computer

You may have read the press release we posted up on the site yesterday announcing our partnership with Compal. It’s pretty exciting stuff!

Compal is one of the leading manufacturers of consumer electronics and is definitely leading the way in terms of future-forward thinking about gesture.

Compal started out as an early tester of our Grasp SDK. Impressed with their experience with Grasp and confident that gesture can enhance how we interact with our computers, they decided to go forward and build a product. Using Grasp as the underlying software to add gesture recognition, Compal has designed and built a fully integrated gesture-based All-in-One for one of their major customers.

We can’t go into detail about all of the functionality of their system. Suffice it to say, it’s worth checking out!

This is a great case study, though, of the different options partners and customers have to work with Omek and our UX Studio. Here’s an example of a company that took our SDK and ran with it! Compal built an impressive Flash wrapper over Grasp, and developed their own gesture-based UX experience in-house.

Here, our UX Studio team members played more of a behind-the-scenes advisory role, offering suggestions and recommendations for optimal gestures depending on the use case. For the most part, though, Compal engaged the Studio on a limited, as-needed basis.

The end result features an All-in-One computer with full 3D motion control and gesture recognition based on a 3D camera built directly into the bezel of the screen. No additional peripheral device needed. Hands, though, are required.

Compal executives have been showing this exciting new device in their suite at Computex this week. Missed it? You can reach out to them directly to set up a meeting to see it in action for yourself.

Stay tuned next week for a new post all about invisibility!

Compal Electronics and Omek Interactive Partner to Develop First Gesture-Based “Touchless” Computer

Taipei, Taiwan, June 4, 2013 – Compal Electronics, the leading manufacturer of notebook PC and LCD products, and Omek Interactive, the leading provider of 3D gesture recognition solutions, announce the formation of a partnership to design and deliver motion-controlled and gesture-enabled PCs targeted at the Consumer Electronics market.

Compal and Omek will present the first gesture-enabled AIO PC based on 3D technology, with a 3D camera embedded into the bezel of the device. The experience was developed based on Omek’s sophisticated Grasp technology, a close-range gesture and motion tracking solution for 3D sensors.

The collaboration builds on Compal’s long-standing expertise in design and manufacturing of CE products. “Compal is revolutionizing the future of computing devices,” said Janine Kutliroff, CEO, Omek Interactive. “By eliminating any peripheral camera attachments, they have created a truly seamless experience for end users.”

This new integrated device will enable touch-free interaction with PCs running Windows 8 Operating Systems, next generation media control, gaming, and more, fueling a paradigm shift in the way people interact with their personal computing devices.

Omek’s Grasp technology is based on a comprehensive understanding of the hand, enabling robust and reliable tracking. Omek’s solutions include a full suite of tools for developers to rapidly create and bring applications to market.

“Our Grasp technology provides the foundation to create simplified, intuitive and natural user experiences,” said Ms. Kutliroff. “Grasp delivers the intelligence computers need to understand people’s movements, establishing the freedom to communicate with devices the way we do with each other.”

Compal and Omek are at the forefront of delivering innovative and integrated hardware, software and UI solutions by leveraging the latest advancements in motion sensing technology from Omek.

For Media Inquiries, please contact:

Omek Interactive
Alona Lerman
+972-(0)72-215-5811
alona@omekinteractive.com

About Compal

Ever since its start as a PC peripheral supplier in 1984, Compal has grown to its present scale with outstanding management and solid R&D capacity. Today, Compal has secured its status as a leading manufacturer of notebook PCs and LCD products renowned for their exceptional quality. In addition, Compal has been taking firm strides in the development of the 5Cs (Cloud, Connecting, Computing, Communication and Consumer). In an effort to construct an efficient global operation system, Compal has established business sites in China, Vietnam, the U.S., Brazil, Poland and Mexico so as to provide versatile and speedy services and live up to its reputation as a world-renowned Original Design Manufacturer.

About Omek Interactive

Omek is transforming the way people interact with their devices and applications, by providing tools and technology that enable manufacturers and software developers to add gesture-based interfaces to their products. Omek’s gesture recognition and body tracking software is being incorporated into TVs, set-top boxes, computers and peripherals, smartphones, interactive signs, and medical and fitness devices – and into the content and applications that run on these devices. Omek’s tools work with all major 3D cameras, and support a broad range of processors and operating systems – giving customers the flexibility to take advantage of the latest technology, while maintaining portability for their applications. A privately held company, Omek is headquartered in Bet Shemesh, Israel, and has an office in Taipei, Taiwan. For more information, visit www.omekinteractive.com.

Keep your hands where I can see them: Designing for Touch-free Interaction

The success of your interface is locked within the interaction zone

In past blog posts we’ve referenced the idea of the “Effective Interaction Zone” – a not-so-sexy name for a really important concept. Today we’re going to dive into this topic in more detail, exploring the reasons behind designing for this space and providing tips on how to create your own effective interaction zone.

A Definition:

The Effective Interaction Zone (or EIZ, for short) is an area in space, defined by the designer, where the movements of a user should be registered by the application. This is in contrast to areas designated as outside of the EIZ, where a user’s movements may be “seen” but do not register any response from the application.

The exact borders of an EIZ will correspond to the specific requirements of a given application. It gives the designer a means to guide the user to stay within the intended area.

Examples of EIZs:

  • When designing for touch, the entire screen would be within the Effective Interaction Zone
  • If you’re using a mouse and keyboard, then the physical keyboard and the area within the mousepad make up the EIZ

Since Omek is a provider of gesture recognition solutions, though, this post will address the EIZ in the context of touch-free gesture interaction using a 3D camera. So we’ll add one more point: the EIZ must be within the field of view of the camera.

There’s no specific formula or set of hard-and-fast rules for calculating the EIZ. The exact parameters of your EIZ will depend on the properties of the sensor you are working with (field of view, range, etc.), the placement of the sensor, and the requirements of the application.
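
To make that concrete, here is one minimal way to represent an EIZ in Python: an axis-aligned box in camera space. Every bound below is illustrative and would need tuning for your particular sensor and setup.

```python
# Illustrative EIZ bounds in centimeters, camera coordinates.
EIZ = {
    "x": (-25.0, 25.0),  # left/right of the camera axis
    "y": (-10.0, 20.0),  # below/above the camera axis
    "z": (30.0, 80.0),   # distance from the camera
}

def in_eiz(x: float, y: float, z: float) -> bool:
    """True if a tracked point falls inside the Effective Interaction Zone."""
    return (EIZ["x"][0] <= x <= EIZ["x"][1] and
            EIZ["y"][0] <= y <= EIZ["y"][1] and
            EIZ["z"][0] <= z <= EIZ["z"][1])
```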

Why have an EIZ at all?

Sensor data usually degrades as you get close to the “edges” of the field of view. You’ll likely run into the same problem if you’re too close to or too far from the camera. This translates into poor, inaccurate tracking, or possibly false positives.

Rather than try to resolve this issue at the algorithm level by excluding degraded data, you can address these concerns during the application design and development phase. That’s why Omek’s Grasp SDK captures everything within the field of view and range of the camera: if the Grasp SDK detects a hand, it will register that hand no matter where it appears in the frame.

As a designer you can “design around the issue” by leveraging the concept of the EIZ.

How exactly do you “design around the issue”, you ask?

Great question!

Well, you can first identify where tracking is more problematic (i.e., closer to the edges of the field of view) and mark those as areas you don’t want tracked in your application. Again, this will differ for each setup and set of application requirements.

Then, you set up your application to ignore these areas. Yep. Just pretend those areas don’t exist.

Other areas that you may choose to ignore:

  • The approximate distance of the user’s face from the camera, so that when they scratch their nose or fix their glasses they don’t inadvertently activate your application.

  • The areas close to the edges of the field of view, where we know we are not likely to get accurate information, negatively impacting the user experience.

  • The space super close to the camera where the data may also be distorted, degraded or incomplete.

So essentially, to create the Effective Interaction Zone, you are “clipping” the areas that you don’t want to track, setting up your application to ignore those areas, and defining a 3D zone in space in which your application will be activated by predefined gestures.
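
As a rough illustration of that clipping step, you might derive the EIZ bounds by shaving margins off the camera’s nominal coverage. The camera specs below are hypothetical; substitute your sensor’s actual field of view and range.

```python
import math

CAM_RANGE_CM = (20.0, 100.0)  # hypothetical reliable near/far range
CAM_HFOV_DEG = 74.0           # hypothetical horizontal field of view

EDGE_MARGIN = 0.15            # ignore the outer 15% of the field of view
NEAR_MARGIN_CM = 15.0         # skip distorted data right at the lens
FAR_MARGIN_CM = 20.0          # and degraded data at maximum range

# Depth band of the EIZ after clipping near and far.
EIZ_Z_CM = (CAM_RANGE_CM[0] + NEAR_MARGIN_CM, CAM_RANGE_CM[1] - FAR_MARGIN_CM)

def eiz_half_width_cm(z_cm: float) -> float:
    """Usable half-width at depth z, after clipping the FOV edges."""
    full = z_cm * math.tan(math.radians(CAM_HFOV_DEG / 2))
    return full * (1.0 - EDGE_MARGIN)
```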

This significantly raises the likelihood that when a user does perform a gesture within the defined EIZ, she *actually intended* to perform that gesture and interact with the application.

So Many Acronyms! The Intersection of UX & the EIZ

There are a few tricks you can use within your application to guide someone into the EIZ.  Remember: the EIZ lies within the field of view of the camera. So the camera will still be tracking objects that fall outside of the EIZ.

For example, if a user’s hand is found just a bit outside the EIZ to the right, the application can generate an alert with a message that says something like, “Oh, hey! Move your hand a bit to the left!”
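
Since the camera still sees hands outside the zone, the hint logic can be as simple as comparing the hand position against the EIZ bounds, using the illustrative box from the sketch above:

```python
from typing import Optional

def guidance_hint(x: float, y: float, z: float) -> Optional[str]:
    """Return a nudge message when the hand is outside the EIZ."""
    if in_eiz(x, y, z):
        return None
    if x > EIZ["x"][1]:
        return "Oh, hey! Move your hand a bit to the left!"
    if x < EIZ["x"][0]:
        return "Move your hand a bit to the right!"
    if z < EIZ["z"][0]:
        return "Move your hand back a little."
    return "Bring your hand toward the center of the screen."
```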

Remember: designing for the EIZ means taking into consideration how to create a comfortable user experience. You will want to keep selection items within the EIZ easy for a user to reach while keeping an eye on “fatigue”. When we interact with our computers, for the most part our forearm is resting on the desk, whether we are typing on the keyboard or moving the mouse around. It’s comfortable. We’ve gotten used to it. And let’s face it: sometimes we can be a bit lazy.

Depending on the exact type of application you are designing, you may want to ensure that the main selection items within the EIZ can be accessed while a user rests her arm on the desk. Once someone starts reaching for the different corners of the screen, it can get pretty tiring pretty quickly.

A quick summary: the EIZ is a very valuable tool for application developers designing gesture-based systems.  Used correctly it can mean the difference between an application that responds consistently and accurately when a user expects it to, and one that responds to “false positives” or doesn’t respond when a user most wants it to.

Guest Post: The Impact of Gesture Recognition on UX Design

Hello readers!

We’ve been keeping busy over at Omek headquarters but found the time to write a short guest post over on Paul Olyslager’s blog. Paul is an interface designer and writes about web usability, UX (user experience), and UI (user interface) design.

Our post highlights what we see as the main impact of gesture recognition on current and future UX design. Read on for a glimpse of what we wrote, or head on over to Paul’s blog for the full version.

Shifting our Paradigm: Touch-free Interaction and gesture recognition

Very often new technologies suffer from what Scott Jenson calls “Default Thinking”. As Jenson puts it, “the problem is simple, but pernicious: designers think of new technologies in terms of yesterday’s tasks, failing to clearly see the real potential of the new technologies”.

New Input Methods Require New Design Principles

What are the implications of motion tracking and gesture recognition technology on design interfaces?

We’ve identified five key design features that are important in creating effective gesture-based interfaces. While this list will continue to be refined with time, it offers a starting toolkit for designers as they create 3D NUI experiences.

Read the full post here.

And stay tuned for a new post going up on the blog this week!