Grasp: Powering Gesture-Based Experiences

In our last two posts on the Omek blog we shared our thoughts and advice on how to incorporate gesture-based control into applications and interfaces in a way that is intuitive, easy to use, dynamic, and engaging.  We begin with a user-centered approach to interaction design, taking time to observe people’s actual movements to understand how they translate into truly natural gestures.  We incorporate feedback from frequent testing, making sure that our ideas work across a broad set of users.  And, perhaps most importantly, we leverage a tool that is unique to Omek – our Grasp SDK.

Grasp is in many ways the “magic” that enables us to transform our ideas into reality.  You may already be familiar with our Beckon SDK, which provides full-body skeleton tracking at distances of 1–5 meters.  Well, Grasp is our “close-range” counterpart: a middleware solution and a full set of tools for hand and finger motion tracking and gesture recognition at distances as close as 10 cm.

But it is so much more! With Grasp, we took a unique approach to the problem of close-range motion sensing.  We built a sophisticated, detailed solution because we believe you need complexity in the technology to achieve simplicity for the user.

Sign up now!

Interested in how you can get your hands on Grasp?  It is currently in closed alpha testing, but we are rapidly gearing up for a beta launch.  You can sign up to be notified of the release and be one of the first to try out these tools.

Below, we highlight several key features that are unique to our Grasp offering, all of which we leveraged when designing the Practical UI to ensure we ended up with a genuinely intuitive interface.

Cross-Camera, Software-only Motion Sensing and Gesture Solution.

While there have been a number of exciting product developments recently, the gesture recognition market is still quite young, with many new innovations underway.  At Omek, we want to enable you to take advantage of the latest advancements in camera technology.  To that end, our solution is cross-platform – we are working closely with different hardware providers to support their 3D sensors. What does this mean for you?

  • Support for depth cameras to offer you the most robust experience. With our eye toward usability, we want to provide developers with the most cutting-edge technology to create meaningful user experiences.  At Omek we believe that 3D cameras have the power to change the paradigm of how we interact with our devices. Read our prior blog post on why we work with depth cameras rather than 2D cameras.
  • Seamless integration into your personal computing devices. One of our goals at Omek is to help you simplify your life, not complicate it by adding more peripherals!  That’s why we focus on supporting cameras that will be incorporated into your device, whether it’s an All-in-One PC or the dashboard of your car.
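To make the idea of a camera-agnostic, software-only pipeline a little more concrete, here is a minimal sketch of the kind of abstraction such a design implies. Every name in it (DepthFrame, DepthCamera, FakeCamera, runTracking) is an illustrative assumption, not the actual Grasp API; the point is simply that the tracking code depends only on depth data, never on a particular sensor’s driver.

```cpp
// Illustrative sketch of a camera-agnostic depth pipeline; every name here
// (DepthFrame, DepthCamera, ...) is an assumption, not the actual Grasp API.
#include <cstdint>
#include <cstdio>
#include <vector>

// A depth frame is just a grid of distances, whichever sensor produced it.
struct DepthFrame {
    int width = 0;
    int height = 0;
    std::vector<uint16_t> depthMillimetres;  // row-major, width * height values
};

// Each supported 3D sensor hides its driver behind this interface, so tracking
// code written on top never depends on a particular camera vendor.
class DepthCamera {
public:
    virtual ~DepthCamera() = default;
    virtual bool grab(DepthFrame& frame) = 0;  // fill `frame` with the latest depth image
};

// Stand-in sensor that emits a few synthetic frames, just so the sketch runs.
class FakeCamera : public DepthCamera {
public:
    bool grab(DepthFrame& frame) override {
        if (framesLeft_-- <= 0) return false;
        frame.width = 320;
        frame.height = 240;
        frame.depthMillimetres.assign(frame.width * frame.height, 450);  // flat scene at 45 cm
        return true;
    }
private:
    int framesLeft_ = 3;
};

// The pipeline consumes DepthFrame objects and nothing else -- that is what
// makes a "software-only" solution portable across cameras.
void runTracking(DepthCamera& camera) {
    DepthFrame frame;
    while (camera.grab(frame)) {
        // hand detection, skeleton fitting and gesture recognition would run here
        std::printf("processed a %dx%d depth frame\n", frame.width, frame.height);
    }
}

int main() {
    FakeCamera camera;
    runTracking(camera);
    return 0;
}
```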

Full Hand Skeleton Model vs. 6-Point Tracking.

Unlike other close-range solutions available, Grasp creates a full 3D skeleton model of the hand, complete with 22 joints. Rather than simply assigning tracking points to each of your fingertips, we offer developers a complete model of the hand, opening up a broad set of possibilities to create a range of experiences.

Our approach offers significant advantages that ensure more robust tracking.  Why?  Well, the hand has many degrees of freedom, which makes it difficult to track. Think about it – you can open or close your hand, cross your fingers, curl your fingers, or rotate your palm. Your hand can take on many different configurations. By using a hand model, though, we are able to handle many different scenarios, including self-occlusions, rotations, and closed or curled fingers.  The skeletal hand model effectively constrains finger movements to configurations a real hand can actually make.

  • Applying it to the Practical UI. Take, for example, the pinch and rotate gestures used when taking a book off the shelf and opening it up to see what’s inside. First, we have to recognize the pinch – not too difficult.  It becomes more complicated, though, once the hand rotates.  Suddenly much of the hand is occluded, making it difficult to maintain continuous tracking: the fingertip points are no longer in the camera’s field of view.  Using Grasp, however, we can determine that we are now seeing the back of the hand and instruct the application to keep tracking.
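To give a feel for what a full hand-skeleton model looks like as data, as opposed to six fingertip points, here is a minimal sketch. The 22-joint count comes from Grasp itself; the specific joint names and layout (wrist, palm centre, four joints per finger) are assumptions made for illustration, not Grasp’s actual representation.

```cpp
// Illustrative sketch of a 22-joint hand skeleton; the joint layout is an
// assumption (wrist + palm centre + 4 joints per finger), not Grasp's actual model.
#include <array>
#include <cstdio>

struct Joint3D {
    float x = 0.f, y = 0.f, z = 0.f;  // position in camera space, metres
    float confidence = 0.f;           // how reliably this joint was tracked
};

enum HandJoint {
    WRIST, PALM,
    THUMB_BASE,  THUMB_MID,  THUMB_DISTAL,  THUMB_TIP,
    INDEX_BASE,  INDEX_MID,  INDEX_DISTAL,  INDEX_TIP,
    MIDDLE_BASE, MIDDLE_MID, MIDDLE_DISTAL, MIDDLE_TIP,
    RING_BASE,   RING_MID,   RING_DISTAL,   RING_TIP,
    PINKY_BASE,  PINKY_MID,  PINKY_DISTAL,  PINKY_TIP,
    HAND_JOINT_COUNT  // == 22
};

struct HandSkeleton {
    std::array<Joint3D, HAND_JOINT_COUNT> joints;
    bool isRightHand = false;
};

// A full model lets an application keep reasoning about fingers even when their
// tips are occluded, e.g. by falling back to the joints that are still visible.
int main() {
    HandSkeleton hand;
    hand.joints[INDEX_TIP] = {0.02f, 0.10f, 0.45f, 0.9f};
    std::printf("joints in model: %d\n", static_cast<int>(HAND_JOINT_COUNT));
    std::printf("index fingertip depth: %.2f m\n", hand.joints[INDEX_TIP].z);
    return 0;
}
```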

No Calibration Required.

When you develop your application using Grasp, your users will be able to interact with your interface immediately, without having to calibrate to get started, which makes for a smoother and more dynamic introduction. Behind the scenes, and invisible to anyone using Grasp, we calibrate the skeleton model to different hand dimensions and continue to auto-calibrate for as long as the application is in use.  This ensures our tracking adjusts to all varieties of users, whether they are kids or adults, have small hands, or have long, thin fingers.

When you walk up and begin using our Practical UI, regardless of your hand size or shape, you are guaranteed a smooth, robust experience with tracking that starts instantly.
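One way to picture this invisible, continuous calibration is as a running estimate of the user’s hand size that is refined on every frame. The sketch below is only a toy illustration of that idea, using simple exponential smoothing against an assumed average hand length; it is not Omek’s actual algorithm.

```cpp
// Toy illustration of continuous, behind-the-scenes calibration:
// keep a running estimate of hand size and refine it every frame,
// so no explicit calibration step is ever shown to the user.
// This is an assumption-based sketch, not Omek's algorithm.
#include <cstdio>

class HandSizeCalibrator {
public:
    // Blend each new observation into the estimate; low alpha = slow, stable adaptation.
    explicit HandSizeCalibrator(float alpha = 0.05f) : alpha_(alpha) {}

    void observe(float measuredHandLengthMetres) {
        if (!initialized_) {
            estimate_ = measuredHandLengthMetres;  // first frame: adopt the measurement directly
            initialized_ = true;
        } else {
            estimate_ += alpha_ * (measuredHandLengthMetres - estimate_);
        }
    }

    // Scale factor to apply to a reference skeleton template
    // (assumed average hand length of ~0.18 m).
    float scaleFactor() const { return initialized_ ? estimate_ / 0.18f : 1.0f; }

private:
    float alpha_;
    float estimate_ = 0.f;
    bool initialized_ = false;
};

int main() {
    HandSizeCalibrator calibrator;
    const float observations[] = {0.165f, 0.170f, 0.168f, 0.172f};  // e.g. a smaller hand
    for (float o : observations) calibrator.observe(o);
    std::printf("skeleton scale factor: %.3f\n", calibrator.scaleFactor());
    return 0;
}
```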

Hand Detection + Classification.

Unlike other systems, we detect hands by searching for real hand features rather than simply looking for motion in the scene or using a skin-color model. This helps us ensure that we are tracking what we want to be tracking – hands and fingers.  It allows us to robustly reject false positives and to detect hands as soon as they appear in the scene, so your users won’t get frustrated by an application that doesn’t respond as they expect.

  • Differentiate between Right and Left hands. Combined with the full skeleton model of the hand, we offer the ability to detect whether the camera is tracking a right vs. a left hand, even in the case of poses that are non-trivial to detect.   
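As a rough illustration of why a full skeleton makes right/left classification tractable, here is a toy heuristic: given the wrist, the index-finger base, the pinky base, and knowledge of whether the palm or the back of the hand faces the camera, the sign of a cross product separates the two hands. This is an assumption-laden sketch, not how Grasp actually classifies hands.

```cpp
// Toy handedness heuristic, purely illustrative: given three skeleton joints
// (wrist, index base, pinky base) and knowledge of whether the palm faces the
// camera, the sign of a cross product separates left from right hands.
// Coordinate convention assumed: x right, y up, z pointing away from the camera.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 subtract(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Returns true for a right hand under the stated conventions.
bool isRightHand(Vec3 wrist, Vec3 indexBase, Vec3 pinkyBase, bool palmFacesCamera) {
    Vec3 normal = cross(subtract(indexBase, wrist), subtract(pinkyBase, wrist));
    // Palm toward camera: a right hand yields a normal pointing at the camera (negative z).
    // Back toward camera: the sign flips, so fold that into the comparison.
    return (normal.z < 0.f) == palmFacesCamera;
}

int main() {
    // Right hand, palm toward the camera: the index base sits to the camera's
    // left of the pinky base.
    Vec3 wrist     = {0.00f, 0.00f, 0.50f};
    Vec3 indexBase = {-0.03f, 0.08f, 0.50f};
    Vec3 pinkyBase = { 0.03f, 0.08f, 0.50f};
    std::printf("right hand? %s\n",
                isRightHand(wrist, indexBase, pinkyBase, true) ? "yes" : "no");
    return 0;
}
```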

Designed for Usability.

A major benefit of having an in-house UX Studio is that we have actual developers working with our SDK from the first stages of its creation. We leverage this feedback loop both for early testing and to ensure that our tools are designed with developers and their specific needs in mind. Our tools are easy to use and handle much of the technical work “behind the scenes”, allowing you to focus on creating meaningful applications rather than trying to solve computer vision problems.

In Summary…

Are you a developer seeking to create a gesture-based game?  Perhaps a medical device company looking to design a touchless interface? Or an automotive company reinventing the in-vehicle infotainment system? Whatever system you would like to create, Omek offers the most advanced, cutting-edge, and user-centric tools to help you power your gesture-based experiences.

Sign up now to be notified of our upcoming Grasp beta.
