Don’t Miss Out! Gesture Recognition + Embedded Vision

Have the following questions been keeping you up at night?

  • What is embedded vision all about?
  • What are practical things I should know if I want to incorporate gesture recognition into embedded systems?
  • What are some of the different technologies used to create depth maps?
  • Why is 3D technology better than other technologies at solving certain computer vision problems?

If you live in the Bay Area (or plan to be there on April 25th), then you’re in luck! Omek Interactive CTO Gershom Kutliroff will be one of the featured speakers at the upcoming Embedded Vision Summit, hosted by the Embedded Vision Alliance.

Embedded Vision Summit 2013

What exactly is the Embedded Vision Summit? According to the organizers, it is “a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software.” The Summit is part of the larger DESIGN West event being held next week.

Quick Details:
April 25, 2013
San Jose Convention Center
San Jose, California, U.S.A.
In Conjunction with DESIGN West
Register Now!

Why should you attend?

I caught up with Gershom before he left for the trip and asked him to share a few highlights from his presentation. Check back on our blog after April 25th for more details from his talk, including videos showing what you can accomplish using depth data from 3D cameras versus a standard RGB camera.

Sneak Preview: Gershom’s Talking Points

In addition to addressing the questions listed at the beginning of this post, Gershom will take an analytical look at 3D data, explaining how it is inherently different from 2D data and why those differences demand new algorithms for interpreting the output of depth sensors. He argues that these algorithms are the basis for a more fundamental paradigm shift, one with broad implications that reach beyond the depth sensor itself.
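To make that 2D-versus-3D distinction concrete ahead of the talk: an RGB camera records per-pixel color, while a depth camera records per-pixel distance, which can be back-projected into actual 3D geometry using the standard pinhole camera model. The short Python sketch below is our own illustration, not material from Gershom's presentation, and the camera intrinsics in it are placeholder values.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an (N, 3) point cloud.

    fx, fy are focal lengths in pixels; cx, cy is the principal point.
    These are placeholder intrinsics -- real values come from your sensor.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole model: each pixel's ray is scaled by its measured depth.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)

# Synthetic 480x640 depth map: a flat wall 2 meters from the camera.
depth = np.full((480, 640), 2.0)
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (307200, 3) -- real-world coordinates, not just colors

An RGB frame of the same scene would give you 307,200 color samples with no notion of distance; the depth frame hands you measured geometry, which is exactly why algorithms built for one do not transfer cleanly to the other.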

Using a case study to illustrate his points, Gershom will show how these 3D-specific algorithms cascade down the value chain. Ultimately, he argues, they will require different software libraries and different hardware architectures, ones optimized to support the new algorithms.

Gershom will also provide key insights into how algorithms for depth cameras are constructed. These ideas can inform how you design your software systems and hardware architecture so they are well suited to running these new software libraries.

What does that mean for you? Well, whether you are a software developer building 3D-based applications or a hardware manufacturer interested in learning how to better support emerging 3D cameras, this session is for you.

Sign up today (free, space permitting): http://www.embedded-vision.com/embedded-vision-summit-registration

We hope to see you there.
