OmniTouch is a wearable computer, depth-sensing camera and projection system that enables interactive multitouch interfaces on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or the environment. For example, the shoulder-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and their own bodies (e.g., hands, lap). On such surfaces, without any calibration, OmniTouch provides capabilities similar to those of a touchscreen: the X and Y location of fingers in 2D interfaces and whether fingers are "clicked" (touching the surface) or hovering above it. This enables a wide variety of applications, similar to what one might find on a modern smartphone. A user study assessing the pointing accuracy of the system (user and system inaccuracies combined) suggested minimum button diameters for reliable operation on the hand and on walls. This approaches the accuracy of capacitive touchscreens, like those found in smartphones, but on arbitrary surfaces.
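The click-versus-hover distinction can be illustrated with a short sketch. The following is a minimal example and not the published algorithm: the helper name, neighborhood size, and the roughly 1 cm threshold are assumptions for exposition. It classifies a tracked fingertip by comparing its depth against the estimated depth of the surface beneath it.

```python
import numpy as np

# Illustrative sketch only; classify_touch, the 5x5 neighborhood, and the
# ~1 cm threshold are assumptions for exposition, not the published method.

CLICK_THRESHOLD_M = 0.01  # fingertip within ~1 cm of the surface counts as a click

def classify_touch(depth_map: np.ndarray, fingertip: tuple[int, int],
                   surface_depth_m: float) -> str:
    """Return 'clicked' if the fingertip is near the surface, else 'hovering'.

    depth_map       -- per-pixel depth in meters from the depth camera
    fingertip       -- (row, col) pixel location of a tracked fingertip
    surface_depth_m -- estimated depth of the projection surface at that pixel
    """
    r, c = fingertip
    # Median over a small neighborhood damps single-pixel depth noise.
    patch = depth_map[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
    fingertip_depth = float(np.median(patch))
    hover_height = surface_depth_m - fingertip_depth  # > 0 when finger is in front
    return "clicked" if hover_height < CLICK_THRESHOLD_M else "hovering"
```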
OmniTouch was developed by researchers from Microsoft Research and Carnegie Mellon University in 2011. The work was accepted to and presented at the 2011 ACM Symposium on User Interface Software and Technology (UIST). Many major news outlets and online tech blogs covered the technology. It is conceptually similar to efforts such as Skinput and SixthSense. A central contribution of the work was a novel depth-driven, fuzzy template matching approach to finger tracking and click registration. The system also detects and tracks surfaces suitable for projection, onto which interactive applications can be rendered.
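As a rough illustration of depth-driven finger detection, and not a reproduction of the paper's fuzzy template matcher, the sketch below scans one row of a depth map for finger-like cross-sections: a sharp drop in depth at the left silhouette edge, a sharp rise at the right, and a finger-plausible width in between. The width range and edge threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of per-row finger candidate detection in a depth map.
# MIN/MAX widths and the edge threshold are assumed values, not from the paper.

MIN_WIDTH_PX, MAX_WIDTH_PX = 5, 25  # plausible finger widths at arm's length
EDGE_THRESHOLD_M = 0.005            # depth step marking a finger's silhouette edge

def find_finger_slices(depth_row: np.ndarray) -> list[tuple[int, int]]:
    """Scan one row of a depth map for finger-like cross-sections.

    A finger appears as a short run of pixels nearer to the camera than its
    surroundings: depth drops sharply at its left edge and rises sharply at
    its right edge, with a finger-plausible width in between.
    """
    grad = np.diff(depth_row)
    drops = np.where(grad < -EDGE_THRESHOLD_M)[0]  # left silhouette edges
    rises = np.where(grad > EDGE_THRESHOLD_M)[0]   # right silhouette edges
    candidates = []
    for left in drops:
        # The first rising edge to the right of this drop closes the candidate.
        right = rises[rises > left]
        if right.size and MIN_WIDTH_PX <= right[0] - left <= MAX_WIDTH_PX:
            candidates.append((int(left), int(right[0])))
    return candidates
```

In a full pipeline, vertically adjacent slices would then be grouped into candidate fingers and tracked over time; broadly, the published system's fuzzy matching scores such candidates against an expected cylindrical finger profile rather than using fixed thresholds as above.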