Concept illustration of reach-behind interaction in an AR game context

This novel interaction method lets users reach behind a display and manipulate virtual content (e.g., AR objects) with their bare hands in the volume behind the device. We designed and built multiple prototypes: some video see-through (a tablet with an inline camera and sensors on the back), some optical see-through (a transparent LCD panel, a depth sensor behind the device, and a front-facing camera). The latter system featured (1) anaglyphic stereoscopic rendering, to make objects appear truly behind the device; (2) face tracking for view-dependent rendering, so that virtual content "sticks" to the real world; (3) hand tracking, for bare-hand manipulation; and (4) virtual physics effects, allowing natural, physically plausible interaction with 3D content. It was realized using OpenCV for vision processing (face tracking), Ogre3D for graphics rendering, an early-prototype Samsung 22-inch transparent display panel, and a PMD CamBoard depth camera for finger tracking (which operates at closer range than a Kinect allows). The prototype was demonstrated internally at Samsung to a large audience (Samsung Tech Fair 2010), and we filed patent applications. (Note that anaglyphic rendering was the only stereoscopic rendering method available with the transparent display; future systems will likely be based on active shutter glasses, or on parallax-barrier or lenticular overlays. Also note that a smaller system need not be mounted to a frame like our prototype, but could be handheld.)
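
To make the view-dependent rendering concrete, here is a minimal sketch of the kind of off-axis ("generalized perspective") projection such a system computes from the tracked head position. This is our illustration, not the prototype's actual code; the display geometry, units, and OpenGL-style conventions are assumptions.

    // Off-axis projection sketch: the transparent display is assumed to be
    // the z = 0 plane, centered at the origin, with half-extents (w, h) in
    // meters; the tracked eye sits at (ex, ey, ez) with ez > 0 in front.
    #include <array>

    using Mat4 = std::array<float, 16>; // column-major, OpenGL-style

    Mat4 offAxisProjection(float ex, float ey, float ez,
                           float w, float h,       // display half-extents
                           float zNear, float zFar)
    {
        // Project the display rectangle's edges through the eye onto the
        // near plane to get an asymmetric (glFrustum-style) frustum.
        float left   = (-w - ex) * zNear / ez;
        float right  = ( w - ex) * zNear / ez;
        float bottom = (-h - ey) * zNear / ez;
        float top    = ( h - ey) * zNear / ez;

        Mat4 m{}; // zero-initialized
        m[0]  = 2.0f * zNear / (right - left);
        m[5]  = 2.0f * zNear / (top - bottom);
        m[8]  = (right + left) / (right - left);
        m[9]  = (top + bottom) / (top - bottom);
        m[10] = -(zFar + zNear) / (zFar - zNear);
        m[11] = -1.0f;
        m[14] = -2.0f * zFar * zNear / (zFar - zNear);
        return m;
    }

Paired with a view matrix that simply translates by (-ex, -ey, -ez), this keeps virtual content registered to the real world as the head moves; for anaglyphic stereo, the same function is called once per eye (offsetting ex by half the interpupillary distance) and the two renders are composited into the red and cyan channels.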

Video

This video shows a research prototype that demonstrates a behind-display interaction method. Anaglyph-based 3D rendering (colored glasses), combined with face tracking for view-dependent rendering, creates the illusion of dice sitting on top of a physical box. A depth camera pointed at the user's hands behind the display creates a 3D model of the hand; the hand and virtual objects then interact through a physics engine. The system lets users interact seamlessly with both real and virtual objects.
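
As a rough illustration of that hand-tracking step (our assumptions, not the prototype's code; the camera intrinsics and range cutoff are placeholder values for a short-range time-of-flight sensor such as the PMD CamBoard), a depth image can be back-projected into a coarse cloud of collision spheres that the physics engine treats as kinematic obstacles:

    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Back-project every step-th valid depth pixel into camera space using
    // the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    std::vector<Vec3> handProxySpheres(const std::vector<uint16_t>& depthMm,
                                       int width, int height,
                                       float fx, float fy, float cx, float cy,
                                       int step = 4,              // grid downsampling
                                       uint16_t maxRangeMm = 600) // drop background
    {
        std::vector<Vec3> centers;
        for (int v = 0; v < height; v += step) {
            for (int u = 0; u < width; u += step) {
                uint16_t d = depthMm[v * width + u];
                if (d == 0 || d > maxRangeMm) continue; // invalid or too far
                float z = d * 0.001f;                   // mm -> m
                centers.push_back({ (u - cx) * z / fx,
                                    (v - cy) * z / fy,
                                    z });
            }
        }
        return centers;
    }

Updating a small kinematic sphere at each returned center every frame gives the physics engine a moving obstacle field, so virtual objects such as the dice can be pushed, lifted, or knocked over by the bare hand.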

This video, together with a short paper, was presented at the MVA 2011 conference (12th IAPR Conference on Machine Vision Applications, Nara, Japan, June 13-15, 2011).

Please note that the video itself is not 3D: to the glasses-wearing user, the virtual content appears behind the display, where (from their perspective) it sits on a physical box.

Publications