Perceiving Systems Talk Biography
13 February 2012 at 13:30 - 16:00

Algorithms for low-cost depth imaging

Christoph Garbe

Recovering the depth of a scene is important for bridging the gap between the real and the virtual world, but also for tasks such as segmenting objects in cluttered scenes. Very cheap single-view depth imaging cameras, e.g. Time-of-Flight (ToF) cameras or Microsoft's Kinect system, are entering the mass consumer market. In general, the acquired images have a low spatial resolution and suffer from noise as well as technology-specific artifacts. In this talk I will present algorithmic solutions for the entire depth imaging pipeline, ranging from preprocessing to depth image analysis. For enhancing image intensities and depth maps, a higher-order total variation based approach has been developed that yields superior results compared to current state-of-the-art approaches. This performance is achieved by allowing jumps across object boundaries, which are computed from both the image gradients and the depth maps. Within objects, the staircasing effects observed in standard total variation approaches are circumvented by higher-order regularization. The 2.5D motion, or range flow, of the observed scenes is computed by a combined global-local approach.
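
To give a flavor of the regularization idea, the sketch below denoises a depth map with an edge-weighted first-order smoothness term plus a Laplacian-based second-order term, minimized by plain gradient descent. This is a minimal illustration under assumed simplifications, not the formulation used in the work presented here; the function names, parameter values and the Charbonnier smoothing are assumptions made for the example.

import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad()."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def laplacian(u):
    """5-point Laplacian with replicated (Neumann) boundary."""
    up = np.pad(u, 1, mode='edge')
    return up[1:-1, 2:] + up[1:-1, :-2] + up[2:, 1:-1] + up[:-2, 1:-1] - 4.0 * u

def denoise_depth(depth, intensity, lam1=0.3, lam2=0.05, kappa=0.05,
                  eps=1e-2, n_iter=300):
    """Edge-weighted first-order plus Laplacian-based second-order smoothing.

    'depth' and 'intensity' are HxW float arrays scaled to [0, 1].
    Jumps are permitted where the intensity image has strong gradients
    (small edge weight w); the second-order term keeps smooth regions
    free of staircasing. All parameters are illustrative assumptions.
    """
    gx, gy = grad(intensity)
    w = 1.0 / (1.0 + (np.hypot(gx, gy) / kappa) ** 2)   # small at image edges

    u = depth.copy()
    # conservative step size for the explicit gradient-descent scheme
    tau = 1.0 / (1.0 + 8.0 * lam1 / eps + 64.0 * lam2 / eps)
    for _ in range(n_iter):
        ux, uy = grad(u)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)            # Charbonnier smoothing
        lap = laplacian(u)
        q = lap / np.sqrt(lap**2 + eps**2)
        g = (u - depth) \
            - lam1 * div(w * ux / mag, w * uy / mag) \
            + lam2 * laplacian(q)
        u -= tau * g
    return u

The edge weight w is small where the intensity image has strong gradients, so the smoothness penalty is relaxed there and depth discontinuities can survive, while the second-order (Laplacian) term penalizes curvature rather than slope, which is what suppresses staircasing in smooth regions.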

On Kinect data in particular, the best results were achieved by discarding information at object edges, which are prone to errors due to the data acquisition process. In conjunction with a calibration procedure, this leads to very accurate and robust motion estimation. On the computed range flow data, we have developed the estimation of robust, scale- and rotation-invariant features. These make it feasible to use our algorithms in a novel approach to gesture recognition for man-machine interaction. This step is currently work in progress and I will present very promising first results.
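
The edge-discarding step can be pictured as a per-pixel reliability mask that is computed before the range-flow data term is evaluated. The sketch below is one plausible way to build such a mask; the threshold, dilation radius and the helper name are illustrative assumptions, not the exact procedure used in the talk.

import numpy as np
from scipy import ndimage

def reliable_depth_mask(depth, jump_thresh=0.03, dilate_px=2):
    """Mask out pixels near depth discontinuities before flow estimation.

    Kinect-style sensors produce unreliable ("flying pixel") measurements
    at object boundaries; excluding them from the range-flow data term
    makes the motion estimate considerably more robust. Threshold and
    dilation radius are illustrative values.
    """
    gy, gx = np.gradient(depth)
    edges = np.hypot(gx, gy) > jump_thresh           # strong depth jumps
    invalid = ~np.isfinite(depth) | (depth <= 0)     # missing measurements
    unreliable = ndimage.binary_dilation(edges | invalid,
                                         iterations=dilate_px)
    return ~unreliable                               # True = use this pixel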

For evaluating the results of our algorithms, we plan to use realistic simulations and renderings. We have made significant advances in analyzing the feasibility of such synthetic test images and data. The bidirectional reflectance distribution functions (BRDFs) of several objects have been measured using a purpose-built “light-dome” setup. Together with the development of an accurate stereo acquisition system for measuring 3D objects, this lays the groundwork for performing realistic renderings. Additionally, we have started to create a test-image database with ground truth for depth, segmentation and light-field data.

Speaker Biography

Christoph Garbe (University of Heidelberg, Interdisciplinary Center for Scientific Computing)