
Layers, Time and Segmentation


Top: Estimated flow fields and scene structure on several Middlebury test sequences. Row 1: our nLayers method separates the claw of the frog from the background and recovers the fine flow structure. Row 2: nLayers separates the foreground figures of “Mequon” from the background and produces sharp motion boundaries; however, the non-rigid motion of the cloth violates the layered assumption and causes errors in the estimated flow field. Row 3: by correctly recovering the scene structure, nLayers achieves very low motion boundary errors on the “Urban” sequence; part of the building in the bottom left corner moves out of the image boundary and its motion is predicted by the affine model (sketched below). However, the building’s motion violates the affine assumption, resulting in errors in the estimated motion.

Bottom: Benefits of multiple frames. Occlusion reasoning using only frames 2 and 3 (e-f) is difficult (detail from “Urban3”); enforcing temporal coherence of the support functions (sketched below) over 4 frames significantly reduces the errors in both the flow field and the segmentation (g-h). The flow field is from frame 2 to frame 3, and the segmentation is for frame 2.

Color key shows depth ordering of layers from farthest (blue) to closest (red).
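The affine model referenced in the caption gives each layer a parametric flow field, so motion can be extrapolated even for pixels that leave the image; when the true motion within a layer is non-rigid or otherwise non-affine (the cloth in “Mequon”, the building in “Urban”), this extrapolation introduces errors. Below is a minimal sketch of such a per-layer affine flow field; the parameter layout and grid conventions are illustrative and not taken from the papers.

```python
import numpy as np

def affine_layer_flow(theta, height, width):
    """Evaluate an affine flow field over an image grid.

    theta holds six affine parameters (a0..a5) so that
        u(x, y) = a0 + a1*x + a2*y
        v(x, y) = a3 + a4*x + a5*y
    Returns two (height, width) arrays with horizontal and vertical flow.
    Illustrative only; not the papers' exact parameterization.
    """
    a0, a1, a2, a3, a4, a5 = theta
    y, x = np.mgrid[0:height, 0:width].astype(np.float64)
    u = a0 + a1 * x + a2 * y
    v = a3 + a4 * x + a5 * y
    return u, v

# Example: a layer translating by (2, -1) pixels with a slight horizontal shear.
u, v = affine_layer_flow([2.0, 0.01, 0.0, -1.0, 0.0, 0.0], 480, 640)
```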
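The multi-frame improvement in the bottom row comes from enforcing temporal coherence of the layer support functions. One way to picture such a term, as an illustrative stand-in rather than the exact energy of the CVPR 2012 paper, is a penalty that compares a layer’s support at frame t with its support at frame t+1 warped back along the estimated flow:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def temporal_coherence_penalty(support_t, support_t1, u, v):
    """Quadratic penalty encouraging a layer's support map at frame t to agree
    with its support map at frame t+1 warped back along the flow (u, v).

    support_t, support_t1: (H, W) soft support maps of one layer at frames t, t+1.
    u, v: (H, W) flow field from frame t to frame t+1.
    Hypothetical helper for illustration; not the papers' exact formulation.
    """
    h, w = support_t.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the next frame's support at the positions each pixel flows to.
    warped = map_coordinates(support_t1, [y + v, x + u], order=1, mode='nearest')
    return float(np.sum((support_t - warped) ** 2))
```

Summed over layers and over consecutive frame pairs, a term of this kind ties the segmentation together across time, which is the intuition behind the 4-frame result above.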

Publications

Sun, D., Wulff, J., Sudderth, E., Pfister, H., Black, M. A fully-connected layered model of foreground and background flow. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2013), pp. 2451-2458, Portland, OR, June 2013.

Sun, D., Sudderth, E., Black, M. J. Layered segmentation and optical flow estimation over time. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1768-1775, 2012.