Humans can easily perceive 3D shape from a single 2D image, exploiting multiple kinds of information. This has given rise to multiple subfields (in both human vision and computer vision) devoted to the study of shape-from-shading, shape-from-texture, shape-from-contours, and so on.
The proposed algorithms for each type of shape-from-x remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable, even when a given shape is rendered in multiple styles (including non-photorealistic styles such as line drawings).
We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches. We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape.
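The core retrieval step of such an exemplar-based approach can be sketched as a nearest-neighbor lookup over a database of paired patches. The sketch below is a minimal illustration under simplifying assumptions (random synthetic patches, plain L2 matching, top-k retrieval); the actual system's features, training data, and the subsequent assembly of candidates into a coherent surface are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exemplar database: each row pairs a flattened image
# patch with its corresponding 3D shape patch. The sizes and the
# random data here are illustrative assumptions, not the paper's.
n_exemplars, img_dim, shape_dim = 500, 25, 25
image_patches = rng.standard_normal((n_exemplars, img_dim))
shape_patches = rng.standard_normal((n_exemplars, shape_dim))

def retrieve_candidates(query, k=3):
    """Return the k shape patches whose associated image patches
    lie nearest (L2 distance) to the query image patch."""
    dists = np.linalg.norm(image_patches - query, axis=1)
    idx = np.argsort(dists)[:k]
    return shape_patches[idx], idx

# Querying with a stored patch should retrieve its own exemplar first.
candidates, idx = retrieve_candidates(image_patches[42])
```

In a full system, the retrieved candidate shape patches for overlapping image locations would then be stitched into a single consistent depth or normal map, rather than used independently.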
Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and also recovers shape from shading, even with non-Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape. Collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.
Edward H. Adelson (Dept. of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Lab, MIT)