From images to depths and back

Tal Hassner, Ronen Basri

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

This chapter describes what is possibly the earliest use of dense correspondence estimation for transferring semantic information between images of different scenes. The method described in this chapter was designed for nonparametric, “example-based” depth estimation of objects appearing in single photos. It consults a database of example 3D geometries and associated appearances, searching for those which look similar to the object in the photo. This is performed at the pixel level, in a similar spirit to the more recent methods described in the following chapters. Those newer methods, however, use robust, generic dense correspondence estimation engines. By contrast, the method described here uses a hard-EM optimization to optimize a well-defined target function over the similarity of appearance/depth pairs in the database to appearance/estimated-depth pairs of a query photo. Results are presented demonstrating how depths associated with diverse reference objects may be assigned to different objects appearing in query photos. Going beyond visible shape, we show that the method can be employed for the surprising task of estimating the shapes of occluded objects’ backsides, so long as the reference database contains examples of mappings from appearances to backside shapes. Finally, we show how the duality of appearance and shape may be exploited in order to “paint colors” on query shapes (“colorize” them) by simply reversing the matching from appearances to depths.
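To make the hard-EM idea concrete, the following is a minimal sketch (not the authors' implementation) of example-based depth estimation over appearance/depth patch pairs. All names, the toy database, and the patch size are assumptions for illustration: the E-step hard-assigns each query patch (appearance plus current depth estimate) to its single nearest database pair, and the M-step averages the matched depth patches over their overlaps.

```python
# Hypothetical sketch of hard-EM, example-based depth estimation.
# Assumes a toy database of matched appearance/depth patches and a
# single-channel query image; not the chapter's actual code.
import numpy as np

def extract_patches(img, k):
    """All k x k patches of a 2D array, flattened to vectors."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)])

def hard_em_depth(query, db_app, db_depth, k=3, iters=5, alpha=1.0):
    """Estimate a depth map for `query` from example patch pairs.

    db_app, db_depth: (N, k*k) arrays of matched appearance/depth patches.
    E-step: each query (appearance, current-depth) patch pair picks its
            single nearest database pair (a hard assignment).
    M-step: the selected depth patches are averaged per pixel over
            their overlaps to produce the new depth estimate.
    """
    h, w = query.shape
    depth = np.zeros_like(query, dtype=float)
    app_patches = extract_patches(query, k)
    for _ in range(iters):
        dep_patches = extract_patches(depth, k)
        # Distance of every query pair to every database pair:
        # appearance term plus (weighted) estimated-depth term.
        d = (np.linalg.norm(app_patches[:, None] - db_app[None], axis=2)
             + alpha * np.linalg.norm(dep_patches[:, None] - db_depth[None],
                                      axis=2))
        best = d.argmin(axis=1)  # hard assignment (the "hard" in hard-EM)
        # M-step: accumulate matched depth patches, average over overlaps.
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        idx = 0
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                acc[i:i + k, j:j + k] += db_depth[best[idx]].reshape(k, k)
                cnt[i:i + k, j:j + k] += 1
                idx += 1
        depth = acc / cnt
    return depth

# Toy example: an assumed database where brighter pixels map to larger depths.
rng = np.random.default_rng(0)
db_app = rng.random((50, 9))
db_depth = db_app * 2.0  # assumed appearance-to-depth relation
query = rng.random((8, 8))
est = hard_em_depth(query, db_app, db_depth)
```

Reversing the roles of `db_app` and `db_depth` in the same matching loop gives the colorization direction mentioned in the abstract, where a query shape retrieves appearances instead of depths.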

Original language: English
Title of host publication: Dense Image Correspondences for Computer Vision
Publisher: Springer International Publishing
Pages: 155-172
Number of pages: 18
ISBN (Electronic): 9783319230481
ISBN (Print): 9783319230474
DOIs
State: Published - 1 Jan 2015

Bibliographical note

Publisher Copyright:
© Springer International Publishing Switzerland 2015.

