This chapter describes what is possibly the earliest use of dense correspondence estimation for transferring semantic information between images of different scenes. The method described in this chapter was designed for nonparametric, “example-based” depth estimation of objects appearing in single photos. It consults a database of example 3D geometries and associated appearances, searching for those which look similar to the object in the photo. This is performed at the pixel level, in a similar spirit to the more recent methods described in the following chapters. Those newer methods, however, use robust, generic dense correspondence estimation engines. By contrast, the method described here uses hard-EM optimization of a well-defined target function measuring the similarity of appearance/depth pairs in the database to appearance/estimated-depth pairs of a query photo. Results are presented demonstrating how depths associated with diverse reference objects may be assigned to different objects appearing in query photos. Going beyond visible shape, we show that the method can be employed for the surprising task of estimating the shapes of occluded objects’ backsides, so long as the reference database contains examples of mappings from appearances to backside shapes. Finally, we show how the duality of appearance and shape may be exploited to “paint colors” on query shapes (“colorize” them) simply by reversing the matching from appearances to depths.
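To make the hard-EM matching concrete, the following is a minimal sketch of the idea: the E-step hard-assigns each query patch to its most similar database appearance/depth pair (comparing both appearance and the current depth estimate), and the M-step re-estimates the query depth by averaging the assigned depth patches over overlapping positions. All names, the single-reference-image setup, and the patch parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_patches(img, k):
    # All k-by-k patches of img, flattened to rows, scanned row-major.
    H, W = img.shape
    return np.stack([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1)
                     for j in range(W - k + 1)])

def hard_em_depth(query, ref_app, ref_depth, k=3, n_iters=5, alpha=1.0):
    """Example-based depth estimation via hard-EM (illustrative sketch).

    query     -- HxW grayscale query image
    ref_app   -- reference appearance image (stands in for the database)
    ref_depth -- depth map aligned with ref_app
    alpha     -- hypothetical weight balancing appearance vs. depth terms
    Returns an HxW depth estimate for the query.
    """
    H, W = query.shape
    db_app = extract_patches(ref_app, k)      # appearance patches
    db_dep = extract_patches(ref_depth, k)    # paired depth patches
    q_app = extract_patches(query, k)

    # Initialize the query depth estimate with the mean reference depth.
    depth = np.full(query.shape, ref_depth.mean(), dtype=float)

    for _ in range(n_iters):
        q_dep = extract_patches(depth, k)
        # Hard E-step: pick, per query patch, the single database pair
        # most similar in both appearance and current estimated depth.
        d_app = ((q_app[:, None, :] - db_app[None, :, :]) ** 2).sum(-1)
        d_dep = ((q_dep[:, None, :] - db_dep[None, :, :]) ** 2).sum(-1)
        best = np.argmin(d_app + alpha * d_dep, axis=1)

        # M-step: re-estimate depth by averaging the chosen depth
        # patches over all overlapping pixel positions.
        acc = np.zeros((H, W))
        cnt = np.zeros((H, W))
        idx = 0
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                acc[i:i + k, j:j + k] += db_dep[best[idx]].reshape(k, k)
                cnt[i:i + k, j:j + k] += 1
                idx += 1
        depth = acc / cnt
    return depth
```

The same machinery, run with the roles of appearance and depth swapped, suggests how the appearance/shape duality mentioned above allows colors to be "painted" onto a query shape.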