Abstract
The percept of the shape of a 3D object produced by a single 2D image is veridical enough to recognize the object from a different viewpoint (i.e., to achieve shape constancy). Recovering and recognizing 3D shapes are very important tasks for the human visual system. Recovering a 3D shape from a single 2D image is formally an ill-posed problem: infinitely many 3D shapes can produce the same 2D image. In order to recover a unique and veridical 3D shape, a priori constraints on the 3D shape are required. Last year we presented a model for 3D shape recovery using priors that restrict the 3D shapes to be symmetric (Li, Pizlo, & Steinman, 2008) or at least approximately symmetric (Sawada & Pizlo, 2008). However, there are asymmetric 3D shapes whose every 2D image is consistent with a symmetric interpretation. Interestingly, a human observer can almost always recognize such a 3D shape as asymmetric, even when only a single 2D image is presented. How can the observer reliably discriminate between symmetric and asymmetric 3D shapes when every 2D image of every shape allows for a symmetric 3D interpretation? I will present a new, generalized computational model for the recovery of symmetric and asymmetric 3D shapes. The model first recovers a symmetric 3D shape. Next, the model distorts the recovered shape so that it jointly satisfies the following constraints: symmetry of the 3D shape, planarity of faces, minimum surface area, and 3D compactness. Performance of the model was tested with the same 2D images that were used in the psychophysical experiments, and it was as good as that of the human subjects.
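The joint-constraint step described above can be pictured as minimizing a weighted cost over the shape's vertices. The sketch below is purely illustrative and is not the published model: the weights, the choice of x = 0 as the mirror plane, and the bounding-box volume proxy for 3D compactness are all assumptions made for the example.

```python
import math

# Hypothetical sketch (NOT the published model): the four constraints from
# the abstract -- 3D symmetry, planarity of faces, minimum surface area,
# and 3D compactness -- combined into one weighted cost over vertices.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def asymmetry_cost(verts, pairs):
    # Distance between each vertex and the mirror image (about the
    # assumed plane x = 0) of its partner; zero for a symmetric shape.
    return sum(math.dist(verts[i], (-verts[j][0], verts[j][1], verts[j][2]))
               for i, j in pairs)

def planarity_cost(verts, faces):
    # For each quadrilateral face, distance of the 4th vertex from the
    # plane spanned by the first three; zero when all faces are planar.
    total = 0.0
    for f in faces:
        p0, p1, p2, p3 = (verts[i] for i in f)
        n = _cross(_sub(p1, p0), _sub(p2, p0))
        nn = math.sqrt(_dot(n, n))
        if nn > 0.0:
            total += abs(_dot(_sub(p3, p0), n)) / nn
    return total

def surface_area(verts, faces):
    # Each quad is split into two triangles; area = half the cross norm.
    area = 0.0
    for f in faces:
        p0, p1, p2, p3 = (verts[i] for i in f)
        for a, b, c in ((p0, p1, p2), (p0, p2, p3)):
            n = _cross(_sub(b, a), _sub(c, a))
            area += 0.5 * math.sqrt(_dot(n, n))
    return area

def total_cost(verts, pairs, faces, w=(1.0, 1.0, 0.1, 1.0)):
    s = surface_area(verts, faces)
    # Toy volume proxy: axis-aligned bounding box (an assumption).
    vol = 1.0
    for k in range(3):
        vs = [p[k] for p in verts]
        vol *= max(vs) - min(vs)
    compactness = 36.0 * math.pi * vol ** 2 / s ** 3  # equals 1 for a sphere
    return (w[0] * asymmetry_cost(verts, pairs)
            + w[1] * planarity_cost(verts, faces)
            + w[2] * s
            + w[3] * (1.0 - compactness))

# Toy example: a unit cube, mirror-symmetric about the x = 0 plane.
cube = [(0.5, 0.0, 0.0), (-0.5, 0.0, 0.0),
        (0.5, 1.0, 0.0), (-0.5, 1.0, 0.0),
        (0.5, 1.0, 1.0), (-0.5, 1.0, 1.0),
        (0.5, 0.0, 1.0), (-0.5, 0.0, 1.0)]
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
faces = [(0, 2, 3, 1), (6, 4, 5, 7), (0, 1, 7, 6),
         (2, 3, 5, 4), (0, 2, 4, 6), (1, 3, 5, 7)]

distorted = list(cube)
distorted[0] = (0.7, -0.1, 0.1)  # break symmetry and face planarity

cube_cost = total_cost(cube, pairs, faces)
distorted_cost = total_cost(distorted, pairs, faces)
```

Minimizing such a cost with any standard optimizer would push a distorted shape back toward a symmetric, planar-faced, compact interpretation; in this toy setup the distorted cube indeed incurs a higher cost than the symmetric one.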