Abstract
Two disparate views of a planar surface patch are consistent with an infinite number of (V, H, d) triplets, where V and H are the inclinations about the vertical and horizontal axes and d is the convergence angle. Such indeterminacy is conveniently described by two functions [V = f(K1, d); H = f(K1, K2, d)], with K1 and K2 derivable from the relation between two local features of projected surface markers: orientation disparity and average orientation.
We found that the perceived orientation of a stereoscopic surface depends on the shape of the indeterminacy functions rather than on the simulated (V, H, d) values. In a sequential 2AFC task, observers discriminated which patch deviated more from the frontoparallel plane. Test and reference patches were specified by randomly oriented intrinsic lines visible through a circular aperture. Keeping dref constant, we selected three reference patches with different inclinations (Vref, Href = 50, 30; 30, 50; 50, 50) and generated nine test patches for each reference, combining three convergence angles (dsmall < dref < dlarge) with three inclinations specified by the indeterminacy functions.
In all reference conditions observers performed at chance when the simulated (V, H, d) values of the test were consistent with the pair of indeterminacy functions of the reference, whereas they accurately discriminated patches consistent with different indeterminacy functions. The probability of perceiving the test patch as deviating from the frontoparallel plane more than the reference increased as a direct function of the ratio between the areas subtended by the indeterminacy functions of the test and those of the reference.
A model that selects (V, H, d) values corresponding to the weighted difference between the areas below the indeterminacy functions explained the data better than a weighted linear combination of the simulated (V, H, d) values. We argue that humans recover surface orientation using implicit knowledge of the indeterminacy functions, without further assumptions about viewing geometry.