Abstract
Purpose: We have previously shown that humans learn to down-weight the figure-compression cue to slant in an environment containing a large proportion of randomly shaped figures. We proposed a model in which observers use image information alone to match their internal models of the statistics of figure shape to the statistics of their environment. To further test this model, we asked whether subjects could learn different statistical models for different shape categories, leading to shape-contingent weighting of the compression cue.

Methods: Subjects viewed stereoscopic images of elliptical and diamond-shaped figures and adjusted a 3D line probe to appear perpendicular to the surface. We measured cue weights for circles and square diamonds using test stimuli that were near-circular ellipses and near-square diamonds presented at a slant of 35°, containing 5° conflicts between the compression cue and the stereoscopic cues. Test trials were embedded in a large set of trials containing images of ellipses and diamonds rendered at slants between 20° and 40°. In the first two "baseline" sessions of the experiment, the non-test figures were circles and square diamonds. In the final three "training" sessions, the shapes of some of the figures in the non-test trials were randomized: in one condition the ellipses were randomly shaped, and in the other the diamonds were. No feedback was given.

Results: Observers gave equal weight to the compression cue for both types of figure in the baseline sessions, but after training they gave less weight to the compression cue for the shape category that had been randomized (mean weight change = 0.15).

Conclusions: Humans can learn different prior models for categorically different shapes, so that in one environment figure shape can be more salient as a slant cue for one type of figure than for another, while in another environment it can be less salient.
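For concreteness, the cue weights referred to above can be read in terms of a standard linear cue-combination analysis. The sketch below is illustrative only: it assumes a two-direction conflict design and hypothetical probe settings (neither is spelled out in the abstract) and shows one way a compression-cue weight could be estimated from probe slant settings on the 5° cue-conflict test trials.

```python
import numpy as np

def compression_cue_weight(settings_plus, settings_minus, conflict_deg=5.0):
    """Estimate the weight given to the figure-compression cue from probe
    slant settings (in degrees) on cue-conflict trials.

    Assumes a linear cue-combination model (an illustrative assumption):
        perceived_slant = w * S_compression + (1 - w) * S_stereo
    With the stereo-specified slant held fixed and the compression cue
    offset by +conflict_deg on some trials and -conflict_deg on others,
    the weight is the slope of the settings against the compression offset:
        w = (mean(settings_plus) - mean(settings_minus)) / (2 * conflict_deg)
    """
    delta = np.mean(settings_plus) - np.mean(settings_minus)
    return delta / (2.0 * conflict_deg)

# Hypothetical slant settings derived from the surface-normal probe.
plus = np.array([37.1, 36.4, 38.0, 36.8])   # compression cue specifies a higher slant than stereo
minus = np.array([33.2, 34.0, 33.5, 32.9])  # compression cue specifies a lower slant than stereo
print(f"estimated compression-cue weight: {compression_cue_weight(plus, minus):.2f}")
```

Under this reading, the reported mean weight change of 0.15 corresponds to a drop in the estimated w for the randomized shape category between the baseline and training sessions.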
Research supported by NIH grant EY-17939.