Abstract
Depth perception relies on many cues, with binocular disparity being perhaps the most compelling. Our spatial resolution limit for disparity, however, is only ~4 cpd. This low-pass characteristic suggests that disparity transitions should appear blurry: because high spatial frequencies are lost, a near object against a far background should appear to warp toward the background at its edges, just as blurring a luminance pattern creates intermediate values at light-dark transitions. Since this does not occur, we propose that disparity perception is modulated by other features perceivable at higher spatial resolution. Here we show that luminance, which has a much higher spatial resolution, is combined with disparity to determine the locations of edges. Three subjects judged the locations of depth-defined or luminance-defined edges, which were shown at the same time with varying amounts of spatial separation on a mirror stereoscope. Although subjects were instructed to ignore the task-irrelevant edge in each condition, they could not: even when the two edges could be perceptually distinguished, judgments of the depth edge's location were shifted toward the luminance edge. Judgments of the luminance edge were also shifted toward the depth edge, but to a lesser extent. For both judgments, we found that reducing the visibility of the disparity-defined edge increased the influence of the luminance-defined edge. Our data are thus roughly compatible with optimal cue combination models that give more reliable cues heavier weight: all cues (depth and luminance) contribute to the final percept, with an adaptive weighting that depends on the task and on the acuity with which each cue is perceived. Since luminance acuity is generally higher than disparity acuity, we conclude that object edges will most often be defined by luminance, not disparity.
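The reliability-weighted averaging invoked above can be sketched as follows. This is a minimal illustration of the standard optimal cue combination rule (weights proportional to inverse variance), not the authors' fitted model; the function name, edge locations, and sigma values are hypothetical.

```python
# Hypothetical sketch of reliability-weighted cue combination:
# each cue's weight is its inverse variance, normalized to sum to 1.
def combine_cues(estimates, sigmas):
    """Combine cue estimates (e.g., edge locations signaled by
    luminance and disparity) with weights proportional to 1/sigma^2."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Illustrative values: a luminance edge at 0.0 deg (sigma 0.05 deg)
# and a disparity edge at 0.5 deg (sigma 0.20 deg). The combined
# location estimate is pulled strongly toward the sharper luminance
# cue (~0.03 deg), mirroring the perceptual shift reported above.
combined = combine_cues([0.0, 0.5], [0.05, 0.20])
```

Under this scheme, degrading the disparity edge (raising its sigma) shifts the combined estimate further toward the luminance edge, consistent with the visibility manipulation described in the abstract.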
Meeting abstract presented at VSS 2013