Abstract
Observers perform approximately veridically when asked to walk to or reach for a target under “open-loop” conditions. However, perceptual judgments of relative depth extent are systematically distorted. One possible explanation for this dissociation is that motor responses and perceptual judgments are based on fundamentally different processes. Another possible explanation is that performance has been compared across tasks that require subjects to use different types of visual information. Whereas judgments of relative depth extent require the representation of distances, reaching for a target can be performed on the basis of a point-to-point mapping of effector to target location that does not require the representation of distances. Consequently, we would expect equivalent performance on motor and perceptual tasks when they rely on the same visual information. The experiments presented here were designed to test this prediction.
On each trial, subjects viewed a target line segment (TLS) in depth that varied in length across trials. In the “motor” task, subjects performed “open-loop” reaches, where they matched the length of their reach to the length of the TLS. In some cases, hand and TLS starting locations coincided, and subjects could reach to the TLS endpoint. In other cases, TLS and hand starting locations differed, and subjects had to rely on perceived TLS length alone. In the “perceptual” task, the TLS was briefly presented and subjects adjusted an auxiliary line segment to match the perceived length of the TLS.
Our results show that performance in the perceptual and motor tasks is equivalent. Subjects are more accurate and reliable when they can rely on information about TLS endpoints than when they must rely on TLS length information alone, and they show systematic errors in the latter case. We suggest that perceptual and motor processes are tightly connected and that the systematic errors are caused by distortions in perceived depth extent.