Abstract
The human visual system seems well able to interpret the layout of objects in pictures but is less accurate in determining the location from which a picture was taken. Here, we used an exocentric pointing task in immersive virtual reality (VR) to infer the perceived distance between the observer and a pointing virtual character (VC). Participants adjusted the orientation of the VC to point at a highlighted target positioned on a 2.5 m-radius circle. We presented the VC inside a frontoparallel frame at the center of this circle. The frame behaved either like a picture or like a window. We also used two intermediate conditions: either stereopsis behaved as if the frame were a window while motion parallax behaved as if it were a picture, or vice versa. The VC was rendered at different distances relative to the frame (-2, -1, 0, and 1 m), as specified by projected size and perspective projection, as well as by stereopsis and motion parallax information, depending on condition. Perceived distance was inferred from the adjusted pointing direction and the known location of the target. Perceived distance deviated systematically from the intended distance. The data could be modeled accurately (r^2: mean = 0.93, SD = 0.08) after taking a second parameter, a depth compression factor, into account. We found that if the frame did not provide stereopsis, perceived distance varied little as a function of intended distance, the VC was estimated to be closely behind the frame, and there was little depth compression. However, when the frame provided stereopsis, perceived distance was close to the intended distance once considerable depth compression was taken into account. This study demonstrates that when viewing a picture, observers perceive depicted objects to be slightly behind the picture plane even if size, perspective, and motion parallax indicate different distances, and that stereopsis dominates perceived distance.