Christopher Davoli, James Brockmole, Jessica Witt; Compressing Perceived Distance Through Real and Imagined Tool Use. Journal of Vision 2011;11(11):943. doi: https://doi.org/10.1167/11.11.943.
© ARVO (1962-2015); The Authors (2016-present)
Accumulating evidence suggests that visual perception is scaled according to the actions of an observer. For example, reaching for an object with a tool results in a compressed perception of space. We examined the manner and extent to which these illusions arise. In Experiment 1, participants estimated their distance from targets positioned 6–100 ft. away. Participants who illuminated the target with a laser pointer consistently judged it to be closer than those who pointed at the target with a baton. The magnitude of the underestimation increased with physical distance, indicating a non-uniform compression of space. Experiment 2 examined whether object interaction always results in spatial compression. Participants estimated the distance of targets 6–44 ft. away while holding the nozzle of a shop-vac running in either ‘vacuum’ or ‘blower’ mode. Interactions involving both attraction and repulsion resulted in targets being perceived as closer than in a non-interactive control condition, suggesting that object interactions, regardless of their form, produce compression of perceived distance. Moreover, Experiment 3 showed equivalent spatial distortions among participants who merely imagined interacting with targets, indicating that implied or imagined interactions can have the same perceptual consequences as physical interactions. Finally, Experiment 4 explored whether the perceptual distortions that result from tool use persist in memory after the interaction has terminated. Participants initially described small scenes positioned 17–74 ft. away and were then given a surprise memory test in which they indicated the distance of each scene on a scale model of the environment. Participants who shined a laser pointer on the scenes while generating stories positioned the scenes closer together in their models than did participants in a control condition, suggesting that perceptual distortions persist in memory beyond the moment of interaction.