Abstract
Environmental context is known to influence visual space perception, but characterizing the underlying mechanisms poses many challenges. Natural environments are heterogeneous and complex. Thus, it is difficult not only to match environments across experiments and labs, but also to identify relevant scene features from among the many possibilities. These constraints motivate a need for extensive data collection to evaluate candidate hypotheses. Using rendered scenes and Amazon's Mechanical Turk, the current study provides a means of meeting these challenges and focuses specifically on the possible role of room size in distance judgments. Stimuli were empty rooms lined with irregularly placed doors; rooms varied in depth (6–40 m) and width (1.5–40 m), with an orange cone placed 2–37 m from the observer's viewpoint. One hundred MTurk workers numerically judged the cone's distance, completing 11,484 judgments collectively in 3.5 hours. Analysis of 4 widths × 3 depths × 6 distances (2–4.8 m) showed a main effect of room depth (p = 0.012); deeper rooms were associated with shorter distance judgments, averaging 2.09 m vs. 2.18 m for the 40 m vs. 6 m room depths, respectively. There was no effect of room width (p = 0.159). Analysis of 4 widths × 14 distances (2–37 m; 40 m room depth only) showed a main effect of width (p < 0.001); wider rooms were associated with larger distance judgments (averaging 8.12 m vs. 10.28 m for the 1.5 m vs. 40 m room widths, respectively). For targets at 37 m, mean judged distances differed by over 8 m between the 1.5 m and 40 m wide rooms. This indicates that visual features well outside the nearby ground plane can play a role in environmental context effects. Given past evidence of rapid extraction of mean depth based on global image features, we propose that similar global processes participate in scaling the perceived egocentric distance of objects in the environment.
Meeting abstract presented at VSS 2017
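Below is a minimal sketch, not the authors' analysis code, of how the two factorial analyses reported above could be set up in Python. It assumes long-format judgment data with hypothetical column names (judged_distance, room_width, room_depth, target_distance) and a hypothetical file name; the abstract does not specify the actual pipeline, and a mixed-effects model would be needed to account for repeated judgments per worker.

    # Sketch of the two factorial analyses described in the abstract.
    # Column names and file name are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical long-format data: one row per distance judgment.
    df = pd.read_csv("distance_judgments.csv")

    # Analysis 1: 4 widths x 3 depths x 6 target distances (2-4.8 m).
    near = df[df["target_distance"] <= 4.8]
    near_model = smf.ols(
        "judged_distance ~ C(room_width) * C(room_depth) * C(target_distance)",
        data=near,
    ).fit()
    print(anova_lm(near_model, typ=2))  # main effects and interactions

    # Analysis 2: 4 widths x 14 target distances (2-37 m), 40 m deep rooms only.
    full = df[df["room_depth"] == 40]
    full_model = smf.ols(
        "judged_distance ~ C(room_width) * C(target_distance)",
        data=full,
    ).fit()
    print(anova_lm(full_model, typ=2))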