Abstract
The visual system must transform two-dimensional retinal information into estimates of real-world distance and physical size based on pictorial and binocular cues. Although numerous human neuroimaging studies have investigated how various brain regions represent retinal size, distance, familiar size, and physical size, these studies have often used visually impoverished stimuli (typically two-dimensional images) with conflicting cues, and have examined these factors in isolation. We investigated how brain activation levels (and patterns) in ventral- and dorsal-stream regions depend upon multiple factors (retinal size, physical size, familiar size, and distance) that co-occur in the real world. We presented real objects at real distances during functional magnetic resonance imaging (fMRI). We manufactured MR-compatible Rubik’s cubes and dice at their typical sizes (5.7 cm and 1.6 cm, respectively) and at each other’s typical sizes. We oriented them obliquely at two distances (25 cm, within reach, vs. 91 cm, out of reach), such that two combinations subtended the same retinal angle (4.7°; small-near and large-far), while the other combinations yielded smaller (1.3°; small-far) or larger (15.9°; large-near) retinal angles. Univariate contrasts revealed that dorsal-stream regions – left superior parieto-occipital cortex (SPOC) and bilateral anterior intraparietal sulci (aIPS) – showed higher activation for objects in near (vs. far) space, even when retinal angles were matched, and for physically large (vs. small) objects in near space. Higher-order perceptual visual areas – lateral occipital cortex (LOC) and the parahippocampal place area (PPA) – distinguished between the two objects, showing higher activation for the Rubik’s cube (vs. the die). Activation in bilateral V1 and PPA increased with retinal size. Taken together, our results suggest that distance and physical size are processed in the dorsal stream, where these features are critical for actions such as reaching and grasping, whereas object features are more important in the ventral stream, where they are relevant for perception.
Acknowledgement: NSERC, BrainsCAN