Abstract
Visual scene perception is a core cognitive ability that allows us to recognize where we are and how to act upon our environment; it is therefore crucial to our everyday functioning. Despite this, its neural implementation remains largely unexplored. Although previous neuroimaging studies have identified several scene-selective brain regions – most notably the parahippocampal place area (PPA), retrosplenial cortex (RSC), and a region along the transverse occipital sulcus (TOS) – the data thus far do not indicate what type of neural computations underlies visual scene perception. When a visual system computes a statistic over multiple visual features, it is said to perform texture analysis. Texture analysis is clearly useful for characterizing the texture of surface materials, but from a computational perspective it can also be used to characterize visual scenes. We reasoned that if the brain applies texture analysis to scenes, it should encode textures and scenes in the same cortical regions. To test this hypothesis, we used long-interval fMRI repetition priming to identify regions in which neuronal activity attenuates upon repetition of visually presented textures. This approach allowed us to probe regions that encode visual texture independently of spatial image transformations. Such independence matters because the result of texture analysis (i.e., extracted statistical image information) should be stable across varying retinal projections; indeed, rotated and scaled repetitions of the stimuli did not cancel the priming-induced reduction of activity. In addition, we used a standard fMRI 'localizer' sequence to independently identify the PPA, RSC, and TOS. Our results reveal that the human brain encodes texture in regions that are also scene-selective. This, we argue, indicates that a single cortical network underlies visual scene and texture perception, and that this network uses statistical image information in its computations.
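To make the computational claim concrete, the following Python sketch illustrates why statistical image information should be stable across varying retinal projections. It is our toy illustration, not the authors' analysis pipeline: the synthetic image, the chosen transformations, and the particular statistics (mean, standard deviation, skewness, kurtosis) are hypothetical choices. Pooled statistics collapse all pixels into a single number, discarding spatial layout, so they survive rotation and rescaling (up to interpolation and border effects) even though the pixel arrangement itself does not.

```python
import numpy as np
from scipy import ndimage, stats

def pooled_stats(img):
    """Pooled image statistics: each value summarizes all pixels at once,
    discarding where in the image each intensity occurred."""
    v = img.ravel()
    return np.array([v.mean(), v.std(), stats.skew(v), stats.kurtosis(v)])

rng = np.random.default_rng(0)
# Synthetic 'texture': spatially smoothed random noise (a stand-in stimulus)
img = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=3)

# Spatial transformations that change the retinal projection
rotated = ndimage.rotate(img, angle=40, reshape=False, mode="reflect")
scaled = ndimage.zoom(img, zoom=1.4)

# Pooled statistics remain approximately constant across the transforms
for name, im in [("original", img), ("rotated", rotated), ("scaled", scaled)]:
    print(f"{name:>8}:", np.round(pooled_stats(im), 3))

# ...whereas the pixel layout itself is not preserved under rotation
r = np.corrcoef(img.ravel(), rotated.ravel())[0, 1]
print("pixelwise correlation (original vs rotated):", round(float(r), 3))
```

Running this prints nearly identical statistics for all three versions while the pixelwise correlation collapses, mirroring the abstract's reasoning that a texture-analyzing system should respond to repeated textures even when they are rotated or scaled.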
This research was supported by European Union grants #043157 and #043261.