Abstract
For understanding complex natural environments, the brain must efficiently extract information from a rich, ongoing stream of sensory input. Here we characterize how spatial schemata (i.e., our knowledge about the structure of the world) help the visual system make sense of these inputs. Specifically, we elucidate how schemata contribute to rapidly emerging perceptual representations of the environment. In separate EEG and fMRI experiments, we showed participants fragments of natural scene images, presented at central fixation, while they performed an orthogonal categorization task. Using multivariate analyses, we then investigated where and when neural representations of these fragments were explained by their position within the scene. We found that incoming information is sorted according to its place in the schema, both in scene-selective occipital cortex and within the first 200 ms of vision. This neural sorting operates flexibly across visual features (as measured by a deep neural network model) and across different types of environments (indoor and outdoor scenes). This flexibility highlights the mechanism’s ability to efficiently organize incoming information under dynamic real-world conditions. The resulting organization allows for rapid inferences about the current scene context and its behavioral affordances and can thereby support efficient real-life behaviors.
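To illustrate the kind of multivariate analysis described above, the sketch below shows a generic representational similarity analysis (RSA) that tests whether time-resolved neural patterns evoked by scene fragments are explained by each fragment's position within its scene. This is not the authors' analysis code; all variable names, data shapes, and the position-model construction are hypothetical assumptions for illustration only.

```python
# Minimal RSA sketch (hypothetical data and labels, not the authors' pipeline):
# test whether neural pattern dissimilarities track fragment position within a scene.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_fragments, n_channels, n_times = 24, 64, 200            # hypothetical sizes
eeg = np.random.randn(n_fragments, n_channels, n_times)   # fragment x channel x time
positions = np.random.randint(0, 4, size=n_fragments)     # hypothetical position labels

# Model RDM: fragments from different positions are dissimilar (1), same position similar (0).
model_rdm = pdist(positions[:, None], metric=lambda a, b: float(a[0] != b[0]))

# Correlate the neural RDM with the position model at every time point.
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")  # pairwise pattern dissimilarity
    rsa_timecourse[t], _ = spearmanr(neural_rdm, model_rdm)

# Time points with high correlation indicate when fragment position structures the neural code.
```

In a spatial (fMRI) variant, the same model comparison would be run on voxel patterns within regions of interest rather than across time points.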
Acknowledgement: The research was supported by DFG grants awarded to D.K. (KA4683/2-1) and R.M.C. (CI241/1-1).