Soojin Park, Talia Konkle, Aude Oliva; Neural coding of the size of space and the amount of clutter in a scene. Journal of Vision 2011;11(11):818. doi: 10.1167/11.11.818.
Estimating the size of a space and the level of clutter within it is central to our interactions with, and navigation through, the world, for example, deciding whether or not to take a crowded elevator. The size of a space is a property defined by scene boundaries, while clutter is a property defined by the number of objects within those boundaries. Given that these dimensions can reflect independent properties of indoor scenes, we examined whether size and clutter are differentially represented in any scene-selective neural regions.
We gathered images from 36 different indoor scene categories that spanned six levels of size (from a closet to an enclosed arena) and six levels of clutter (from empty to full), resulting in a fully crossed stimulus set. Observers were shown blocks of these scene categories and performed a one-back repetition task while undergoing whole-brain imaging in a 3T fMRI scanner. Given this parametric design, we fit a linear regression model to the multivoxel pattern activity from scene-selective regions. The regression model was trained with five categories per level and tested with the remaining categories, thus requiring generalization across different semantic categories to correctly predict the level of size or clutter. We found a significant interaction between region (RSC, PPA, LOC) and scene property (Size or Clutter; F(2,10) = 5.0, p < .05): Patterns of activity in the parahippocampal place area (PPA) and the lateral occipital complex (LOC) parametrically predict both size and clutter, but patterns in the retrosplenial complex (RSC) predict only the size of the depicted space.
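The cross-category decoding logic described above can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors' analysis code; the voxel count, noise level, and use of scikit-learn's LinearRegression are all assumptions. Each fold trains on five categories per size level and predicts the level of the held-out categories, so above-chance prediction requires generalization across semantic categories.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# 36 synthetic "scene categories": 6 size levels x 6 categories per level
n_voxels = 20
levels = np.repeat(np.arange(1, 7), 6)   # size level of each category
cat_idx = np.tile(np.arange(6), 6)       # category index within each level

# Each voxel carries a weak linear size signal plus Gaussian noise (assumed model)
signal = rng.normal(size=n_voxels)
patterns = np.outer(levels, signal) + rng.normal(scale=2.0, size=(36, n_voxels))

# Leave-one-category-per-level-out: train on 5 categories per level (30 scenes),
# predict the size level of the 6 held-out scenes; repeat over all 6 folds
predictions = np.zeros(36)
for fold in range(6):
    test = cat_idx == fold
    model = LinearRegression().fit(patterns[~test], levels[~test])
    predictions[test] = model.predict(patterns[test])

# Generalization across categories: correlate predicted with actual size level
r = np.corrcoef(predictions, levels)[0, 1]
print(f"cross-category prediction r = {r:.2f}")
```

The same scheme applies to clutter by swapping the level labels; in the study, running this per region is what distinguishes RSC (size only) from PPA and LOC (size and clutter).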
The results suggest that in the retrosplenial cortex, the size of the scene is coded independently of the amount of clutter within that scene, consistent with previous results showing complementary and distributed neural representations of properties describing the spatial boundary and the content of a scene.