Teresa Pegors, Russell Epstein; Neural coding of scene affordances. Journal of Vision 2010;10(7):1258. doi: https://doi.org/10.1167/10.7.1258.
Previous work has identified cortical regions such as the parahippocampal place area (PPA) that respond more strongly to scenes than to nonscene objects. Little is known, however, about the principles used to encode scenes in this region. One possibility is that scenes are coded based on the actions that they afford. For example, some scenes have highly constrained spatial layouts that afford movement in only one direction (e.g., an alleyway), while other scenes have more open spatial layouts that afford movement in multiple directions (e.g., an open plain). We investigated this issue by scanning subjects with fMRI while they viewed real-world scenes that varied along two dimensions: (1) the extent to which the scene constrained motion (highly constrained vs. unconstrained), and (2) whether the direction of afforded motion was leftwards, rightwards, or straight ahead. Constraint and direction-of-afforded-motion values were determined for each image through pre-scan surveys of a separate group of subjects. We examined both main effects and cross-image adaptation for items with similar vs. different affordances. Preliminary results indicate that the PPA responds more strongly to highly constrained than to unconstrained spatial layouts, consistent with a putative role for this region in processing information about the motion affordances within a scene.