Talia Brandman, Marius Peelen; Object cues facilitate the multivariate representations of scene layout in human fMRI and MEG. Journal of Vision 2018;18(10):1242. doi: https://doi.org/10.1167/18.10.1242.
We recognize our surroundings even with little layout information available in the visual image, such as in fog or darkness. One way to disambiguate scenes is through object cues. For example, a boat supports the inference of a lake. Previously we have shown how scenes facilitate the neural representation of objects. The current study examines the reverse interaction, by which objects facilitate the neural representation of scene layout, using fMRI and MEG. Photographs of indoor (closed) and outdoor (open) real-world scenes were blurred such that they were difficult to categorize on their own, but easily disambiguated by the inclusion of an object. Classifiers were trained to distinguish response patterns to fully visible indoor and outdoor scenes, presented in an independent acquisition run, and then tested on layout discrimination of blurred scenes in the main experiment. fMRI results revealed a strong improvement in classification in left parahippocampal place area (PPA) and occipital place area (OPA) when objects were present, despite the reduced low-level visual feature overlap with the training set in this condition. These findings were specific to left PPA/OPA, with no evidence for object-driven facilitation in right PPA/OPA, object-selective areas, or early visual cortex. Furthermore, contextual facilitation in the left, but not right, PPA/OPA was significantly correlated with classification of objects without scenes. MEG results revealed better decoding of scenes with objects than of scenes alone or objects alone, particularly at around 300 ms after stimulus onset. Altogether, these results provide evidence for inferred scene representation, which is facilitated by contextual object cues in the left scene-selective areas and at around 300 ms from visual onset. Furthermore, our findings demonstrate separate roles for left and right scene-selective cortex in scene representation, whereby left PPA/OPA represents inferred scene layout, influenced by contextual object cues, and right PPA/OPA represents a scene's visual features.
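The cross-decoding scheme described above (train a classifier on response patterns to fully visible indoor vs. outdoor scenes, then test it on patterns evoked by blurred scenes) can be sketched as follows. This is a minimal illustration with synthetic data standing in for voxel or sensor patterns, assuming a scikit-learn linear classifier; all variable names, the noise model, and the parameter values are illustrative, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-ins for multivariate response patterns.
# Rows = trials, columns = voxels/sensors; labels: 0 = indoor, 1 = outdoor.
n_trials, n_features = 80, 200
layout_signal = rng.normal(size=n_features)  # hypothetical layout-related pattern

def simulate(labels, noise_sd):
    # Each trial's pattern = label-dependent signal + Gaussian noise.
    return np.array([lab * layout_signal + rng.normal(scale=noise_sd, size=n_features)
                     for lab in labels])

train_labels = rng.integers(0, 2, n_trials)  # independent run: fully visible scenes
test_labels = rng.integers(0, 2, n_trials)   # main experiment: blurred scenes

X_train = simulate(train_labels, noise_sd=3.0)
X_test = simulate(test_labels, noise_sd=4.0)  # blurring assumed to weaken the signal

# Train on the independent run, test on the held-out condition (cross-decoding).
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_train, train_labels)
accuracy = clf.score(X_test, test_labels)
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

In the study, this comparison would be run separately per condition (blurred scenes with vs. without objects) and per region or time point, with above-chance accuracy taken as evidence that layout information generalizes from visible to blurred scenes.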
Meeting abstract presented at VSS 2018