Caitlin Mullin, Seyed-Mahdi Khaligh-Razavi, Dimitrios Pantazis, Aude Oliva; The neural separation and integration of object and background scene information in natural images. Journal of Vision 2017;17(10):1089. doi: https://doi.org/10.1167/17.10.1089.
© ARVO (1962-2015); The Authors (2016-present)
A major challenge of scene understanding is to describe how information from multiple brain regions is synthesized over time. Behavioral and neurological evidence suggests a division of object and scene-background processing into distinct neural pathways. Despite extensive investigation, whether these pathways function sequentially or in parallel remains unknown. Here, we investigated the spatial and temporal representation of scene perception, from the decomposition of a single natural image into separate object and background information to their recombination into a unified percept. During separate MEG and fMRI sessions, participants viewed a series of natural images containing objects orthogonally paired with different backgrounds. Outside the scanner, participants arranged these stimuli based on the similarity of their object and background content separately. We then employed representational similarity analysis to correlate the behavioral representations of object and scene background with activity-pattern representational dissimilarity matrices of the brain over space (fMRI) and time (MEG). Results from the MEG analysis support parallel processing pathways for object and background information, with both signal onsets occurring simultaneously at ~100 ms. However, the signals diverged after onset: scene backgrounds showed a transient response, whereas object responses were more sustained over time. fMRI searchlight analysis revealed distinct as well as overlapping regions corresponding to the representational similarity of both object and background. While expected regions such as lateral occipital cortex and retrosplenial cortex correlated with object and background representations, respectively, the transverse occipital sulcus and parahippocampal cortex correlated with both representations. This suggests that while some regions parse the visual input into background and object, others may treat the image in its entirety.
These findings shed light on how the higher-order properties of images are separated and converge in specific brain regions at different stages of processing to enable our unified visual experience.
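The core analysis described above, representational similarity analysis (RSA), rank-correlates a behavioral representational dissimilarity matrix (RDM) with a neural RDM. The sketch below illustrates the general technique only; the stimulus counts, the 2-D arrangement coordinates standing in for the behavioral similarity judgments, and the random "neural" patterns are all hypothetical placeholders, not the authors' data or pipeline.

```python
# Minimal RSA sketch with hypothetical data (not the study's actual pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 20  # hypothetical number of object/background pairings

# Behavioral RDM: pairwise distances between stimuli in a 2-D similarity
# arrangement (stand-in for the participants' sorting task).
behavioral_positions = rng.normal(size=(n_stimuli, 2))
behavioral_rdm = pdist(behavioral_positions, metric="euclidean")

# Neural RDM: 1 - correlation between activity patterns (e.g. fMRI voxels
# in a searchlight, or MEG sensor patterns at one time point).
neural_patterns = rng.normal(size=(n_stimuli, 100))
neural_rdm = pdist(neural_patterns, metric="correlation")

# RSA: rank-correlate the condensed upper triangles of the two RDMs.
rho, p_value = spearmanr(behavioral_rdm, neural_rdm)
print(f"RSA correlation: rho = {rho:.3f}, p = {p_value:.3f}")
```

In the study's design, this correlation would be computed at each searchlight location (fMRI) or each time point (MEG), once against the object RDM and once against the background RDM, yielding the spatial and temporal correlation maps the abstract reports.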
Meeting abstract presented at VSS 2017