Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Exploring spatio-temporal neural basis of scene processing with MEG/EEG using a convolutional neural network
Author Affiliations
  • Ying Yang
    Center for the Neural Basis of Cognition, Carnegie Mellon University
  • Robert Kass
    Center for the Neural Basis of Cognition, Carnegie Mellon University
  • Michael Tarr
    Center for the Neural Basis of Cognition, Carnegie Mellon University
  • Elissa Aminoff
    Center for the Neural Basis of Cognition, Carnegie Mellon University
Journal of Vision September 2016, Vol.16, 526. doi:https://doi.org/10.1167/16.12.526
Abstract

Human brains can efficiently process rich information in visual scenes. The neural mechanisms underlying such proficiency may involve not only feedforward processing in the hierarchical visual cortex, but also top-down feedback. To understand these mechanisms, we explored the nature of the visual scene features processed at different brain locations and different time points using high-temporal-resolution MEG and EEG (in separate sessions) while participants viewed briefly presented (200 ms) photographs of scenes. We used linear regression to quantify the correlations between neural signals and visual features of the same images, where these features were derived from a convolutional neural network (CNN) with 8 hierarchically organized layers. Next, we tested whether variance in the neural signals at each time point and each location was explained by features in different layers, thereby creating a spatio-temporal profile describing the significance of correlation with different CNN layers. For both the MEG and EEG sensor data, we observed that the majority of layers exhibited significant correlations from roughly 60 to 450 ms after stimulus onset. When contrasting low-level Layer 1 with higher-level Layer 6, we found that Layer 1 demonstrated greater significance early on (before 120 ms), while Layer 6 showed greater significance somewhat later (after 150 ms). In a preliminary analysis of source-localized MEG data, we again observed sustained significance for the majority of layers, as well as early greater significance of Layer 1 in lower-level visual cortex and later greater significance of Layer 6 in higher-level visual cortex. This early-to-late, lower- to higher-level progression indicates feedforward information flow. Additionally, the sustained significance of low- and high-level layers, which was maintained until at least 400 ms, indicates possible non-feedforward neural responses during scene processing. We are also using connectivity analysis to further investigate whether there is top-down feedback from the frontal and inferior temporal lobes to visual cortex.
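
The regression analysis can be illustrated with a short sketch. The code below is not the authors' implementation; it fits a per-sensor, per-timepoint linear encoding model that predicts MEG/EEG responses from the features of one CNN layer and scores it by cross-validated R². The array dimensions, the synthetic data, and the choice of ridge regression are illustrative assumptions.

    # Sketch (not the authors' code): per-sensor, per-timepoint regression of
    # MEG/EEG responses on CNN layer features, scored by cross-validated R^2.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_images, n_sensors, n_times = 300, 16, 30   # hypothetical dimensions
    n_features = 50                              # e.g. dimensionality-reduced CNN layer features

    X = rng.standard_normal((n_images, n_features))          # CNN features, one row per image
    Y = rng.standard_normal((n_images, n_sensors, n_times))  # trial-averaged MEG/EEG responses

    # Variance explained at each sensor and time point by this layer's features
    scores = np.zeros((n_sensors, n_times))
    for s in range(n_sensors):
        for t in range(n_times):
            model = RidgeCV(alphas=np.logspace(-2, 4, 7))
            scores[s, t] = cross_val_score(model, X, Y[:, s, t],
                                           cv=5, scoring="r2").mean()

    # Spatio-temporal profile: where and when this layer explains the signal best
    print("peak R^2:", scores.max(), "at (sensor, time) =",
          np.unravel_index(scores.argmax(), scores.shape))

Repeating this for each CNN layer (replacing X with that layer's features) yields one score map per layer over sensors and time points, which is the kind of spatio-temporal profile described in the abstract.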

Meeting abstract presented at VSS 2016
