August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Decoding the informative value of early and late visual evoked potentials in scene categorization
Author Affiliations
  • Bruce Hansen
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY
  • Michelle Greene
    Computational Sciences, Minerva Schools at KGI, San Francisco, CA
  • Catherine Walsh
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY
  • Rachel Goldberg
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY
  • Yanchang Zhang
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY
Journal of Vision September 2016, Vol.16, 259. doi:10.1167/16.12.259
Abstract

Recent advances in information-based brain imaging data analysis (e.g., neural decoding) have provided deep insight into the spatiotemporal dynamics of how the brain processes and ultimately represents objects and scenes (e.g., Cichy et al., 2014; Ramkumar et al., 2015, respectively). However, the spatiotemporal dynamics involved in the neural representation of scene category have only been explored with a handful of categories, and therefore can only speak to coarse categorization of exemplars from disparate scenes. The time course of neural signals underlying fine-grained scene categorization therefore remains an open question. Here, we sought to extend information-based analysis of neural temporal dynamics involved in scene categorization via neural decoding of visual evoked potentials (VEPs) measured through electroencephalography (EEG). Specifically, we examined the informational value of different VEPs with respect to their relative ability to signal fine-grained scene category information. Participants viewed 2,250 full-color scene exemplars (matched for luminance and color contrast) drawn from 30 different categories while their brain activity was measured with 128-channel EEG. The stimuli subtended 14° × 14° of visual angle and were presented to the fovea for 500 ms. Participant attention was maintained with a superordinate categorization task. All VEPs were decoded with a linear multiclass support vector machine (SVM) classifier applied within a 40 ms sliding window at each time point for each electrode. The results revealed that the foveal C1 component (peaking 90–100 ms post-stimulus onset) was best able to discriminate between all 30 scene categories, with the bilateral P1 and central-frontal N2 a distant second. All later components contained very little, if any, category information.
Given that the foveal C1 has been argued to be dominated by cortical generators within striate cortex, the current results imply that early low-level visual signals may serve to define the boundaries between different fine-grained scene categories.
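The sliding-window decoding described above can be sketched in scikit-learn. This is a minimal illustration, not the authors' pipeline: the sampling rate (500 Hz), trial count, cross-validation scheme, and synthetic data are assumptions; only the 40 ms window, 128 electrodes, and 30 categories come from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the recordings: 300 trials x 128 electrodes x
# 250 samples (a 500 ms epoch at an assumed 500 Hz) over 30 categories.
n_trials, n_electrodes, n_samples, n_categories = 300, 128, 250, 30
sfreq = 500  # Hz (assumed)
eeg = rng.standard_normal((n_trials, n_electrodes, n_samples))
# Exactly 10 trials per category, shuffled.
labels = rng.permutation(np.repeat(np.arange(n_categories), n_trials // n_categories))

win = int(0.040 * sfreq)  # 40 ms sliding window -> 20 samples

def decode_electrode(eeg, labels, electrode, step=10):
    """Cross-validated category decoding from one electrode's VEP:
    one linear multiclass SVM fit per sliding-window position."""
    starts = np.arange(0, eeg.shape[2] - win + 1, step)
    accs = []
    for t0 in starts:
        X = eeg[:, electrode, t0:t0 + win]   # trials x window samples
        clf = LinearSVC(max_iter=5000)       # one-vs-rest linear SVM
        accs.append(cross_val_score(clf, X, labels, cv=3).mean())
    return starts, np.array(accs)

starts, accs = decode_electrode(eeg, labels, electrode=0)
# On pure noise, accuracy stays near chance (1/30 ≈ 0.033); on real VEP
# data, peaks in this time course would mark informative components
# such as the C1.
```

In practice each electrode and time point yields one accuracy value, producing the electrode-by-time decoding map from which component-level comparisons (C1 vs. P1 vs. N2) are read out.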

Meeting abstract presented at VSS 2016
