Vision Sciences Society Annual Meeting Abstract | October 2020
A geometric state-space framework reveals the evoked potential topography of the visual field
Author Affiliations & Notes
  • Bruce C. Hansen
    Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program
  • Michelle R. Greene
    Bates College, Neuroscience Program
  • David J. Field
    Cornell University, Department of Psychology
  • Footnotes
    Acknowledgements  James S. McDonnell Foundation grant (220020430) to BCH; National Science Foundation grant (1736394) to BCH and MRG.
Journal of Vision October 2020, Vol.20, 1652. doi:https://doi.org/10.1167/jov.20.11.1652

      Bruce C. Hansen, Michelle R. Greene, David J. Field; A geometric state-space framework reveals the evoked potential topography of the visual field. Journal of Vision 2020;20(11):1652. doi: https://doi.org/10.1167/jov.20.11.1652.

© ARVO (1962-2015); The Authors (2016-present)

Voxelwise encoding models of blood oxygen level-dependent (BOLD) signals offer insight into how information at different visual field locations is simultaneously represented in visual cortex. Here, we sought to extend this modeling approach to visual evoked potentials (VEPs) measured at different scalp locations by capitalizing on the principles of the cruciform model (Jeffreys and Axford, 1972a,b). However, using raw VEPs to simultaneously map the visual field onto the scalp topography of EEG electrodes would yield overlapping components that differ in polarity as a function of visual field location, meaning that a complete simultaneous topographic mapping of the visual field would be largely obscured by dipole cancellation. To circumvent this problem, we mapped the localized outputs of a log-Gabor filter encoding model to different VEPs within a geometric state-space framework. Specifically, we measured the correspondence between the state-space geometry produced by our encoding model at every location within large-field visual scenes and the state-space geometry of VEPs measured at each electrode on the posterior scalp. Data were gathered in a standard VEP paradigm whereby participants (n = 23) viewed 150 grayscale scenes (18.5 degrees of visual angle) while undergoing 128-channel EEG. The encoding-model state-space produced at each location of the visual field was then regressed against the neural state-space produced at each time point for each electrode. The results show that each posterior electrode can be simultaneously mapped to unique regions of the visual field, with a complete map of the entire visual field represented across all posterior electrodes beginning at 75 ms post-stimulus onset.
The success of this state-space mapping approach suggests that it is possible to use evoked potentials to assess the temporal encoding of visual information at different locations within the visual field, thereby providing insight into visual feature usage over space and time.
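The mapping procedure described in the abstract, comparing the state-space geometry of an encoding model at each visual-field location with the state-space geometry of each electrode's VEPs, resembles a representational-similarity-style analysis. The Python sketch below illustrates that general idea on synthetic data; all array sizes, variable names, and the specific choices of Euclidean distance and Pearson correlation are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_scenes, n_locations, n_features = 150, 64, 32  # sizes are illustrative
n_electrodes, n_times = 128, 100

# Hypothetical log-Gabor encoding-model outputs: one feature vector
# per scene at each visual-field location.
model_feats = rng.standard_normal((n_scenes, n_locations, n_features))
# Hypothetical VEP amplitudes: scenes x electrodes x time points.
eeg = rng.standard_normal((n_scenes, n_electrodes, n_times))

def geometry(X):
    """State-space geometry: pairwise distances between scene responses."""
    return pdist(X, metric="euclidean")

# Model state-space geometry at each visual-field location.
model_geo = np.stack([geometry(model_feats[:, loc, :])
                      for loc in range(n_locations)])

# For one electrode at one time point, find the visual-field location
# whose model geometry best matches the neural geometry.
neural_geo = geometry(eeg[:, 0, :1])  # electrode 0, first time sample
fits = [pearsonr(model_geo[loc], neural_geo)[0]
        for loc in range(n_locations)]
best_loc = int(np.argmax(fits))
```

Sweeping this comparison over every electrode and time point would yield, per the abstract's logic, a time-resolved assignment of scalp locations to visual-field regions.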

