December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
How do behavioral goals shape the spatiotemporal evolution of the sparse code for scenes?
Author Affiliations & Notes
  • Bruce C. Hansen
    Colgate University
  • Michelle R. Greene
    Bates College
  • David J. Field
    Cornell University
  • Isabel S. H. Gephart
    Colgate University
  • Victoria E. Gobo
    Colgate University
  • Footnotes
    Acknowledgements  James S. McDonnell Foundation grant (220020430) to BCH; National Science Foundation grant (1736394) to BCH and MRG.
Journal of Vision December 2022, Vol.22, 4199. doi:https://doi.org/10.1167/jov.22.14.4199

      Bruce C. Hansen, Michelle R. Greene, David J. Field, Isabel S. H. Gephart, Victoria E. Gobo; How do behavioral goals shape the spatiotemporal evolution of the sparse code for scenes? Journal of Vision 2022;22(14):4199. https://doi.org/10.1167/jov.22.14.4199.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The early visual system is believed to use a sparse distributed code to represent scene information that enables intelligent behavior. Interestingly, the early visual code is not a static solution, but evolves over time to prioritize different scene regions (Hansen et al., 2021). Further, the code is not deterministic, but yields representations that best suit the goals of the observer (Schyns & Gosselin, 2003). However, it is not known how behavioral goals shape the spatiotemporal evolution of sparse distributed coding. We developed a brain-supervised sparse coding network to assess the sparsification of the neural code at every location in scenes over time. We recorded 128-channel EEG while participants viewed repeated presentations of 80 scenes and made cued assessments of either (1) their confidence that a given object was present in a scene, or (2) the likelihood that they would perform a given action afforded by a scene. We then used dynamic electrode-to-image (DETI) mapping (Hansen et al., 2021) to guide the selection of scene regions used to train a sparse-coding network that was augmented by visual evoked potentials (VEPs) to build a large set of visual encoders. The stimuli were then reconstructed by those encoders at different time points and sparsified according to the participants' VEP variance. The results revealed that identical scenes undergo different amounts of sparsification depending on the task as early as 70 ms, with affordance judgments yielding more sparsification. Interestingly, while both tasks resulted in sparse codes for a third of the scenes by 170 ms, the affordance task required de-sparsification of some of the initially sparse-coded scenes. These results suggest that sparse distributed codes are not only shaped by behavioral goals early, but can actually be undone over the spatiotemporal evolution of the visual signal according to the goals of the observer.
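The notions of "sparsification" and its reversal that the abstract turns on can be illustrated with a toy sketch. This is not the authors' brain-supervised network or their VEP-variance procedure; it is a minimal, hypothetical example in which a scene region's encoder responses are sparsified by retaining only the largest-magnitude coefficients, and population sparseness is quantified with Hoyer's measure (0 for a dense uniform code, approaching 1 for a one-hot code). All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

def sparsify(coeffs, keep_frac):
    """Zero out all but the largest-magnitude coefficients.

    keep_frac: fraction of coefficients retained (smaller = sparser code).
    """
    k = max(1, int(round(keep_frac * coeffs.size)))
    # Threshold at the k-th largest absolute coefficient.
    thresh = np.sort(np.abs(coeffs))[::-1][k - 1]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

def population_sparseness(coeffs):
    """Hoyer sparseness of a coefficient vector, in [0, 1]."""
    n = coeffs.size
    l1 = np.abs(coeffs).sum()
    l2 = np.sqrt((coeffs ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# Hypothetical encoder responses to one scene region.
rng = np.random.default_rng(0)
code = rng.normal(size=256)

sparse_code = sparsify(code, keep_frac=0.1)   # keep the top 10%

print(np.count_nonzero(sparse_code))  # 26 of 256 coefficients survive
print(population_sparseness(code), population_sparseness(sparse_code))
```

Under this toy reading, a task-dependent code amounts to choosing a different `keep_frac` (or a different retained set) for the same stimulus, and "de-sparsification" corresponds to the sparseness measure decreasing again at a later time point.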
