August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2023
The Spatiotemporal Dynamics of Goal-driven Efficient-coding Revealed Through Brain-supervised Sparse Code Mapping
Author Affiliations & Notes
  • Bruce Hansen
    Colgate University
  • Michelle Greene
    Bates College
  • David Field
    Cornell University
  • Footnotes
    Acknowledgements  James S. McDonnell Foundation grant (220020430) to BCH; National Science Foundation grant (1736394) to BCH and MRG.
Journal of Vision August 2023, Vol.23, 5799. doi:https://doi.org/10.1167/jov.23.9.5799
Bruce Hansen, Michelle Greene, David Field; The Spatiotemporal Dynamics of Goal-driven Efficient-coding Revealed Through Brain-supervised Sparse Code Mapping. Journal of Vision 2023;23(9):5799. https://doi.org/10.1167/jov.23.9.5799.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The early visual system is believed to build visual representations from a dynamic code that is flexibly adapted to suit the behavioral goals of the observer. It has been argued that the adaptive nature of task-relevant coding is achieved, in part, through selective information reduction over time (Zhan et al., 2018). However, the processes that enable such task-relevant coding efficiency have not been elucidated. Here, we propose that task-relevant information reduction may be achieved through sparse-distributed coding (Olshausen & Field, 1996), which has recently been shown to carry scene-relevant information (Tang et al., 2018; Zhang et al., 2022). To explore this possibility, we replaced the linear operations of the dynamic electrode-to-image (DETI) mapping technique (Hansen et al., 2021) with novel brain-supervised sparse-coding network operations to build sparse reconstructions of different kinds of task-relevant information in scenes. Participants (n = 24) viewed repeated presentations of 80 scenes while making cued assessments about either the presence of an object in the scene or whether the scene afforded the ability to perform a function. Neural data were gathered via 128-channel EEG in a standard visual evoked potential (VEP) paradigm. The variance of the VEPs across images was used to sparsify the responses of the Gabor encoders that reconstructed task-relevant information at each electrode. The results showed that while both tasks produced coarse-to-fine processing over time, there were task-related efficiency differences at electrodes associated with peripheral visual field responses. Specifically, task-relevant information for the function task required fewer encoders than the object task early (~55 ms) but required more encoders (became less efficient) later (~250 ms). Only weak differences were observed over time at electrodes associated with central visual field responses. These findings argue that coding efficiency does not necessarily improve over time, as is currently believed.
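The core idea of the abstract, reconstructing a signal with as few active encoders as possible and using the number of active encoders as an efficiency measure, can be illustrated with a minimal sketch. The authors' brain-supervised DETI pipeline (driven by VEP variance) is not reproduced here; this sketch instead uses plain matching pursuit over a 1-D Gabor dictionary, and every function name and parameter below is an illustrative assumption, not the published method.

```python
import numpy as np

def gabor(n, freq, phase, sigma=0.3):
    """1-D Gabor atom of length n, normalized to unit norm."""
    x = np.linspace(-1, 1, n)
    g = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)
    return g / np.linalg.norm(g)

def gabor_dictionary(n, freqs, phases):
    """Stack Gabor atoms into an (n_atoms, n) dictionary matrix."""
    return np.array([gabor(n, f, p) for f in freqs for p in phases])

def matching_pursuit(signal, D, max_atoms=6, tol=1e-3):
    """Greedy sparse approximation: pick the best-correlated atom at each
    step, subtract its contribution, and stop when the residual is small."""
    residual = signal.astype(float).copy()
    coef = np.zeros(D.shape[0])
    for _ in range(max_atoms):
        corr = D @ residual                 # correlation with each atom
        k = int(np.argmax(np.abs(corr)))    # most informative encoder
        coef[k] += corr[k]
        residual -= corr[k] * D[k]
        if np.linalg.norm(residual) < tol:
            break
    return coef, residual

rng = np.random.default_rng(0)
n = 128
D = gabor_dictionary(n, freqs=[2, 4, 8, 16], phases=[0, np.pi / 2])

# Synthetic stand-in for task-relevant scene structure: a mix of two
# dictionary atoms plus noise (purely illustrative data, not EEG).
signal = 1.5 * D[2] - 0.8 * D[5] + 0.05 * rng.standard_normal(n)

coef, residual = matching_pursuit(signal, D)
n_active = int(np.count_nonzero(coef))  # "coding efficiency" proxy
print(f"active encoders: {n_active}, "
      f"residual norm: {np.linalg.norm(residual):.3f}")
```

In this toy setting, comparing `n_active` across two conditions would play the role of the abstract's encoder-count comparison between the object and function tasks at a given latency.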
