December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
An enhanced inverted encoding model for neural reconstructions of visual perception, attention, and memory
Author Affiliations & Notes
  • Paul S Scotti
    The Ohio State University
  • Jiageng Chen
    The Ohio State University
  • Julie D Golomb
    The Ohio State University
  • Footnotes
    Acknowledgements  NSF DGE-1343012 (PS), NIH R01-EY025648 (JG), NSF 1848939 (JG)
Journal of Vision December 2022, Vol.22, 3975. doi:https://doi.org/10.1167/jov.22.14.3975
      Paul S Scotti, Jiageng Chen, Julie D Golomb; An enhanced inverted encoding model for neural reconstructions of visual perception, attention, and memory. Journal of Vision 2022;22(14):3975. https://doi.org/10.1167/jov.22.14.3975.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Inverted encoding models (IEMs) have recently become a popular method for investigating neural representations by reconstructing the contents of perception, attention, and memory from neuroimaging data. Here we present a more interpretable and flexible approach, “enhanced inverted encoding modeling” (eIEM), that results in improved reconstructions of visual features across a wide range of perceptual and mnemonic applications. eIEM incorporates several methodological improvements, including proper consideration of the encoder’s population-level tuning functions. Improved interpretability is further gained via a trial-by-trial prediction error-based metric; reconstruction quality can be measured in meaningful units that are directly comparable across experiments, rather than the current standard of arbitrary units. Improved flexibility is gained via eIEM’s new goodness-of-fit feature: for trial-by-trial reconstructions, goodness-of-fit values are obtained independently of (and non-circularly with respect to) prediction error. Incorporating this trial-wise goodness-of-fit information can reliably improve reconstruction quality and brain-behavior correlations. We validate the improved utility of eIEM from methodological principles and across three pre-existing fMRI datasets: (1) decoding the horizontal position of a perceived stimulus, (2) decoding an attended item’s orientation from a multi-item stimulus array, and (3) decoding the orientation of an item held in working memory. Researchers can also benefit from partial adoption of eIEM: e.g., goodness-of-fit values from eIEM can be used to improve results obtained from any IEM procedure or decoding metric. Notably, our enhanced IEM procedure is easy to apply and broadly accessible; our publicly available Python package implements our recommended approach (on simulated or real neuroimaging data, including fMRI, EEG, MEG, etc.) in one line of code, and is easily modifiable to compare performance metrics and/or scale up to more complex models.
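For orientation, a minimal sketch of the standard IEM pipeline that eIEM builds on is shown below. This is not the authors' eIEM package; it is an illustrative NumPy implementation under common assumptions (half-wave-rectified cosine channel basis over a circular feature space, least-squares estimation of the encoding weights, pseudoinverse inversion at test). All function and variable names here are hypothetical.

```python
import numpy as np

def make_basis(n_channels, n_stim, power=6):
    # Idealized channel tuning curves: half-wave-rectified cosines raised to a
    # power, with centers evenly spaced over a circular feature space
    # (e.g., orientation or horizontal position).
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    stims = np.linspace(0, 2 * np.pi, n_stim, endpoint=False)
    resp = np.cos(stims[:, None] - centers[None, :])
    return np.clip(resp, 0, None) ** power  # shape: (n_stim, n_channels)

def iem_reconstruct(train_data, train_stim, test_data, n_channels=8, n_stim=180):
    # train_data: (n_train_trials, n_voxels) neural responses
    # train_stim: (n_train_trials,) integer stimulus indices on the n_stim grid
    basis = make_basis(n_channels, n_stim)
    C_train = basis[train_stim]                # predicted channel responses
    # Encoding step: solve B = C @ W for the weight matrix W.
    W = np.linalg.pinv(C_train) @ train_data   # (n_channels, n_voxels)
    # Inversion step: estimate channel responses for held-out test trials.
    C_test = test_data @ np.linalg.pinv(W)     # (n_test_trials, n_channels)
    # Project channel estimates back into feature space.
    return C_test @ basis.T                    # (n_test_trials, n_stim)
```

A reconstruction's peak location can then be compared to the true feature value to compute a trial-wise prediction error in meaningful units (e.g., degrees), in the spirit of the metric described in the abstract.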
