Vision Sciences Society Annual Meeting Abstract  |  August 2023
Volume 23, Issue 9  |  Open Access
An encoding model in shared functional space to reconstruct representations in multiple datasets
Author Affiliations & Notes
  • Laurent Caplette
    Yale University
  • Nicholas B. Turk-Browne
    Yale University
  • Footnotes
    Acknowledgements  Funding: FRQNT Postdoctoral Research Scholarship
Journal of Vision August 2023, Vol.23, 5370. doi:https://doi.org/10.1167/jov.23.9.5370
Abstract

In recent years, neural encoding models have enabled the decoding and reconstruction of percepts, thoughts, and dreams. However, such models are usually fit to individual participants and require large amounts of data, which makes them both laborious to construct and unrepresentative of the population. Here, we propose using shared response modelling (SRM) to build an encoding model that generalizes to new participants. SRM is a functional alignment method that uses a common stimulus to learn a mapping from each individual's data to a lower-dimensional, shared latent space, enabling the simultaneous analysis of otherwise incompatible data. Our approach has four steps: (1) record brain activity from participants while they view a common movie; (2) use SRM to learn a mapping to a shared space; (3) record brain activity while the participants view new stimuli, which can differ across participants; (4) fit an encoding model to these new data after projecting them into the shared space. Once the encoding model is constructed this way, other researchers can transform their own data into its shared feature space without refitting the model, as long as they show their participants the same common movie. We tested this approach using CNN encoding models and the StudyForrest dataset, which contains fMRI activity from participants who watched the movie "Forrest Gump". An encoding model fit on held-out participants and movie segments significantly predicted brain activity in high-level visual areas, and it often outperformed a model fit to each participant individually. However, the specific movie segment used to learn the SRM mapping had a large influence on the outcome. Overall, this method enables the reconstruction of representations in new participants without fitting a new encoding model. We plan to test these ideas further on MEG data.
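
Below is a minimal, illustrative sketch of steps 1-3 (ours, not the authors' released code), assuming BrainIAK's SRM implementation. The arrays are random placeholders standing in for preprocessed fMRI data, and all variable names are hypothetical:

```python
import numpy as np
from brainiak.funcalign.srm import SRM  # pip install brainiak

rng = np.random.default_rng(0)
n_subjects, n_trs_movie, k = 10, 300, 50
n_voxels = [int(rng.integers(800, 1200)) for _ in range(n_subjects)]

# Step 1: responses to a common movie. BrainIAK expects a list of
# (voxels_i, TRs) arrays with the same number of TRs for every subject;
# voxel counts may differ across subjects.
movie_data = [rng.standard_normal((v, n_trs_movie)) for v in n_voxels]

# Step 2: learn each subject's orthogonal mapping W_i (voxels_i x k)
# into a k-dimensional shared latent space.
srm = SRM(n_iter=20, features=k)
srm.fit(movie_data)

# Step 3: responses to new stimuli (which may differ across subjects),
# projected into the shared space as S_i = W_i^T X_i.
n_trs_new = 200
new_data = [rng.standard_normal((v, n_trs_new)) for v in n_voxels]
shared_new = [w.T @ x for w, x in zip(srm.w_, new_data)]  # each (k, TRs)
```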
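Continuing the sketch, step 4 and the cross-participant test might look like the following. The ridge regression and the random "CNN features" are simplifying assumptions on our part (the abstract specifies CNN encoding models but not the exact regression), and transform_subject assumes a recent BrainIAK release that provides it:

```python
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

# Step 4: fit an encoding model from stimulus features to shared-space
# responses. Placeholder features stand in for, e.g., TR-aligned CNN layer
# activations; each subject contributes their own (features, responses) pairs.
feat_dim = 512
feats = [rng.standard_normal((n_trs_new, feat_dim)) for _ in range(n_subjects)]
X_train = np.vstack(feats)                       # (total TRs, feat_dim)
y_train = np.vstack([s.T for s in shared_new])   # (total TRs, k)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)

# A held-out participant needs only the common movie to enter the shared
# space: estimate their mapping, project their test data, and compare with
# the encoding model's predictions, shared dimension by shared dimension.
new_movie = rng.standard_normal((1000, n_trs_movie))  # same TR count as training movie
w_new = srm.transform_subject(new_movie)              # (voxels, k)
new_test = rng.standard_normal((1000, n_trs_new))
observed = (w_new.T @ new_test).T                     # (TRs, k)
test_feats = rng.standard_normal((n_trs_new, feat_dim))
predicted = enc.predict(test_feats)                   # (TRs, k)
scores = [pearsonr(predicted[:, j], observed[:, j])[0] for j in range(k)]
```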
