December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Metamer generation 2.0: using fMRI and deep learning to assess the specificity of human visual processing and encoding
Author Affiliations & Notes
  • Jean-Maxime Larouche
    University of Montreal
  • Clémentine Pagès
    University of Montreal
  • Frédéric Gosselin
    University of Montreal
  • Footnotes
    Acknowledgements: AWS Activate; Hippoc; Mitacs
Journal of Vision December 2022, Vol.22, 4382. doi:https://doi.org/10.1167/jov.22.14.4382

Jean-Maxime Larouche, Clémentine Pagès, Frédéric Gosselin; Metamer generation 2.0: using fMRI and deep learning to assess the specificity of human visual processing and encoding. Journal of Vision 2022;22(14):4382. https://doi.org/10.1167/jov.22.14.4382.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

This methodological research validates the most efficient approach to generate metameric stimuli, i.e., stimuli that recruit different populations of neurons in different brain regions. We first trained different encoding models to linearly predict the fMRI activation elicited by an image in each visual ROI from the activations of each layer of deep convolutional neural networks (DCNNs). To find the most accurate models, we then compared multiple DCNNs trained to classify object categories on millions of images, as well as different fMRI datasets of natural images. Using the most accurate encoding models, we predicted the fMRI activation associated with an image X and iteratively found an image X' (a metamer of X) with the Adam optimizer. We compared different loss functions designed to minimize the distance between X and X' in some parts of the visual cortex (IT to V2) while maximizing the distance in other parts of visual processing (V1). For a given image X, we varied the loss-function parameters and computed images X', X'' and X''', which represent gradual metamers of X for successive parts of the visual system: X' differs from X in V1, X'' in V1 and V2, and X''' in V1, V2 and V4. This approach allows a better understanding of the role of different levels of the ventral visual stream by mapping brain activations into a more interpretable space, that of stimuli, and makes possible the development of more precise experimental protocols in visual neuroscience.
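
As a concrete illustration of the pipeline described above, the sketch below (PyTorch; not the authors' code) assumes per-ROI linear encoding models have already been fit from DCNN layer activations to fMRI responses on a natural-image dataset, and then runs the Adam optimization on the pixels of a candidate metamer with a composite loss that matches the predicted responses of X in V2-to-IT while pushing the predicted V1 responses apart. The backbone choice, layer indices, feature dimensions, encoder weights, and step count are all placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen DCNN trained to classify object categories (ImageNet), used as a feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Hypothetical choices: which DCNN layer best predicts each ROI, plus one linear
# encoding model (layer features -> voxel responses) per ROI. In practice these
# encoders would be fit beforehand (e.g., ridge regression) on an fMRI dataset of
# natural images; random weights are used here purely for illustration.
roi_layers = {"V1": 4, "V2": 9, "V4": 16, "IT": 23}
encoders = {roi: nn.Linear(4096, 500).to(device).eval() for roi in roi_layers}

def predict_roi(img, roi):
    """Predicted fMRI activation for one ROI: DCNN features up to the chosen
    layer, spatially pooled, then passed through the ROI's linear encoder."""
    x = img
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i == roi_layers[roi]:
            break
    feat = F.adaptive_avg_pool2d(x, 8).flatten(1)
    return encoders[roi](feat[:, :4096])  # truncate to encoder input size (illustration only)

# Image X (a random placeholder for a natural image) and its predicted responses per ROI.
seed = torch.rand(1, 3, 224, 224, device=device)
with torch.no_grad():
    targets = {roi: predict_roi(seed, roi) for roi in roi_layers}

# Image X', optimized with Adam: match predicted responses in V2-IT, diverge in V1.
metamer = seed.clone().requires_grad_(True)
opt = torch.optim.Adam([metamer], lr=0.01)
match_rois, diverge_rois = ["V2", "V4", "IT"], ["V1"]

for step in range(500):
    opt.zero_grad()
    loss = sum(F.mse_loss(predict_roi(metamer, r), targets[r]) for r in match_rois)
    loss = loss - sum(F.mse_loss(predict_roi(metamer, r), targets[r]) for r in diverge_rois)
    loss.backward()
    opt.step()
    metamer.data.clamp_(0, 1)  # keep X' a valid image
```

Moving additional ROIs from the match set to the diverge set (e.g., V1 and V2 diverging for X'', and V1, V2 and V4 for X''') would correspond to the gradual metamers described in the abstract.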
