September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Exploiting large-scale neuroimaging datasets to reveal novel insights in vision science
Author Affiliations
  • Ian Charest
    Université de Montréal
    Mila - Québec AI Institute
  • Peter Brotherwood
    Université de Montréal
  • Catherine Landry
    Université de Montréal
  • Jasper van den Bosch
    Université de Montréal
  • Shahab Bakhtiari
    Université de Montréal
    Mila - Québec AI Institute
  • Tim Kietzmann
    University of Osnabrück
  • Frédéric Gosselin
    Université de Montréal
  • Adrien Doerig
    University of Osnabrück
Journal of Vision September 2024, Vol. 24, 149. doi: https://doi.org/10.1167/jov.24.10.149
      Ian Charest, Peter Brotherwood, Catherine Landry, Jasper van den Bosch, Shahab Bakhtiari, Tim Kietzmann, Frédéric Gosselin, Adrien Doerig; Exploiting large-scale neuroimaging datasets to reveal novel insights in vision science. Journal of Vision 2024;24(10):149. https://doi.org/10.1167/jov.24.10.149.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Building quantitative models of neural activity in the visual system is a long-standing goal in neuroscience. This research program has long been limited by the small scale and low signal-to-noise ratio of most existing datasets; with the advent of large-scale datasets, however, it has become possible to build, test, and discriminate between increasingly expressive competing models of neural representation. In this talk I will describe how the scale of the 7T fMRI Natural Scenes Dataset (NSD) has made possible novel insights into the mechanisms underlying scene perception. We harnessed recent advances in linguistic artificial intelligence to construct models that capture progressively richer semantic information, ranging from object categories to word embeddings to scene captions. Our findings reveal a positive correlation between a model's capacity to capture semantic information and its ability to predict NSD data, a result we then replicated with recurrent convolutional networks trained to predict sentence embeddings from visual inputs. This collective evidence suggests that the visual system, as a whole, is better characterized by an aim to extract rich semantic information than by merely cataloging object inventories from visual inputs. Given the substantial power of NSD, collecting additional neuroimaging and behavioral data using the same image set becomes highly appealing. We are expanding NSD through the development of two new datasets: an electroencephalography dataset called NSD-EEG, and a mental imagery vividness ratings dataset called NSD-Vividness. Datasets like NSD not only provide fresh insights into the visual system but also inspire the development of new datasets in the field.
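The comparison described above (relating a feature space's semantic richness to its ability to predict brain responses) is commonly operationalized as an encoding model: regress stimulus features against voxel responses, then score held-out prediction accuracy per voxel. The sketch below illustrates that logic with ridge regression on synthetic data; the feature matrix merely stands in for caption or sentence embeddings, and all names and dimensions here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def fit_encoding_model(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.

    X: (n_stimuli, n_features) semantic features (e.g. caption embeddings).
    Y: (n_stimuli, n_voxels) measured responses.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxelwise_correlation(X, Y, W):
    """Pearson correlation, per voxel, between predicted and held-out responses."""
    pred = X @ W
    pc = pred - pred.mean(axis=0)
    yc = Y - Y.mean(axis=0)
    return (pc * yc).sum(axis=0) / np.sqrt((pc**2).sum(axis=0) * (yc**2).sum(axis=0))

# Synthetic stand-in data (hypothetical dimensions, not the NSD design).
rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 800, 200, 50, 10
W_true = rng.normal(size=(n_feat, n_vox))
X = rng.normal(size=(n_train + n_test, n_feat))                     # "embedding" features
Y = X @ W_true + 0.5 * rng.normal(size=(n_train + n_test, n_vox))   # noisy "voxel" data

W = fit_encoding_model(X[:n_train], Y[:n_train], alpha=10.0)
r = voxelwise_correlation(X[n_train:], Y[n_train:], W)              # one score per voxel
```

Under this framing, candidate feature spaces (object labels, word embeddings, caption embeddings) would each be fit and compared by their held-out voxelwise correlations, which is the quantity the abstract reports as increasing with semantic richness.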
