September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Disentangling the unique contribution of human retinotopic regions using neural control
Author Affiliations & Notes
  • Alessandro T. Gifford
    Freie Universität Berlin
  • Radoslaw M. Cichy
    Freie Universität Berlin
  • Footnotes
    Acknowledgements  A.T.G. is supported by a PhD fellowship of the Einstein Center for Neurosciences. R.M.C. is supported by German Research Foundation (DFG) Grant Nos. CI 241/1-1, CI 241/3-1, and CI 241/1-7, and by the European Research Council (ERC) Starting Grant ERC-StG-2018-803370.
Journal of Vision September 2024, Vol. 24, 300. doi: https://doi.org/10.1167/jov.24.10.300
Citation: Alessandro T. Gifford, Radoslaw M. Cichy; Disentangling the unique contribution of human retinotopic regions using neural control. Journal of Vision 2024;24(10):300. https://doi.org/10.1167/jov.24.10.300.



© ARVO (1962-2015); The Authors (2016-present)

Abstract

Early- and mid-level retinotopic regions of the human ventral visual stream (V1 to V4) implement key stages of visual information processing. However, it remains incompletely understood which aspects of the visual input each region uniquely encodes. A major experimental roadblock in assessing each region's unique role is that their activation profiles are typically highly correlated, obscuring their respective contributions to information processing. Here we used a novel analytical approach to disentangle the unique contribution of each retinotopic region. We started by leveraging the Natural Scenes Dataset (NSD), a large-scale fMRI dataset, to build encoding models of all retinotopic regions. With these models we predicted neural responses to >100k naturalistic images (from NSD/ImageNet). We then implemented two neural control algorithms to find images that maximally distinguished the predicted responses between all pairwise region combinations, thus revealing their idiosyncratic computations. The first neural control algorithm determined images that maximally activated the univariate response of each region while maximally deactivating the univariate responses of the other regions. The second neural control algorithm used genetic optimization to select an image set that decorrelated (r = 0) the multivariate responses between regions, as assessed through representational similarity analysis. We cross-validated both algorithms across NSD subjects, resulting in quantitatively disentangled responses, particularly for non-adjacent regions. The controlling images showed consistent qualitative patterns in, for example, texture frequency, color, and object presence. Finally, we collected EEG responses to the controlling images from the V1-V4 comparison. These images disentangled the univariate and multivariate EEG responses over time, showcasing the generalizability of the neural control solutions across neuroimaging modalities. In sum, our contributions are threefold: we provide new quantitative and qualitative findings on the unique computations of retinotopic regions; we propose novel neural control algorithms capable of disentangling univariate and multivariate representations within biological and artificial information processing systems; and we demonstrate how data-driven exploration promotes discovery in understudied regions of the brain.
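The two control objectives described above can be made concrete with a minimal sketch, shown below. This is an illustrative approximation under stated assumptions, not the authors' implementation: the names (pred_a, resp_a, select_univariate_control, genetic_decorrelate), the z-scored response contrast used for the univariate objective, and the simple mutation-only genetic loop are all assumed for illustration; the abstract specifies only that the first algorithm seeks images that maximally activate one region while maximally deactivating another, and that the second uses genetic optimization to select an image set whose between-region representational dissimilarity matrix (RDM) correlation approaches r = 0.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Sketch of the first (univariate) control objective.
# pred_a, pred_b: hypothetical (n_images,) arrays of each region's predicted
# univariate (e.g., voxel-averaged) response to every candidate image.
def select_univariate_control(pred_a, pred_b, n_select=50):
    """Rank images by the contrast z(A) - z(B) and keep the top n_select."""
    za = (pred_a - pred_a.mean()) / pred_a.std()
    zb = (pred_b - pred_b.mean()) / pred_b.std()
    return np.argsort(za - zb)[::-1][:n_select]

# Sketch of the second (multivariate) control objective.
# resp_a, resp_b: hypothetical (n_images, n_voxels) arrays of the encoding
# models' predicted multivariate responses for the two regions being compared.
def rdm_correlation(resp_a, resp_b, idx):
    """Pearson r between the two regions' RDMs over the image subset idx."""
    rdm_a = pdist(resp_a[idx], metric="correlation")
    rdm_b = pdist(resp_b[idx], metric="correlation")
    return pearsonr(rdm_a, rdm_b)[0]

def genetic_decorrelate(resp_a, resp_b, set_size=100, pop_size=50,
                        n_generations=200, seed=None):
    """Evolve image subsets whose between-region RDM correlation approaches 0."""
    rng = np.random.default_rng(seed)
    n_images = resp_a.shape[0]
    population = [rng.choice(n_images, set_size, replace=False)
                  for _ in range(pop_size)]
    for _ in range(n_generations):
        # Fitness: the closer the between-region RDM correlation is to 0, the better.
        fitness = np.array([-abs(rdm_correlation(resp_a, resp_b, idx))
                            for idx in population])
        survivors = [population[i]
                     for i in np.argsort(fitness)[::-1][:pop_size // 2]]
        # Refill the population with mutated copies: swap a few images in each
        # surviving set for images not currently in it.
        children = []
        for parent in survivors:
            child = parent.copy()
            n_swap = rng.integers(1, 5)
            out = rng.choice(set_size, n_swap, replace=False)
            pool = np.setdiff1d(np.arange(n_images), child)
            child[out] = rng.choice(pool, n_swap, replace=False)
            children.append(child)
        population = survivors + children
    return max(population,
               key=lambda idx: -abs(rdm_correlation(resp_a, resp_b, idx)))

In this sketch the genetic loop keeps the best half of the candidate image sets each generation and mutates them by swapping a few images; the actual optimizer, mutation scheme, and stopping criterion used in the study may differ.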
