December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Whole-network activation maximization: a flexible method for exploring visual selectivity in the brain
Author Affiliations
  • Matthew W. Shinkle
    University of Nevada, Reno
  • Mark D. Lescroart
    University of Nevada, Reno
Journal of Vision December 2022, Vol.22, 4462.
      Matthew W. Shinkle, Mark D. Lescroart; Whole-network activation maximization: a flexible method for exploring visual selectivity in the brain. Journal of Vision 2022;22(14):4462.

      © ARVO (1962-2015); The Authors (2016-present)

Regression models based on deep neural networks (DNNs) can accurately predict BOLD responses to diverse stimuli in human visual cortex. However, interpreting these DNN-based models remains challenging. Past work has used a range of methods, including deconvolution, occlusion sensitivity, and single-unit activation maximization. However, that work has been conducted predominantly in nonhuman primates, using direct neural recordings and models tailored to specific neural populations. Here, we demonstrate that many-unit activation maximization based on human BOLD data is a flexible, data-driven tool for characterizing low- and high-level selectivity throughout visual cortex. We computed all unit activations of a pre-trained object-recognition DNN in response to a set of naturalistic stimuli. We then used regularized linear regression to fit these activations to previously collected BOLD responses from multiple human subjects. In addition to producing accurate predictions of voxel responses throughout visual cortex, this model serves as a differentiable mapping from image inputs to predicted voxel responses. Via gradient ascent, we iteratively updated input images to maximize the predicted responses of individual voxels. The resulting images consistently capture known feature selectivity in multiple cortical regions: images generated for voxels in V1, V2, and V3 replicate independently estimated spatial receptive fields, and images generated for voxels in face- and place-selective areas contain qualitatively face- and place-like content. Additionally, averaging weights across many voxels produces images that summarize selectivity across regions, subjects, and datasets. The accuracy of our model's predictions and the interpretability of the resulting images suggest that this approach could be used as an exploratory tool outside of well-characterized regions. We also show that, by contrasting mean weights for different sets of voxels, we can explore functional distinctions within and between cortical regions. Our results provide the first demonstration of whole-network activation maximization using BOLD data and affirm the exploratory and confirmatory potential of this approach.
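The pipeline the abstract describes (unit activations → regularized regression to voxel responses → gradient ascent on the input image) can be sketched in miniature. The code below is an illustrative toy, not the authors' implementation: the random ReLU projection `features` is a hypothetical stand-in for the pretrained object-recognition DNN, the BOLD responses are simulated, all dimensions are arbitrary, and the gradient ascent uses the ReLU subgradient in closed form rather than autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_units, n_vox, n_stim = 64, 32, 5, 200

# Hypothetical stand-in for a pretrained DNN feature extractor:
# a fixed random projection followed by a ReLU nonlinearity.
W_feat = rng.standard_normal((n_units, n_pix)) / np.sqrt(n_pix)

def features(img):
    """Return 'unit activations' for a flattened image."""
    return np.maximum(W_feat @ img, 0.0)

# Simulated stimuli and voxel responses (in the real method these are
# naturalistic images and measured BOLD responses).
stimuli = rng.standard_normal((n_stim, n_pix))
X = np.stack([features(s) for s in stimuli])            # (n_stim, n_units)
true_w = rng.standard_normal((n_units, n_vox))
Y = X @ true_w + 0.1 * rng.standard_normal((n_stim, n_vox))

# Regularized (ridge) regression from unit activations to voxel responses.
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)  # (n_units, n_vox)

def predict_voxel(img, v):
    """Predicted response of voxel v to an image: features(img) . B[:, v]."""
    return features(img) @ B[:, v]

# Gradient ascent on the input image to maximize one voxel's predicted
# response. For f(x) = ReLU(Wx) . b, a subgradient wrt x is
# W.T @ (b * 1[Wx > 0]).
v, step = 0, 0.1
img = 0.01 * rng.standard_normal(n_pix)
r_init = predict_voxel(img, v)
for _ in range(200):
    pre = W_feat @ img
    grad = W_feat.T @ (B[:, v] * (pre > 0))
    img = img + step * grad
    img = img / max(np.linalg.norm(img), 1.0)  # keep the image bounded
r_final = predict_voxel(img, v)
print(r_init, r_final)
```

In the full method, the feature extractor is a deep network, so the gradient is obtained by backpropagation through the frozen network and the fitted regression weights; averaging `B` columns over a set of voxels before ascent yields a region-level summary image, as described above.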

