Vision Sciences Society Annual Meeting Abstract  |  September 2021
Journal of Vision, Volume 21, Issue 9  |  Open Access
Neural Taskonomy: Explaining high-level visual processing of natural scenes using task-derived representations
Author Affiliations
  • Aria Y Wang
    Carnegie Mellon University
  • Michael J Tarr
    Carnegie Mellon University
  • Leila Wehbe
    Carnegie Mellon University
Journal of Vision September 2021, Vol. 21(9), 2687. https://doi.org/10.1167/jov.21.9.2687

      Aria Y Wang, Michael J Tarr, Leila Wehbe; Neural Taskonomy: Explaining high-level visual processing of natural scenes using task-derived representations. Journal of Vision 2021;21(9):2687. https://doi.org/10.1167/jov.21.9.2687.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

What kind of information does the human brain use when we perceive a natural scene? We investigated this question using representations from 20 deep neural networks, each trained on a different visual task over natural scenes. Using whole-brain data from two of the largest fMRI datasets of natural scene viewing, NSD (the Natural Scenes Dataset) and BOLD5000, we built voxelwise encoding models that use the representations learned for each task to predict brain responses to the viewed scenes. Our results show that networks trained on 2D and 3D tasks explain distinct variance in the brain; in particular, high-level visual processing is better explained by 3D representations. Moreover, network models that learned to focus on different image regions to perform their tasks predicted distinct receptive fields along the visual pathway. In aggregate, the individual brain prediction maps from each task representation enabled us to recover a landscape of how task-related information is processed across the brain. More generally, we suggest that using representations from a pool of task-driven deep neural networks provides a means of combining the power of deep learning in extracting complex representations with the interpretability needed to better explain complex processing in the human brain.
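For readers unfamiliar with voxelwise encoding models, the sketch below illustrates the general approach under stated assumptions: it is not the authors' code, it assumes a ridge-regression mapping from network features to voxel responses (the abstract does not name the regressor), and random arrays stand in for the task-specific network activations and the fMRI data.

    # Minimal sketch of a voxelwise encoding model (assumption: ridge regression
    # mapping task-network features to voxel responses; all data here are random
    # placeholders, not NSD or BOLD5000).
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_images, n_features, n_voxels = 1000, 512, 200

    # Stand-ins for one task network's image features and the measured fMRI responses.
    task_features = rng.standard_normal((n_images, n_features))
    voxel_responses = rng.standard_normal((n_images, n_voxels))

    X_train, X_test, y_train, y_test = train_test_split(
        task_features, voxel_responses, test_size=0.2, random_state=0)

    # Fit one linear model per voxel; RidgeCV handles the multi-output target
    # and selects the regularization strength by cross-validation.
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(X_train, y_train)

    # Voxelwise prediction accuracy on held-out images (Pearson correlation).
    pred = model.predict(X_test)
    r = np.array([np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)])
    print("median voxelwise r:", float(np.median(r)))

Repeating such a fit with features from each of the 20 task networks and comparing the resulting voxelwise accuracy maps is, in spirit, how a task-by-task landscape of explained variance can be assembled.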
