September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Explaining Scene-selective Visual Area Using Task-specific and Category-specific DNN Units
Author Affiliations & Notes
  • Kshitij Dwivedi
    Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
  • Michael F Bonner
    Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States of America
  • Gemma Roig
    Information Systems Technology and Design, Singapore University of Technology and Design, Singapore
    Massachusetts Institute of Technology
Journal of Vision September 2019, Vol.19, 190b. doi:https://doi.org/10.1167/19.10.190b

      Kshitij Dwivedi, Michael F Bonner, Gemma Roig; Explaining Scene-selective Visual Area Using Task-specific and Category-specific DNN Units. Journal of Vision 2019;19(10):190b. doi: https://doi.org/10.1167/19.10.190b.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Deep neural networks (DNNs) trained for classification are often used to explain responses of the visual cortex. It was recently demonstrated that a DNN trained on a task related to the function of a brain region explains its responses better than a DNN trained on an unrelated task. Motivated by these results, we investigate whether we can infer the functionality of different areas in the scene-selective visual cortex by comparing the correlations of brain areas with DNNs trained on different tasks. We select 20 DNNs trained on diverse computer vision tasks, including multiple 2D, 3D, and semantic tasks, from the Taskonomy dataset. We select two areas in the scene-selective visual cortex: the occipital place area (OPA) and the parahippocampal place area (PPA). We perform representational similarity analysis (RSA) of OPA and PPA with the 20 DNNs to investigate whether the relative correlation of brain areas with different tasks can shed light on the functions of these areas. The results reveal that OPA shows a higher correlation with 3D tasks than with semantic and 2D tasks, while PPA shows a higher correlation with semantic tasks. We further probe the functionality of these brain areas using category-specific units of a scene-parsing DNN. The results reveal that OPA correlates highly with the categorical units crucial for navigational affordances, while PPA correlates highly with the categorical units crucial for scene classification. Our results are consistent with previous neuroimaging studies of OPA and PPA function, which show that PPA is involved in scene classification while OPA is involved in representing 3D scene structure and navigational affordances. Our results suggest that performing a searchlight analysis with representational dissimilarity matrices (RDMs) of DNNs trained on different tasks may reveal a functional map of the visual cortex.
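The RSA comparison described above can be sketched in a few lines: build a correlation-distance RDM for each representation, then Spearman-correlate the two RDMs over condition pairs. This is a minimal illustration with random arrays standing in for fMRI response patterns (e.g., OPA or PPA voxels) and Taskonomy DNN activations; the array shapes and names are hypothetical, not taken from the abstract.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Correlation-distance RDM for a (n_conditions, n_features) array.

    pdist returns the condensed (upper-triangular) distance vector,
    which is exactly what RSA compares across representations.
    """
    return pdist(responses, metric="correlation")

def rsa_score(responses_a, responses_b):
    """Spearman correlation between two RDMs over the same stimuli."""
    rho, _ = spearmanr(rdm(responses_a), rdm(responses_b))
    return rho

# Toy data: 50 stimuli, with hypothetical voxel patterns and DNN features.
rng = np.random.default_rng(0)
brain = rng.standard_normal((50, 100))   # stand-in for an ROI's responses
dnn = rng.standard_normal((50, 256))     # stand-in for one task's DNN features
score = rsa_score(brain, dnn)
```

In the study, this comparison is repeated for each brain area against each of the 20 task-specific DNNs, and the relative ordering of the resulting scores is interpreted functionally.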

Acknowledgement: This work was funded by the MOE SUTD SRG grant (SRG ISTD 2017 131). Kshitij Dwivedi was also funded by the SUTD President's Graduate Fellowship.