Vision Sciences Society Annual Meeting Abstract | October 2020
Volume 20, Issue 11 | Open Access
Cortical organization as optimization
Author Affiliations
  • Nicholas Blauch
    Carnegie Mellon University
  • Marlene Behrmann
    Carnegie Mellon University
  • David Plaut
    Carnegie Mellon University
Journal of Vision October 2020, Vol.20, 1683. doi:https://doi.org/10.1167/jov.20.11.1683

Nicholas Blauch, Marlene Behrmann, David Plaut; Cortical organization as optimization. Journal of Vision 2020;20(11):1683. https://doi.org/10.1167/jov.20.11.1683.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The presence of category-selective areas in ventral temporal cortex (VTC) of humans and other primates has been used to support modular theories of perception containing separable components for the processing of categories such as faces and text. However, substantial evidence supports a non-modular, distributed account of processing with topographic, graded specialization. Whether or not the developed system is best characterized as modular, a theory of its development is required. We performed small-scale abstract simulations and large-scale visual recognition simulations to understand the development of specialization in tasks with varying degrees of functional overlap. Abstract autoencoder simulations revealed a small benefit from sharing hidden representations across orthogonal input domains (that is, from avoiding modularity). However, when the autoencoder was required to encode inputs from both domains simultaneously, it developed fully modular representations. By varying the fraction of inputs drawn from a single domain versus from both domains, we could precisely control the degree of developed modularity. We next examined a deep convolutional neural network trained to recognize objects and faces. A fully shared network performed slightly better than architecturally modular networks matched in total number of units. Further, the shared network developed substantial but graded specialization for objects and faces, with many units demonstrating domain-preferential mean responses and category-invariant information, while retaining these properties for the non-preferred domain. In ongoing work with a map-like deep convolutional recurrent neural network, we find that a simple and biologically plausible scaling of connection noise or probability with axon distance may be sufficient to produce localized face-selective clusters. Our modeling approach demonstrates that graded, localized specialization may emerge from optimizing hidden representations for multiple tasks under architectural constraints, and that such graded specialization may be preferable to modularity even in the abstract scenario of representing orthogonal patterns. Our results thus weaken the case for full-fledged modularity in visual recognition.
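As a minimal sketch of the distance-dependent wiring constraint described above, the following NumPy code samples a binary recurrent connectivity mask for units laid out on a 2-D sheet, with connection probability decaying with the distance between units. The exponential falloff, grid size, length constant, and helper name are illustrative assumptions, not the authors' implementation; the abstract specifies only that connection noise or probability scales with axon distance.

import numpy as np

rng = np.random.default_rng(0)

def distance_dependent_mask(grid_size=16, lambda_mm=2.0, spacing_mm=1.0):
    """Sample a binary connectivity mask for a recurrent layer laid out on a
    2-D sheet, with connection probability decaying exponentially with the
    Euclidean distance between units (functional form and parameters are
    illustrative assumptions)."""
    # Unit coordinates on the sheet, in millimetres.
    ys, xs = np.meshgrid(np.arange(grid_size), np.arange(grid_size), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing_mm  # (N, 2)
    # Pairwise Euclidean distances between all units, shape (N, N).
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Connection probability falls off with distance; sample the binary mask.
    p_connect = np.exp(-dists / lambda_mm)
    return rng.random(dists.shape) < p_connect

mask = distance_dependent_mask()
print(mask.shape, mask.mean())  # overall connection density of the sampled sheet

Gating the recurrent weights of a map-like layer with such a mask, or scaling weight noise by the same distances, biases optimization toward local connectivity, the kind of constraint under which the abstract suggests localized face-selective clusters may emerge.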
