December 2017, Volume 17, Issue 15
Open Access
OSA Fall Vision Meeting Abstract
Bottom-up and top-down computations in word- and face-selective cortex
Author Affiliations
  • Kendrick Kay
    University of Minnesota
Journal of Vision December 2017, Vol.17, 13. doi:https://doi.org/10.1167/17.15.13
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Specific regions of ventral temporal cortex (VTC) appear to be specialized for the representation of certain visual categories: for example, the visual word form area (VWFA) for words and the fusiform face area (FFA) for faces. However, a computational understanding of how these regions process visual inputs is lacking. Here we develop a fully computable model that addresses both bottom-up and top-down effects and quantitatively predicts responses in VWFA and FFA (Kay & Yeatman, eLife, 2017). This model is based on measurements of BOLD responses to a wide range of carefully controlled images obtained while subjects perform different tasks on the images. The model shows how a bottom-up stimulus representation is computed, how this representation is modulated by top-down interactions with the intraparietal sulcus (IPS), and how IPS activity is related to the behavioral goals of the subject. We also briefly discuss the broader endeavor of modeling neural information processing and propose principles for assessing and evaluating models (Kay, NeuroImage, 2017).
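The model's core idea, a bottom-up stimulus-driven response scaled by a task-dependent top-down gain linked to IPS activity, can be sketched as follows. This is a minimal illustration of that general form, not the published model: the saturating response function, the semisaturation constant 0.5, and all names here are assumptions for exposition.

```python
def bottom_up_response(contrast_energy):
    """Assumed saturating contrast-response function for the
    stimulus-driven (bottom-up) component of the VTC response."""
    return contrast_energy / (contrast_energy + 0.5)

def ips_gain(task_engagement):
    """Assumed top-down gain: IPS scales the VTC response upward
    when the stimulus is behaviorally relevant to the subject's task."""
    return 1.0 + task_engagement

def vtc_response(contrast_energy, task_engagement):
    """Predicted response: bottom-up representation multiplied by
    a task-dependent top-down modulation."""
    return ips_gain(task_engagement) * bottom_up_response(contrast_energy)

# Same stimulus under two tasks: top-down modulation alone
# changes the predicted response.
passive = vtc_response(0.8, task_engagement=0.0)
attended = vtc_response(0.8, task_engagement=1.0)
```

In this sketch, identical images evoke different predicted responses depending on the subject's behavioral goal, which is the qualitative pattern the abstract attributes to IPS-mediated top-down modulation.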
