October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
A compositional letter code explains orthographic processing
Author Affiliations & Notes
  • Aakash Agrawal
    Center for Biosystems Science and Engineering, Indian Institute of Science
  • K.V.S. Hari
    Department of Electrical Communication Engineering, Indian Institute of Science
  • S.P. Arun
    Center for Neuroscience, Indian Institute of Science
  • Footnotes
    Acknowledgements  This research was funded through a Senior Fellowship from the DBT-Wellcome India Alliance (Grant # IA/S/17/1/503081) and the DBT-IISc partnership programme (both to SPA).
Journal of Vision October 2020, Vol.20, 1035. doi:https://doi.org/10.1167/jov.20.11.1035

      Aakash Agrawal, K.V.S. Hari, S.P. Arun; A compositional letter code explains orthographic processing. Journal of Vision 2020;20(11):1035. https://doi.org/10.1167/jov.20.11.1035.


      © ARVO (1962-2015); The Authors (2016-present)


Reading is a recent cultural invention that exploits the intrinsic ability of the visual system to process text. However, the underlying neural mechanism that enables us to read efficiently is unclear. Our ability to read fluently could arise from the formation of specialized detectors for letter combinations. Alternatively, the representation of words could be more compositional, like the default representation in visual cortex, wherein the neural response to an object can be predicted from the responses to its parts. Here, we show evidence for the latter hypothesis by constructing a model in which the response to a string can be predicted from single-letter responses. This model is purely visual in nature and does not incorporate any linguistic factors. We tested the performance of this model in predicting human performance in two tasks. The first was visual search, in which subjects had to find an oddball target string embedded among distractors. The second was a lexical decision task, in which subjects had to indicate whether a given string was a word or not. In both tasks, the model predicted human performance accurately, without invoking any lexical or linguistic factors. To investigate the underlying neural correlates, we measured brain activity using fMRI while subjects performed a lexical decision task. We found that dissimilarities between words and nonwords in visual search corresponded best with neural dissimilarities in the Lateral Occipital region (LO). By contrast, lexical decision times, which were best predicted using word-nonword dissimilarities in the compositional model, were best matched to the overall activation of the Visual Word Form Area (VWFA). Thus, viewing a string of letters activates a compositional code in the higher visual areas, and subsequent decisions about its lexical status are computed in the visual word form area.
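The abstract does not give the model's equations, but the core idea (string-level dissimilarity predicted as a weighted combination of single-letter dissimilarities, with no lexical factors) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the toy letter dissimilarities, the function names, and the specific weighting of same-position versus different-position letter pairs are all hypothetical choices, not the authors' implementation.

```python
# Hypothetical sketch of a compositional string model (illustration only,
# not the authors' code). Assumption: the perceived dissimilarity between
# two equal-length strings is a weighted sum of single-letter
# dissimilarities, with same-position letter pairs weighted more strongly
# than different-position pairs.

# Toy single-letter dissimilarities (assumed values, for illustration).
LETTER_DISS = {
    frozenset("AB"): 0.8,
    frozenset("AC"): 0.6,
    frozenset("BC"): 0.5,
}

def letter_dissimilarity(a, b):
    """Dissimilarity between two single letters; identical letters are 0 apart."""
    return 0.0 if a == b else LETTER_DISS[frozenset((a, b))]

def string_dissimilarity(s1, s2, w_within=1.0, w_across=0.2):
    """Predict string dissimilarity from letter-level terms alone.

    w_within weighs letter pairs at the same position; w_across weighs
    pairs at different positions. Both weights are illustrative choices;
    no lexical or linguistic information enters the prediction.
    """
    assert len(s1) == len(s2), "sketch assumes equal-length strings"
    total = 0.0
    for i, c1 in enumerate(s1):
        for j, c2 in enumerate(s2):
            w = w_within if i == j else w_across
            total += w * letter_dissimilarity(c1, c2)
    return total
```

Under this sketch, a transposition such as "AB" versus "BA" yields a larger predicted dissimilarity than a single-letter substitution like "AB" versus "AC", because both same-position terms differ. Such predicted dissimilarities could then be compared against oddball search times or used as predictors of lexical decision difficulty, in the spirit of the two tasks described above.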

