August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract | August 2012
Early visual areas recruited in automatic contextual processing of words
Author Affiliations
  • Elissa Aminoff
    Center for the Neural Basis of Cognition, Carnegie Mellon University
  • Michael Miller
    Department of Psychological and Brain Sciences, University of California, Santa Barbara
    Institute of Collaborative Biotechnologies, University of California, Santa Barbara
  • Scott Grafton
    Department of Psychological and Brain Sciences, University of California, Santa Barbara
    Institute of Collaborative Biotechnologies, University of California, Santa Barbara
  • Michael Tarr
    Center for the Neural Basis of Cognition, Carnegie Mellon University
    Department of Psychology, Carnegie Mellon University
Journal of Vision August 2012, Vol.12, 1112. doi:https://doi.org/10.1167/12.9.1112
      Elissa Aminoff, Michael Miller, Scott Grafton, Michael Tarr; Early visual areas recruited in automatic contextual processing of words. Journal of Vision 2012;12(9):1112. https://doi.org/10.1167/12.9.1112.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Objects do not appear randomly in our environment, but rather cluster in typical contexts. For example, an oven will likely appear in close proximity to a refrigerator and a microwave. Previously, the parahippocampal cortex (PHC), the retrosplenial complex (RSC), and the medial prefrontal cortex (MPFC; Bar & Aminoff, 2003) were identified as the neural mechanism underlying contextual processing by comparing BOLD activity elicited when viewing pictures of objects with strong contextual associations (e.g., shower) with activity elicited when viewing pictures of objects with weak contextual associations (e.g., folding chair). This neural mechanism was defined in various experiments using only pictures, and it was unclear whether these regions would also respond when viewing words with strong contextual associations. To explore this, twenty participants rated 360 words for contextual strength. Based on these ratings, words were classified as having either a strong or a weak context. In a separate experiment, ninety-five participants performed a recognition memory test, unrelated to contextual processing, on these 360 words while undergoing fMRI. BOLD activity elicited by words with strong contextual associations (e.g., bullet) was compared with activity elicited by words with weak contextual associations (e.g., fountain) during the memory test. To isolate contextual processing, other factors such as concreteness, imageability, memory condition, frequency, familiarity, and number of letters were included as regressors in the model. As hypothesized, words with strong contextual associations elicited greater activity in the PHC, RSC, and MPFC. However, we also observed significant differential activity in early visual areas between strong and weak context words that were equated on concreteness and imageability. We posit that this unexpected finding reflects automatic, contextually driven processing that provides feedback to early visual areas.

Meeting abstract presented at VSS 2012
