Elissa Aminoff, Michael Miller, Scott Grafton, Michael Tarr; Early visual areas recruited in automatic contextual processing of words. Journal of Vision 2012;12(9):1112. doi: 10.1167/12.9.1112.
Objects do not appear randomly in our environment; rather, they cluster in typical contexts. For example, an oven will likely appear near a refrigerator and a microwave. Previously, the parahippocampal cortex (PHC), the retrosplenial complex (RSC), and the medial prefrontal cortex (MPFC; Bar & Aminoff, 2003) were identified as the neural mechanism underlying contextual processing by comparing the BOLD activity elicited by pictures of objects with strong contextual associations (e.g., shower) with that elicited by pictures of objects with weak contextual associations (e.g., folding chair). Because this mechanism was defined in experiments using only pictures, it was unclear whether these regions would also respond when viewing words with strong contextual associations. To explore this, twenty participants rated 360 words for contextual strength; on the basis of these ratings, words were classified as having either a strong or a weak context. In a separate experiment, ninety-five participants performed a recognition memory test, unrelated to contextual processing, on these 360 words while undergoing fMRI. BOLD activity elicited by words with strong contextual associations (e.g., bullet) was compared with that elicited by words with weak contextual associations (e.g., fountain) during the memory test. To isolate contextual processing, other factors such as concreteness, imageability, memory condition, frequency, familiarity, and number of letters were included as regressors in the model. As hypothesized, words with strong contextual associations elicited greater activity in the PHC, RSC, and MPFC. However, we also observed significant differential activity in early visual areas between strong and weak context words that were equated on concreteness and imageability. We posit that this unexpected finding reflects automatic, contextually driven processing that provides feedback to visual areas.
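The logic of including nuisance factors as regressors can be illustrated with a minimal sketch. The data below are simulated and the variable names (context_strength, the five nuisance covariates) are illustrative assumptions, not the authors' actual pipeline; the point is only that a context effect can be estimated with lexical confounds partialled out in a single linear model.

```python
import numpy as np

# Illustrative simulation: per-word BOLD responses with a context effect
# plus five nuisance covariates (standing in for concreteness,
# imageability, frequency, familiarity, and number of letters).
rng = np.random.default_rng(0)
n_words = 360

nuisance = rng.normal(size=(n_words, 5))
context_strength = np.repeat([1.0, 0.0], n_words // 2)  # strong vs. weak
true_context_effect = 0.5
bold = (true_context_effect * context_strength
        + nuisance @ rng.normal(size=5)       # confound contributions
        + rng.normal(size=n_words))           # measurement noise

# Design matrix: intercept, context regressor, nuisance regressors.
X = np.column_stack([np.ones(n_words), context_strength, nuisance])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)

# beta[1] estimates the strong-vs-weak context effect with the nuisance
# factors regressed out; it should recover roughly the simulated 0.5.
print(beta[1])
```

Because the nuisance covariates sit in the same design matrix, their shared variance with the response is absorbed by their own coefficients rather than inflating or masking the context estimate.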
Meeting abstract presented at VSS 2012