Vision Sciences Society Annual Meeting Abstract  |  October 2020
Journal of Vision, Volume 20, Issue 11
Open Access
Accessing object concepts: Effects from brief exposure using anaglyphs
Author Affiliations & Notes
  • Caitlyn Antal
    Concordia University
  • Roberto G. de Almeida
    Concordia University
  • Footnotes
    Acknowledgements: NSERC (Natural Sciences and Engineering Research Council) and SSHRC (Social Sciences and Humanities Research Council)
Journal of Vision October 2020, Vol. 20, 945. https://doi.org/10.1167/jov.20.11.945
Caitlyn Antal, Roberto G. de Almeida; Accessing object concepts: Effects from brief exposure using anaglyphs. Journal of Vision 2020;20(11):945. https://doi.org/10.1167/jov.20.11.945.


© ARVO (1962-2015); The Authors (2016-present)


We investigated how concepts are accessed via object and feature recognition at two brief exposure durations (50/60 or 190/200 ms). Participants performed a picture-word masked-priming congruency task, judging whether a picture and a word were related to each other. Participants wore blue-red anaglyph glasses, with objects presented in red in the left visual field and words presented in blue in the right visual field. The anaglyphs allowed us to investigate the role of early posterior visual projections during object and word recognition by projecting the word to the visual word form area in the left hemisphere and the picture to the right temporal lobe, one of the bilateral object-recognition areas. Pictures and target words were presented simultaneously, with a 10 ms difference in duration to account for their different recognition times: objects were presented for 50 or 190 ms, while words were presented for 60 or 200 ms. For each picture, one of four word probes was presented for the congruency decision: the basic-level category label of the picture (dog), a high-prototypical feature (bark), a low-prototypical feature (fur), or a superordinate feature (pet). Response times (RTs) and accuracy for the congruency decisions were analyzed with linear mixed-effects models. Participants were faster and more accurate in responding to picture-word pairs presented for 190/200 ms than for 50/60 ms. Furthermore, high-prototypical and superordinate feature probes yielded significantly faster and more accurate responses when stimuli were presented for 190/200 ms. Crucially, however, basic-level probes yielded significantly faster RTs and greater accuracy than all other probe types at both presentation times. Taken together, these results suggest that concept tokening relies on non-decompositional processes, and that conceptual features are processed only after concepts have been accessed.
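The abstract reports that RTs and accuracy were analyzed with linear mixed-effects models. As a minimal sketch of what such an RT model might look like, the snippet below fits a random-intercept model with probe type and exposure duration as fixed effects, in Python with statsmodels on synthetic stand-in data. The formula, variable names, and random-effects structure are assumptions for illustration, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: 20 participants x 4 probe types x 2 durations.
# Effect sizes below are invented, chosen only to mirror the abstract's
# qualitative pattern (basic-level advantage, faster RTs at long exposure).
probes = ["basic", "high_proto", "low_proto", "superordinate"]
rows = []
for subj in range(20):
    subj_offset = rng.normal(0, 30)  # per-participant random intercept
    for probe in probes:
        for dur in (50, 190):
            base = 600 if probe == "basic" else 650  # basic-level advantage
            if dur == 190:
                base -= 40                            # longer exposure -> faster
            rows.append({"subject": subj, "probe": probe, "duration": dur,
                         "rt": base + subj_offset + rng.normal(0, 25)})
df = pd.DataFrame(rows)

# Linear mixed-effects model: RT ~ probe x duration, random intercept
# per participant (groups=subject).
model = smf.mixedlm("rt ~ C(probe) * C(duration)", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```

An analogous model with a binary accuracy outcome would instead use a mixed-effects logistic regression; item (picture) random effects could also be added if the trial-level data support them.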

