September 2017, Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Automaticity and Specificity of Attentional Capture by Language
Author Affiliations
  • Leeland Rogers
    Department of Psychological and Brain Sciences, University of Delaware
  • Sarah Fairchild
    Department of Psychological and Brain Sciences, University of Delaware
  • Anna Papafragou
    Department of Psychological and Brain Sciences, University of Delaware
  • Timothy Vickery
    Department of Psychological and Brain Sciences, University of Delaware
Journal of Vision August 2017, Vol. 17, 950. doi: https://doi.org/10.1167/17.10.950
Abstract

The extent to which language affects non-linguistic processes is debated. While it is well established that a spoken word quickly directs attention to the relevant object (e.g., Tanenhaus et al., 1995), recent research suggests that spoken language automatically guides visual attention even when it is task-irrelevant (Salverda & Altmann, 2011). Here, we ask whether stored linguistic knowledge – in the form of verbal labels associated with single objects – can capture attention when it is task-irrelevant. Participants were exposed to two novel manmade artifacts: one with an associated label (e.g., "zeg") and one without. In a pilot study, after the training phase we administered a modified Posner cueing task (Posner & Petersen, 1989) in which locations were uninformatively precued with either labeled or unlabeled objects before participants responded to a target letter "F" appearing on the left or the right side of the screen. If stored linguistic knowledge associated with an object is capable of "capturing" attention to any extent, participants should be faster to respond to the target on trials where the labeled object is a valid cue for target location. Indeed, participants were faster on valid trials than invalid trials in the first block, t(12) = 2.4, p < 0.05, suggestive of attentional capture by labeled objects. We replicated this finding in a follow-up experiment that required localizing a non-linguistic target (a simple rectangle): participants were again faster to respond on valid trials than invalid trials, t(27) = 2.2, p < 0.05. These findings provide evidence that stored linguistic knowledge is capable of capturing attention: merely learning a label for an object gives that object attentional priority.

Meeting abstract presented at VSS 2017
