September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Spatial and feature-based attention to emotional faces
Author Affiliations
  • David De Vito
    Department of Psychology, University of Guelph
  • Cody Cushing
    Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital
  • Hee Yeom Im
    Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital
  • Reginald Adams, Jr.
    Department of Psychology, The Pennsylvania State University
  • Kestutis Kveraga
    Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital
Journal of Vision August 2017, Vol. 17, 1290. doi:10.1167/17.10.1290
Abstract

Anticipation enhances our ability to respond adaptively to positive and negative stimuli. However, findings are mixed regarding how different types of expectation affect response efficiency. Here, we examined how cueing stimulus location (spatial attention) and/or emotion (feature-based attention) affects response efficiency on a trial-by-trial basis. Participants (N = 44) performed speeded identification of emotional expressions (angry versus happy) presented peripherally (left or right visual field) for 250 ms. Prior to face onset, colored arrow cues, which were 95% predictive, were displayed for 1 s to inform subjects of stimulus location, emotion, both, or neither (Uncued). Overall, responses to happy faces and to faces presented in the right visual field yielded shorter reaction times (RTs), but RTs, accuracy, and response patterns were strongly modulated by cueing condition. Uncued trials elicited the longest RTs (average: 615 ms). Cueing location produced faster (average: 578 ms) and more accurate responses than uninformative cues (p < .001), while cueing emotion yielded even shorter RTs (average: 546 ms) than cueing location (p < .001). Cueing both location and emotion evoked the fastest responses (average: 481 ms; p < .001), a superadditive effect exceeding the sum of the separate benefits conferred by cueing emotion and location (p < .001). Moreover, facial identity cues (gender and race) interacted with cueing. On Uncued trials, or when only location was cued, we found emotion × gender (faster for happy female and angry male faces), emotion × race (faster for angry black and happy white faces), and emotion × gender × race interactions, all of which were abolished by emotion cueing. In conclusion, being able to anticipate facial emotion and location substantially speeds up recognition, while interactions with facial identity cues (race and gender) are abolished by emotion cueing.
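As a back-of-envelope check of the superadditive pattern (this arithmetic is derived here from the reported average RTs and does not appear in the original abstract):

Location-cue benefit: 615 − 578 = 37 ms
Emotion-cue benefit: 615 − 546 = 69 ms
Additive prediction: 37 + 69 = 106 ms, i.e., a dual-cue average of about 509 ms
Observed dual-cue benefit: 615 − 481 = 134 ms

The observed dual-cue benefit exceeds the additive prediction by roughly 28 ms, consistent with the superadditivity the authors report.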

Meeting abstract presented at VSS 2017
