September 2018, Volume 18, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
Labeling Emotion: Semantic Processing of Facial Expressions
Author Affiliations
  • Yi-Chen Kuo
    Department of Psychology and Center for Research in Cognitive Sciences, National Chung Cheng University, Chiayi, Taiwan
  • Chon-Wen Shyi
    Department of Psychology and Center for Research in Cognitive Sciences, National Chung Cheng University, Chiayi, Taiwan; Advanced Institute of Manufacturing with High-tech Innovations, National Chung Cheng University, Chiayi, Taiwan
  • Ya-yun Chen
    Department of Psychology and Center for Research in Cognitive Sciences, National Chung Cheng University, Chiayi, Taiwan
Journal of Vision September 2018, Vol.18, 615. doi:https://doi.org/10.1167/18.10.615
Abstract

Due to their formidable diversity and nuanced variability, we suspect that using facial expressions to convey emotion requires not only perceptual processing of facial configurations but also labeling them with semantic codes. Here we examined whether an image-to-label conversion (ILC) strategy was actively employed when participants compared facial expressions of different identities in four conditions. In the BaseFace condition, they were to match facial expressions from the same identity; in the BaseLabel condition, they were to choose from a pair of affective labels the one that matched a previously displayed facial expression; in the FaceCue condition, they were to match two faces of different identities exhibiting the same expression; finally, in the LabelCue condition, they were to choose the facial expression that matched a previously displayed affective label. The results of Experiment 1 suggested that the inferior performance in the FaceCue condition might be due to performing ILC twice in that condition, both when the face cue and when the face alternatives were displayed, which can be both time-consuming and error-prone. In Experiment 2, we manipulated the stimulus onset asynchrony (SOA) between the cue and the choice display to further explore the timing of ILC and the duration it might require. The results indicated that (a) with a relatively short SOA (500 ms), participants may have adopted feature-based matching, which was time-consuming; (b) when the SOA was increased to 1,000 ms, they appeared to rely upon holistic processing of faces, yielding faster RTs; and finally (c) when the SOA was further increased to 1,500 ms, participants appeared to adopt the ILC strategy, converting both the face cue and the test faces into corresponding affective codes, which consequently led to longer RTs. These interpretations were further tested and corroborated by the results of Experiment 3, where we used a block design to manipulate the duration of the SOA.
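
For readers who want a concrete picture of the design being described, the minimal Python sketch below lays out the cue-to-choice trial structure under stated assumptions: the four condition names and the three SOA values (manipulated in Experiments 2 and 3) are taken from the abstract, whereas the data structure, field names, and stimulus placeholders are purely hypothetical illustrations and not the authors' experiment code.

    from dataclasses import dataclass
    from itertools import product

    # Condition names and SOA values named in the abstract; everything
    # else below is an illustrative assumption.
    CONDITIONS = ("BaseFace", "BaseLabel", "FaceCue", "LabelCue")
    SOAS_MS = (500, 1000, 1500)

    @dataclass
    class Trial:
        condition: str   # which cue/choice pairing is shown
        soa_ms: int      # interval from cue onset to choice-display onset
        cue: str         # placeholder: a face image or an affective label
        choices: tuple   # placeholder two-alternative choice display

    def build_trials():
        """Cross the four conditions with the three SOAs; one placeholder trial per cell."""
        trials = []
        for condition, soa in product(CONDITIONS, SOAS_MS):
            # Only LabelCue presents an affective label as the cue; the rest cue with a face.
            cue = "affect_label" if condition == "LabelCue" else "face_image"
            # Only BaseLabel asks for a choice between labels; the rest ask for a choice between faces.
            choices = ("label_A", "label_B") if condition == "BaseLabel" else ("face_A", "face_B")
            trials.append(Trial(condition, soa, cue, choices))
        return trials

    if __name__ == "__main__":
        for t in build_trials():
            print(f"{t.condition:9s} SOA={t.soa_ms:4d} ms  cue={t.cue:12s} choices={t.choices}")

Running the sketch prints one placeholder trial per condition-by-SOA cell, which simply makes the crossing of the two factors explicit.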

Meeting abstract presented at VSS 2018
