Abstract
Due to their formidable diversity and nuanced variability, we suspect that using facial expressions to convey emotion requires not only perceptual processing of facial configurations but also labeling them with semantic codes. Here we examined whether an image-to-label conversion (ILC) strategy is actively employed when participants compared facial expressions of different identities in four conditions. In the BaseFace condition, they were to match facial expressions from the same identity; in the BaseLabel condition, they were to choose which of a pair of affective labels matched a previously displayed facial expression; in the FaceCue condition, they were to match two faces of different identities exhibiting the same expression; finally, in the LabelCue condition, they were to choose the facial expression that matched a previously displayed affective label. The results of Experiment 1 suggested that the inferior performance in the FaceCue condition might be due to performing ILC twice in that condition, once when the face cue was displayed and again when the face alternatives were displayed, which can be both time-consuming and error-prone. In Experiment 2, we manipulated the stimulus onset asynchrony (SOA) between the cue and the choice display to further explore the timing of ILC and the duration it might require. The results indicated that (a) with a relatively short SOA (500 ms), participants may have adopted feature-based matching, which was time-consuming; (b) when the SOA was increased to 1,000 ms, they appeared to rely upon holistic processing of faces, yielding faster RTs; and (c) when the SOA was further increased to 1,500 ms, participants appeared to adopt the ILC strategy, converting both the face cue and the test faces into their corresponding affective codes, which consequently led to longer RTs. These interpretations were further tested and corroborated by the results of Experiment 3, where we used a block design to manipulate the duration of the SOA.
Meeting abstract presented at VSS 2018