Vision Sciences Society Annual Meeting Abstract  |   August 2012
Categorical structure and perception of facial expressions in dyadic same-different task
Author Affiliations
  • Olga Kurakova
    Center for Experimental Psychology MCUPE, Moscow, Russia
  • Alexander Zhegallo
    Institute of Psychology RAS, Moscow, Russia
Journal of Vision August 2012, Vol.12, 972.
      Olga Kurakova, Alexander Zhegallo; Categorical structure and perception of facial expressions in dyadic same-different task. Journal of Vision 2012;12(9):972.

      © ARVO (1962-2015); The Authors (2016-present)

Emotional facial expressions are perceived categorically by individual subjects in discrimination and identification tasks (Young et al., 1997). We tested whether this effect can be replicated in a shared same-different task. Participants, arranged in 15 dyads, were synchronously presented with pairs of images for 3 s on two separate displays. Their task was to discuss the images and decide whether they were the same or different, without seeing each other's image. As stimuli, we used six stills from a video recording of a male poser performing a transition between happy and surprised facial expressions. To explore the strategies used and the structure of verbal categories, participants' eye movements and speech were recorded. Analysis of overall task performance showed categorical perception (better discrimination of images far from category prototypes) in 1-step pairs, and a U-shaped function in pairs of identical images. Although free description was allowed, the verbal units used fell into three main categories: configural (describing the deformation of facial features), emotional, and situational. Based on their performance, the dyads were divided into three equal groups. Comparison of verbal activity in the contrasting groups showed that in the low-performance group (accuracy 0.62–0.7), the mean number of verbal units per trial was distributed equally across all stimulus pairs, whereas in the high-performance group (0.82–0.94), it increased significantly when discussing identical images or 1-step pairs, and the use of emotion terms was more diverse (low-group subjects preferred more general descriptors such as "happy" and "surprised" without mentioning complex mixed emotions). Low-performance dyads showed no clear categorical perception. Moreover, the average eye fixation patterns differed significantly: dyads with lower performance looked longer at the eye region, whereas subjects with higher performance paid more attention to the mouth region.
We suggest that, for the happy-to-surprised transition, extended verbal description and reliance on mouth transformations help dyads elaborate more effective strategies for differentiating images.

Meeting abstract presented at VSS 2012

