July 2013, Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Perceptual Learning of Facial Expressions
Author Affiliations
  • Hisa Hasegawa
    Chubu Gakuin University
  • Hideyuki Unuma
    Kawamura Gakuen Woman's University
  • Philip J. Kellman
    University of California, Los Angeles
Journal of Vision July 2013, Vol.13, 254. doi:https://doi.org/10.1167/13.9.254

      Hisa Hasegawa, Hideyuki Unuma, Philip J. Kellman; Perceptual Learning of Facial Expressions. Journal of Vision 2013;13(9):254. https://doi.org/10.1167/13.9.254.


Perceptual learning (PL) facilitates the pickup of structural information in patterns (Gibson, 1969; Kellman, 2002). Recent work suggests that Perceptual Learning Modules (PLMs), consisting of many short, speeded classification trials, can accelerate the pickup of structural information in ecological situations (e.g., Kellman, Massey & Son, 2010). In the present study, we examined whether PLMs facilitate the pickup of information from facial expressions. The experiment consisted of a pretest, PL interventions, and a posttest. The task in the pretest and posttest was visual search for a facial expression target, drawn from one of the 6 basic emotion categories, among neutral-face distractors. Materials were photographs of facial expressions; the photo sets used in the pretest, PL interventions, and posttest were all different from each other. We tested the effects of two PL interventions: an Emotion PLM and an Identity PLM. In the Emotion PLM, participants were required to classify the emotion of a target person in the display; the choices were 6 photos of the 6 emotion categories expressed by another person. In the Identity PLM, observers were required to select, among photos showing other facial expressions, the one posed by the same person. Each observer received 360 learning trials, given in 10 blocks of either the Emotion PLM or the Identity PLM. The primary dependent measure was the improvement in search efficiency (search slope) in the visual search task from pretest to posttest. Results showed that the pretest-to-posttest change in search slope was significantly greater in the Emotion PLM condition than in the Identity PLM condition. Emotion category also significantly affected search slope. These results suggest (a) that information pickup from facial expressions can be improved in only a few hundred trials, (b) that this improved ability transferred to novel situations, and (c) that fluency of information pickup improved especially for the fear, anger, and sadness categories.
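The dependent measure above, the search slope, is conventionally the least-squares slope of mean reaction time regressed on display set size (ms per item); a shallower slope indicates more efficient search. The sketch below illustrates that computation. The set sizes and RT values are invented for illustration only; the abstract does not report them.

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) vs. set size (items).

    A shallower slope (fewer ms/item) means more efficient visual search.
    """
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(mean_rts) / n
    # Standard least-squares slope: covariance(x, y) / variance(x).
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

# Hypothetical data: search becomes more efficient from pretest to posttest.
pre = search_slope([4, 8, 12], [640, 780, 920])    # 35.0 ms/item
post = search_slope([4, 8, 12], [600, 660, 720])   # 15.0 ms/item
improvement = pre - post                           # 20.0 ms/item
```

The study's contrast of interest is this pretest-to-posttest slope difference, compared between the Emotion PLM and Identity PLM groups.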

Meeting abstract presented at VSS 2013

