Vision Sciences Society Annual Meeting Abstract  |   September 2018
Strategies for improving own- and other-race face recognition with learning context and multiple image training
Author Affiliations
  • Jacqueline Cavazos
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Eilidh Noyes
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Alice O'Toole
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
Journal of Vision September 2018, Vol.18, 1105. doi:https://doi.org/10.1167/18.10.1105
Abstract

The Other-Race Effect (ORE) refers to the well-known finding that people recognize own-race faces more accurately than other-race faces. Is it possible to reduce the ORE? Here, we examined the role of learning context, in combination with multiple-image training, on recognition accuracy for own- and other-race faces. East Asian and Caucasian participants saw images of each identity in either a contiguous order (multiple images of an identity grouped together) or a distributed order (multiple images of an identity dispersed randomly throughout the learning set). Participants learned faces from four highly variable images (Exp. 1A) or from one image repeated four times (Exp. 1B). A robust other-race effect was found in both experiments, indicating that image variability alone is insufficient to eliminate the other-race effect. Also, the effect of learning context was mediated by image variability. Participants in the distributed learning condition were more accurate when they trained with a single repeated image (Exp. 1B), F(1,136) = 5.633, MSE = 0.60, p = .019, ηp2 = .04, but not when they trained with multiple variable images (Exp. 1A), F(1,129) = 0.140, MSE = 0.49, p = .71, ns. Overall, accuracy was higher for multiple image training (M = 1.22, SD = 0.49) than repeated single image training (M = 1.01, SD = 0.56), F(1, 265) = 10.712, MSE = 0.55, p = .001, ηp2 = .039. Our novel approach revealed that a distributed learning context improves own- and other-race recognition accuracy, but only when participants can already "tell faces together" (Jenkins et al., 2011). Also, using a cross-race experiment, we extended previous results that suggest that multi-image training improves recognition accuracy for own-race (Murphy et al., 2015) and other-race faces (cf. Matthews and Mondloch, in press). Our results indicate that, with lower image variability, distributed learning can improve recognition accuracy for both own- and other-race faces.
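
As a check on the reported effect sizes, note that for a single-degree-of-freedom effect, partial eta squared follows from the F statistic and its error degrees of freedom by the standard relation sketched below; the worked values use only the F statistics and degrees of freedom reported in the abstract.

\[
\eta_p^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}},
\qquad
\frac{5.633}{5.633 + 136} \approx .040,
\qquad
\frac{10.712}{10.712 + 265} \approx .039
\]

Both values are consistent with the reported ηp2 = .04 (learning-context effect, Exp. 1B) and ηp2 = .039 (multiple-image training advantage).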

Meeting abstract presented at VSS 2018
