Abstract
The Other-Race Effect (ORE) refers to the well-known finding that people recognize own-race faces more accurately than other-race faces. Is it possible to reduce the ORE? Here, we examined the role of learning context, in combination with multiple-image training, on recognition accuracy for own- and other-race faces. East Asian and Caucasian participants saw images of each identity in either a contiguous order (multiple images of an identity grouped together) or a distributed order (multiple images of an identity dispersed randomly throughout the learning set). Participants learned faces from four highly variable images (Exp. 1A) or from one image repeated four times (Exp. 1B). A robust ORE was found in both experiments, indicating that image variability alone is insufficient to eliminate the effect. Moreover, the effect of learning context was moderated by image variability: participants in the distributed learning condition were more accurate when they trained with a single repeated image (Exp. 1B), F(1, 136) = 5.633, MSE = 0.60, p = .019, ηp² = .04, but not when they trained with multiple variable images (Exp. 1A), F(1, 129) = 0.140, MSE = 0.49, p = .71. Overall, accuracy was higher for multiple-image training (M = 1.22, SD = 0.49) than for repeated single-image training (M = 1.01, SD = 0.56), F(1, 265) = 10.712, MSE = 0.55, p = .001, ηp² = .039. Our novel approach revealed that a distributed learning context improves own- and other-race recognition accuracy, but only when participants can already "tell faces together" (Jenkins et al., 2011). Using a cross-race design, we also extended previous results suggesting that multiple-image training improves recognition accuracy for own-race (Murphy et al., 2015) and other-race faces (cf. Matthews & Mondloch, in press). Our results indicate that, with lower image variability, distributed learning can improve recognition accuracy for both own- and other-race faces.
Meeting abstract presented at VSS 2018