Abstract
Introduction: Various studies have shown that humans preferentially use certain facial features when discriminating identity (Schyns et al., 2002). However, the ability of humans to learn which features are most discriminating has not been thoroughly investigated. The goal of this study is to measure human perceptual learning in comparison to an optimal Bayesian learner in a face identification task.

Methods: We selectively augmented features (nose, eyes, chin, and mouth) of simulated faces of four individuals to create four feature sets. In each set, a distinct feature maximally discriminated the individuals' faces, as confirmed by an ideal observer analysis and a human study. The study comprised 50 sessions, each with 50 learning blocks of four learning trials. Observers were informed that a feature set was chosen at random and maintained throughout a learning block. On each trial, a face from the selected feature set was displayed embedded in white Gaussian noise, and observers identified the displayed face. Feedback was provided about the individual's identity, but not about the discriminating feature.

Results: Humans learned to use the informative features, with face identification percent correct increasing by 7–26%. Relative to the optimal observer, certain features (eyes, mouth) were used more efficiently. Human efficiency, in general, remained constant or increased with learning trial.

Conclusion: Human observers learn discriminating features as well as or better than the optimal observer, or, more likely, humans are suboptimal when integrating information across multiple features.
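The ideal observer analysis referred to above can be sketched in code. The following is a minimal illustration, not the study's actual stimuli or analysis: faces are stood in for by hypothetical feature vectors, and the Bayesian ideal observer for known signals in white Gaussian noise with equal priors reduces to picking the template nearest the noisy stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 "faces" as 64-dimensional feature vectors
# (illustrative stand-ins, not the simulated face images of the study).
n_faces, dim, sigma = 4, 64, 2.0
templates = rng.normal(size=(n_faces, dim))

def ideal_observer(stimulus, templates):
    """For known signals in white Gaussian noise with equal priors,
    the maximum-likelihood (ideal Bayesian) decision is the template
    with the smallest Euclidean distance to the stimulus."""
    d2 = ((templates - stimulus) ** 2).sum(axis=1)
    return int(np.argmin(d2))

# Monte Carlo estimate of the ideal observer's percent correct:
# on each trial a face is drawn at random and shown in Gaussian noise.
n_trials = 5000
correct = 0
for _ in range(n_trials):
    true_id = rng.integers(n_faces)
    stimulus = templates[true_id] + rng.normal(scale=sigma, size=dim)
    correct += ideal_observer(stimulus, templates) == true_id
pc_ideal = correct / n_trials
```

Human efficiency, as reported in the Results, is conventionally defined relative to such an ideal observer (e.g., as a squared ratio of sensitivities), so an ideal percent correct estimated this way serves as the normalizing benchmark.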