Abstract
Professional forensic face examiners surpass untrained participants on a range of challenging face identity-matching tasks (White et al., 2015). We posited that qualitative differences between the groups in White et al. would be revealed by examining performance at the level of individual stimulus items. We developed a novel item-based analysis of the face- and body-informative stimuli from White et al. (2015), which were selected based on human and computer performance in previous work (Rice et al., 2013). For each item, we recorded the participant group with the highest accuracy as that item's "winner". Next, we implemented a wisdom-of-the-crowds approach by fusing (averaging) item responses within each group, incrementally adding participants to amplify the item effects. The distribution of wins across groups for face- versus body-informative items differed strongly in the fully fused (i.e., all group participants averaged) sample (Chi-squared = 41.97, df = 2, p < .01), with examiners winning 70% of the face-informative cases but only 32% of the body-informative cases. A strong dissociation was also seen for same- versus different-identity items in the fully fused case (Chi-squared = 11.28, df = 2, p < .01). Although examiner superiority increased with fusion for both same- and different-identity items, the increase for different-identity items was striking, with examiner wins rising from 31% in 1-participant samples to 78.5% in the fully fused sample. Criterion analyses ruled out response bias as a full account of the difference. In summary, professional forensic face examiners were highly skilled at using information from the internal face for identification, but failed to use identity cues from the body effectively. They were also more effective at rejecting different-identity items than at confirming same-identity matches.
This novel item analysis with wisdom-of-the-crowds fusion proved useful as a tool for exploring strategic differences between examiners and untrained participants.
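The item-winner fusion procedure described above can be sketched in code. The following is a minimal illustration on synthetic binary (correct/incorrect) responses; the group names, participant counts, item counts, and accuracy levels here are hypothetical stand-ins, not the data from White et al. (2015):

```python
import random

random.seed(0)

# Hypothetical synthetic data: for each group, a matrix of binary item
# responses (True = correct), participants x items. Illustrative only.
GROUPS = ["examiners", "students", "controls"]
N_PARTICIPANTS, N_ITEMS = 10, 20
data = {
    g: [[random.random() < 0.55 + 0.05 * gi for _ in range(N_ITEMS)]
        for _ in range(N_PARTICIPANTS)]
    for gi, g in enumerate(GROUPS)
}

def fused_accuracy(group, item, k):
    """Average accuracy on one item over the first k participants of a group."""
    return sum(data[group][p][item] for p in range(k)) / k

def item_winners(k):
    """For each item, the group with the highest k-participant fused accuracy.

    Ties go to the first group in GROUPS; a real analysis would need an
    explicit tie-breaking rule.
    """
    winners = []
    for item in range(N_ITEMS):
        accs = {g: fused_accuracy(g, item, k) for g in GROUPS}
        winners.append(max(accs, key=accs.get))
    return winners

# Compare win counts for 1-participant samples vs. the fully fused sample,
# mirroring the incremental-fusion comparison described in the abstract.
for k in (1, N_PARTICIPANTS):
    wins = item_winners(k)
    print(k, {g: wins.count(g) for g in GROUPS})
```

In the actual analysis, the win distributions for item subsets (face- vs. body-informative, same- vs. different-identity) would then be compared across groups with a chi-squared test.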
Meeting abstract presented at VSS 2017