Abstract
With recent advances in machine learning, machine face-processing systems have achieved extraordinary accuracy, largely built upon data-driven deep-learning models. Though promising, a critical aspect that limits both the performance and the social responsibility of deployed face-processing systems is the inherent diversity of human appearance. Every human appearance reflects something unique about a person, including their heritage, identity, experiences, and visible manifestations of self-expression. Here, we compare the performance of human and artificial face-processing systems when presented with the faces of people who deliberately modify their appearance. Body modification is a popular form of appearance alteration used to signal affiliation with a group, to signal socioeconomic status, to better match local norms of beauty, and to communicate information about one's personality. In Study 1, we use the body-modification sub-category of the Distinctive Human Appearance Dataset to evaluate humans against state-of-the-art face-detection models, face detection being the first step in face-recognition systems, on their ability to detect faces in 137 images of people with body modifications. The results show that these face-detection algorithms, but not people, perform much worse than their reported performance on other datasets. Because these systems lack a causal understanding of how faces can be modified and altered, they produce misidentifications and demeaning predictions when embedded in face-recognition technologies. In Study 2, a collaboration between technologists and leaders in the body modification community, we draw on interviews with members of the body modification community to contextualize the community's experiences with facial-recognition technologies.
In the case of the body modification community, the negative effects of these technology failures depend on the context in which face-recognition technology is deployed, e.g., on a personal electronic device or in the context of state surveillance.