Abstract
There are neural mechanisms specific to face recognition in the human brain, but how perceptual and social properties of face images interact to determine the category boundary between faces and non-faces remains an open question. What counts as a face according to the visual system? Humans, robots, and dolls all have faces, but do all of these engage the same mechanisms for face processing? To compare the neural activity elicited by real and artificial faces with that elicited by objects, we used event-related potentials (ERPs) to record participants’ responses to human faces, robot faces, and non-face objects in two experiments. In each experiment, we measured the amplitude and latency of the P100 and N170 ERP components, both of which are known to exhibit selectivity for face images. In Experiment 1, we presented 24 participants with upright faces of real humans, robots, dolls, and computer-generated humans, and included images of clocks as a non-face control stimulus. We found that robot faces did not differ significantly from either objects or human faces in terms of N170 amplitude, suggesting that they were processed in a manner ‘between’ object-like and face-like. By comparison, the other artificial faces did not differ from human faces in terms of P100 and N170 responses. To further investigate the possibility that robot faces inhabit a liminal space between faces and non-faces, in Experiment 2 we recruited an additional 24 participants and measured their P100 and N170 responses to upright and inverted human, robot, and clock faces. Here, we found that face inversion effects on component amplitude and latency were only partially evident for robot faces, again suggesting that robot faces are a boundary case for face processing. Overall, our results demonstrate that the boundary of the face category may be soft for some classes of artificial social agents.