Abstract
Face-like images are often readily perceived within random patterns or natural scenes, suggesting that the perception of faces can be evoked by relatively simple configurations. We examined some of the minimal configural requirements for stimuli to support face perception. In one set of experiments, we used reverse correlation to identify the configurations that led observers to recognize an image as a face. Frontal-view photographs of faces were presented in high levels of white noise, and observers judged on each trial whether or not a face was visible. The image elements leading to face perception were estimated from the difference between the noise patterns on trials where a face was vs. was not seen, averaged over several hundred trials. Feature locations corresponding to the eyes, nose, and mouth all emerged, suggesting that each may form a salient part of facial configurations. In a second set of experiments, observers were shown arrangements of three or more Gabor patches. These were chosen so that different configurations could be defined by varying the locations of identical features (e.g., all horizontal), or identical configurations could be shown with varying features (e.g., different orientations). The relative distances, positions, and symmetry of the patches, as well as the patch orientations, were systematically varied while observers rated how face-like each configuration appeared. The results of these studies serve to define the range of spatial configurations that can induce the impression of a face, and thus may point to the basic configural dimensions underlying the encoding of faces.
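The reverse-correlation analysis described above can be sketched computationally as a difference of averaged noise fields. The following is a minimal illustrative simulation, not the study's actual analysis: trial counts, image size, and the observer's internal "template" are all hypothetical stand-ins, and the simulated observer simply responds "face" whenever the noise correlates positively with that template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment parameters (not those of the actual study)
n_trials, size = 500, 16
noise = rng.standard_normal((n_trials, size, size))  # white-noise stimuli

# Hypothetical internal template: two "eye" locations. The simulated
# observer reports "face" when the noise correlates positively with it.
template = np.zeros((size, size))
template[5, 4] = template[5, 11] = 1.0
responses = (noise * template).sum(axis=(1, 2)) > 0  # True = "face seen"

# Classification image: mean noise on "face seen" trials minus mean noise
# on "no face" trials. Image elements driving the percept emerge here.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
```

In this sketch the classification image recovers positive values at the template's "eye" locations and values near zero elsewhere, mirroring how the averaged noise differences in the experiment reveal the facial features that drove observers' judgments.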