Abstract
We have recently developed a novel class of synthetic faces for visual experimentation. Each face is characterized by a set of 37 measurements taken from a digital photograph in either front or 20° side view, and the resulting face is optimally bandpass filtered (10 cycles/face width, 2.0 octave bandwidth). Previous experiments showed that the metric for discrimination within this face space is Euclidean. Here we use pattern masking to explore the temporal dynamics of face discrimination.
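As a rough illustration of the stimulus construction and the Euclidean face metric, the sketch below applies an isotropic log-Gaussian bandpass filter with the stated peak frequency (10 cycles/face width) and 2.0 octave bandwidth, and computes the distance between two 37-dimensional measurement vectors. The filter shape, the face width in pixels, and all function names are assumptions for illustration only, not the generation code used in the experiments.

```python
import numpy as np

def radial_bandpass(image, peak_cpf=10.0, bandwidth_oct=2.0, face_width_px=256):
    """Isotropic log-Gaussian bandpass in the frequency domain (illustrative).

    peak_cpf is in cycles per face width; face_width_px is an assumed value
    used to convert it to cycles per pixel.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]     # vertical frequency, cycles/pixel
    fx = np.fft.fftfreq(w)[None, :]     # horizontal frequency, cycles/pixel
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-9                      # avoid log(0) at DC

    f0 = peak_cpf / face_width_px       # peak frequency in cycles/pixel
    # Std of the log-Gaussian (in ln-frequency) giving the requested
    # full-width-at-half-maximum bandwidth in octaves.
    sigma = bandwidth_oct * np.log(2.0) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gain = np.exp(-(np.log(f / f0) ** 2) / (2.0 * sigma ** 2))
    gain[0, 0] = 0.0                    # remove the DC component

    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

def face_distance(face_a, face_b):
    """Euclidean distance between two 37-dimensional measurement vectors."""
    return np.linalg.norm(np.asarray(face_a) - np.asarray(face_b))
```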
In each experiment a synthetic face was flashed for 27 ms and was followed after a variable delay by one of four masks: noise, face, inverted face, or house. The subject then had to select which of two faces appearing on the screen had been presented. Masking by faces greatly disrupted face discrimination within a 140 ms window following stimulus presentation. Inversion of the masking face reduced this masking by half. Strikingly, however, noise had no masking effect at any delay for high-contrast target faces. Reducing synthetic face contrast to 10% did lead to noise masking, but only when the noise immediately followed the target face. Finally, houses produced no masking at all.
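A minimal sketch of the trial structure is given below. Only the 27 ms target duration, the four mask categories, and the two-alternative forced choice come from the abstract; the delay values, identifiers, and function names are hypothetical placeholders.

```python
import random

MASK_TYPES = ["noise", "face", "inverted_face", "house"]
DELAYS_MS = [0, 13, 27, 53, 107, 213]   # assumed target-to-mask delays, for illustration

def make_trial(target_id, distractor_id):
    """Build one masking trial: 27 ms target, variable delay, mask, 2AFC probe."""
    return {
        "target": target_id,                     # synthetic face flashed for 27 ms
        "target_duration_ms": 27,
        "mask": random.choice(MASK_TYPES),       # mask type varied across trials
        "delay_ms": random.choice(DELAYS_MS),    # variable delay before the mask
        "probe": random.sample([target_id, distractor_id], k=2),  # 2AFC, random order
    }
```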
These data can only be explained by configural pattern similarity and not by degree of contour overlap, thus indicating an extrastriate masking locus (perhaps the fusiform face area, FFA). Noise masking at low face contrasts, on the other hand, is consistent with a V1 locus. The 140 ms required for optimal discrimination of synthetic faces provides sufficient time for feedback loops involving higher cortical areas to play a role in face processing.
Supported in part by NSERC grants to HRW & FW.