Abstract
Tanaka and Farah (1993) documented a behavioral regularity that has been used to argue for holistic representation of faces: identification of an anatomical feature of a face (e.g., a nose) is aided when that feature is presented in the context of a face, and is best when that feature is presented in the context of its original source face (Tanaka & Sengco, 1997). Because physical stimulus characteristics (e.g., similarity between the form of the face and that of the feature) are most likely inadequate to produce both of these regularities, learning must, by hypothesis, be critical for understanding the mechanisms that generate them. The present effort investigates the potential role of learning using stochastic linear systems models of the processing of multidimensional inputs. The models allow for dynamic representations of the presence or absence of dimensional dependencies between features, in the form of channel interactions, and are capable of making predictions at the level of behavioral latencies and accuracies for a range of tasks, including those used in the original demonstrations of the face superiority effect. Here we highlight the potential role of learning in producing changes in both perceptual sensitivity and bias, as a function of both experience and stimulus context, and use these changes as predictions for a set of experiments involving multidimensional judgments, in order to show how learning can lead to behaviors that have been taken as indicators of perceptual holism.
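The class of models described above, stochastic linear systems in which channel interactions encode dimensional dependencies between features, can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: the two-channel reduction, all parameter values, and the exhaustive (both-channels-cross) stopping rule are assumptions made purely for demonstration. The point it conveys is that a facilitatory cross-channel coupling (an off-diagonal term in the interaction matrix) speeds first-passage times relative to independent channels, the kind of latency advantage that contextual facilitation effects exhibit.

```python
# Illustrative sketch (NOT the authors' implementation) of a two-channel
# stochastic linear dynamic system with cross-channel coupling.
# All parameter values below are assumptions chosen for demonstration.
import numpy as np


def simulate_trial(A, b, sigma=1.0, theta=4.0, dt=0.01, max_t=50.0, rng=None):
    """Euler-Maruyama simulation of dx = (A x + b) dt + sigma dW.

    Returns the first time at which BOTH channels exceed the threshold
    theta (an exhaustive stopping rule, assumed here for simplicity),
    or max_t if no crossing occurs.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)
    t = 0.0
    sdt = sigma * np.sqrt(dt)
    while t < max_t:
        x = x + (A @ x + b) * dt + sdt * rng.standard_normal(2)
        t += dt
        if np.all(x >= theta):
            return t
    return max_t


rng = np.random.default_rng(0)
b = np.array([5.0, 5.0])  # stimulus-driven input (drift) to each channel
leak = -1.0               # self-decay on each channel

# Independent channels: zero off-diagonal interaction terms.
A_indep = np.array([[leak, 0.0], [0.0, leak]])
# Interacting channels: mutual facilitation via positive coupling.
A_facil = np.array([[leak, 0.5], [0.5, leak]])

rt_indep = np.mean([simulate_trial(A_indep, b, rng=rng) for _ in range(500)])
rt_facil = np.mean([simulate_trial(A_facil, b, rng=rng) for _ in range(500)])
print(f"mean RT, independent channels:  {rt_indep:.2f}")
print(f"mean RT, facilitatory coupling: {rt_facil:.2f}")
```

With these illustrative parameters the independent system settles toward an asymptote of 5 per channel and crosses the threshold near t = ln 5, whereas the coupled system's asymptote is raised to 10 by the facilitation, so it crosses substantially sooner; accuracy predictions would follow analogously from which threshold is crossed in a choice version of the task.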