Abstract
Computational models of human visual object recognition have been incomplete and limited in their ability to account for human behavioral phenomena. The goal of the present work was to evaluate the performance of a Simple Image-based Neural Network for Object (and face) Recognition (SINNOR) and, in doing so, to use it as a tool for investigating the corresponding visual processes in humans. The model is a 3-layer feed-forward RBF network that takes an image-based representation as input. To date we have evaluated the model along three dimensions. First, simulations demonstrate that the model can perform visual object recognition at multiple levels of categorization. To assess the relationship between the model's performance and human visual categorization, we had subjects provide similarity ratings for the same images used in the simulations and correlated their ratings with the model's confusion matrix across individual objects. Critically, we found that the correlation between the model and all human subjects was comparable to the correlation between any given subject and all other subjects. Second, “lesioning” the model at different points in processing provides an existence proof that a single visual recognition system can produce many different patterns of sparing and loss in visual categorization. These simulated “patients” show recognition behaviors consistent with the patterns of neuropsychological deficits found in visual agnosia: object agnosia, prosopagnosia, and category-specific deficits. Third, simulations involving recognition over viewpoint changes, illumination changes, other-race effects, and other generalization problems were compared with human performance to further assess the model's validity. Given the simplicity of the current version of our model, we view the degree of correspondence with human behavior as quite promising and as one step toward formulating a comprehensive model and theory of human visual object recognition.
Supported by NSF IGERT and by PEN, awarded by the James S. McDonnell Foundation.
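
The abstract characterizes the model as a 3-layer feed-forward RBF network over image-based input that can be “lesioned” at different processing stages. The Python sketch below illustrates one minimal way such a network and lesioning manipulation could be set up; the class name, Gaussian widths, pseudo-inverse output training, and the lesion_fraction parameter are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed details) of a 3-layer feed-forward RBF classifier
# operating on image-based (pixel-vector) inputs, with a simple "lesion"
# manipulation that silences hidden units.
import numpy as np

class SimpleRBFNet:
    def __init__(self, prototypes, labels, sigma=1.0):
        # Hidden layer: one Gaussian RBF unit centered on each stored
        # prototype image (flattened to a vector).
        self.centers = np.asarray(prototypes, dtype=float)
        self.sigma = sigma
        # One-hot category targets for the output layer.
        self.classes = sorted(set(labels))
        index = {c: i for i, c in enumerate(self.classes)}
        T = np.zeros((len(labels), len(self.classes)))
        for row, lab in enumerate(labels):
            T[row, index[lab]] = 1.0
        # Train output weights by least squares on the hidden activations.
        H = self._hidden(self.centers)
        self.W = np.linalg.pinv(H) @ T

    def _hidden(self, X):
        # Gaussian activation of each RBF unit for each input image.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, X, lesion_fraction=0.0, rng=None):
        # "Lesioning": zero out a random subset of hidden units to simulate
        # damage at the intermediate (feature) stage of processing.
        H = self._hidden(np.asarray(X, dtype=float))
        if lesion_fraction > 0.0:
            rng = rng or np.random.default_rng(0)
            n_off = int(lesion_fraction * H.shape[1])
            H[:, rng.choice(H.shape[1], n_off, replace=False)] = 0.0
        scores = H @ self.W
        return [self.classes[i] for i in scores.argmax(axis=1)]

# Hypothetical usage with toy 8x8 "images" from two categories.
rng = np.random.default_rng(1)
faces = rng.normal(0.0, 1.0, (5, 64))
chairs = rng.normal(3.0, 1.0, (5, 64))
net = SimpleRBFNet(np.vstack([faces, chairs]),
                   ["face"] * 5 + ["chair"] * 5, sigma=4.0)
print(net.predict(faces + 0.1))                       # intact model
print(net.predict(faces + 0.1, lesion_fraction=0.5))  # "lesioned" model
```

A confusion matrix for comparison with human similarity ratings could then be tallied from the lesioned or intact model's predictions over repeated presentations, though the specific scoring procedure is not detailed in the abstract.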