Abstract
Cortical visual object perception employs a hierarchy of visual processing steps leading to the perception of complex visual features. However, the nature of these features remains unclear. One approach to this problem has been to model neural data using extant computational models of visual processing, such as Kay's application of edge filters to BOLD data in V1 (2008) or Cadieu's application of Riesenhuber's "HMAX" model (1999) to individual V4 neurons (2007). In a related vein, we have developed a downloadable toolbox that groups visual object stimuli according to several pre-defined feature sets based on a variety of models drawn from computer vision. In earlier work we compared clusters derived from toolbox feature sets with those revealed through neuroimaging studies of human object perception (VSS 2011). In contrast to the models used in that work, HMAX is hierarchical: it proposes translation-invariant conjunctions of oriented edges that arise as the result of multiple stages of visual processing paralleling the human ventral pathway. To better examine the efficacy of multiple processing layers in modeling object perception, we have extended the toolbox to include a modified form of the first two layers of HMAX, which we then use to model the responses of individual voxels throughout the ventral stream. More specifically, we predicted voxel responses to passive viewing of real-world object stimuli based on each voxel's responses to an independent training set of object stimuli, and found that HMAX accurately predicts BOLD responses in human V1 and V2. This result suggests that adding further layers of HMAX may capture more anterior activity along the ventral pathway and offer further insight into the visual features relevant to high-level visual processing.
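The first two HMAX layers referenced above can be sketched as follows. This is a minimal illustrative reconstruction, not the toolbox's modified implementation: an S1 stage of oriented Gabor "simple cell" filters, followed by a C1 stage of local max pooling that yields the translation invariance the abstract describes. All filter sizes, wavelengths, and pooling parameters here are assumed values for illustration.

```python
# Minimal sketch of HMAX layers S1 (Gabor filtering) and C1 (local
# max pooling). Parameters are illustrative assumptions, not the
# toolbox's actual settings.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor(size, wavelength, theta, gamma=0.3):
    """Oriented Gabor filter: the S1 'simple cell' template."""
    sigma = 0.8 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    g -= g.mean()                      # zero DC response
    return g / np.linalg.norm(g)

def s1(image, thetas, size=11, wavelength=5.6):
    """S1: filter the image with a Gabor at each orientation."""
    windows = sliding_window_view(image, (size, size))
    maps = [np.abs(np.einsum('ijkl,kl->ij', windows,
                             gabor(size, wavelength, t)))
            for t in thetas]
    return np.stack(maps)              # (orientations, H', W')

def c1(s1_maps, pool=8, stride=4):
    """C1: local max pooling over space -> translation invariance."""
    _, h, w = s1_maps.shape
    return np.stack([
        np.max(s1_maps[:, i:i + pool, j:j + pool], axis=(1, 2))
        for i in range(0, h - pool + 1, stride)
        for j in range(0, w - pool + 1, stride)
    ])                                 # (positions, orientations)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
features = c1(s1(img, np.deg2rad([0, 45, 90, 135])))
print(features.shape)                  # -> (144, 4)
```

Each row of the C1 output is a small orientation-energy vector that varies little under local translations of the input, which is the property that makes these features candidate predictors of early visual BOLD responses.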
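The voxelwise prediction analysis described above can be sketched as a standard linear encoding model, fit on training-set stimuli and evaluated on held-out stimuli. The abstract does not specify the regression method; the ridge regression, dimensions, and noise level below are all illustrative assumptions, with synthetic data standing in for real features and BOLD responses.

```python
# Hedged sketch of a voxel encoding analysis: fit a linear map from
# model features to each voxel's response on training stimuli, then
# predict responses to independent test stimuli. All dimensions,
# the ridge penalty, and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 80, 20, 40, 10

# Synthetic stand-ins for HMAX features and BOLD responses
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((n_test, n_vox))

# Closed-form ridge regression, one weight vector per voxel
lam = 1.0                              # assumed penalty strength
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# Accuracy: correlation of predicted vs. observed held-out responses
pred = X_test @ W
r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print(r.mean())
```

In an analysis of this form, high prediction correlations in a region (as reported here for V1 and V2) indicate that the feature set captures response variance that generalizes across stimuli.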
Meeting abstract presented at VSS 2012