Abstract
We previously developed a Bayesian model that estimated whether two pixels within an image fall on the same surface, given the chromatic and luminance differences between them (Fine et al., 2003). Here we extended this model to estimate the amount of structure in an image patch. The model convolves image patches with a bank of luminance, red-green or blue-yellow Gabor filters varying in spatial frequency, orientation and phase. The probability that a given patch has structure is calculated using a Bayesian algorithm based on the energy output across the filters. Estimates of structure by the model are consistent with human perception; model estimates of the probability of structure in 6000 patches correlate with observers' ratings of the patches as having “no”, “some” or “clear” structure.
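The filter-bank stage of the model described above can be sketched as follows. This is a minimal illustration of computing contrast energy from a bank of Gabor filters varying in spatial frequency, orientation and phase; the specific filter parameterization, the quadrature-pair energy computation, and all parameter values are assumptions for illustration, and the paper's chromatic (red-green, blue-yellow) filters and Bayesian decision rule are not reproduced here.

```python
import numpy as np

def gabor(size, sf, theta, phase, sigma):
    """A 2-D Gabor filter (illustrative parameterization):
    a Gaussian envelope times an oriented sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * sf * xr + phase)
    return envelope * carrier

def contrast_energy(patch, sfs, thetas, sigma=4.0):
    """Energy of a patch across the filter bank: for each spatial
    frequency and orientation, sum the squared responses of a
    quadrature pair (even- and odd-phase filters)."""
    energies = []
    for sf in sfs:
        for theta in thetas:
            even = np.sum(patch * gabor(patch.shape[0], sf, theta, 0.0, sigma))
            odd = np.sum(patch * gabor(patch.shape[0], sf, theta, np.pi / 2, sigma))
            energies.append(even**2 + odd**2)
    return np.array(energies)

# Example: energy vector for a random 21x21 luminance patch
rng = np.random.default_rng(0)
patch = rng.standard_normal((21, 21))
e = contrast_energy(patch,
                    sfs=[0.05, 0.1, 0.2],
                    thetas=np.linspace(0, np.pi, 4, endpoint=False))
```

In a sketch like this, the resulting energy vector (here 3 spatial frequencies x 4 orientations = 12 values) would be the input to a classifier estimating the probability that the patch contains structure.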
Estimates of structure by the model are also consistent with V1 neurophysiological responses. Using data from 22 V1 cells (measured by Horwitz et al., 2005), we found a correlation between the number of spikes elicited by any given chromatic noise patch and the model's estimate of structure for that patch. The model predicts V1 spikes as well as traditional linear and non-linear receptive fields constructed using spike-triggered averaging and covariance techniques. Our results suggest that both the amount of perceived structure within an image patch and the responses of V1 cells are associated with the chromatic and luminance contrast energy of the patch. Color-selective neurons in V1 may therefore represent the amount of structure in stimuli, rather than their chromatic content.
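The spike-triggered techniques used as the comparison receptive-field models can be sketched briefly. This is a generic illustration of spike-triggered averaging (a linear receptive-field estimate) and spike-triggered covariance (which reveals non-linear, energy-like subunits), not the specific analysis pipeline of Horwitz et al. (2005); the array shapes are assumptions.

```python
import numpy as np

def sta(stimuli, spikes):
    """Spike-triggered average: spike-count-weighted mean stimulus.

    stimuli: (n_frames, n_pixels) noise frames; spikes: (n_frames,) counts.
    For white-noise stimuli this estimates the linear receptive field.
    """
    return stimuli.T @ spikes / spikes.sum()

def stc(stimuli, spikes):
    """Spike-triggered covariance of the STA-centered ensemble.

    Eigenvectors whose eigenvalues differ from the raw stimulus
    covariance indicate non-linear (e.g. energy-like) stimulus axes.
    """
    centered = stimuli - sta(stimuli, spikes)
    return (centered.T * spikes) @ centered / spikes.sum()
```

On synthetic white-noise data where spike counts depend on the projection of each frame onto a hidden filter, the STA recovers a vector proportional to that filter.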