Abstract
The V1 saliency theory (Li 1999, 2002) hypothesizes that the saliency of a visual location is determined by the maximum activity across the V1 neurons responding to that location. It has provided qualitative predictions, several of which have been confirmed experimentally. Here, we show that, without any free parameters, it makes quantitative predictions of the probability distributions of reaction times (RTs) for searching for feature singletons. For example, a red vertical bar activates some V1 neurons tuned to the color red, others tuned to the vertical orientation, and still others tuned simultaneously to both red and vertical. According to the theory, the saliency at its location is determined by the highest response across these three types of neurons. Consequently, when every visual input item can take only one of two possible feature values along any feature dimension, the RT for finding a feature singleton unique in both color and orientation is statistically shorter than the RT for finding a singleton unique in only color or only orientation. V1 has no neurons tuned simultaneously to all three of the feature dimensions color, orientation, and motion direction. Thus, we can derive from the V1 saliency hypothesis a mathematical relationship among the probability distributions of seven RTs. Three of the seven RTs are for finding singletons with a unique feature in only one of the three feature dimensions; three more are for finding singletons with a unique feature in two of the three dimensions (e.g., unique in both color and orientation); and the last is for finding a singleton with a unique feature in all three dimensions. One can thus predict quantitatively the distribution of one of the seven RTs from those of the other six. This prediction will be compared with behavioral data (Koene and Zhaoping 2007).
Meeting abstract presented at VSS 2012
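The exact form of the relationship among the seven RT distributions is not spelled out in the abstract. The following is a minimal sketch of one way to express and check it, assuming that the max rule implies an independent race (minimum-latency) model across the V1 populations tuned to color (C), orientation (O), motion direction (M), and the pairwise conjunctions (CO, CM, OM), and that there is no population tuned to all three features. The gamma latency distributions and their parameters are illustrative assumptions, not data from Koene and Zhaoping (2007). Under these assumptions, the survival function of the triple-feature RT follows from the other six as S_COM = S_CO * S_CM * S_OM / (S_C * S_O * S_M), which the Monte Carlo below verifies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # number of simulated trials

# Hypothetical latency distributions (in ms) for each V1 neuron population;
# the specific gamma parameters are illustrative assumptions, not data.
def latency(shape, scale, size=n):
    return rng.gamma(shape, scale, size)

t = {
    "C":  latency(4.0, 60.0),   # color-tuned cells
    "O":  latency(4.0, 65.0),   # orientation-tuned cells
    "M":  latency(4.0, 70.0),   # motion-direction-tuned cells
    "CO": latency(4.0, 55.0),   # color-orientation conjunction cells
    "CM": latency(4.0, 58.0),   # color-motion conjunction cells
    "OM": latency(4.0, 62.0),   # orientation-motion conjunction cells
}

# Race model implied by the max rule: the singleton is found as soon as the
# fastest relevant population signals it (maximum response <-> minimum latency).
rt = {
    "C":   t["C"],
    "O":   t["O"],
    "M":   t["M"],
    "CO":  np.minimum.reduce([t["C"], t["O"], t["CO"]]),
    "CM":  np.minimum.reduce([t["C"], t["M"], t["CM"]]),
    "OM":  np.minimum.reduce([t["O"], t["M"], t["OM"]]),
    # No triple-conjunction cells in V1, so only the six populations race:
    "COM": np.minimum.reduce([t[k] for k in ("C", "O", "M", "CO", "CM", "OM")]),
}

def survival(samples, grid):
    """Empirical survival function S(t) = P(RT > t) on a grid of times."""
    return np.array([(samples > g).mean() for g in grid])

grid = np.linspace(50, 600, 12)
S = {k: survival(v, grid) for k, v in rt.items()}

# Parameter-free prediction of the triple-feature RT distribution from the
# other six (assuming independent races):
#   S_COM(t) = S_CO(t) * S_CM(t) * S_OM(t) / (S_C(t) * S_O(t) * S_M(t))
predicted = S["CO"] * S["CM"] * S["OM"] / (S["C"] * S["O"] * S["M"])

print("t(ms)  simulated  predicted")
for g, sim, pred in zip(grid, S["COM"], predicted):
    print(f"{g:5.0f}  {sim:9.4f}  {pred:9.4f}")
```

In this simulation the simulated and predicted survival functions of the triple-feature RT agree, illustrating how, under the stated race and independence assumptions, the distribution of one of the seven RTs can be derived from those of the other six without free parameters.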