Abstract
Assuming a linear signal-detection system, the square of the threshold (T^2) for detecting a signal in Gaussian noise will increase linearly as a function of the variance of the noise (Vext), since T^2 = Vext/N + Vint, where Vint is the variance of the internal noise of the system and N is its sampling efficiency. Barlow and others employed this relationship to quantify the internal noise and sampling efficiency of the human visual system for contrast detection, as well as for spatial tasks such as density discrimination and symmetry detection. However, we show here that a linear threshold-vs-noise (TvN) function is neither predicted nor found for tasks where the current consensus suggests that a line-element model operates. A line-element model assumes that discrimination is based on the difference between the outputs of channels tuned to a particular feature, and predicts a nonlinear, accelerating TvN function. We obtained accelerating TvN functions for an orientation-defined texture segregation task, as hinted at in previous studies. We show that the accelerating function, at least at low levels of Vext, is not due to the circular nature of orientation, and that it is not restricted to tasks using stimuli with abutting texture regions. On the other hand, a re-analysis of previous data obtained using a temporal orientation discrimination task suggests a quasi-linear TvN function, which is inconsistent with a line-element model. We discuss the origin of this discrepancy.
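The linear-observer prediction in the abstract can be sketched numerically: under T^2 = Vext/N + Vint, squared threshold grows linearly in external noise variance with slope 1/N and intercept Vint. The parameter values below are illustrative assumptions, not fitted data from the study.

```python
import numpy as np

# Linear-observer TvN prediction: T^2 = Vext/N + Vint.
# Vint and N are assumed illustrative values, not measured quantities.
Vint = 0.5   # internal noise variance (assumed)
N = 4.0      # sampling efficiency (assumed)

Vext = np.linspace(0.0, 10.0, 11)   # external noise variance levels
T_squared = Vext / N + Vint         # squared detection threshold

# A linear TvN function has a constant slope of 1/N in (Vext, T^2)
# coordinates; an accelerating TvN function (line-element model)
# would instead show increasing slope with Vext.
slopes = np.diff(T_squared) / np.diff(Vext)
print(np.allclose(slopes, 1.0 / N))   # True: slope is constant
print(T_squared[0])                   # 0.5: intercept equals Vint
```

Fitting this two-parameter line to measured T^2 values at several Vext levels is how Vint (intercept) and N (inverse slope) are typically estimated in the equivalent-noise framework; an accelerating empirical TvN function signals a departure from this linear model.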
Supported by JSPS (IM) and by NSERC grant ref: OGP 01217130 (FK).