Abstract
The contrast sensitivity function is perhaps the most studied function in spatial vision, and the mechanisms underlying its shape have been extensively debated. These mechanisms are commonly assumed to be neural. However, small eye movements continually occur during fixation. We have recently shown that fixational eye movements transform the spatial power of the stimulus into temporal modulations in a very specific manner (Kuang et al., 2012). Here we examine the possible influence of this space-time reformatting on human contrast sensitivity. We modeled the responses of P and M neurons in the macaque retina and lateral geniculate nucleus using rectified linear filters. The models were designed to match the neural contrast sensitivity functions recorded in neurophysiological experiments in the absence of eye movements. These functions deviate significantly from behavioral measurements of contrast sensitivity: they peak at lower spatial frequencies and do not exhibit the strong low-frequency suppression present in human contrast sensitivity. The models were then exposed to the spatiotemporal stimuli experienced by human observers during measurements of contrast sensitivity, i.e., the input signals resulting from viewing the stimulus in the continual presence of microscopic eye movements. Eye movements were recorded by means of a Dual Purkinje Image eye-tracker, a device with high spatial and temporal resolution. Our model closely predicts human contrast sensitivity measured psychophysically during normal fixation over a broad range of spatial and temporal frequencies. Furthermore, it also predicts the contrast sensitivity function measured under retinal stabilization, a condition in which some degree of residual retinal image motion persists.
These results lend further support to the proposal that fixational eye movements act as a critical pre-processing stage in the representation of visual information, and they suggest an important role for these movements in shaping human contrast sensitivity.
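The core mechanism described above can be illustrated with a minimal sketch: a static grating viewed through fixational jitter produces temporal luminance fluctuations at each receptor, and a rectified linear readout of those fluctuations responds more strongly at higher spatial frequencies, qualitatively reproducing low-frequency suppression. All parameters below (diffusion constant, gain, threshold) are illustrative assumptions, not the values used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters (for illustration only, not from the study):
D = 40.0     # diffusion constant of fixational drift, arcmin^2/s
dt = 0.001   # 1 kHz sampling
T = 1.0      # 1 s of fixation
t = np.arange(0.0, T, dt)

# Fixational eye movements approximated as a Brownian random walk (arcmin).
eye = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), t.size))

def temporal_power(sf_cpd, contrast=1.0):
    """Variance of the luminance seen by one receptor while a static
    grating of spatial frequency sf_cpd (cycles/deg) jitters on the retina."""
    sf_arcmin = sf_cpd / 60.0  # cycles per arcmin
    lum = contrast * np.cos(2.0 * np.pi * sf_arcmin * eye)
    return np.var(lum)

def relu_response(sf_cpd, gain=1.0, threshold=0.05):
    """Hypothetical rectified linear readout of the temporal modulation."""
    drive = gain * np.sqrt(temporal_power(sf_cpd))
    return max(drive - threshold, 0.0)

for sf in (0.5, 2.0, 8.0):
    print(f"{sf} cpd -> response {relu_response(sf):.3f}")
```

Because the jitter's excursion spans a larger fraction of a grating cycle at higher spatial frequencies, temporal power (and hence the rectified response) grows with spatial frequency over this range, which is the space-time reformatting the abstract attributes to fixational eye movements.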
Meeting abstract presented at VSS 2014