October 2008
Volume 8, Issue 14
Research Article  |   October 2008
Oculomotor synchronization of visual responses in modeled populations of retinal ganglion cells
Martina Poletti, Michele Rucci

Journal of Vision October 2008, Vol. 8(14):4. doi:10.1167/8.14.4
Abstract

We have recently shown that fixational eye movements improve discrimination of the orientation of a high spatial frequency grating masked by low-frequency noise, but do not help with a low-frequency grating masked by high-frequency noise (M. Rucci, R. Iovin, M. Poletti, & F. Santini, 2007). In this study, we explored the neural mechanisms responsible for this phenomenon. Models of parvocellular ganglion cells were stimulated by the same visual input experienced by subjects in our psychophysical experiments, i.e., the spatiotemporal signals resulting from viewing stimuli during eye movements. We show that the spatial organization of correlated activity in the model predicts the subjects' performance in the experiments. During viewing of high-frequency gratings, fixational eye movements modulated the responses of modeled neurons in a way that depended on the relative alignment of cell receptive fields. Responses covaried strongly only when receptive fields were aligned parallel to the grating's orientation. Such a dependence on the axis of receptive-field alignment did not occur during viewing of low-frequency gratings. In this case, the responses of cells on the parallel and orthogonal axes were similarly affected by eye movements. These results support a role for oculomotor synchronization of neural activity in the representation of visual information in the retina.

Introduction
A variety of microscopic eye movements occur during the intervals between voluntary relocations of gaze. These movements include small saccades (known as fixational saccades or microsaccades), slow drifts, and physiological nystagmus, a high-frequency tremor with amplitude smaller than 1′ (Ditchburn, 1955; Ratliff & Riggs, 1950; Steinman, Haddad, Skavenski, & Wyman, 1973; see Martinez-Conde, Macknik, & Hubel, 2004 for a recent review). Fixational eye movements continually modulate the spatiotemporal stimulus on the retina. It has long been questioned whether the visual system uses these temporal modulations of luminance to encode spatial information (Ahissar & Arieli, 2001; Arend, 1973; Averill & Weymouth, 1925; Greschner, Bongard, Rujan, & Ammermüller, 2002; Marshall & Talbot, 1942; Rucci & Casile, 2005). 
In a series of studies, we have used computational models of neurons in the retina, lateral geniculate nucleus (LGN), and primary visual cortex (V1) to examine the influence of fixational eye movements on neural activity in the early stages of the visual system (Casile & Rucci, 2006; Desbordes & Rucci, 2007; Rucci & Casile, 2004, 2005; Rucci, Edelman, & Wray, 2000). These studies have established a link between fixational eye movements and the statistics of natural images. They have raised the hypothesis that an unstable fixation might be an advantageous strategy in acquiring information from natural scenes, as it may dynamically decorrelate the responses of retinal and geniculate neurons. A decorrelation of neural activity has important implications both regarding the neural encoding of visual information (Barlow, 1961) and the maturation of cell response characteristics during visual development (Changeux & Danchin, 1976). 
This theory makes the important prediction that fixational eye movements should improve detection/discrimination of high spatial frequencies in the face of low-frequency noise (Rucci & Casile, 2005). To test this hypothesis, we have recently examined the effect of eliminating retinal image motion, a procedure known as retinal stabilization (Ditchburn & Ginsborg, 1952; Riggs & Ratliff, 1952; Steinman & Levinson, 1990; Yarbus, 1967), during discrimination of gratings at different spatial frequencies. Using a custom-developed system for flexible gaze-contingent display control (Santini, Redner, Iovin, & Rucci, 2007), we selectively eliminated the physiological motion of the retinal image that occurs during intersaccadic periods of visual fixation. As shown in Figure 1, consistent with the predictions of our theory, retinal stabilization significantly impaired identification of the orientation of high-frequency gratings masked by low-frequency noise, but had no impact on the discrimination of low-frequency gratings perturbed by high-frequency noise (Rucci, Iovin, Poletti, & Santini, 2007). 
Figure 1
 
Summary of the retinal stabilization experiments modeled in this study. (a) Subjects reported the orientation (±45°) of a noisy grating. In Experiment 1, an 11 cycles/deg grating was perturbed by low spatial frequency noise (low-pass cut-off frequency fc = 5 cycles/deg). In Experiment 2, the stimulus was a 4 cycles/deg grating overlapped by high spatial frequency noise (high-pass fc = 10 cycles/deg). Stimuli were displayed at the onset of fixation after the subject performed a saccade toward a randomly cued location. Stimuli were either maintained at a fixed location on the screen (unstabilized condition) or were moved with the eye so as to cancel the retinal motion resulting from fixational eye movements (stabilized condition). (b) Mean performance across 6 subjects. For every subject, in each condition, percentages were evaluated over a minimum of 80 trials. Error bars represent 95% confidence intervals. (Modified from Rucci et al., 2007).
This study closes the loop between modeling and psychophysics in that we use neural modeling to examine the implications of our psychophysical results with respect to the mechanisms of neural encoding of visual information. We simulated the responses of populations of ganglion cells in the macaque retina during exposure to the same visual signals experienced by subjects in our experiments, i.e., the spatiotemporal patterns resulting from viewing the stimuli during eye movements. We show that fixational eye movements influence the spatial organization of covarying neuronal ensembles in the model in a way that parallels psychophysical performance. In the presence of the normal fixational motion of the retinal image, the responses of pairs of modeled ganglion cells were more likely to covary when their receptive fields were aligned parallel to the grating as opposed to when they were aligned on the orthogonal axis. This pattern of covarying activity was disrupted when high-frequency gratings were examined in the absence of retinal image motion. That is, similar to the effect of retinal stabilization in the experiments, elimination of retinal image motion in the model impaired identification of the axis with the strongest degree of neural synchronization during presentation of high-frequency gratings masked by low-frequency noise, but had no impact with low-frequency gratings perturbed by high-frequency noise. These results suggest an involvement of synchronous modulations in ganglion cell responses exerted by fixational eye movements in the representation of fine spatial detail in the retina. 
Methods
Computational models simulated retinal activity during the experiments summarized in Figure 1. This section focuses on the elements of the model and the analysis of simulated neural responses. Details on the psychophysical procedures and on the collection of experimental data can be found in Rucci et al. (2007). 
Visual stimulation
Visual input to neuronal models reconstructed the retinal image motion experienced by human observers in the psychophysical experiments described in Rucci et al. (2007). In these experiments, subjects reported the orientation (±45°) of a grating embedded in a noise field. As shown in Figure 1, two separate experiments were conducted. In Experiment 1, an 11 cycles/deg grating was perturbed by noise at spatial frequencies lower than fc = 5 cycles/deg. In Experiment 2, the frequencies of the grating and the pattern of noise were inverted so that the frequency of the grating (4 cycles/deg) was lower than the frequency band of the noise (high-pass cut-off frequency of fc = 10 cycles/deg). In both cases, the power of the noise decreased proportionally to the square of the spatial frequency, as occurs in the power spectrum of natural images (Field, 1987). Stimuli were either flashed at a fixed location on the display (unstabilized condition) or were moved with the eye, under real-time computer control, in order to cancel the retinal motion resulting from fixational eye movements (stabilized condition). In this latter condition, the stimulus always appeared to the observer to be immobile at the center of the fovea. Eye position was continuously recorded by means of a Generation 6 DPI eyetracker (Fourward Technologies, Inc.). 
Each trial of the simulations replicated the visual input experienced by the observer in one of the experimental trials. That is, in the simulation of the k-th trial of the psychophysical experiments, model neurons received as input the movie I_k(x, t), which reconstructed the spatiotemporal signal resulting from the specific combination of the stimulus S_k(x) presented in the considered trial and (in the unstabilized condition) the recorded trace of eye movements, ξ_k(t). In the simulation of an unstabilized trial, the position of the stimulus changed at each frame of the movie so as to be centered at the current location of gaze: I_k(x, t) = S_k(x + ξ_k(t)). In this way, the receptive fields of modeled neurons effectively scanned the stimulus following the recorded scanpath. In a stabilized trial, instead, since retinal image motion had been canceled out by the procedure of retinal stabilization, each frame of I_k(x, t) consisted of the same image: I_k(x, t) = S_k(x). In the simulations, to enable analysis of correlated activity for pairs of cells with receptive fields up to 1° apart, stimuli were enlarged by removing the Gaussian window used in the experiments. As in the experiments, stimuli were displayed for 1 s and followed by a high-energy mask. 
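The per-trial reconstruction I_k(x, t) = S_k(x + ξ_k(t)) can be sketched as follows. This is an illustrative implementation, not the authors' code: the function name, the pixel-based gaze trace, and the cropping scheme are assumptions. Passing an all-zero eye trace reproduces a stabilized trial, in which every frame is the same image.

```python
import numpy as np

def make_retinal_movie(stimulus, eye_trace, frame_shape):
    """Reconstruct the retinal input movie I_k(x, t) = S_k(x + xi_k(t)).

    stimulus    : 2-D array S_k(x), the enlarged grating + noise image
    eye_trace   : (T, 2) array of recorded gaze offsets (row, col) in pixels;
                  pass all zeros to simulate a stabilized trial
    frame_shape : (rows, cols) of each output frame
    """
    rows, cols = frame_shape
    frames = np.empty((len(eye_trace), rows, cols))
    for t, (dy, dx) in enumerate(np.round(eye_trace).astype(int)):
        # crop a window centered at the current gaze position
        r0 = stimulus.shape[0] // 2 - rows // 2 + dy
        c0 = stimulus.shape[1] // 2 - cols // 2 + dx
        frames[t] = stimulus[r0:r0 + rows, c0:c0 + cols]
    return frames
```

In this sketch the receptive fields stay fixed while the image is cropped at the gaze-shifted position, which is equivalent to letting the receptive fields scan the stimulus along the recorded scanpath.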
Modeling neuronal responses
Standard firing-rate models composed of the cascade of a linear and a non-linear stage (Carandini et al., 2005) were used to simulate the visual responses of ON-center parvocellular cells in the macaque's retina. Parvocellular neurons are known to play an important role in fine-grain spatial vision. These cells respond both to achromatic and chromatic stimuli and are capable of resolving the fine gratings used in our psychophysical experiments (Benardete & Kaplan, 1999b; Derrington, Krauskopf, & Lennie, 1984; Kaplan & Shapley, 1982; Merigan, 1989; Wiesel & Hubel, 1966). 
Changes in instantaneous firing rates with respect to levels of spontaneous activity were computed by means of spatio-temporal filters, K(x, t), which convolved the visual input I(x, t):

$$\eta_{\mathbf{x}}(t) = \left[ \iint K(\mathbf{x}', t')\, I(\mathbf{x} - \mathbf{x}', t - t')\, d\mathbf{x}'\, dt' \right]_{\gamma},$$

(1)

where x is the location of the center of a cell's receptive field, and the operator [·]_γ indicates rectification with threshold γ: [z]_γ = z − γ if z > γ, and [z]_γ = 0 if z ≤ γ. In the context of this paper, since the instantaneous firing rate already provides statistical averaging, it is a more computationally efficient signal to simulate than the actual train of spikes. 
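The rectified space-time convolution of Equation 1 might be implemented as below. This is a naive direct sketch over a discretized kernel (names and discretization are our assumptions); in practice an FFT-based convolution would be far faster.

```python
import numpy as np

def firing_rate(movie, kernel, gamma):
    """Instantaneous firing rate of Eq. 1: spatiotemporal convolution of the
    input movie with kernel K, followed by rectification with threshold gamma.

    movie  : (T, rows, cols) retinal input I(x, t)
    kernel : (taps, rows, cols) discretized kernel K(x, t) = F(x) H(t)
    gamma  : rectification threshold
    """
    T = movie.shape[0]
    taps = kernel.shape[0]
    rate = np.zeros(T)
    for t in range(T):
        # temporal convolution over the most recent `taps` frames, each
        # frame weighted by its spatial inner product with the kernel slice
        acc = 0.0
        for dt in range(min(taps, t + 1)):
            acc += np.sum(kernel[dt] * movie[t - dt])
        # [z]_gamma = z - gamma if z > gamma, else 0
        rate[t] = acc - gamma if acc > gamma else 0.0
    return rate
```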
The cell's spatio-temporal kernel K(x, t) was designed as the product of separate spatial (F) and temporal (H) components:

$$K(\mathbf{x}, t) = F(\mathbf{x})\, H(t).$$
(2)
 
The spatial receptive field, F(x), was modeled as the standard difference of two-dimensional Gaussian functions, so that its contrast sensitivity function F(u) varied with spatial frequency u as:

$$F(u) = S_c \pi r_c^2\, e^{-(\pi |u| r_c)^2} - S_s \pi r_s^2\, e^{-(\pi |u| r_s)^2},$$
(3)
where S_c, S_s, r_c, and r_s represent the sensitivities and radii of center and surround Gaussians, respectively. Since our experimental stimuli contained gratings at two different frequencies, we modeled two populations of parvocellular cells with different peaks in their spatial sensitivity. As shown in Figure 2, one set of neurons was highly sensitive to low spatial frequencies. These neurons had maximal sensitivity at 4.3 cycles/deg and responded strongly to the gratings of Experiment 2. Cells in the second group were sensitive to a higher range of spatial frequencies. Their contrast sensitivity function peaked at 11.4 cycles/deg, so that their responses were strongly influenced by the gratings of Experiment 1. For both neuronal populations, parameters were adjusted on the basis of neurophysiological data from Derrington and Lennie (1984) to model the contrast sensitivities of individual cells with receptive fields within the central 10° of visual field. Neurons in the first population modeled cell A in Figure 3 of Derrington and Lennie (1984). For these neurons, the values of the parameters in Equation 3 were set to S_c = 15.03 impulses s⁻¹, r_c = 0.015°, r_s = 0.072°, S_s = 0.580 impulses s⁻¹. Units in the population sensitive to high spatial frequencies modeled cell C in Figure 3 of Derrington and Lennie (1984). In this case, the parameters of Equation 3 were set to the following values: S_c = 10.74 impulses s⁻¹, r_c = 0.03°, r_s = 0.202°, S_s = 0.158 impulses s⁻¹. 
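The contrast sensitivity function of Equation 3, with the two parameter sets quoted above, can be evaluated directly; a minimal sketch (function and dictionary names are ours):

```python
import numpy as np

def dog_csf(u, Sc, rc, Ss, rs):
    """Difference-of-Gaussians contrast sensitivity of Eq. 3.

    u       : spatial frequency (cycles/deg)
    Sc, Ss  : center and surround sensitivities (impulses/s)
    rc, rs  : center and surround radii (deg)
    """
    u = np.asarray(u, dtype=float)
    center = Sc * np.pi * rc**2 * np.exp(-(np.pi * np.abs(u) * rc)**2)
    surround = Ss * np.pi * rs**2 * np.exp(-(np.pi * np.abs(u) * rs)**2)
    return center - surround

# parameter sets quoted in the text (Derrington & Lennie, 1984)
CELL_A = dict(Sc=15.03, rc=0.015, Ss=0.580, rs=0.072)  # first population
CELL_C = dict(Sc=10.74, rc=0.03,  Ss=0.158, rs=0.202)  # second population
```

For example, `dog_csf(np.linspace(0, 30, 301), **CELL_A)` traces the sensitivity curve of the first population across spatial frequency.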
Figure 2
 
Response characteristics of modeled neurons. (Left) Spatial sensitivity. The two graphs show the spatial contrast sensitivity functions for the two populations of ganglion cells included in the model. (Right) Temporal sensitivity. The two neuronal populations possessed identical temporal characteristics.
Following the model introduced by Victor (1987) for ganglion cells in the cat's retina, the temporal impulse response H(t) was computed as the inverse Fourier transform of the temporal-frequency response H(ω) to a sine grating modulated at temporal frequency ω. H(ω) was modeled by a set of low-pass filters and a high-pass stage as:

$$H(\omega) = A\, e^{-i \omega D} \left( 1 - \frac{H_S}{1 + i \omega \tau_S} \right) \left( \frac{1}{1 + i \omega \tau_L} \right)^{N_L},$$
(4)
where A represents the overall gain; H_S, the strength of the subtractive stage; τ_S and τ_L, the time constants of the high- and low-pass stages; N_L, the number of low-pass stages; and D, the brief transmission delay along the RGC axon, which was included to model the output of the retina. In this model, the cascade of filters is a mathematical convenience for the purpose of data fitting; the individual filters are not meant to describe specific biological components. This model has been successfully applied to fit data from macaque ganglion cells (Benardete & Kaplan, 1997, 1999a, 1999b; Benardete, Kaplan, & Knight, 1992). Temporal parameters of the model were set to the median values measured in P cells in the macaque's retina (Benardete & Kaplan, 1999b): A = 601.48 impulses s⁻¹, N_L = 51, τ_S = 0.87 ms, H_S = 0.77, τ_L = 31.73 ms, D = 4 ms. 
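Equation 4 and its inverse transform can be sketched as follows, using the quoted median P-cell parameters with time constants converted to seconds. The sampling rate, FFT length, and scaling convention are our assumptions, not values from the paper.

```python
import numpy as np

def H_freq(omega, A=601.48, D=0.004, Hs=0.77, tau_s=0.00087,
           tau_l=0.03173, NL=51):
    """Temporal-frequency response of Eq. 4 (omega in rad/s)."""
    jw = 1j * np.asarray(omega, dtype=float)
    return (A * np.exp(-jw * D)
            * (1.0 - Hs / (1.0 + jw * tau_s))       # subtractive high-pass stage
            * (1.0 / (1.0 + jw * tau_l)) ** NL)     # cascade of NL low-pass stages

def H_impulse(n=4096, fs=1000.0):
    """Impulse response H(t) as the inverse FFT of the sampled H(omega)."""
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    # scale by fs to approximate the continuous-time impulse response
    return np.real(np.fft.ifft(H_freq(omega))) * fs
```

At ω = 0 the response reduces to A(1 − H_S), which offers a quick sanity check on the implementation.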
Since activity in the model was defined as the deviation from the spontaneous firing rate, positive (negative) values in the output of simulated cells correspond to firing rates higher (lower) than spontaneous activity. Rectification was present in the model to account for the asymmetry between the possible ranges of negative and positive responses: real firing rates can increase greatly over the spontaneous firing rate but they cannot go below zero. In the simulations, the rectification level was defined as the percentage of the range of negative activity in the linear (unrectified) response that was eliminated. Unless otherwise specified, we used 50% rectification, that is, we eliminated half of the range of negative activity present in the unrectified model. 
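One plausible reading of this rectification rule (our interpretation, not the authors' code) clips the linear response at a threshold that removes the stated fraction of its negative range:

```python
import numpy as np

def partial_rectify(linear_resp, level=0.5):
    """Eliminate a fraction `level` of the negative range of the unrectified
    response (level=0.5 removes the lower half of the negative range, as in
    the 50% setting of the text; level=1.0 is full rectification at zero)."""
    floor = float(np.min(linear_resp))
    if floor >= 0.0:
        return np.array(linear_resp, dtype=float)  # nothing to clip
    threshold = floor * (1.0 - level)   # e.g. floor -4 -> clip at -2 for 50%
    return np.maximum(linear_resp, threshold)
```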
Measuring correlated activity
We analyzed the spatial organization of correlated activity during viewing of the experimental stimuli. In each simulated trial k, the degree of covariance between the responses of two model cells, η_x(t) and η_y(t), was quantified by means of their correlation coefficient:

$$r_{xy}^{k} = \frac{\left\langle \left( \eta_{\mathbf{x}}(t) - \langle \eta_{\mathbf{x}} \rangle \right) \left( \eta_{\mathbf{y}}(t) - \langle \eta_{\mathbf{y}} \rangle \right) \right\rangle_{T}}{\sigma_{\mathbf{x}} \sigma_{\mathbf{y}}},$$
(5)
where statistical values were estimated over the T = 1 s duration of the trial. Correlation coefficients were then averaged (a) over the ensemble F of all pairs of simulated cells with receptive-field centers at a fixed distance d from each other and aligned at the same angle φ with respect to the grating, and (b) over all experimental trials, yielding the spatial correlation function r(d, φ) (see Figure 3):

$$r(d, \varphi) = \left\langle \langle r_{xy}^{k} \rangle_{F} \right\rangle_{k}, \qquad F = \left\{ (\eta_{\mathbf{x}}, \eta_{\mathbf{y}}) : \| \mathbf{x} - \mathbf{y} \| = d,\ \tan^{-1}(\mathbf{y} - \mathbf{x}) - \theta_k = \varphi \right\}.$$
(6)
 
Figure 3
 
Procedure for measuring the spatial organization of correlated activity in the model. (a) Cell responses were simulated while their receptive fields scanned the stimuli of the experiments summarized in Figure 1 following sequences of recorded eye movements (orange curve). The receptive fields of simulated neurons were aligned on the two axes parallel and orthogonal to the grating (φ = 0° or 90°). For clarity, only seven cells are shown here (receptive fields not to scale). (b–c) Example of modeled neural responses during simulation of an experimental trial. Levels of covariance were evaluated over the period of stimulus presentation. (d–e) Parallel and orthogonal correlation functions r∥(d) and r⊥(d). Data points represent the average correlation coefficient in the responses of pairs of cells with receptive fields at various separations. d₁ represents the distance between the receptive-field centers of both pairs of cells (c₀, p₁) and (c₀, o₁); d₂ the distance between c₀ and p₂ (as well as c₀ and o₂), and so on. r̄∥ and r̄⊥ indicate the mean values of r∥(d) and r⊥(d) across receptive-field separations.
In Equation 6, φ represents the angle subtended by the line intersecting the centers of the two receptive fields and the grating's axis (θ_k = ±45°). Each value of φ identifies a specific orientation axis relative to the grating. The function r(d, φ) gives the average correlation coefficient between the responses of two cells with receptive fields at distance d on a selected orientation axis φ. 
As illustrated in Figure 3, we focused on arrays of ganglion cells that were either parallel (φ = 0°) or orthogonal (φ = 90°) to the grating's orientation. These two angles define the two possible orientations that a grating could assume in a trial. The responses of retinal ganglion cells aligned on these two axes are likely to play a critical role in the neural processes underlying the forced-choice decision of our experiments. These responses converge onto neurons selective for the two possible orientations in the primary visual cortex (Reid & Alonso, 1995). In this paper, we use the symbols r∥(d) and r⊥(d) to represent the spatial correlation functions on these two orthogonal axes:

$$r_{\parallel}(d) = r(d, 0^{\circ}), \qquad r_{\perp}(d) = r(d, 90^{\circ}).$$
(7)
 
For brevity, we refer to r∥(d) and r⊥(d) as the parallel and orthogonal correlation functions. Levels of covariance in the simulations were averaged over a total of N = 200 trials, each with its unique pattern of eye movements. In each trial, means were evaluated over at least 10 pairs of cells for every value of separation d and angle φ. 
To obtain a measure of the overall level of covariation in neural activity on a given orientation axis, we also estimated the mean correlation coefficient over all pairs of cells available on that axis:  
$$\bar{r}_{\parallel} = \langle r_{\parallel}(d) \rangle_{d}, \qquad \bar{r}_{\perp} = \langle r_{\perp}(d) \rangle_{d}.$$
(8)
 
Comparison between these two values indicates on which axis, parallel or orthogonal with respect to the grating, cells exhibited on average the strongest degree of covariance. 
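The correlation measures of Equations 5, 6, and 8 amount to the computation below; a simplified sketch with hypothetical names, binning cell pairs by separation along one axis at a time:

```python
import numpy as np

def pair_correlation(resp_x, resp_y):
    """Correlation coefficient of Eq. 5 between two response traces."""
    rx = resp_x - np.mean(resp_x)
    ry = resp_y - np.mean(resp_y)
    denom = np.std(resp_x) * np.std(resp_y)
    if denom == 0:
        return 0.0
    return float(np.mean(rx * ry) / denom)

def correlation_function(responses, positions, axis_angle, grating_angle,
                         tol=1e-6):
    """Average r_xy over all cell pairs whose receptive-field centers lie on
    the axis at `axis_angle` (deg) relative to the grating, binned by
    separation d (a sketch of Eq. 6; `responses` is a list of 1-D traces,
    `positions` an (N, 2) array of receptive-field centers in deg)."""
    bins = {}
    n = len(responses)
    for i in range(n):
        for j in range(i + 1, n):
            delta = positions[j] - positions[i]
            angle = np.degrees(np.arctan2(delta[1], delta[0])) - grating_angle
            # keep only pairs aligned on the requested axis (mod 180 deg)
            if abs((angle - axis_angle + 90) % 180 - 90) > tol:
                continue
            d = round(float(np.hypot(*delta)), 6)
            bins.setdefault(d, []).append(
                pair_correlation(responses[i], responses[j]))
    return {d: float(np.mean(v)) for d, v in sorted(bins.items())}
```

Averaging the returned values over d then gives the per-axis means r̄∥ and r̄⊥ of Equation 8.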
In the analysis of Figure 8, retinal ganglion cells were treated as linear filters. In this case, correlated activity was estimated from the power spectrum of the visual input, R(u, ω), where u and ω indicate spatial and temporal frequencies, respectively. Under the assumption of linearity, r(x, t) (the correlation between the responses of pairs of cells at separation x, measured at a time lag t) can be evaluated as F⁻¹{|K(u, ω)|² R(u, ω)}, where K is the cell's linear kernel and F⁻¹ indicates the inverse Fourier transform operator (see, for example, Bendat & Piersol, 1986). Input spectra in Figure 8 were estimated by means of Welch's method over all the trials available from one subject (see Rucci et al., 2007). 
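Under the linearity assumption, the computation F⁻¹{|K|²R} takes only a couple of lines on a discrete frequency grid; a sketch (grid layout and normalization are our assumptions):

```python
import numpy as np

def linear_correlation(K_uw, R_uw):
    """Space-time correlation of linear-filter outputs from the input power
    spectrum: r(x, t) = F^{-1}{ |K(u, w)|^2 R(u, w) }.

    K_uw : filter transfer function sampled on a 2-D frequency grid
    R_uw : input power spectrum sampled on the same grid
    """
    cross_spectrum = np.abs(K_uw) ** 2 * R_uw
    r = np.fft.ifft2(cross_spectrum)
    # real-valued for a real power spectrum; shift zero lag to the center
    return np.real(np.fft.fftshift(r))
```

As a sanity check, a flat (white) input spectrum passed through an all-pass filter yields a delta correlation at zero lag.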
Results
The results of the experiments summarized in Figure 1 show that discrimination of the orientation of a high-frequency grating perturbed by noise at lower spatial frequency is impaired in the absence of the retinal image motion generated by fixational eye movements. In contrast, elimination of retinal image motion has no noticeable effect when the frequency band of the pattern of noise is higher than the frequency of the grating. In this study, we used neuronal models to examine the origin of this phenomenon. 
Figure 3 illustrates the approach followed by this study. Computational models simulated the responses of arrays of neurons with receptive fields aligned parallel and orthogonal to the grating. These two axes represent the two possible orientations assumed by the grating in a trial. Each trial of the simulations reconstructed the retinal image motion experienced by the subject in one of the experimental trials. That is, model cells were exposed to the same stimulus displayed in the experiment and, in simulations of normal (unstabilized) trials, their receptive fields moved following the recorded eye-movement trajectory. No movement occurred in the simulations of trials with retinal stabilization (stabilized trials). Synchronous modulations in the responses of pairs of cells were measured by means of correlation coefficients, as explained in Methods. Mean correlation coefficients for pairs of cells with receptive fields at various distances composed the parallel and orthogonal correlation functions, r∥(d) and r⊥(d), shown in Figures 3d–3e. The averages of these two functions over cells at different separations, r̄∥ and r̄⊥, were used as indices of the global degree of synchronization in neural activity on the two axes. 
Figure 4 shows the spatial organization of correlated retinal activity in the simulations of the trials of Experiment 1. In this experiment, the stimulus consisted of an 11 cycles/deg grating embedded in a noise field with power at frequencies lower than 5 cycles/deg. The panels in the top and center rows in Figure 4 compare parallel and orthogonal correlation functions (r∥(d) and r⊥(d) in Equation 7) for both populations of simulated retinal units (cells sensitive to either low or high spatial frequencies). Results obtained both in the presence of the normal fixational motion of the retinal image and in simulations of retinal stabilization are shown in Figure 4. 
Figure 4
 
Spatial organization of correlated retinal activity in the trials of Experiment 1 (high-frequency grating). (Top-Center Rows) Correlation functions on the axes parallel and orthogonal to the grating's orientation (r∥(d) and r⊥(d) in Equation 7) measured during the normal fixational motion of the retinal image ("Normal") and in simulations of retinal stabilization ("Stabilized"). Shaded areas represent 95% bootstrap confidence intervals. (Bottom Row) Mean values ±95% confidence intervals of the parallel and orthogonal correlation functions over all receptive-field separations (r̄∥ and r̄⊥ in Equation 8). Data from the two modeled neuronal populations are organized in separate columns: (Left) cells sensitive to high spatial frequencies; (Right) cells sensitive to low spatial frequencies.
As shown by these data, for both neuronal populations and both viewing conditions, spatial correlation functions declined with the separation between receptive fields over a short range of distances. This effect originated from the noise correlation present in the stimulus, which was predominant for distances smaller than 10′. At larger separations, the influence of noise was less dominant, and modulations caused by the grating were visible. At this range of distances, neurons with receptive fields aligned parallel to the grating were on average simultaneously active. Thus, the parallel correlation function r∥(d) varied little with the distance d between receptive fields. In contrast, on the orthogonal axis, cell responses were synchronously modulated only when the separations between their receptive fields matched the period of the grating, as shown by the oscillations in the orthogonal correlation function, r⊥(d). Although these modulations were visible in both neuronal populations, they were more pronounced for neurons sensitive to high spatial frequencies. Because of their spatial characteristics, these neurons responded strongly to the grating and were relatively unaffected by the low-band pattern of noise present in the stimulus. 
Comparison of the spatial correlation functions measured in the two viewing conditions reveals the impact of oculomotor activity. The only difference between retinal stimulation in these two conditions was the retinal image motion caused by eye movements. Fixational eye movements greatly enhanced the influence of the grating on the structure of correlated activity. As a consequence of this influence, the average levels of covariance in neuronal responses increased for pairs of cells aligned parallel to the grating and decreased on the orthogonal axis. As summarized by the panels in the bottom row of Figure 4, the difference between r̄∥ and r̄⊥ was larger during normal retinal image motion than under retinal stabilization. This effect was particularly evident for neurons sensitive to high spatial frequencies, which responded strongly to the changes in luminance introduced by fixational instability. For this neuronal population, the difference between mean levels of covariance on the parallel and orthogonal axes was significant (z = 5.33, p < 0.01; two-tailed z-test). 
Figure 5 shows the structure of correlated activity in the simulations of Experiment 2. In this experiment, the frequency of the grating (4 cycles/deg) was lower than the frequency band of noise, as illustrated in Figure 1. As in Experiment 1, neuronal responses were synchronized when neurons received similar input. Neurons with receptive fields aligned parallel to the grating were all coactive, yielding a relatively constant parallel correlation function r∥(d). Neurons with receptive fields aligned orthogonally to the grating were simultaneously active only if the separation between their receptive fields was a multiple of the period of the grating, yielding an orthogonal correlation function that oscillated at the grating's frequency. This pattern of correlated activity occurred for both neuronal populations. In this case, however, the influence of the grating was most visible in the responses of neurons sensitive to low spatial frequencies. Because of their contrast sensitivity functions, these neurons were driven strongly by the grating and largely unaffected by the pattern of noise. 
Figure 5
 
Spatial organization of correlated retinal activity in the trials of Experiment 2 (low-frequency grating). The layout of the data and use of symbols are as in Figure 4.
Comparison of the top and center rows in Figure 5 shows that, unlike the case of Experiment 1, the structure of correlated activity measured in Experiment 2 was minimally influenced by retinal image motion. The parallel and orthogonal correlation functions obtained during the normal fixational motion of the retinal image were very similar to those measured in simulations of retinal stabilization. As a consequence, the mean correlation values measured on the two axes, r̄∥ and r̄⊥, also changed little between the two conditions of presence and absence of retinal image motion. 
The data in Figures 4 and 5 show that a visual discrimination criterion based on the spatial organization of correlated activity matches psychophysical performance. In Experiment 1, fixational eye movements (a) increased the degree of synchronization in the responses of arrays of neurons with receptive fields aligned parallel to the grating, and (b) decreased synchronization on the orthogonal axis. These changes in correlated activity occurred without affecting the overall firing rates of simulated units, as summarized in Figure 6. In Experiment 2, instead, neural activity was little influenced by the presence of retinal image motion. Therefore, fixational eye movements facilitated identification of the axis with the highest degree of neural synchronization in Experiment 1 but not in Experiment 2. Consistent with a decision criterion based on the strength of covarying activity, elimination of retinal image motion by means of retinal stabilization severely affected subjects' performance in Experiment 1, but had little impact in Experiment 2. Both fixational saccades and drifts contributed to the structure of correlated activity in our simulations. Results similar to those of Figures 4 and 5 were obtained when data analysis was restricted to the pools of trials that did and did not contain microsaccades (data not shown). 
Figure 6
 
Mean firing rates measured in the simulations of Experiment 1. Data points represent the average responses of modeled neurons in the trials of Figure 4. The two panels show data from different neuronal populations. For each population, responses were normalized by the highest instantaneous firing rate measured in the simulations. Error bars represent standard deviations.
Under natural viewing conditions, two separate sources contribute to the motion of the retinal image: (a) the motion of objects in the scene, and (b) the self-motion of the observer. Both motion sources were present in the experiments of Figure 1 and contributed to the patterns of correlated activity measured in the simulations. External changes in visual stimulation occurred at the beginning of each trial, when the stimulus was initially flashed, and provided the only input change under retinal stabilization. In the unstabilized trials, input modulations resulting from fixational eye movements were also present. These oculomotor modulations were the only difference between the visual input signals of the two viewing conditions and were responsible for the effects shown in Figures 4 and 5. 
To further examine the influence of fixational eye movements on the statistics of neural activity, we conducted an analysis similar to that of Figures 4 and 5 for two selected temporal intervals: the first 100 ms after stimulus onset, and the last 500 ms before the trial's end. Selection of these two intervals enabled separation of the two motion sources, because (a) the onset of the stimulus was the predominant source of input change during the initial 100 ms of stimulus presentation; and (b) fixational eye movements provided the sole source of retinal image motion during the later interval, when neuronal responses were no longer influenced by the sudden appearance of the stimulus. 
Figure 7 shows the structure of correlated activity in the two selected intervals during the trials of Experiment 1. Data points in this figure represent the means of the parallel and orthogonal correlation functions, as in the bottom row of Figure 4. In this case, however, levels of covariance were only evaluated during the two intervals, instead of during the entire period of stimulus presentation. These data clarify that only the modulations caused by fixational eye movements resulted in a larger difference between parallel and orthogonal correlation functions. For both neuronal populations, the difference between the mean correlation functions on the two axes was larger during the late 500 ms than in the initial 100 ms. This difference was statistically significant in the population of neurons sensitive to high spatial frequencies ( z = −6.22, p < 0.01; two-tailed z-test). In contrast, the parallel and orthogonal correlation functions measured during the initial 100 ms were almost identical to those obtained in simulations of retinal stabilization. The reasons underlying the differential impact of the two sources of retinal image motion, modulations of stimulus contrast and oculomotor activity, are explained in the Discussion. 
Figure 7
 
Means of parallel and orthogonal correlation functions measured in the trials of Experiment 1 during two selected intervals: The first 100 ms after display of the stimulus, and the last 500 ms before appearance of the mask. The two panels show data from different neuronal populations.
To understand the origins of the oculomotor influences shown in Figures 4, 5, 6, and 7, we simplified the model by eliminating rectification in the computation of neuronal responses (see Equation 1). The degree of rectification had little impact on the structure of correlated activity, and results obtained in the absence of rectification were highly similar to those of Figures 4 and 5 (data not shown). Without rectification, however, model neurons acted purely as linear filters. As illustrated in Figure 8a, linearity enables derivation of levels of correlation directly from visual input signals. Indeed, the correlation r(x, t) between the responses of pairs of cells at separation x measured at a time lag t is, by definition, the inverse Fourier transform of the power spectrum of neural activity. This spectral density function can be obtained by multiplying the power spectrum of the visual input by the squared amplitude of the cell's linear kernel (see, for example, Bendat & Piersol, 1986). This analysis gives insights into the mechanisms by which fixational eye movements modulate neuronal responses, as it enables quantification of the contributions from different spatial and temporal frequencies to the patterns of correlated activity. 
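As a toy illustration of this relation (not the paper's actual stimuli or parvocellular kernels; the white-noise input and the low-pass kernel below are arbitrary stand-ins), the response power spectrum of a linear cell is the input power spectrum multiplied by the squared kernel amplitude, and the response correlation is recovered by an inverse Fourier transform:

```python
import numpy as np

# Minimal sketch of the linear analysis: white-noise input and an
# illustrative low-pass kernel, both hypothetical stand-ins.
rng = np.random.default_rng(0)
n = 1024
stimulus = rng.standard_normal(n)                 # stand-in visual input
S_in = np.abs(np.fft.fft(stimulus)) ** 2 / n      # input power spectrum

f = np.fft.fftfreq(n)                             # normalized frequencies
H = 1.0 / (1.0 + (f / 0.1) ** 2)                  # illustrative kernel amplitude

S_out = S_in * np.abs(H) ** 2                     # response power spectrum
r = np.real(np.fft.ifft(S_out))                   # correlation (Wiener-Khinchin)
r /= r[0]                                         # normalize zero-lag value to 1
```

Because the kernel is low-pass, neighboring samples of the filtered response become positively correlated even though the input was white, which is the kind of input-to-response correlation transfer the analysis exploits.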
Figure 8
 
Linear analysis of correlated retinal activity. (a) Under the assumption of linearity, the power spectrum of neural activity can be estimated on the basis of the power spectrum of the visual input and the spatiotemporal kernel of model neurons. (b) Space–time sections of power spectra and cell kernels. Sections were orthogonal to the spatial frequency plane and intersected the main spatial frequency diagonal (the planes α and β in (a)). Scales are in decibels. (Left) Power spectra of visual input in both experiments and viewing conditions. The yellow lines represent the contrast sensitivity of an ideal detector which responds equally to all spatial frequencies but only to temporal modulations at 15 Hz. (Center) Power spectra of neural activity for both neuronal populations. (Right) Signal-to-noise ratios of both neuronal populations (High and Low) in all experiments and viewing conditions.
Since the visual input to the retina, I(x, t), is a 3D function of space and time, its power spectrum is a 3D function of spatial and temporal frequency. The left side of Figure 8b shows the power spectra of the visual input signals in the various cases considered in this study. Each panel represents a space–time section of the corresponding 3D spectrum taken along plane α in Figure 8a. Choice of this plane enabled display of both the grating and the pattern of noise present in the stimulus. As shown by these sections, the two conditions of normal retinal image motion and retinal stabilization produced visual input signals with significantly different spectral distributions. Under retinal stabilization, the input power was confined to the spatial frequency plane at zero temporal frequency because the retinal stimulus did not change. In the normal viewing condition, instead, the motion of the eye spread the spatial power of the stimulus across temporal frequencies. Interestingly, the extent of this temporal spreading increased with spatial frequency. It was this dependence on spatial frequency of the temporal power generated by eye movements that was ultimately responsible for the results presented in this paper. 
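The spatial-frequency dependence of this temporal spreading can be checked numerically. In the sketch below (all parameters are illustrative, and the random-walk trajectory is only a crude stand-in for recorded eye traces), the same jitter produces larger temporal fluctuations of luminance at a fixed retinal point when the grating's spatial frequency is higher:

```python
import numpy as np

# Check that a fixed amount of image jitter generates more temporal power
# at higher spatial frequencies. Drift model and parameters are toy choices.
rng = np.random.default_rng(1)
n_t = 1000
jitter = np.cumsum(rng.standard_normal(n_t)) * 0.001  # toy fixational drift

def temporal_power(k):
    """Variance over time of the luminance seen at one retinal point
    for a unit-contrast grating of spatial frequency k under jitter."""
    luminance = np.cos(2.0 * np.pi * k * jitter)      # grating sampled at x = 0
    return luminance.var()

p_low = temporal_power(1.0)    # coarse grating
p_high = temporal_power(8.0)   # fine grating: same jitter, larger phase excursions
```

In the small-displacement regime the phase excursion, and hence the temporal modulation, grows in proportion to the spatial frequency k, so p_high exceeds p_low.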
The center and right columns of Figure 8b examine the impact of fixational eye movements on the correlated activity of the two modeled neuronal populations. Model neurons were sensitive to fixational modulations of luminance, as retinal image motion transferred part of the power which was at zero temporal frequency under retinal stabilization to a range of nonzero temporal frequencies to which neurons were more responsive. Since the amount of temporal power produced by fixational eye movements increased with spatial frequency, this input modulation effectively enhanced the influence of high spatial frequencies within the range of neuronal sensitivity. This effect is summarized by the signal-to-noise ratios (SNRs) shown in Figure 8b. Data points in these graphs represent the ratio of the total amount of power originating from the grating to the total amount of power given by the pattern of noise in all experiments and viewing conditions. Although only neurons with contrast sensitivity centered at the grating's frequency possessed high SNRs, the influence of fixational eye movements on both neuronal populations was qualitatively similar. In Experiment 1, in which the grating was at a higher spatial frequency than the pattern of noise, fixational eye movements amplified responses to the grating. The SNRs of both neuronal populations more than doubled in the presence of eye movements (ratio of unstabilized to stabilized SNR: 2.27 for high-frequency cells, 2.38 for low-frequency cells), indicating that correlated activity was more strongly driven by the grating during eye movements than under retinal stabilization. In Experiment 2, instead, the frequency of the grating was lower than the frequency band of noise, and fixational eye movements had the opposite effect of enhancing the pattern of noise (ratio of unstabilized to stabilized SNR: 0.85 for high-frequency cells, 0.75 for low-frequency cells). 
This decrement in SNR should not be taken to imply that the model predicts lower psychophysical performance in the presence of retinal image motion in Experiment 2. The frequency analysis of Figure 8 focuses exclusively on the modulations caused by fixational eye movements. In the experiments, however, retinal ganglion cells were also stimulated by the onset of the stimulus. As shown by the bottom panels of Figure 5, in both viewing conditions, levels of correlated activity on the two orthogonal arrays were almost identical when the input changes resulting from stimulus onset were taken into account. These data predict an equal level of performance in the two viewing conditions of Experiment 2. 
The input power spectra shown in Figure 8b also enable evaluation of the impact of fixational eye movements on the correlation of neurons with response characteristics different from those of the two modeled neuronal populations. In neurons with broad spatial frequency tuning, as is the case with parvocellular ganglion cells, sensitivity to the fixational modulations of luminance will always result in the enhancement of neuronal responses to the high frequency range of spatial sensitivity. The magnitude of this effect will depend on the shape of the spatial contrast sensitivity function. The effect will be more pronounced in neurons with a contrast sensitivity peak at high spatial frequencies, as the temporal spreading of power is substantial in this range. In neurons with a contrast sensitivity peak in the low spatial frequency band, a range in which the temporal power resulting from fixational instability is modest, levels of correlated activity will be less affected by eye movements. Thus, these neurons will exhibit similar levels of correlation in the two conditions of normal retinal image motion and retinal stabilization. It is important to emphasize that these changes in spatial sensitivity originate from the temporal response preferences of neurons. To clarify this point, consider an ideal detector which responds uniformly to all spatial frequencies (i.e., constant spatial contrast sensitivity function) but only to modulations at a specific temporal frequency f_d (i.e., temporal contrast sensitivity function different from zero only at f_d). In Experiment 1, the SNR of this ideal neuron is 243 times larger if the neuron responds to f_d = 15 Hz than if it only responds to static stimuli (f_d = 0 Hz). The opposite effect occurs in Experiment 2, in which sensitivity to f_d = 15 Hz lowers the SNR by a factor of 26 with respect to sensitivity to f_d = 0 Hz. 
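The ideal-detector thought experiment can be sketched numerically. The toy below does not reproduce the paper's 243× and 26× values; the stimulus frequencies, jitter, and temporal bins are all hypothetical stand-ins, chosen only to show the direction of the effect when the grating's spatial frequency exceeds that of the noise (as in Experiment 1):

```python
import numpy as np

# Toy ideal-detector comparison: grating-to-noise power ratio for a
# detector tuned to a nonzero temporal-frequency bin versus one tuned
# to the static (DC) bin. All parameters are hypothetical.
rng = np.random.default_rng(2)
nx, nt = 256, 256
x = np.arange(nx) / nx
jitter = np.cumsum(rng.standard_normal(nt)) * 0.002   # toy fixational drift

K_GRATING, K_NOISE = 32.0, 4.0    # "high-frequency grating, low-frequency noise"

def power_at(temporal_bin, k):
    """Power in one temporal-frequency bin, summed over retinal locations,
    for a jittered sinusoid of spatial frequency k."""
    movie = np.cos(2.0 * np.pi * k * (x[:, None] - jitter[None, :]))
    spectrum = np.abs(np.fft.fft(movie, axis=1)) ** 2
    return spectrum[:, temporal_bin].sum()

snr_static = power_at(0, K_GRATING) / power_at(0, K_NOISE)    # DC-tuned detector
snr_moving = power_at(15, K_GRATING) / power_at(15, K_NOISE)  # nonzero-bin detector
```

Because jitter spreads the fine grating's power into nonzero temporal bins while the coarse noise stays mostly static, the nonzero-bin detector sees a much better grating-to-noise ratio than the DC-tuned one.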
Thus, the enhancement of high spatial frequencies caused by fixational eye movements tends to be more pronounced in neurons that prefer high temporal frequencies. 
Discussion
During visual fixation, small eye movements continually displace the stimulus on the retina. It has long been questioned whether modulations in neuronal responses resulting from these input changes might encode spatial information in the temporal domain (Ahissar & Arieli, 2001; Arend, 1973; Marshall & Talbot, 1942). Recent experiments with retinal stabilization have revealed a beneficial effect of the physiological instability of visual fixation in the analysis of fine spatial detail (Rucci et al., 2007). The results of this study show that visual performance in these experiments is predicted by the spatial structure of correlated retinal activity. In keeping with psychophysical results, fixational eye movements synchronously modulated the responses of retinal ganglion cells during viewing of high-frequency gratings masked by low-frequency noise, but not during viewing of low-frequency gratings masked by high-frequency noise. These results indicate that the synchronization of retinal activity resulting from fixational eye movements is a component of the neural substrate for fine-grain spatial vision. 
It is important to observe that the results described in this paper are highly robust, as they do not depend on the precise characteristics of neural models. Indeed, the oculomotor influences on retinal activity measured in our simulations originate from the basic interaction between neuronal selectivity and the spatiotemporal input to the retina during fixational instability. This interaction is well described in the space–time frequency domain, as illustrated in Figure 8. In this domain, the fluctuations of luminance caused by fixational eye movements are represented by a spreading of the stimulus power along the temporal frequency axis, a phenomenon that tends to be more pronounced at higher spatial frequencies. This effect occurs because, for a small translation of the retinal image, the change in input luminance experienced by a retinal receptor tends to increase proportionally to the stimulus' frequency. The dependence on spatial frequency of the amount of temporal power generated by eye movements is the key mechanism responsible for the results of this paper. In our simulations, this temporal power had a different impact on the two modeled populations of ganglion cells. Neurons with sensitivity peaks in the low-frequency range, a band in which the temporal spreading of power is limited, were minimally affected by fixational instability. In contrast, eye movements strongly modulated the responses of neurons sensitive to high spatial frequencies. As shown by Figure 8, these synchronous modulations were mainly driven by the grating in Experiment 1 and by the pattern of noise in Experiment 2. 
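The proportionality invoked here can be made explicit with a first-order expansion. For a grating of spatial frequency k displaced by a small fixational trajectory ξ(t):

```latex
I(x,t) = \cos\!\bigl(2\pi k\,[x - \xi(t)]\bigr)
       \approx \cos(2\pi k x) + 2\pi k\,\xi(t)\,\sin(2\pi k x),
\qquad |2\pi k\,\xi(t)| \ll 1,
```

so the time-varying component of the luminance at each retinal point has amplitude proportional to k: the same jitter modulates a fine grating more strongly than a coarse one.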
It should also be noted that the synchronous modulations reported in this paper occur only during motion of the retinal image similar to that caused by fixational instability. A non-uniform generation of temporal power with spatial frequency cannot be obtained by modulating the contrast of the stimulus. For example, the temporal spreading of power resulting from the onset of the stimulus does not vary with spatial frequency. Flashing of the stimulus at the beginning of each trial corresponds to modulating its contrast in time by means of a Heaviside function. In the Fourier domain, the temporal power given by this operation declines with the square of temporal frequency and is independent of spatial frequency. In this study, to ensure the presence of realistic retinal image motion, the input stimulus to the model reproduced the visual input signals experienced by the participants in our experiments. This reconstruction was possible because eye movements were recorded in every trial during the experiments. Thus, the spatial–frequency dependence of temporal power in our simulations resulted from the oculomotor activity performed by human observers. 
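The 1/f² decline of a step onset can be verified numerically. In this sketch (signal length and band edges are arbitrary), the temporal power of a Heaviside-like onset, averaged over adjacent octave bands, drops by roughly a factor of four per octave, as a 1/f² law predicts; and since a flash multiplies the whole spatial pattern by the same temporal step, this decline is identical at every spatial frequency:

```python
import numpy as np

# Check that a step (Heaviside-like) onset has temporal power declining
# roughly as 1/f^2. Length and frequency bands are illustrative choices.
n = 4096
step = np.zeros(n)
step[n // 3:] = 1.0                        # stimulus "flashed on" partway through
power = np.abs(np.fft.rfft(step)) ** 2     # temporal power spectrum

# Average power in two adjacent octave bands; 1/f^2 predicts a ~4x drop.
p_octave1 = power[8:16].mean()
p_octave2 = power[16:32].mean()
ratio = p_octave1 / p_octave2
```

The band averages smooth over the sinc-like oscillations of the finite step, leaving the 1/f² envelope, which is what distinguishes a flash from the frequency-selective temporal power injected by eye movements.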
The proposal of this paper, that the visual system exploits synchronous modulations in neural responses exerted by eye movements, builds upon a large body of literature. The role of the temporal structure of cortical activity in the processes of image segmentation and feature binding is one of the most debated issues of current neuroscience (for reviews see Gray, 1999; Shadlen & Movshon, 1999; Singer, 1999; Singer & Gray, 1995). Correlated cell responses, both in the form of coactivity of instantaneous firing rates (Roelfsema, Lamme, & Spekreijse, 2004) and as synchronous spikes (Singer & Gray, 1995), might signal the presence of important features in the visual scene such as an edge or an object. Furthermore, in the retina and thalamus, simultaneously active neurons are more likely than isolated cell responses to affect neural activity at later stages (Alonso, Usrey, & Reid, 1996; Usrey, Alonso, & Reid, 2000; Usrey & Reid, 1999). A strong influence of fixational eye movements on the structure of correlated activity, as observed in our model, is not surprising. Neurophysiological recordings have already revealed neuronal modulations exerted by fixational saccades in various areas of the brain (Gur, Beylin, & Snodderly, 1997; Leopold & Logothetis, 1998; Martinez-Conde, Macknik, & Hubel, 2000, 2002; Snodderly, Kagan, & Gur, 2001). In addition, eye movements are a powerful source of correlation in the visual input, as they induce synchronous changes in the stimuli covered by cell receptive fields. Indeed, a synchronization of the responses of ganglion cells during fixational eye movements has already been observed in the turtle's retina (Greschner et al., 2002). 
From a different perspective, contributions of fixational eye movements to the neural encoding of visual information have been postulated since early last century by the so-called dynamic theories of visual acuity (Arend, 1973; Averill & Weymouth, 1925; Marshall & Talbot, 1942). These theories argued that the limiting retinal factor in visual acuity was “the relation of receptor width to the highest optical gradient in a moving pattern rather than the static differential illumination in one cone, compared with its neighbors.” (Marshall & Talbot, 1942). More recently, this line of thought has found renewed interest in the proposal formulated by Ahissar and Arieli (2001), which examines the significance of a changing retinal image in the light of current knowledge on the response properties of neurons in the early visual system. According to this proposal, the precise timing in neuronal responses during the fixational motion of the retinal image provides a substrate for encoding the fine characteristics of the stimulus. Our finding that fine-scale spatial vision is indeed impaired under retinal stabilization, together with the suggestion of this paper that such impairment originates from a reduction in neural synchronization, provides support for the proposal by Ahissar and Arieli. 
The oculomotor influences on retinal activity described in this study should not be confused with the fixational decorrelation of neuronal responses reported in our previous articles (Desbordes & Rucci, 2007; Rucci & Casile, 2005; Rucci et al., 2000). In previous modeling studies, we have suggested that fixational eye movements attenuate the sensitivity of neurons in the retina and lateral geniculate nucleus to the broad correlations of natural images. In these models, wide ensembles of coactive units emerged immediately after the onset of fixation and shrank during fixational eye movements. By the end of a typical 300-ms period of fixation, the responses of neurons with non-overlapping receptive fields were completely uncorrelated. A reduction in the spatial extent of correlated activity acquires importance in the light of theories on the efficiency of visual representations (Attneave, 1954; Barlow, 1961) and on the influences of the structure of neural activity during visual development (Changeux & Danchin, 1976). This neural decorrelation critically depends on the second-order statistics of natural images and occurs only during exposure to visual input with a scale-invariant power spectrum. In the presence of visual stimulation with a power spectrum different from that of natural images, as is the case with the stimuli used in this study, fixational eye movements induce correlations rather than decorrelating neural responses. In this sense, the periodic stimuli used in our psychophysical experiments can be regarded as an unnatural tool for identifying interesting neural mechanisms. 
The neural decorrelation described in our previous studies should also not be taken to imply that fixational eye movements do not induce useful correlations in the responses of retinal ganglion cells during individual fixations. The neural decorrelation resulting from fixational instability is a statistical phenomenon which occurs over multiple fixations. That is, because of the statistical characteristics of natural images, it is not possible to predict the influence of fixational eye movements on the response of a ganglion cell by examining the activity of another neuron with a non-overlapping receptive field. During each individual fixation, however, a specific pattern of correlated activity exists, which is determined by the stimulus and by the observer's behavior. This pattern will differ from one fixation to the next. The global decorrelation caused by fixational eye movements guarantees that, on average, the short-lived correlations in cell responses established during individual fixations do not depend on the “uninteresting” broad correlations of natural scenes, but emphasize instead the most interesting elements of the stimulus, those that cannot be predicted from the mere knowledge of its power spectrum. Thus, fixational eye movements appear to be part of an efficient scheme of acquisition and encoding of visual information in the presence of naturalistic stimuli. 
Even though the eye movements that occur during visual fixation are often labeled as microscopic, the motion of the retinal image caused by this behavior is large relative to the size of retinal receptors. Under natural viewing conditions, movements of the head and body further amplify the instability of visual fixation (Skavenski, Hansen, Steinman, & Winterson, 1979; see Murakami & Cavanagh, 1998 for a striking demonstration of fixational jitter). The resulting fixational motion of the retinal image is therefore likely to be one of the most important contributors to the temporal structure of retinal activity. Our simulations show that the degree of synchronization in the responses of ganglion cells predicts the performance of human observers in discriminating between two orthogonally oriented gratings. This finding does not exclude the possibility that the visual system might also take advantage of other aspects of the correlated structure of retinal activity besides zero-delay covariance. For example, delays in the firing of neuronal ensembles might encode other features of the stimulus, such as the fine shape of a pattern or the frequency of a grating (Ahissar & Arieli, 2001). Further experiments need to be conducted to investigate this hypothesis. 
Acknowledgments
This work was supported by National Institutes of Health Grants Y18363 and EY015732 and by National Science Foundation Grant BCS-0719849. 
Commercial relationships: none. 
Corresponding author: Dr. Michele Rucci. 
Email: rucci@cns.bu.edu. 
Address: Boston University, 677 Beacon Street, Boston, MA 02215, USA. 
References
Ahissar, E., & Arieli, A. (2001). Figuring space by time. Neuron, 32, 185–201.
Alonso, J. M., Usrey, W. M., & Reid, R. C. (1996). Precisely correlated firing in cells of the lateral geniculate nucleus. Nature, 383, 815–819.
Arend, L. E., Jr. (1973). Spatial differential and integral operations in human vision: Implications of stabilized retinal image fading. Psychological Review, 80, 374–395.
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183–193.
Averill, H. I., & Weymouth, F. W. (1925). Visual perception and the retinal mosaic: II. The influence of eye movements on the displacement threshold. Journal of Comparative Psychology, 5, 147–176.
Barlow, H. B. (1961). Possible principles underlying the transformations of sensory messages. In W. A. Rosenblith (Ed.), Sensory communication (pp. 217–234). Cambridge, MA: MIT Press.
Benardete, E. A., & Kaplan, E. (1997). The receptive field of the primate P retinal ganglion cell, I: Linear dynamics. Visual Neuroscience, 14, 169–185.
Benardete, E. A., & Kaplan, E. (1999a). Dynamics of primate P retinal ganglion cells: Responses to chromatic and achromatic stimuli. The Journal of Physiology, 519, 775–790.
Benardete, E. A., & Kaplan, E. (1999b). The dynamics of primate M retinal ganglion cells. Visual Neuroscience, 16, 355–368.
Benardete, E. A., Kaplan, E., & Knight, B. W. (1992). Contrast gain control in the primate retina: P cells are not X-like, some M cells are. Visual Neuroscience, 8, 483–486.
Bendat, J. S., & Piersol, A. G. (1986). Random data: Analysis and measurement procedures. New York: John Wiley and Sons.
Carandini, M., Demb, J. B., Mante, V., Tolhurst, D. J., Dan, Y., & Olshausen, B. A. (2005). Do we know what the early visual system does? Journal of Neuroscience, 25, 10577–10597.
Casile, A., & Rucci, M. (2006). A theoretical analysis of the influence of fixational instability on the development of thalamocortical connectivity. Neural Computation, 18, 569–590.
Changeux, J. P., & Danchin, A. (1976). Selective stabilisation of developing synapses as a mechanism for the specification of neuronal networks. Nature, 264, 705–712.
Derrington, A. M., Krauskopf, J., & Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 241–265.
Derrington, A. M., & Lennie, P. (1984). Spatial and temporal contrast sensitivities of neurones in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 219–240.
Desbordes, G., & Rucci, M. (2007). A model of the dynamics of retinal activity during natural visual fixation. Visual Neuroscience, 24, 217–230.
Ditchburn, R. W. (1955). Eye movements in relation to retinal action. Optica Acta, 1, 171–176.
Ditchburn, R. W., & Ginsborg, B. L. (1952). Vision with a stabilized retinal image. Nature, 170, 36–37.
Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, Optics and Image Science, 4, 2379–2394.
Gray, C. M. (1999). The temporal correlation hypothesis of visual feature integration: Still alive and well. Neuron, 24, 31–47.
Greschner, M., Bongard, M., Rujan, P., & Ammermüller, J. (2002). Retinal ganglion cell synchronization by fixational eye movements improves feature estimation. Nature Neuroscience, 5, 341–347.
Gur, M., Beylin, A., & Snodderly, D. M. (1997). Response variability of neurons in primary visual cortex (V1) of alert monkeys. Journal of Neuroscience, 17, 2914–2920.
Kaplan, E., & Shapley, R. M. (1982). X and Y cells in the lateral geniculate nucleus of macaque monkeys. The Journal of Physiology, 330, 125–143.
Leopold, D. A., & Logothetis, N. K. (1998). Microsaccades differentially modulate neural activity in the striate and extrastriate visual cortex. Experimental Brain Research, 123, 341–345.
Marshall, W. H., & Talbot, S. A. (1942). Recent evidence for neural mechanisms in vision leading to a general theory of sensory acuity. In H. Kluver (Ed.), Biological symposia—Visual mechanisms (Vol. 7, pp. 117–164). Lancaster, PA: Cattel.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2000). Microsaccadic eye movements and firing of single cells in the striate cortex of macaque monkeys. Nature Neuroscience, 3, 251–258.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2002). The function of bursts of spikes during visual fixation in the awake primate lateral geniculate nucleus and primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 99, 13920–13925.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5, 229–240.
Merigan, W. H. (1989). Chromatic and achromatic vision of macaques: Role of the P pathway. Journal of Neuroscience, 9, 776–783.
Murakami, I., & Cavanagh, P. (1998). A jitter after-effect reveals motion-based stabilization of vision. Nature, 395, 798–801.
Ratliff, F., & Riggs, L. A. (1950). Involuntary motions of the eye during monocular fixation. Journal of Experimental Psychology, 40, 687–701.
Reid, R. C., & Alonso, J. M. (1995). Specificity of monosynaptic connections from thalamus to visual cortex. Nature, 378, 281–284.
Riggs, L. A., & Ratliff, F. (1952). The effects of counteracting the normal movements of the eye. Journal of the Optical Society of America, 42, 872–873.
Roelfsema, P. R., Lamme, V. A., & Spekreijse, H. (2004). Synchrony and covariation of firing rates in the primary visual cortex during contour grouping. Nature Neuroscience, 7, 982–991.
Rucci, M., & Casile, A. (2004). Decorrelation of neural activity during fixational instability: Possible implications for the refinement of V1 receptive fields. Visual Neuroscience, 21, 725–738.
Rucci, M., & Casile, A. (2005). Fixational instability and natural image statistics: Implications for early visual representations. Network, 16, 121–138.
Rucci, M., Edelman, G. M., & Wray, J. (2000). Modeling LGN responses during free-viewing: A possible role of microscopic eye movements in the refinement of cortical orientation selectivity. Journal of Neuroscience, 20, 4708–4720.
Rucci, M., Iovin, R., Poletti, M., & Santini, F. (2007). Miniature eye movements enhance fine spatial detail. Nature, 447, 851–854.
Santini, F., Redner, G., Iovin, R., & Rucci, M. (2007). EyeRIS: A general-purpose system for eye-movement-contingent display control. Behavior Research Methods, 39, 350–364.
Shadlen, M. N., & Movshon, J. A. (1999). Synchrony unbound: A critical evaluation of the temporal binding hypothesis. Neuron, 24, 67–77.
Singer, W. (1999). Time as coding space? Current Opinion in Neurobiology, 9, 189–194.
Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18, 555–586.
Skavenski, A. A., Hansen, R. M., Steinman, R. M., & Winterson, B. J. (1979). Quality of retinal image stabilization during small natural and artificial body rotations in man. Vision Research, 19, 675–683.
Snodderly, D. M., Kagan, I., & Gur, M. (2001). Selective activation of visual cortex neurons by fixational eye movements: Implications for neural coding. Visual Neuroscience, 18, 259–277.
Steinman, R. M., Haddad, G. M., Skavenski, A. A., & Wyman, D. (1973). Miniature eye movement. Science, 181, 810–819.
Steinman, R. M., Levinson, J. Z., & Kowler, E. (1990). The role of eye movements in the detection of contrast and spatial detail. In Eye movements and their role in visual and cognitive processes (pp. –212). Amsterdam: Elsevier Science Publisher BV.
Usrey, W. M., Alonso, J. M., & Reid, R. C. (2000). Synaptic interactions between thalamic inputs to simple cells in cat visual cortex. Journal of Neuroscience, 20, 5461–5467.
Usrey, W. M., & Reid, R. C. (1999). Synchronous activity in the visual system. Annual Review of Physiology, 61, 435–456.
Victor, J. D. (1987). The dynamics of the cat retinal X cell centre. The Journal of Physiology, 386, 219–246.
Wiesel, T. N., & Hubel, D. H. (1966). Spatial and chromatic interactions in the lateral geniculate body of the rhesus monkey. Journal of Neurophysiology, 29, 1115–1156.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press.
Figure 1
 
Summary of the retinal stabilization experiments modeled in this study. (a) Subjects reported the orientation (±45°) of a noisy grating. In Experiment 1, an 11 cycles/deg grating was perturbed by low spatial frequency noise (low-pass cut-off frequency fc = 5 cycles/deg). In Experiment 2, the stimulus was a 4 cycles/deg grating overlapped by high spatial frequency noise (high-pass fc = 10 cycles/deg). Stimuli were displayed at the onset of fixation after the subject performed a saccade toward a randomly cued location. Stimuli were either maintained at a fixed location on the screen (unstabilized condition) or were moved with the eye so as to cancel the retinal motion resulting from fixational eye movements (stabilized condition). (b) Mean performance across 6 subjects. For every subject, in each condition, percentages were evaluated over a minimum of 80 trials. Error bars represent 95% confidence intervals. (Modified from Rucci et al., 2007).
Figure 2
 
Response characteristics of modeled neurons. (Left) Spatial sensitivity. The two graphs show the spatial contrast sensitivity functions for the two populations of ganglion cells included in the model. (Right) Temporal sensitivity. The two neuronal populations possessed identical temporal characteristics.
Figure 3
 
Procedure for measuring the spatial organization of correlated activity in the model. (a) Cell responses were simulated while their receptive fields scanned the stimuli of the experiments summarized in Figure 1, following sequences of recorded eye movements (orange curve). The receptive fields of simulated neurons were aligned on the two axes parallel and orthogonal to the grating (φ = 0° or 90°). For clarity, only seven cells are shown here (receptive fields not to scale). (b–c) Example of modeled neural responses during simulation of an experimental trial. Levels of covariance were evaluated over the period of stimulus presentation. (d–e) Parallel and orthogonal correlation functions r∥(d) and r⊥(d). Data points represent the average correlation coefficient in the responses of pairs of cells with receptive fields at various separations. d1 represents the distance between the receptive-field centers of both pairs of cells (c0, p1) and (c0, o1); d2 the distance between c0 and p2 (as well as c0 and o2), and so on. r̄∥ and r̄⊥ indicate the mean values of r∥(d) and r⊥(d) across receptive-field separations.
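The pairwise measure described in the Figure 3 caption — Pearson correlation between the responses of cell pairs, averaged within each receptive-field separation and then across separations — can be sketched as follows. The cell labels (c0, p1, p2), the response arrays, and the grouping of pairs are illustrative stand-ins, not the model's actual populations:

```python
import numpy as np

def correlation_functions(responses, pairs_by_separation):
    """Mean Pearson correlation of cell-pair responses, grouped by
    receptive-field separation (schematic sketch).

    responses: dict mapping cell label -> 1-D firing-rate array over
               the period of stimulus presentation.
    pairs_by_separation: list whose k-th entry holds all (cell_a, cell_b)
               pairs separated by distance d_k.
    Returns r(d_k) for each separation.
    """
    r = []
    for pairs in pairs_by_separation:
        coeffs = [np.corrcoef(responses[a], responses[b])[0, 1]
                  for a, b in pairs]
        r.append(np.mean(coeffs))  # average correlation at this separation
    return np.array(r)

# Toy example: three cells on one "parallel" axis sharing a common
# modulation (standing in for the effect of fixational eye movements).
rng = np.random.default_rng(0)
common = rng.standard_normal(500)
resp = {c: common + 0.5 * rng.standard_normal(500) for c in ("c0", "p1", "p2")}
r_par = correlation_functions(resp, [[("c0", "p1")], [("c0", "p2")]])
r_bar = r_par.mean()  # mean across separations, as in Equation 8
```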
Figure 4
 
Spatial organization of correlated retinal activity in the trials of Experiment 1 (high-frequency grating). (Top and Center Rows) Correlation functions on the axes parallel and orthogonal to the grating's orientation (r∥(d) and r⊥(d) in Equation 7), measured during the normal fixational motion of the retinal image ("Normal") and in simulations of retinal stabilization ("Stabilized"). Shaded areas represent 95% bootstrap confidence intervals. (Bottom Row) Mean values ±95% confidence intervals of the parallel and orthogonal correlation functions over all receptive-field separations (r̄∥ and r̄⊥ in Equation 8). Data from the two modeled neuronal populations are organized in separate columns: (Left) cells sensitive to high spatial frequencies; (Right) cells sensitive to low spatial frequencies.
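The 95% bootstrap confidence intervals shown in Figure 4 can be obtained with a standard percentile bootstrap over per-trial correlation values; the numbers passed in below are made-up placeholders, not data from the study:

```python
import numpy as np

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of a
    sample (schematic sketch): resample with replacement, take the
    mean of each resample, and read off the alpha/2 percentiles."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Placeholder per-trial correlation coefficients
sample = np.array([0.62, 0.55, 0.71, 0.58, 0.66, 0.60])
ci_lo, ci_hi = bootstrap_ci(sample)
```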
Figure 5
 
Spatial organization of correlated retinal activity in the trials of Experiment 2 (low-frequency grating). The layout of the data and use of symbols are as in Figure 4.
Figure 6
 
Mean firing rates measured in the simulations of Experiment 1. Data points represent the average responses of modeled neurons in the trials of Figure 4. The two panels show data from different neuronal populations. For each population, responses were normalized by the highest instantaneous firing rate measured in the simulations. Error bars represent standard deviations.
Figure 7
 
Means of parallel and orthogonal correlation functions measured in the trials of Experiment 1 during two selected intervals: the first 100 ms after display of the stimulus and the last 500 ms before appearance of the mask. The two panels show data from different neuronal populations.
Figure 8
 
Linear analysis of correlated retinal activity. (a) Under the assumption of linearity, the power spectrum of neural activity can be estimated from the power spectrum of the visual input and the spatiotemporal kernel of model neurons. (b) Space–time sections of power spectra and cell kernels. Sections were orthogonal to the spatial frequency plane and intersected the main spatial frequency diagonal (the planes α and β in (a)). Scales are in decibels. (Left) Power spectra of visual input in both experiments and viewing conditions. The yellow lines represent the contrast sensitivity of an ideal detector that responds equally to all spatial frequencies but only to temporal modulations at 15 Hz. (Center) Power spectra of neural activity for both neuronal populations. (Right) Signal-to-noise ratios of both neuronal populations (High and Low) in all experiments and viewing conditions.
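The linear estimate in Figure 8a, in which the power spectrum of neural activity is the input power spectrum filtered by the squared magnitude of the cells' spatiotemporal kernel, can be sketched compactly. The kernel shape and input spectrum below are toy illustrative functions, not the model's actual ones:

```python
import numpy as np

# Spatial (cycles/deg) and temporal (Hz) frequency axes; ranges start
# above zero to avoid dividing by zero in the 1/f^2 input spectrum.
f_spatial = np.linspace(0.1, 30.0, 128)
f_temporal = np.linspace(0.1, 60.0, 128)
FS, FT = np.meshgrid(f_spatial, f_temporal)

# Toy separable band-pass kernel magnitude |K(fs, ft)|: each factor
# rises linearly and decays exponentially, peaking at mid frequencies.
K = (FS * np.exp(-FS / 8.0)) * (FT * np.exp(-FT / 15.0))

# Toy input power spectrum: natural-image-like 1/f^2 spatial falloff,
# taken flat in temporal frequency for simplicity.
P_input = 1.0 / FS**2

# Linear estimate of the power spectrum of neural activity.
P_neural = np.abs(K) ** 2 * P_input
```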