Over the brief time intervals available for processing retinal output, the number of spikes generated by individual ganglion cells can be quite variable. Here, two examples of extreme synergy are used to illustrate how realistic long-range spatiotemporal correlations can greatly improve the quality of retinal images reconstructed from computer-generated spike trains that are 25–400 ms in duration, approximately the time between saccadic eye movements. Firing probabilities were specified both explicitly, using time-varying waveforms consistent with stimulus-evoked oscillations measured experimentally, and implicitly, by superimposing realistic fixational eye movements on a biophysical model of primate outer retina. Synergistic encoding was investigated across arrays of model neurons up to 32 × 32 in extent, containing over 1 million pairwise correlations. The difficulty of estimating pairwise, spatiotemporal correlations on single trials from only a few events was overcome by using oscillatory, local multiunit activity to weight contributions from all spike pairs. Stimuli were reconstructed using either an independent rate code or the first principal component of the single-trial, pairwise correlation matrix. Spatiotemporal correlations mediated dramatic improvements in signal/noise without eliminating fine spatial detail, demonstrating how extreme synergy can support rapid image reconstruction using far fewer spikes than required by an independent rate code.

A firing rate waveform of total duration *T* and temporal resolution Δ*t* was constructed by first defining the discrete frequencies *f*_k, where *f*_0 is the central oscillation frequency, here set to 80 Hz, *σ* is the width of the spectral peak in the associated power spectrum, here set to 10 Hz, and *r*_k is a uniform random deviate between 0 and 1 that randomized the phases of the individual Fourier components (generated by the Matlab intrinsic function RAND). The procedure was then repeated using a different pseudorandom sequence for each stimulus trial. The coefficients, *C*_k, were used to convert back to the time domain using the discrete inverse Fourier transform (generated by the Matlab intrinsic function IFFT), so that the firing rate *R*_n^(i) at each time step, *t*_n = *n*Δ*t*, was given by Equation 3. *R*_n^(i) depends only on the real part of the sum on the RHS of Equation 3; *I*^(i) and *F*_0^(i) are scale factors used to encode the stimulus intensity, denoted by the superscript *i*; and *N* = *T*/Δ*t*.
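The display equations themselves did not survive extraction. A form consistent with the surrounding definitions (Gaussian spectral peak at *f*_0, randomized phases, real part of the inverse transform) would be the following sketch; the normalization of the inverse transform and the exact placement of the scale factors are assumptions, not necessarily the paper's exact Equation 3:

```latex
C_k = \exp\!\left[-\frac{(f_k - f_0)^2}{2\sigma^2}\right] e^{2\pi i r_k},
\qquad
R_n^{(i)} = F_0^{(i)} + I^{(i)}\,
\mathrm{Re}\!\left[\sum_{k=0}^{N-1} C_k\, e^{2\pi i f_k t_n}\right],
\qquad N = T/\Delta t .
```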

*R*_n^(i) was used to generate oscillatory spike trains via a pseudorandom process (Equation 4), in which *R*_n^(i)Δ*t* is the probability of a spike in the *n*th time bin, *θ* is a step function, *θ*(*x* < 0) = 0, *θ*(*x* ≥ 0) = 1, and *r* is again a uniform random deviate. In the limit that *R*_n^(i)Δ*t* ≪ 1, the above procedure, which in general produces a rate-modulated binomial distribution, reduces to a rate-modulated Poisson process. The same time series, *R*_n^(i), was used to modulate the firing probabilities of each stimulated, or foreground, element contributing to the artificially generated multiunit spike train, thus producing oscillatory spatiotemporal correlations due to common input.
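The generator described above can be sketched as follows. The function name and all default values are illustrative assumptions, not the paper's Matlab code; only the recipe itself (Gaussian spectral peak, random phases, inverse FFT, per-bin Bernoulli draw) comes from the text:

```python
import numpy as np

def oscillatory_spikes(T=0.1, dt=0.001, f0=80.0, sigma=10.0,
                       F0=50.0, I=25.0, seed=0):
    """Rate-modulated binomial spike generator (illustrative sketch).

    A Gaussian spectral peak at f0 (Hz) of width sigma (Hz) with
    randomized phases is inverted to the time domain; the real part
    modulates the firing rate around the offset F0.
    """
    rng = np.random.default_rng(seed)
    N = int(round(T / dt))
    f = np.fft.fftfreq(N, d=dt)                     # discrete frequencies f_k
    C = np.exp(-(np.abs(f) - f0) ** 2 / (2.0 * sigma ** 2))
    C = C * np.exp(2j * np.pi * rng.random(N))      # random phases r_k
    osc = np.real(np.fft.ifft(C))                   # back to the time domain
    osc /= np.max(np.abs(osc))                      # unit-amplitude modulation (assumed)
    rate = np.clip(F0 + I * osc, 0.0, 1.0 / dt)     # truncate so 0 <= R*dt <= 1
    spikes = (rng.random(N) < rate * dt).astype(int)  # one Bernoulli draw per bin
    return rate, spikes
```

For rate × Δ*t* ≪ 1 this binomial process approaches the rate-modulated Poisson limit noted in the text.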

Here, *foreground* refers to any pixel whose mean firing rate is above baseline, whereas *background* refers to any pixel whose firing rate is equal to or below baseline. Consistent with the lack of phase-locking between experimentally recorded ganglion cells responding to spatially separated stimuli, background spike trains (i.e., events arising from pixels outside the stimulus) were not subject to oscillatory modulations, except where noted in control experiments. Coherent oscillations in response to diffuse stimuli have been reported in many species, including frog (Ishikane, Gangi, Honda, & Tachibana, 2005; Ishikane et al., 1999), mudpuppy (Wachtmeister & Dowling, 1978), rabbit (Ariel, Daw, & Rader, 1983; Yokoyama, Kaneko, & Nakai, 1964), cat (Doty & Kimura, 1963; Laufer & Verzeano, 1967; Neuenschwander, Castelo-Branco, & Singer, 1999; Neuenschwander & Singer, 1996; Steinberg, 1966), monkey (Frishman et al., 2000; Laufer & Verzeano, 1967), and humans (De Carli et al., 2001; Wachtmeister, 1998). In both frog (Ishikane et al., 1999) and cat (Neuenschwander & Singer, 1996), the relative phases of such oscillations have been shown to be sensitive to global stimulus topology; oscillations arising from simply connected regions remain coherent, whereas oscillations arising from noncontiguous regions rapidly become uncorrelated as a function of time from stimulus onset. In control experiments, the sharp distinction between foreground and background spike trains was relaxed, so that all model ganglion cells were subjected to oscillatory modulations whose mean amplitudes were always proportional to the local firing rate and which were either (1) uncorrelated with, or (2) in phase with, the foreground modulations.

The stimulus intensity was encoded by the scale factor *I*^(i) and the constant offset *F*_0^(i). These two free parameters were determined empirically so that the mean *foreground* and *background* firing rates 〈*R*^(i)〉 and standard deviations *σ*_R^(i) were given by the following relations:

where *i* = {0, 1, 2, 3, 4, 5} and the different stimulus intensities are denoted by the set {0%, 25%, 50%, 100%, 200%, 400%}, with the percentages giving the change from baseline (note that not all stimulus intensities are displayed in each figure). At all background pixels, the baseline intensity was given by *I*^(0) = 0 and *F*_0^(0) = 25 impulses per second (ips), so that 〈*R*^(0)〉 = *F*_0^(0) and *σ*_R^(0) = 0.

For *i* > 0, the methodology was complicated by the fact that negative values of *R*_n^(i)Δ*t* were truncated at zero, and values of *R*_n^(i)Δ*t* > 1 likewise saturated at 1, making it necessary to determine *I*^(i) and *F*_0^(i) empirically via an iterative procedure. This was accomplished by explicitly calculating 〈*R*^(i)〉 and *σ*_R^(i) after setting all values of *R*_n^(i)Δ*t* < 0 to zero and all values of *R*_n^(i)Δ*t* > 1 to 1, and then adjusting *I*^(i) and *F*_0^(i) so that Equations 5 and 6 were satisfied. Values of *R*_n^(i)Δ*t* were then again truncated between zero and one and the process repeated until the discrepancy from the exact equality expressed by Equations 5 and 6 was less than 0.5%.
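The iterative calibration can be sketched as a simple fixed-point loop. The update rule below is an assumption; only the truncation of *R*Δ*t* to [0, 1] and the 0.5% stopping criterion come from the text:

```python
import numpy as np

def calibrate(osc, target_mean, target_std, dt=0.001, n_iter=200):
    """Adjust offset F0 and scale I so that, after truncating R*dt
    to [0, 1], the mean and standard deviation of the firing rate R
    match their targets to within 0.5% (illustrative sketch)."""
    F0 = target_mean
    I = target_std / (np.std(osc) + 1e-12)
    for _ in range(n_iter):
        R = np.clip((F0 + I * osc) * dt, 0.0, 1.0) / dt   # truncated rate
        m, s = R.mean(), R.std()
        if abs(m - target_mean) < 5e-3 * target_mean and \
           abs(s - target_std) < 5e-3 * target_std:
            break
        F0 += target_mean - m          # nudge offset toward the target mean
        I *= target_std / (s + 1e-12)  # rescale toward the target std
    return F0, I
```

When truncation is rarely triggered the loop converges almost immediately; strong truncation makes several iterations necessary, which is the situation the text describes.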

An *r*th-order Gamma distribution, for which the Fano factor scales as 1/*r*, was used to model the inverse relationship between trial-to-trial variability and contrast (Stein, 1965). An instantiation of an *r*th-order Gamma distribution was constructed from a set of binomial distributions by choosing every *r*th spike (Stein, 1965). In all other respects, spike trains based on an *r*th-order Gamma distribution were constructed following the same procedure as described above.
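The every-*r*th-spike construction can be sketched directly (function names and defaults are illustrative; the construction itself is from Stein, 1965, as cited above):

```python
import numpy as np

def gamma_spikes(rate, r, T=1.0, dt=0.001, rng=None):
    """rth-order Gamma spike train: scale the rate by r, draw a
    binomial spike train, then keep every rth event."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = int(round(T / dt))
    p = min(rate * r * dt, 1.0)              # scaled per-bin spike probability
    raw = rng.random(n) < p                  # underlying binomial process
    idx = np.flatnonzero(raw)[r - 1::r]      # keep every rth spike
    out = np.zeros(n, dtype=int)
    out[idx] = 1
    return out

def fano(r, trials=200):
    """Fano factor of the spike count; expected to fall roughly as 1/r."""
    rng = np.random.default_rng(1)
    counts = [gamma_spikes(100.0, r, rng=rng).sum() for _ in range(trials)]
    return np.var(counts) / np.mean(counts)
```

Because each output spike requires *r* underlying events, count variability is divided by roughly *r*, reproducing the 1/*r* Fano-factor scaling used to model the contrast dependence of reliability.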

The amplitude spectrum of fixational drift declines approximately as 1/*f*, where *f* is the temporal frequency (Eizenman et al., 1985). Drift was therefore modeled by an amplitude spectrum with a low-frequency cutoff, in which *C*_k^(drift) is the Fourier amplitude of the drift at the discrete temporal frequency *f*_k, *f*_0 = 1 Hz denotes the frequency at which the spectral amplitude declines by half, *C*_0^(drift) = 3.33′ (min of arc) determined the average drift amplitude, and *r*_k is a uniform random deviate.

Tremor was modeled by adding a Gaussian peak to the amplitude spectrum at the discrete temporal frequencies *f*_k, in which *C*_k^(tremor) is the Fourier amplitude of the tremor at the discrete temporal frequency *f*_k, *C*_0^(tremor) is the peak tremor amplitude, equal to 6.66 s of arc, *f*^(tremor) = 70 Hz denotes the central tremor frequency, and *σ*_(tremor) = 15 Hz is the Gaussian width of the central peak in the tremor amplitude spectrum.

The combined amplitude spectrum, including the cutoff at *f*_0, was qualitatively similar to published power spectra from fixating human subjects (Eizenman et al., 1985). Taking the discrete inverse Fourier transform of the combined amplitude spectrum and centering on zero yielded horizontal fixational eye movements (Figure 2, middle panel) that were also qualitatively similar in size and temporal structure to those recorded from human subjects (Eizenman et al., 1985). Vertical movements (Figure 2, bottom panel) were modeled similarly using an independent set of random deviates.
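The drift-plus-tremor construction can be sketched as follows. The 1/(1 + *f*/*f*_0) drift form and the unit conventions (arcmin) are assumptions standing in for the lost display equations; the constants are those quoted in the text:

```python
import numpy as np

def fixational_trace(T=1.0, dt=0.001, seed=0):
    """One axis of a simulated fixational eye movement: a declining
    drift spectrum plus a Gaussian tremor peak near 70 Hz, with
    randomized phases, inverted to the time domain (sketch)."""
    rng = np.random.default_rng(seed)
    N = int(round(T / dt))
    f = np.fft.rfftfreq(N, d=dt)
    drift = 3.33 / (1.0 + f / 1.0)                       # halves at f0 = 1 Hz (assumed form)
    tremor = (6.66 / 60.0) * np.exp(-(f - 70.0) ** 2 / (2.0 * 15.0 ** 2))
    phases = np.exp(2j * np.pi * rng.random(f.size))     # random Fourier phases
    trace = np.fft.irfft((drift + tremor) * phases, n=N)
    return trace - trace.mean()                          # center on zero
```

An independent call with a different seed would give the second (vertical) axis, mirroring the independent set of random deviates used in the text.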

The additional horizontal cell coupling term was modeled as a Gaussian with an *e*^−1 fall-off radius of 4 pixels. For uniform stimuli, the additional term was identically zero (i.e., horizontal cell coupling had no effect), so that the modified model reduced exactly to the original model. For additional details regarding the biophysical outer retinal model, the reader is referred to the original report (van Hateren, 2005).

The grating was windowed by a Gaussian envelope with an *e*^−1 fall-off radius equal to 8 pixels. The maximum contrast of the grating was 100% and the background luminance was equivalent to 100 trolands (as represented in the biophysical outer retinal model). Initial values were established by running the model for several seconds with only uniform background illumination.

The pairwise cross-correlation based on synchronous firing, *X*_ij, was given by Equation 9, where *S*_i denotes the spike train of the *i*th neuron, either 0 or 1 at the corresponding time step, *t*_k, and 〈*S*〉_i is the single-trial average. Autocorrelations were computed identically, such that each event was treated as "synchronous" with itself, thereby preserving rate-coded information along the diagonals and setting the maximum correlation amplitude.
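The display form of Equation 9 was lost in extraction. A mean-subtracted product over simultaneous time bins is one form consistent with the description above (a reconstruction, not necessarily the paper's exact expression):

```latex
X_{ij} \;=\; \sum_{k} \bigl[S_i(t_k) - \langle S\rangle_i\bigr]\,
              \bigl[S_j(t_k) - \langle S\rangle_j\bigr].
```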

The oscillatory component of the summed local activity was referred to as the *γ*MUA*, where the asterisk serves as a reminder that this quantity is merely analogous to the band-pass-filtered multiunit activity that would be measured experimentally. The *γ*MUA*, in turn, was used to weight each event, giving positive weights to events occurring at the peaks of the *γ*MUA* and negative weights to events occurring in the troughs. The average weight of randomly distributed events was zero, because the *γ*MUA* itself had zero mean.

Each spike was weighted by the *γ*MUA* evaluated at the target pixel. Mathematically, the *γ*MUA*-based cross-correlation, Γ_ij, was given by Equation 10, where *γ*_i is a shorthand for the *γ*MUA* at the *i*th location (target pixel) and the sum is over all pairs of events, or equivalently, over all pairs of time steps, *t*_k and *t*_l, regardless of their relative timing. Note that if the multiunit measure *γ*_i is replaced by the single-unit measure *S*_i, then Equation 10 reduces to Equation 9 to within an additive constant plus higher order terms. The *γ*MUA*-based procedure for estimating spatiotemporal correlations described by Equation 10 was motivated in part by encoding schemes in which events are weighted by their time of occurrence relative to an underlying oscillation (Brody & Hopfield, 2003), only here the weights are combined into a nonlinear product.

For Poisson distributed activity, the autocorrelations computed according to Equation 10 would typically be distributed about zero, except for the contribution of each spike train to its own *γ*MUA*, which effectively ensured some residual autocorrelation.

A major advantage of the *γ*MUA*-based procedure was that it allowed correlation strengths to be estimated from only a few events based on their relative timing, regardless of whether such events were synchronous or not. Attempts to estimate oscillatory correlation strengths using conventional Fourier analysis (Kenyon, Harvey, Stephens, & Theiler, 2004), presented here as a control, were less accurate due to the extremely limited number of events available from very short spike train segments. By comparison, the *γ*MUA* permitted the instantaneous phase of the stimulus-dependent oscillation to be reliably estimated by averaging over a small neighborhood, yielding a meaningful estimate of the correlation strength from as little as a single pair of spikes.

*γ*MUA*-based reconstructions

Single-trial pairwise correlation matrices were computed using either synchrony or the *γ*MUA* method described above. To establish an absolute scale for comparison across stimulus intensities, each eigenimage was multiplied by its corresponding eigenvalue, this being the first term in a complete orthonormal expansion. To the extent that pairwise correlations were determined solely by the common oscillatory modulation, eigenimages depended only on *foreground* pixels, allowing the original stimulus to be estimated via PCA. To obtain single-trial estimates that scaled linearly with stimulus intensity, the square roots of the individual pixel values were used for all correlation-based reconstructions. Pixel values were further normalized by the average reconstruction in response to baseline activity. The first principal component was obtained using the Matlab intrinsic function EIGS, with the pairwise correlation matrix, either *X*_ij or Γ_ij, replaced with the explicitly symmetrical construction. Diagonal-only reconstructions used only the diagonal terms of the *γ*MUA*-based correlation matrix, preserving information encoded in the temporal alignment of individual spike trains with the *γ*MUA* but discarding terms that depended on the relative timing between spike pairs from different pixels. Stimuli were also reconstructed after replacing the *γ*MUA*-based weighting of each spike by a uniform weight of one, thereby preserving spatial correlations in the total number of spikes while discarding information regarding relative spike timing or oscillatory temporal structure.
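The PCA readout can be sketched as follows. The (M + Mᵀ)/2 symmetrization and the sign convention are assumptions; the eigenvalue scaling and pixelwise square root are as described in the text:

```python
import numpy as np

def reconstruct(corr):
    """First principal component of the (symmetrized) single-trial
    pairwise correlation matrix, scaled by its eigenvalue; the
    pixelwise square root then yields an image that scales linearly
    with stimulus intensity (illustrative sketch)."""
    sym = 0.5 * (corr + corr.T)            # explicitly symmetrical construction (assumed form)
    vals, vecs = np.linalg.eigh(sym)
    lead = vals[-1] * vecs[:, -1]          # eigenvalue times leading eigenvector
    if lead.sum() < 0.0:                   # eigenvectors are defined only up to sign
        lead = -lead
    return np.sqrt(np.clip(lead, 0.0, None))
```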

The probability of correct classification, *P*, expressed as a fraction of pixels correctly classified, is given by the following formula, in which *A*_overlap denotes the total area of the overlap between the two distributions and the maximum value of *A*_overlap is normalized to one. Error bars on the estimated values of *P* were determined by assuming the pixel values to either side of the Bayes discriminator obeyed binomial statistics. Error bars were always negligible and therefore omitted on semilogarithmic plots.
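Given that *A*_overlap is normalized to a maximum of one and that chance performance on a two-class task is 50%, the stated quantities are consistent with the standard two-class Bayes result (a reconstruction, not necessarily the paper's exact formula):

```latex
P \;=\; 1 - \tfrac{1}{2}\,A_{\mathrm{overlap}},
\qquad 0 \le A_{\mathrm{overlap}} \le 1 ,
```

so that *P* ranges from 0.5 (complete overlap) to 1 (fully separated distributions).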

Spike trains from a local neighborhood were summed and band-pass filtered to yield the *γ*MUA*. The *γ*MUA* was then used to weight each spike, assigning positive weights to pairs aligned with the peaks of the local oscillation and negative weights to pairs in which one of the component spikes fell in a trough. The sum over the product of all weighted spike pairs provided an estimate of the oscillatory correlations between any two spike trains that was much more sensitive than conventional Fourier analysis when only a few events were available.
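A minimal numpy sketch of this pipeline follows. The FFT-mask band-pass filter and the 60–100 Hz band are assumptions standing in for the paper's unspecified filter; only the qualitative recipe (sum, filter, weight, multiply) is reproduced:

```python
import numpy as np

def gamma_mua(spike_trains, dt=0.001, band=(60.0, 100.0)):
    """Sum the local spike trains and band-pass filter the result to
    obtain a gammaMUA*-like, zero-mean reference signal (sketch)."""
    mua = spike_trains.sum(axis=0).astype(float)
    F = np.fft.rfft(mua - mua.mean())
    f = np.fft.rfftfreq(mua.size, d=dt)
    F[(f < band[0]) | (f > band[1])] = 0.0       # keep only the gamma band
    return np.fft.irfft(F, n=mua.size)           # zero-mean weighting signal

def pair_correlation(s_i, s_j, gamma_i):
    """Sum over all spike pairs of the product of their weights, each
    spike weighted by the gammaMUA* at the target pixel: spikes at
    peaks get positive weight, spikes in troughs negative weight."""
    w_i = gamma_i[s_i.astype(bool)]              # weights of cell i's spikes
    w_j = gamma_i[s_j.astype(bool)]              # also evaluated at target pixel i
    return float(np.sum(np.outer(w_i, w_j)))
```

Spike trains locked to the same oscillatory phase as the neighborhood yield large positive correlations, while anti-phase trains yield negative ones, even when only a handful of spikes are available.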

For each correlation measure (SYNC or *γ*MUA*), stimuli were reconstructed by computing the first principal component of the corresponding pairwise correlation matrix (specifically, as the product of the largest eigenvector and eigenvalue, representing the leading term in an orthonormal expansion). Using synchrony to measure pairwise correlation strengths (Figure 3, column SYNC), the first principal component yielded a noticeable improvement over the reconstructions mediated by an independent rate code, but only at higher stimulus intensities. When the *γ*MUA* was instead used to estimate pairwise correlation strengths (Figure 3, column *γ*MUA*), the product of the largest eigenvector and eigenvalue, or first principal component, yielded dramatic improvements in stimulus reconstruction over an independent rate code across a range of intensities spanning nearly 4 log_2 units (16-fold). This improvement was quantified by measuring relative performance on the ON/OFF pixel classification task. Whereas a doubling of the firing rate supported only 73% correct classifications using a binomial distribution to model an independent rate code, an oscillatory correlation code, based on a rate-modulated binomial distribution, supported an average of 92% correct classifications for the same average increase in firing activity. Reliability also increased with stimulus intensity, with Fano factors falling to a minimum value of 0.38. However, separate control experiments showed that this reduction in trial-to-trial variability did not fully account for the improvements in *γ*MUA*-based reconstructions (Figures 10, top panel, and 12).

The *x*-axis was scaled logarithmically to better separate the distributions at low stimulus intensities, plotted here in normalized units. Vertical dashed lines indicate the discrimination thresholds used in the ON/OFF classification task (note that some thresholds were zero and thus off scale). All distributions were normalized to unity (the apparent area was distorted by the logarithmic *x*-scale).

The pixel-value distributions underlying the *γ*MUA*-based reconstructions were relatively well separated at nearly all intensities.

…log_2 units (Figure 4, middle row).

This was especially true for the more sensitive *γ*MUA*-based correlation measure, for which performance levels exceeded 90% for intensity differences between 1 and 3 log_2 units, whereas an intensity difference of 3 or more log_2 units was required to ensure the same level of performance using the rate-based reconstructions.

Discrimination thresholds for the *γ*MUA*-based reconstructions ranged from 1 to 2.5 log_2 units lower than for the rate-based reconstructions. Thus, on a pixel-by-pixel basis, spatiotemporal correlations permitted greatly superior discrimination between different stimulus intensity levels.

For the *γ*MUA*-based reconstructions (dashed line), performance increased monotonically with patch size, saturating at approximately 24 × 24 total pixels, or at 12 × 12 stimulated pixels. This latter number is in reasonable agreement with estimates of the size of redundant cell neighborhoods in the salamander retina (Puchalla et al., 2005).

Performance mediated by the *γ*MUA*-based reconstructions was either equivalent to, or for higher stimulus intensities measurably better than, that mediated by the rate-based reconstructions.

The *γ*MUA*-based reconstructions required simultaneous processing of hundreds of spike trains, corresponding to hundreds of thousands of pairwise correlations.

The effective filter, being constructed from the *product* of event pairs, was dependent on local firing activity. The preservation of substantial fine spatial detail was demonstrated implicitly in the ability to reproduce sharp edges in the correlation-based reconstructions of uniform square spots (e.g., Figure 3). When individual pixels were randomly deleted from the stimulus, thus creating an arbitrarily complex shape, *γ*MUA*-based reconstructions continued to preserve substantial spatial structure at the level of individual pixels (Figure 6).

The *γ*MUA*, on the other hand, required only the information available on single trials from the spike trains themselves. The present results demonstrate that even over very short time scales, from tens to hundreds of milliseconds, enough information may be available in the pairwise correlation matrix, especially when referenced to the local multiunit activity (i.e., the *γ*MUA*), to construct nonlinear spatial filters on the fly, thereby conferring the main advantages of spatial averaging (i.e., improved signal/noise) without eliminating fine spatial detail.

For the shortest analysis windows, the discrete frequencies used to estimate the *γ*MUA* at each target cell were quantized at 40-Hz intervals, so that periodic structure could at best be only poorly resolved. Correlation-based reconstructions were particularly accurate at higher intensities, with the first principal component becoming nearly perfect as the stimulus strength approached maximum values.

To determine how the *γ*MUA*-based reconstructions depended on the length of the analysis window, optimal theoretical performance on the ON/OFF pixel classification task was plotted as a function of spike train duration, ranging from 25 to 400 ms (Figure 8). As expected, performance for all three reconstruction methods declined as the length of the analysis window was decreased, yet even for the shortest spike trains tested, the *γ*MUA*-based reconstructions mediated substantially superior ON/OFF pixel discrimination across a nearly 16-fold range of intensities.

The *γ*MUA*-based reconstructions supported performance levels exceeding 90% in less than 100 ms, whereas nearly 400 ms were required to achieve the same level of performance using rate-based reconstruction methods. Thus, *γ*MUA*-based reconstructions required approximately 1/4 the number of events to achieve accuracy comparable to an independent rate code. Given that the signal/noise in response to slowly varying stimuli presented at low to moderate contrast generally improves as the square root of the number of events (Barlow & Levick, 1969), the present findings imply that utilizing spatiotemporal correlations approximately doubles the potential signal/noise, an improvement comparable to quadrupling the number of spikes.

Reconstruction quality was assessed as optimum performance on the ON/OFF pixel classification task, plotted as a function of stimulus intensity (Figure 10).

The first control compared (1) rate-based reconstructions using spike trains modulated by *γ*-band oscillations (Figure 10, top panel, gray-solid line) and (2) rate-based reconstructions using nonmodulated spike trains (black-solid line). Oscillatory temporal structure, by reducing the variability in the number of spikes generated over the 100-ms trial, yielded only a small improvement in signal/noise. Although retinal spike trains can exhibit significant temporal structure (Reich et al., 1997; Rodieck, 1967; Troy, Schweitzer-Tong, & Enroth-Cugell, 1995), these findings indicate that the oscillatory temporal structure employed here cannot, by itself, account for the superior quality of the *γ*MUA*-based reconstructions.

Rate-coded information was next removed from the *γ*MUA*-based reconstructions (Figure 10, top panel, dashed gray line). Although *γ*MUA*-based reconstructions were adversely affected by the loss of rate-coded information, oscillatory spatiotemporal correlations, even with the firing rate held at baseline, supported performance levels roughly equivalent to those mediated by a stationary rate code (gray-dashed vs. black-solid lines).

A further control examined the contribution of spatial correlations in the total number of spikes to the *γ*MUA*-based reconstructions. A new correlation matrix was computed, which ignored precise temporal structure while preserving spatial correlations in the total number of spikes. Foreground pixels were still subjected to oscillatory modulations, but the recomputed correlation matrix discarded all dependence on relative spike timing (Figure 10, middle panel, gray-solid vs. black-solid lines). Reconstructions based on this recomputed matrix were markedly inferior to the standard *γ*MUA*-based reconstructions. This finding is reasonable given that spatial correlations in the total number of events were not explicitly tied to the stimulus intensity.

In a complementary control, *γ*MUA*-based reconstructions were obtained using only the diagonal terms in the pairwise correlation matrix, thus preserving the autocorrelation of each spike train with the local phase of the multiunit activity but discarding all cross-correlations. Such diagonal-only constructions were markedly inferior to the standard *γ*MUA*-based reconstructions that utilized off-diagonal correlations as well (Figure 10, middle panel, gray-dashed vs. black-dashed lines), except at high stimulus intensities where differences may have been masked by saturation effects. Spatiotemporal correlations thus make a critical contribution to the superior quality of *γ*MUA*-based reconstructions, but only when both spatial correlations and precise temporal structure are simultaneously taken into account.

A final control assessed the sensitivity of the *γ*MUA*-based reconstruction technique, which allowed pairwise correlations to be estimated from only a limited number of events. A new pairwise correlation matrix was computed by performing a conventional Fourier analysis of each spike train segment. Correlation strengths were estimated by the peak in the cross-power spectra between 60 Hz and 100 Hz (given by the product of the Fourier amplitudes from the two spike trains) multiplied by the cosine of the difference in relative phase at the peak frequency. The dependence on the relative phase ensured that synchronous oscillatory activity corresponded to a positive correlation, whereas activity that was out of phase corresponded to a negative pairwise correlation. Taking the average of all such products over the interval 60 Hz to 100 Hz, roughly corresponding to the area under the peak, yielded equivalent results (data not shown). Reconstructions based on a pairwise cross-power spectral analysis were greatly inferior to the *γ*MUA*-based reconstructions (Figure 10, bottom panel, gray-dashed vs. black-dashed lines) but were slightly better than SYNC-based reconstructions (gray-dotted curve).

The *γ*MUA*, by averaging over a local neighborhood containing multiple cells, yielded an estimate of the instantaneous phase of the common oscillation that allowed a meaningful weight to be assigned to every interspike interval. Thus, by taking into account temporal structure linking cells responding to the same stimulus, the oscillatory component of the local multiunit activity permitted an estimate of the correlation strength to be obtained for each pair of events.

Regardless of whether the background modulations were uncorrelated with the foreground modulations (*γ*MUA*-I) or whether the same waveform (rescaled) was used for both regions (*γ*MUA*-II), reconstructions based on oscillatory spatiotemporal correlations were superior to rate-based reconstructions, except at low stimulus intensities. At higher stimulus intensities, the difference in pairwise correlation strength due to the difference in absolute modulation amplitude was sufficient to permit reasonable discrimination between foreground and background pixels, despite the presence of relatively strong oscillatory modulations across the entire image, even when such modulations were in phase with the foreground modulations.

An *r*th-order Gamma distribution, in which the Fano factor scales as 1/*r* (Stein, 1965), was used to generate artificial spike trains whose trial-to-trial variability could be specified independently of pairwise correlation strength (Figure 12).

The *r*-values were set equal to the superscript *i* denoting stimulus intensity in Equations 3–6, yielding Fano factors that ranged from 1.0 down to 0.18. Compared to spike trains derived from a stationary binomial distribution, reductions in trial-to-trial variability resulting from using an *r*th-order Gamma distribution yielded quantitative improvements in rate-based reconstructions at all stimulus intensities tested (Figure 12, RATE-II vs. RATE). When a common oscillatory modulation was superimposed on an *r*th-order Gamma distribution, further improvements in rate-based reconstructions were again observed (Figure 12, RATE-III vs. RATE-I). Here, an *r*th-order, oscillatory Gamma distribution was constructed from a rate-modulated binomial process by taking every *r*th spike, after scaling the initial instantaneous firing rate by a factor of *r*. Despite improvements in rate-based reconstructions due to increased reliability of the underlying spike trains, reconstructions based on oscillatory pairwise correlations remained superior at all stimulus intensities (Figure 12, *γ*MUA*-III). The superiority of the *γ*MUA*-III-based reconstructions demonstrates that intrinsic reliability can work in concert with extensive pairwise correlations to yield greater reductions in trial-to-trial variability than would result from either mechanism alone.

In the experiments described above, the spatiotemporal correlations were specified *a priori*. Although specifying the spatiotemporal correlations explicitly permitted a more detailed analysis, it also reduced the generality of the results. To investigate whether the above findings were dependent on the details of a particular set of mathematical assumptions, a final set of experiments was conducted in which spatiotemporal correlations were produced implicitly through the action of a completely different yet well-established physiological mechanism, namely, the rapid translation of images across the retina due to fixational eye movements.

Reconstructions based on the *γ*MUA* were relatively poor, as there was little narrow-band oscillatory energy in the model spike trains (results not shown). Performance on the ON/OFF pixel discrimination task was effectively at chance for the rate-based reconstructions, in part due to the use of graded as opposed to binary images. Performance on the same ON/OFF pixel discrimination task was substantially better using the reconstructions based on pairwise synchrony, which reflected the preservation of fine spatial detail. Previously, it had been demonstrated that fixational eye movements can improve visual perception (Rucci, Iovin, Poletti, & Santini, 2007b), and associated modeling results suggest a critical role for spatiotemporal correlations between retinal neurons (Poletti & Rucci, 2008). Consistent with previous studies, the present findings suggest that the spatiotemporal correlations resulting from intersaccadic fixational eye movements encode useful pixel-by-pixel information about visual stimuli not present in the time-averaged response at each location.

Oscillatory correlations *per se* do not impose any particular order of firing within any given cycle. Similarly, spatiotemporal correlations are not inconsistent with evidence that synchronous spikes can encode higher spatial resolution (Schnitzer & Meister, 2003), as the short-range synchrony postulated to underlie spatial hyperacuity may involve different, although possibly overlapping, synaptic mechanisms (Meister & Berry, 1999) from those responsible for long-range synchrony (Kenyon et al., 2003). Finally, optimal predefined filters, especially those that take account of overlapping ganglion cell surrounds (Stanley et al., 1999; Warland et al., 1997), would likely provide additional intensity information to the explicitly nonlinear, synergistic encoding scheme examined here. Although extreme synergy could not produce reliable reconstructions at lower stimulus intensities, there remains the intriguing possibility that a combination of retinal mechanisms, including fixational eye movements, long-range oscillations, latency codes, and specialized filters, could collectively explain the vividness of visual perception even at low contrast.

*In vitro* preparations also lack long-range spatiotemporal correlations likely to result from high-frequency fixational eye movements (Martinez-Conde et al., 2004), which theoretical models indicate may tap into high-frequency resonant circuitry (Miller, Denning, George, Marshak, & Kenyon, 2006). Even in the absence of a contribution from resonant circuitry in the inner retina, the present findings suggest that the spatiotemporal correlations generated by fixational eye movements are by themselves sufficient to convey precise pixel-by-pixel intensity information in an extremely synergistic manner. A fundamental assumption of the extreme synergy hypothesis is that retinal mechanisms exist to produce useful, long-range correlations in response to natural stimuli.

*Israel Journal of Medical Sciences*, 18, 83–92.

*Vision Research*, 11, 1135–1146.

*Vision Research*, 23, 1485–1493.

*Vision Research*, 45, 1459–1469.

*The Journal of Physiology*, 200, 1–24.

*Proceedings of the National Academy of Sciences of the United States of America*, 94, 5411–5416.

*Journal of Computational Neuroscience*, 12, 165–174.

*Neuron*, 37, 843–852.

*Nature*, 371, 70–72.

*The Journal of Physiology*, 228, 649–680.

*Nature Neuroscience*, 1, 501–507.

*Nature*, 381, 610–613.

*The Journal of Physiology*, 168, 205–218.

*Pattern classification*. New York: Wiley.

*Vision Research*, 25, 1635–1640.

*Journal of Neurophysiology*, 94, 119–135.

*Documenta Ophthalmologica*, 100, 231–251.

*Journal of Neuroscience*, 26, 2088–2100.

*Advances in neural information processing systems 11: Proceedings of the 1998 conference* (pp. 111–117). Cambridge, MA: MIT Press.

*Science*, 319, 1108–1111.

*PLoS ONE*, 2, e871.

*Journal of Neuroscience*, 25, 10299–10307.

*Vision Research*, 29, 1095–1101.

*Science*, 310, 863–866.

*Independent component analysis*. New York: Wiley.

*Nature Neuroscience*, 8, 1087–1095.

*Visual Neuroscience*, 16, 1001–1014.

*Neuron*, 27, 635–646.

*Advances in neural information processing systems 2* (pp. 141–148). San Mateo, CA: Morgan Kaufmann.

*Dynamic segmentation of gray-scale images in a computer model of the mammalian retina.* Paper presented at Proceedings of SPIE: Applications of Digital Image Processing XXVII, Denver.

*Visual Neuroscience*, 20, 465–480.

*Biological Cybernetics*, 67, 133–141.

*Neural Computation*, 16, 2261–2291.

*Vision Research*, 46, 1762–1776.

*Journal of Neuroscience*, 25, 5195–5206.

*Vision Research*, 7, 215–229.

*Nature Neuroscience*, 1, 36–41.

*Journal of Vision*, 8(14):28, 1–16, http://journalofvision.org/8/14/28/, doi:10.1167/8.14.28.

*Nature Reviews Neuroscience*, 5, 229–240.

*Neuron*, 22, 435–450.

*Visual Neuroscience*, 23, 779–794.

*Journal of Neuroscience*, 25, 4207–4216.

*Vision Research*, 39, 2485–2497.

*Nature*, 379, 728–732.

*Nature*, 411, 698–701.

*Neuron*, 47, 739–750.

*Proceedings of the Royal Society B: Biological Sciences*, 266, 1001–1012.

*Journal of Neurophysiology*, 92, 1023–1033.

*Journal of Vision*, 8(14):4, 1–15, http://journalofvision.org/8/14/4/, doi:10.1167/8.14.4.

*Nature Neuroscience*, 7, 621–627.

*Neuron*, 46, 493–504.

*European Journal of Neuroscience*, 10, 1856–1877.

*Investigative Ophthalmology and Visual Science*, 44, 3233–3247.

*Science*, 294, 2566–2568.

*Network*, 10, 341–350.

*Science*, 278, 1950–1953.

*Journal of Neurophysiology*, 89, 2810–2822.

*Journal of Cognitive Neuroscience*, 11, 300–311.

*Nature*, 447, 851–854.

*Proceedings of the National Academy of Sciences of the United States of America*, 101, 6722–6727.

*Nature*, 440, 1007–1012.

*Neuron*, 37, 499–511.

*Current Opinion in Neurobiology*, 4, 569–579.

*Journal of Neuroscience*, 29, 5022–5031.

*Neuron*, 24, 49–65, 111–125.

*Biophysical Journal*, 5, 173–194.

*Biological Cybernetics*, 95, 327–348.

*Nature*, 381, 520–522.

*Experimental Brain Research*, 93, 383–390.

*Visual Neuroscience*, 9, 535–553.

*Visual Neuroscience*, 12, 285–300.

*Nature*, 394, 179–182.

*Journal of Neurophysiology*, 92, 780–789.

*Nature*, 373, 515–518.

*Journal of Vision*, 5(4):5, 331–347, http://journalofvision.org/5/4/5/, doi:10.1167/5.4.5.

*Neural Computation*, 13, 1255–1283.

*Journal of Neurophysiology*, 90, 431–443.

*Progress in Retinal and Eye Research*, 17, 485–521.

*Journal of Neuroscience*, 24, 9067–9075.

*Trends in Neurosciences*, 9, 193–198.