**Using an apparent visual motion stimulus with motion energies limited to specific separations in space and time, we study the computational structure of wide-field motion sensitive neurons in the fly visual brain. There is ample experimental evidence for correlation-based motion computation in many biological systems, but one of its central properties, namely that the response is proportional to the product of two bilocal signal amplitudes, remains to be tested. The design of the apparent motion stimuli used here allows us to manipulate the amplitudes of the bilocal input signals that serve as inputs to the computation. We demonstrate that the wide-field motion response of H1 and V1 neurons indeed shows bilinear behavior, even under contrast sign reversal, as predicted. But the response also varies inversely with contrast variance, an effect not described by the correlator operation. We also quantify the correlator contributions for different spatial and temporal separations. With suitable modification, the apparent motion stimuli used here can be applied to a broad range of neurophysiological as well as human psychophysical studies on motion perception.**

Genetic targeting in *Drosophila* (Gal4-UAS system) has identified specific neurons in the medulla region that correlate signals derived from neighboring spatial locations (Schnell, Raghu, Nern, & Borst, 2012). The reverse-phi illusion, which induces a robust optomotor response in flies, can also be explained by a correlation-type nonlinearity (Tuthill, Chiappe, & Reiser, 2011). Although these findings lend credence to the correlation scheme, it remains unclear to what extent correlation-based nonlinearities dominate visual motion computation under the more complex visual conditions experienced during natural flight. It is known, for example, that variability in the spatial structure of the input, the speed of movement, and contrast substantially modulate the gain and time course of the response to motion (Egelhaaf & Borst, 1989; Maddess & Laughlin, 1985; O'Carroll, Barnett, & Nordstrom, 2011). Furthermore, there is ample evidence for various adaptive mechanisms in motion processing (Brenner, Strong, Koberle, Bialek, & de Ruyter van Steveninck, 2000; de Ruyter van Steveninck, Zaagman, & Mastebroek, 1986; Fairhall, Lewen, Bialek, & de Ruyter van Steveninck, 2001; Harris, O'Carroll, & Laughlin, 1999; Maddess & Laughlin, 1985).

Blowflies (*Calliphora vicina*) were captured in traps set outdoors. They were then transferred inside, where they were housed in an enclosed arena and supplied with sugar, water, tomato juice, and dry protein. The temperature was controlled at 21°C and the humidity maintained at approximately 60%. The ambient light alternated between on and off in a 12-hr cycle. To prepare a fly for an experiment, its wings and legs were immobilized with wax, after which it was placed in a plastic holder such that its head protruded. At the back of the head, an incision was made with a razor blade and a small piece of integument was surgically removed. Excess fat and some air sac membrane were removed from the superficial layers, and a muscle in the ventrolateral region was cut to prevent large twitches and their associated electrical interference during extracellular recording. The proboscis was kept free for fluid intake during the experiment. The holder containing the fly was placed on a goniometer platform, which allowed initial adjustments of the azimuth and elevation angles of the fly's eye. A stand-mounted movable Nikon SMZ460 optical microscope (Nikon Instruments Inc., Melville, NY) was used to view the fly from the back. We used both male and female flies for our experiments. The responses were cross-validated across at least four flies, and the results shown here correspond to a single H1 or V1 neuron with the clearest response behavior.

*μ*m and 3 MΩ impedance were used to record differential voltage, which was subsequently low-pass filtered by an amplifier and discretized by a window discriminator (World Precision Instruments, Inc., Sarasota, FL). The discriminator pulses were time-stamped at 10 *μ*s resolution by a National Instruments PCI-6259 Data Acquisition Card (National Instruments Corporation, Austin, TX) and stored on the computer for analysis. Recordings were made only from clear, isolated spikes with amplitude exceeding 250 *μ*V, such that the amplitude remained at least five times above the baseline voltage fluctuations. To prevent tissue desiccation, the fly was fed sugar solution between experiments.

^{−2}. At mean radiance this produces of order 5 × 10^{4} absorbed photons per second in each photoreceptor. The image on the screen consists of 827 pixels arranged in a hexagonal grid that extends 38° along the horizontal direction and 44° along the vertical direction. The frame covers approximately 8% of the total angular visual field and 15% of the angular field of H1 (Krapp & Hengstenberg, 1997). Angular distortions resulting from projection onto a flat 2-D screen were corrected to first order by increasing the local raster spacing for pixels farther from the center, following a gnomonic projection (Coxeter, 1969). The distance between the screen and the fly was adjusted to match the projection of the screen pixel raster onto the angular pitch of the fly's ommatidial raster (see the section on the Nyquist experiment under Results).

*μ*s, 4% of its maximum in 279 *μ*s, and 1.4% of its maximum in 569 *μ*s (Supplementary Figure 1A, B). At 2 ms, the residual luminance is 0.04%. Note that the decay of residual luminance has a long tail that persists for several milliseconds. This leads to temporal averaging of luminance signals, causing a reduction in the stimulus contrast. Using the average pixel luminance over the 2 ms period (Supplementary Figure 1A) and the maximum and minimum luminance at the onset and at the end of the 2 ms period, respectively, we find that the upper bound of the overall effect on contrast (according to the Michelson contrast definition) is 1.24%. For all practical purposes, then, the reduction in contrast is negligible compared to the actual stimulus contrast and is therefore unlikely to affect the neural response to motion.

A pair of locations separated by Δ*r⃗*, showing identical temporal intensity sequences with a delay Δ*t* between them, contains motion information consistent with velocity *v⃗* = Δ*r⃗* / Δ*t*. Such a bilocal signal would be detected by a correlator with an input span of Δ*r⃗*. By tiling the visual field with a superposition of such bilocal signals, we create a wide-field version of this stimulus. As the basis for the stimuli in this study, we use spatiotemporal white noise by drawing a time series of spatial 2-D random values, *I*_{0}(*r⃗*, *t*), from an independent and identically distributed (IID) ensemble. Here, *r⃗* and *t* represent the spatial coordinate and time, respectively. The spatiotemporal autocorrelation function of *I*_{0}(*r⃗*, *t*) is a delta function in space and time. The displayed stimulus *I*(*r⃗*, *t*) superposes this noise field, weighted by contrast amplitude *m*_{1}, with a copy of itself weighted by *m*_{2} and shifted by (Δ*r⃗*, Δ*t*). Its autocorrelation therefore contains a central peak at (*r⃗*, *t*) = (0,0) and a pair of satellite peaks, located at (Δ*r⃗*, Δ*t*) and (−Δ*r⃗*, −Δ*t*), with identical heights *m*_{1} · *m*_{2} (see Figure 1). The central peak describes the random flicker component in the stimuli, with flicker contrast variance proportional to *m*_{1}^{2} + *m*_{2}^{2}. The pair of satellite peaks carries a motion signal of strength *m*_{1} · *m*_{2}, with apparent velocity *v⃗*_{app} = Δ*r⃗* / Δ*t*. Displaying *I*(*r⃗*, *t*) as a space–time sequence of intensities produces a strong impression of wide-field motion superposed on a flickering background. For brevity, we will refer to the product term *m*_{1} · *m*_{2} as “stimulus motion energy” and the sum *m*_{1}^{2} + *m*_{2}^{2} as “stimulus flicker energy.” Here, energy is defined in units of the product of two contrast parameters, consistent with the definition originally proposed by Adelson and Bergen (1985). The stimulus parameters are chosen to have zero mean intensity and maximum contrast. The values for *I*_{0} are drawn from a binary distribution such that they are either −1/2 or +1/2. The constraint |*m*_{1} + *m*_{2}| ≤ 2 ensures that *m*_{1} and *m*_{2} always have real values and contrast lies in the range [−1, 1]. For clarity, we omit from our equations an additive constant mean intensity, numerically equal to 1. We can ignore this constant term since it does not contain motion information and plays no role in our analyses.
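The construction above can be sketched in a few lines. The grid size, random seed, and wrap-around shift (np.roll) below are illustrative choices rather than the experimental implementation, which uses a hexagonal raster; the sketch only shows how the echo term sets the motion and flicker energies.

```python
import numpy as np

rng = np.random.default_rng(0)

def apparent_motion_stimulus(n_x, n_t, m1, m2, dx, dt):
    """Binary white noise I0 in {-1/2, +1/2} plus an echo shifted by
    (dx, dt) grid steps: I = m1*I0(x, t) + m2*I0(x - dx, t - dt)."""
    i0 = rng.choice([-0.5, 0.5], size=(n_t, n_x))
    # np.roll wraps at the edges -- a simplification of the displayed stimulus
    echo = np.roll(np.roll(i0, dt, axis=0), dx, axis=1)
    return m1 * i0 + m2 * echo

stim = apparent_motion_stimulus(n_x=64, n_t=500, m1=0.8, m2=0.8, dx=1, dt=4)
# Motion energy m1*m2 = 0.64; flicker energy m1^2 + m2^2 = 1.28;
# pixel variance = (m1^2 + m2^2) * var(I0) = 1.28 * 0.25 = 0.32
```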

**Figure 1**

Each random sample is displayed for a persistence time *T*_{p}, with values close to the photoreceptor integration time, typically *T*_{p} = 4 − 8 ms. *T*_{p} determines the time for which a particular random sample *I*_{0} remains on the screen, which can be formalized as a rectangle function (Bracewell, 2000). The temporal autocorrelation of *I*_{0} is then obtained by convolving the rectangle function Π(*t*) with itself, resulting in a triangle function Λ(*t*) (Bracewell, 2000), while the spatial part remains a delta function *δ*(*r⃗*). With finite *T*_{p}, the peaks of the intensity cross-correlation (Equation 2) are broadened in time over a width set by *T*_{p}, as illustrated in Figure 1. Note that each of the two components of *I*(*r⃗*, *t*), that is, the original and its echo, persists for a time *T*_{p}. Our setup, however, allows us to manipulate the values of Δ*t* in increments of the 2 ms frame time, independently of the choice of *T*_{p}. In our experiments we found an overall increase in response magnitude with values of *T*_{p} in the 4–8 ms range, consistent with the idea that increasing *T*_{p} increases the effective contrast for the fly. We did not notice any qualitative changes in other response properties with values of *T*_{p} in the 2–8 ms range (not shown). Supplementary movies MVI_m0.2_f0.4, MVI_m0.2_f2.0, MVI_m0.8_f2.0, and MVI_m0.8_f1.6_pos_neg show different stimulus realizations obtained by manipulating the contrast parameters {*m*_{1}, *m*_{2}} and the time offset Δ*t* (see Supplementary Materials).
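The triangle-shaped temporal autocorrelation Λ(*t*) can be verified numerically for a sample-and-hold noise sequence; the frame counts and seed below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
T_p = 4                       # persistence time in display frames (arbitrary)
n = 4000 * T_p
# Sample-and-hold noise: each draw from {-1/2, +1/2} is held for T_p frames
held = np.repeat(rng.choice([-0.5, 0.5], size=n // T_p), T_p)

def autocorr(x, lag):
    """Unnormalized autocorrelation estimate at a given lag."""
    return float(np.mean(x[:len(x) - lag] * x[lag:]))

# Lambda(t): correlation falls off linearly from var(I0) = 1/4 to zero at T_p
acf = [autocorr(held, k) for k in range(T_p + 1)]
```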

The pixel raster on the screen has a nearest-neighbor spacing of Δ*r* ≅ 1.8 mm. The column-to-column separation Δ*x* = Δ*r* · cos(30°) ≅ 1.5 mm defines the angular spacing in the horizontal direction (*x* axis in Figure 6A), and the row-to-row separation Δ*y* = Δ*r* / 2 ≅ 0.9 mm defines the angular spacing in the vertical direction (*y* axis in Figure 6A). The ommatidial raster of the fly's eye samples space in a hexagonal array with angular spacing Δ*φ*; its precise value varies from fly to fly, but is typically of the order Δ*φ* ≅ 1.5°. With these parameters, the pitch of the stimulus array matches the fly's angular spacing at a fly-to-screen distance of *D* = Δ*r* / tan(Δ*φ*) ≅ 70 mm. The photoreceptor point spread function (PSF; Snyder, Stavenga, & Laughlin, 1977) admits spatial frequencies higher than the raster's Nyquist limit, leading to aliasing of high spatial frequencies. This effect can be measured in motion sensitive neurons when the fly is presented with moving patterns at spatial frequencies high enough to induce a reverse response (Götz, 1964). We use this effect to adjust the visual stimulus to the fly's sampling raster. The horizontal and vertical projections of the interommatidial separation are Δ*φ*_{x} = Δ*φ* · cos(30°) and Δ*φ*_{y} = Δ*φ* / 2 (see also Figure 3). The test pattern consisted of lines with a spacing Δ*l* of exactly one quarter of the horizontal projection: Δ*l* = Δ*x* / 4. For H1, the fly's motion response was tested using bar patterns of spatial wavelength *λ*_{s} equal to 6, 8, and 12 lines, or *λ*_{s} = 1.5 · Δ*x*, 2 · Δ*x*, and 3 · Δ*x*, respectively. The correct match was found by adjusting the fly-to-screen distance such that there was no difference in response to motion in the preferred and null directions for *λ*_{s} = 2 · Δ*x*. For H1 recordings, the pattern was moved along the horizontal axis alternately in the preferred and null directions for 4 s each, repeated across 50 trials. For V1, the rectangular pixel grid was rotated by 90°, the spatial wavelengths were scaled appropriately, and the distance was adjusted while presenting vertical movement. Because the motion response of tangential cells decreases at high temporal frequencies (Hausen & Egelhaaf, 1989), the stimulus velocity for this experiment was kept relatively low, at about 6°/s.
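The matching geometry reduces to simple trigonometry; the sketch below uses the nominal values quoted above (Δ*r* = 1.8 mm, Δ*φ* = 1.5°), not a calibration for any individual fly.

```python
import math

delta_r_mm = 1.8        # pixel raster pitch on the screen
delta_phi_deg = 1.5     # nominal interommatidial angle

# Raster pitch matches the eye when each pixel spacing subtends delta_phi:
D_mm = delta_r_mm / math.tan(math.radians(delta_phi_deg))   # about 70 mm

# Inverse relation, as used after the Nyquist adjustment:
phi_deg = math.degrees(math.atan(delta_r_mm / D_mm))
```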

In each arm of the correlator, the signal from one input is passed through a delay filter *G*(*t*) and then multiplied by the instantaneous signal of the other arm. The signals from the mirror-symmetric arms are subtracted after multiplication to produce an antisymmetric, direction-selective output (Figure 2A). The wide-field response can be modeled by replicating the correlator structure over the visual field and integrating the local outputs over the entire array of detectors. The inputs of a correlator can span the nearest-neighbor distance, but can also be separated by multiple raster spacings; in general, the motion response can be thought of as a weighted sum over correlator populations with different spans. To mimic the fly's optics and make the model physiologically relevant, we introduce a spatial filter *S*(*x*) that represents the point spread function (PSF) of the photoreceptors (Land, 1997; Snyder et al., 1977), a linear temporal filter *F*(*t*) representing the photoreceptor impulse response, and a linear temporal delay filter *G*(*t*) (Figure 2A). Further details about the functional form and parametrization of the filters can be found in the section Correlator response to apparent motion stimuli (below). More filters can, in principle, be added, for example in the cross-arms. However, because filtering in the model is treated as linear, the properties of the cross-arm filter can be accounted for by appropriate compensations of *F*(*t*) and *G*(*t*). We expect the model to be rich enough to describe correlator behavior in our experiment, and therefore refrain from using more complex elaborations of the HRC model (van Santen & Sperling, 1985).
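A minimal discrete-time sketch of this correlator, assuming a first-order low-pass as the delay filter *G* and ignoring the spatial filter *S* and front-end filter *F*, reproduces the basic direction selectivity for an apparent motion input:

```python
import numpy as np

def exp_filter(x, tau=0.030, dt=0.002):
    """Causal first-order low-pass; a discrete stand-in for the delay filter G."""
    a = dt / (tau + dt)
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def hrc_output(s1, s2):
    """Hassenstein-Reichardt correlator: the delayed signal of each arm is
    multiplied by the undelayed signal of the other; mirror arms subtract."""
    return np.mean(exp_filter(s1) * s2 - exp_filter(s2) * s1)

# Apparent motion from input 1 toward input 2: receptor 2 sees the same
# random sequence as receptor 1, delayed by a few frames.
rng = np.random.default_rng(2)
x = rng.choice([-0.5, 0.5], size=5000)
delay = 4
preferred = hrc_output(x[delay:], x[:-delay])  # arm 2 lags arm 1
null = hrc_output(x[:-delay], x[delay:])       # motion reversed
```

Because the output is antisymmetric in its two inputs, reversing the stimulus direction exactly flips the sign of the mean response.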

**Figure 2**

**Figure 3**

The model comprises a spatial filter *S* that represents the photoreceptor PSF, a temporal filter *F* that represents the impulse response of the photoreceptor, and a delay filter *G* that is approximated by a first-order exponential (Figure 2A). For ease of notation, we also limit space to one dimension. Light intensity as a function of space and time is given by *I*(*x*, *t*); the PSF of the correlator centered at *x* = 0 is given by *S*(*x*), and the PSFs of the correlator inputs centered at *x* = −Δ*φ* / 2 and *x* = Δ*φ* / 2 are given by *S*_{1} and *S*_{2}, respectively. Here Δ*φ* is the span, or the angular distance between the two correlator inputs. The linear spatial and temporal filters transform the signals as *u*_{1,2} = *F* ⊗ (*S*_{1,2} ⊗ *I*) and *v*_{1,2} = *H* ⊗ (*S*_{1,2} ⊗ *I*), where ⊗ represents convolution, *H* is defined as the convolution of *G* and *F*, and the indices of *u*, *v*, and *S* identify the two correlator inputs (Figure 2A). The output, *R*, of the correlator after the multiplication stage is the difference of the two products *R*_{+} = *v*_{1} · *u*_{2} and *R*_{−} = *v*_{2} · *u*_{1}, where “·” represents multiplication and *R* is a function of *x* and *t*. By averaging the response over time and space, we obtain the equivalent wide-field averaged response of the correlator system. Focusing on one branch of the correlator, and using the fact that the stimulus autocorrelation satisfies Θ_{II}(*ζ*, *η*) = Θ_{II}(−*ζ*, −*η*), the calculated response of the correlator is always antisymmetric in both time and space separately, and thus symmetric under joint exchange of space and time. This implies that we get the same outcome for the inner product if we define the receptive field Γ_{corr}(*ζ*, *η*) as a sum of its regular and time-reversed parts. The resulting Γ_{corr}(*ζ*, *η*) is shown in Figure 3, along with its spatial and temporal components. Note that this derivation is not limited to the stimuli we use in our experiment, but applies generally to wide-field motion stimuli with constant velocity.

The spatial filter *S*(*x*) is approximated by a Gaussian of standard deviation *σ* = 0.51° (Smakman et al., 1984), the temporal filter *F*(*t*) is approximated by a log-normal function *K* · exp[−{log(*t* / *t*_{p})}^{2} / 2*s*^{2}] with parameter values *t*_{p} = 15 ms and *s* = 0.23 chosen from photoreceptor measurements (Howard, Dubs, & Payne, 1984; Payne & Howard, 1981), and the temporal delay filter *G*(*t*) is approximated by a first-order exponential (1 / *τ*)*e*^{−*t* / *τ*} (Srinivasan, 1983), where *τ* can range from a few tens of milliseconds to hundreds of milliseconds (de Ruyter van Steveninck et al., 1986; Harris et al., 1999).

As a function of Δ*t*, the predicted response exhibits an initial transient, followed by a peak and an exponential decay (Figure 3A, B). A small time constant *τ* = 10 ms is associated with a sharper transient and smaller peak latency (Figure 3A) compared to a larger time constant *τ* = 300 ms (Figure 3B). Note that the correlator's response qualitatively matches the responses of H1 and V1 to changes in Δ*t* (see Figure 6D, E). A positive response in quadrants I and III of the energy space indicates excitation from preferred motion, while a negative response in quadrants II and IV indicates suppression from null motion. The temporal response profile is symmetric about the vertical axis (Figure 3A, B, gray projection to the right) because the filters in the two correlator arms are identical. This, however, need not be true for an actual biological correlator, where different gain values in the two arms for signal increments and decrements may lead to asymmetric response curves (Borst & Egelhaaf, 1990). The spatial response profile is a difference of two Gaussians centered at the locations of the two detectors, where each Gaussian represents the convolution of the photoreceptor PSFs (Figure 3A, B, gray projection to the left). Together, these define the 3-D landscape of the full receptive field of the correlator, Γ_{corr}(*ζ*, *η*).
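The filter parametrization above can be written out directly; the time grid, the unit-area normalization standing in for the free gain *K*, and the choice *τ* = 10 ms are assumptions for the sketch, taken from the values quoted in the text.

```python
import numpy as np

dt = 0.001                       # time step in seconds
t = np.arange(1, 400) * dt       # start at dt to keep log(t) finite

def lognormal_F(t, t_p=0.015, s=0.23):
    """Photoreceptor impulse response: K * exp(-(log(t/t_p))^2 / (2 s^2))."""
    f = np.exp(-np.log(t / t_p) ** 2 / (2 * s ** 2))
    return f / f.sum()           # normalization fixes the free gain K

def exp_G(t, tau=0.010):
    """Delay filter (1/tau) * exp(-t/tau)."""
    g = np.exp(-t / tau)
    return g / g.sum()

F = lognormal_F(t)
G = exp_G(t)
H = np.convolve(G, F)[:len(t)]   # delayed-arm filter H = G conv F

peak_F = t[np.argmax(F)]         # log-normal peaks at t_p
peak_H = t[np.argmax(H)]         # the delay filter shifts the peak later
```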

The stimulus is parametrized by the offsets Δ*r⃗* and Δ*t* and the amplitudes *m*_{1}, *m*_{2}. So, by measuring the response of the cell as a function of Δ*r⃗* and Δ*t*, keeping *m*_{1} · *m*_{2} constant, we essentially sample the surface of the correlator receptive field at those locations, as long as the persistence time *T*_{p} remains small enough. If we vary *m*_{1} and *m*_{2}, the response is predicted to scale in proportion to *m*_{1} · *m*_{2}.

The fly's eye samples the visual field in a hexagonal raster with period Δ*φ*, where Δ*φ* is the angular spacing between neighboring ommatidia. For the blowfly, this spacing is of the order of ≈1.5°. If the stimulus contains spatial frequencies higher than 1 / (2Δ*φ*), it leads to aliasing; this effect can be measured in motion sensitive neurons, which show a reverse response when the fly is presented with moving patterns at high enough spatial frequencies (Götz, 1964). We exploit this response behavior to map the visual stimulus to the fly's sampling raster.

The test pattern consisted of lines with spacing Δ*l* = Δ*x* / 4. The fly-to-screen distance was adjusted so that the pattern with a spatial wavelength of 8 lines produced equal responses to the preferred and null directions of horizontal movement. This criterion resulted in an adjusted eye-to-screen distance (*D*) of about 78 mm, with a standard deviation of 5 mm across the four flies tested. From the spacing of lines on the screen (Δ*l* ≈ 0.45 mm), we compute Δ*φ*_{x} = tan^{−1}(4 · Δ*l* / *D*) ≅ 1.3°. As an additional check, and also to verify the optical quality of the fly's eye, we used spatial wavelengths of 6 and 12 lines. In this condition, the neuron is expected to show a reverse response for *λ*_{s} = 1.5 · Δ*x* (6 lines) because of aliasing and a normal response for *λ*_{s} = 3 · Δ*x* (12 lines), which is what we found: H1 produced a reverse and a normal response at these wavelengths, with rates differing by a factor of approximately 5 in both cases (Figure 4). The test for the Nyquist response was also conducted on V1, using the same pixel grid rotated 90° about its center.

**Figure 4**

**Figure 5**

**Figure 6**

Apparent motion was presented with Δ*t* = 8 ms. Each trial consists of 2.5 s of apparent motion in the preferred direction (i.e., +*X* for H1 and −*Y* for V1), followed by 2.5 s of pure flicker. To generate the pure flicker condition, we set Δ*t* to 200 ms, which is experimentally equivalent to Δ*t* → ∞ (see stimulus movie clips in Supplementary Materials). For clarity, we shall use units of column separation (Δ*φ*_{x} ≅ 1.3°) and row separation (Δ*φ*_{y} ≅ 0.75°) to refer to the spatial offsets in the (*x*, *y*) coordinate system (Figure 6A), which in matched conditions correspond to the (Δ*x*, Δ*y*) values of the raster displayed on screen.

For H1, the excess firing rate reaches its maximum of 29 spikes/s at (Δ*x*, Δ*y*) = (1, ±1), with no noticeable difference in response between the two values of Δ*y*. This indicates that the largest contribution to the motion response comes from nearest-neighbor interactions (Figure 6B). The excess firing rate drops from 29 spikes/s to 21 spikes/s as the column separation increases from 1 to 2, and to 8 spikes/s at a column separation of 3. For larger column separations, the response was indistinguishable from the pure flicker response. For V1, the excess firing rate rises to a maximum of 25 spikes/s at (Δ*x*, Δ*y*) = (0, 2) (Figure 6C). For the nearest-neighbor pixels oriented at ±60° about the vertical axis, that is, with (Δ*x*, Δ*y*) = (±1, 1), the excess firing rate reduces to 17 spikes/s. With Δ*y* = 3, the response diminishes further to 14 spikes/s. This indicates that the strongest excitation of V1 comes from nearest-neighbor interactions with velocity in the preferred direction (downward).

So far, we varied the spatial offset Δ*r⃗*, keeping Δ*t* constant. But apparent velocity depends on both Δ*r⃗* and Δ*t*. Further, the HRC model predicts a response dependence on Δ*t*, formalized in Equation 20, where the stimulus autocorrelation term Θ_{II} depends on Δ*t*. To examine the temporal response properties, we chose the following set of values for Δ*t*: 4, 8, 12, 16, and 20 ms for each spatial offset described in Figure 6B and C. To fit the response values, we used the temporal part of the correlator receptive field, Γ_{corr}(*ζ*, *η*) (see Equations 19 and 20), with filters *F*(*t*) and *G*(*t*) parametrized by the log-normal and exponential decay functions described earlier in the section Correlator response to apparent motion stimuli. As a function of Δ*t*, the excess firing rate of both H1 and V1 rises to a peak value and then decays (Figure 6D, E). For the three curves, the initial rise is slightly steeper for V1 than for H1, which can be attributed to a difference in response gain between V1 and H1 and to stronger excitation from correlated signals separated by a smaller absolute distance (Δ*y* = 1 ≈ 0.75° compared to Δ*x* = 1 ≈ 1.3°). For both H1 and V1, the dominant responses correspond to the nearest-neighbor spatial correlations, that is, (Δ*x*, Δ*y*) = (1, ±1) for H1 and (Δ*x*, Δ*y*) = {(±1, 1), (0, 2)} for V1. As noted earlier, the shape of the response curves in Figure 6D and E reflects the temporal response profile predicted by the HRC model (Götz, 1972; Hausen & Egelhaaf, 1989), as described by Equation 19.

We did not extend the measurements to larger Δ*t* values because the response at large Δ*t* becomes weak and quite hard to measure. Figure 6 shows that the response rates peak at Δ*t* = 12 ms and 8 ms for H1 and V1, respectively, equivalent to apparent speeds of |*v⃗*_{app}| = 1.3° / 0.012 s ≅ 110°/s and |*v⃗*_{app}| = 1.5° / 0.008 s ≅ 190°/s. This roughly corresponds to the velocities that generate peak rates for conventional moving patterns under many laboratory conditions. For larger Δ*t*, the rate decreases as Δ*t* increases; in other words, for large values of Δ*t* the rate is an increasing function of |*v⃗*_{app}|. The decrease in response for the smaller values of Δ*t* is presumably due to the finite bandwidths of the various filters in the system. Based on measurements of the photoreceptor response under similar illumination conditions (not shown), photoreceptor filtering alone would cause a decrease in rate for values of Δ*t* below 6 ms. Additional filtering operations may shift that criterion to the somewhat higher values of Δ*t* we find here.
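The quoted peak speeds follow directly from |*v⃗*_{app}| = Δ*r* / Δ*t*, using the nearest-neighbor offsets given above:

```python
# Apparent speed from spatial and temporal offsets, |v_app| = dr / dt
dphi_h1 = 1.3            # degrees: one column separation (H1)
dphi_v1 = 1.5            # degrees: two row separations (V1)
v_h1 = dphi_h1 / 0.012   # H1 peak at dt = 12 ms, about 110 deg/s
v_v1 = dphi_v1 / 0.008   # V1 peak at dt = 8 ms, about 190 deg/s
```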

The photoreceptor PSF is approximated by a Gaussian with *σ* = 0.51° (Land, 1997; Smakman et al., 1984), and the optical contribution from neighboring locations to a correlator of given span is found from the overlap between the area of a pixel, given by *P*(*ζ*) (a disk with diameter ≈0.30°), and the correlator's spatial receptive field. The latter is described by the difference between two convolved PSFs separated in space, expressed as the spatial part, {Ψ_{SS}(*ζ* + Δ*φ*) − Ψ_{SS}(*ζ* − Δ*φ*)}, of the correlator “receptive field” (see Equation 19 and also Figure 3, gray curve on the left). The optical contribution of a pixel pair at stimulus offset *s* to a correlator with span Δ*φ*_{i,j} is then defined by the corresponding overlap integral, in which Ψ_{PP} is the self-overlap of the pixel disk. If *β*_{i,j} represents the actual, unknown contribution of that set of correlators to the neural response, then the total response can be formulated as a weighted sum of these optical contributions, where the indices (*i*, *j*) correspond to spatial locations in units of (Δ*φ*_{x}, Δ*φ*_{y}). In the analysis we limit the set of spans to {(*i*, *j*)} = {(1,1), (1,−1), (2,0), (3,1), (3,−1)} for H1 and {(*i*, *j*)} = {(−1,1), (1,1), (0,2), (−1,3), (1,3)} for V1 (see Figure 6). The offsets *s*_{i,j} used in the experiment correspond to the same raster points as given in these expressions, but now in units of (Δ*x*, Δ*y*). To find the *β*_{i,j}(Δ*t*), we treat them as fitting parameters in a standard linear least squares minimization of *χ*^{2}.
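The χ² minimization is an ordinary linear least squares problem: the measured responses are modeled as a linear combination of per-span predictor curves. The predictor matrix below is a random placeholder, not the paper's computed overlap terms; it only illustrates the fitting step.

```python
import numpy as np

# Linear least squares estimate of the correlator weights beta_{i,j}.
rng = np.random.default_rng(3)
A = rng.random((5, 5))                                # placeholder predictors
beta_true = np.array([35.0, 34.0, 16.0, 7.0, 8.0])    # example weights
y = A @ beta_true                                     # synthetic responses

# Minimizing chi^2 = ||A beta - y||^2 over beta:
beta_fit, *_ = np.linalg.lstsq(A, y, rcond=None)
```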

For Δ*t* = 12 ms, we obtain the following correlator contributions for the five different spans: *β*_{1,1} = 35%, *β*_{1,−1} = 34%, *β*_{2,0} = 16%, *β*_{3,1} = 7%, and *β*_{3,−1} = 8%. Thus, the two nearest-neighbor correlators at (1, ±1) together contribute about 70%, the next-nearest-neighbor correlator at (2,0) contributes about 15%, and the next-next-nearest-neighbor correlators at (3, ±1) together contribute about 15% of the overall motion response. When we plot the values of *β*_{i,j}(Δ*t*) in Figure 6F and D, we find that, due to the correction, the true summed contribution from the two nearest-neighbor correlators rises above 60 spikes/s, which is higher than twice the measured average response of H1 (Figure 6F). The contributions from the second- and the two third-nearest-neighbor correlators do not deviate significantly from the corresponding measured responses. Similarly, for V1, the correlator contributions are higher than the corresponding measured responses (Figure 6G, E). In both cases the response peaks at 8–12 ms and then decreases monotonically with Δ*t*, characteristic of the model correlator's response. These results corroborate previous findings that the nearest-neighbor correlators provide the dominant input to the motion response (Buchner, 1976), and show that optical blurring due to the finite PSF affects the overall magnitude and quality of the response to motion.

By manipulating *m*_{1} and *m*_{2} independently, we can investigate that prediction more directly; a bilinear dependence of the response on the signal amplitudes would provide a more direct validation of a multiplicative nonlinearity. We varied *m*_{1} and *m*_{2} such that in one case the flicker energy changes but the motion energy remains constant at 0.2 (Figure 7A), and vice versa in the other case, with the flicker energy constant at 1.0 (Figure 7B). For each of these cases, the excess firing rate was obtained relative to two reference conditions: (1) null motion (Figure 7, blue) and (2) pure flicker (Figure 7, orange). The pure flicker condition was approximated by setting Δ*t* = 200 ms, whereas null motion was generated by reversing the sign of either Δ*t* or Δ*x*.

**Figure 7**

The responses show that the excess firing rate depends on the product *m*_{1} · *m*_{2} and not on the flicker term *m*_{1}^{2} + *m*_{2}^{2}. We also observe a slight difference in response gain for the two reference conditions. The slightly steeper blue curve relative to the orange curve (Figure 7C) means that with increasing motion energy, suppression from null motion becomes stronger, whereas the response to pure flicker remains largely unchanged. This causes a larger excess rate for the null motion condition. In contrast, when the motion energy remains constant at a low value of 0.2 (Figure 7D), the response is largely dominated by flicker, which causes an approximately equal change in the responses to null motion and pure flicker, thus keeping the response gain equal for the two conditions.

Contrast polarity reversal was used to generate negative motion in the classic experiments on the beetle *Chlorophanus*. This method of generating negative motion is qualitatively different from the conventional method; in the latter, we change the sign of either Δ*x* or Δ*t*, shifting the locus of peak motion energy from quadrant I to II and from quadrant III to IV (Figure 1). By reversing the polarity of contrast, the locus does not change, but the sign of the motion energy flips from positive to negative (see Figure 1B and Adelson & Bergen, 1985).

To reverse contrast polarity, we changed the sign of either *m*_{1} or *m*_{2}, but not both, in alternate segments of a trial (Figure 8A). This changes the sign of the motion energy *m*_{1} · *m*_{2} but of course does not affect the flicker energy *m*_{1}^{2} + *m*_{2}^{2}. H1 shows a clear reduction in firing rate when *m*_{1} · *m*_{2} < 0 compared to the pure flicker response (last segment of Figure 8B). This reduction is of the same order as, though somewhat smaller in magnitude than, the excitatory response to *m*_{1} · *m*_{2} > 0 (Figure 8B). Note that this behavior is in qualitative agreement with the prediction of bilinearity. The response also depends linearly on the stimulus motion energy, with a steeper slope for positive than for negative motion energy (Figure 8C, filled circles). This indicates that the visual system responds to stimuli with positive and negative motion energies with different flicker normalizations. To separate the effect of motion reversal caused by contrast polarity reversal from a reversal in the direction of apparent velocity, we changed the sign of Δ*t* in the stimuli while keeping the sign of *m*_{1} · *m*_{2} unchanged. This leads to a slight reduction in the slope of the response curves for both positive (Δ*t* > 0) and negative (Δ*t* < 0) apparent velocities (Figure 8C, open circles), relative to the curves with contrast reversal (Figure 8C, filled circles). The response of V1 to contrast polarity reversal showed characteristics similar to those obtained from H1 (results not shown).

**Figure 8**

We chose values of the motion energy *m*_{1} · *m*_{2} and the flicker energy *m*_{1}^{2} + *m*_{2}^{2}, with constraints (see Methods), to determine a set of real positive values for *m*_{1} and *m*_{2}. The allowed range of flicker and motion energies ([0, 2] and [−1, 1], respectively) covers a large part of the motion–flicker space (Figure 9A, gray region). The absolute bound between allowed and non-allowed values is set by the constraint |*m*_{1} + *m*_{2}| ≤ 2 (Figure 9A). The response curves for H1 show a notable decrease with increasing flicker energy and a clear increase with increasing motion energy (Figure 9B). To describe the response dependence on *m*_{1} · *m*_{2}, we define a radius *r* and an angle *θ* to represent the locus of *m*_{1} and *m*_{2} in the polar coordinate plane (Figure 9C).

**Figure 9**

*θ* = tan⁻¹(*m*₂/*m*₁), we observe that the response curves virtually overlap with each other and with the scaled sin(2*θ*)/2 curve (Figure 9D). Because the flicker energy differs between points on a particular curve, the overlap of the different response curves implies that the response scales inversely with the flicker energy (see also Figure 9B). The observation that the response curves follow the sin(2*θ*)/2 curve shows that the response is also proportional to the motion energy. Part of this result is consistent with the predictions of the HRC model, in that the neural response varies linearly with the product term *m*₁·*m*₂. The other part suggests that the neuron is sensitive to the background flicker and adjusts its motion response accordingly. Computationally, this implies that the fly visual system uses a flicker-normalized correlation for movement detection, at least across the set of conditions explored in these experiments. Conceptually, the response depends not on pure motion energy, but on the ratio of motion energy to flicker energy.
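The polar parameterization makes this normalization explicit. As a sketch, assuming the flicker energy is given by *m*₁² + *m*₂² = *r*² (our identification; this excerpt does not spell out the Methods definition):

```latex
m_1 = r\cos\theta, \qquad m_2 = r\sin\theta
\;\Longrightarrow\;
m_1 m_2 = r^2 \cos\theta \sin\theta = \frac{r^2}{2}\sin(2\theta),
\qquad
\frac{m_1 m_2}{m_1^2 + m_2^2} = \frac{\sin(2\theta)}{2}.
```

Under this assumption the flicker-normalized response depends on *θ* alone, independent of the radius *r*, which is consistent with the overlap of response curves for different radii in Figure 9D.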

*relative* phenomenon, in the sense that the response is normalized by the contrast flicker variance (Figure 7D, Figure 9). It is important to note that this holds only for the limited range of contrasts tested here: clearly, the normalization must break down in the limit of very low contrast variance. We are currently testing this over a wider range of stimulus conditions.

*Drosophila* visual system (Clark, Bursztyn, Horowitz, Schnitzer, & Clandinin, 2011). By construction, our apparent motion stimuli allow independent contrast reversals of the signals arriving at the detectors. A change in response from excitatory to inhibitory when the sign of *m*₁ or *m*₂ changes from positive to negative (Figure 8C) indicates that the neural system identifies this as a reversal in the direction of motion. There are slight differences in response gain to positive and negative motion energy. These may arise because the mechanisms that “measure” flicker and motion energy have finite integration times, which implies that they must average the energy around the peaks of the stimulus autocorrelation function over finite areas in the space–time plane (Figure 1A, B). It is conceivable that these areas overlap somewhat, and that the resulting “leakage” of energy leads to different weightings of the normal and reversed motion and flicker energies. If the sign of Δ*x* or Δ*t* is changed but contrast is *not* reversed, only the direction of perceived velocity changes. In this case we also observe a change in gain for H1, which may be explained by the differential strength of excitation and suppression to motion observed in earlier work (Borst & Egelhaaf, 1990).
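The sign behavior under contrast reversal can be checked directly in a minimal numerical sketch. This is not the authors' implementation: the function below simply encodes a flicker-normalized correlator whose output is the product of the two bilocal amplitudes divided by the flicker energy, with *m*₁² + *m*₂² as our assumed flicker-energy term.

```python
# Minimal sketch (assumed model, not the authors' code) of a
# flicker-normalized correlator: response ~ (m1 * m2) / (m1^2 + m2^2).

def normalized_correlator(m1: float, m2: float) -> float:
    """Motion response as the product of bilocal amplitudes, normalized
    by an assumed flicker energy m1^2 + m2^2."""
    flicker = m1 * m1 + m2 * m2
    if flicker == 0.0:
        # The normalization must break down at zero contrast variance;
        # for this sketch we simply return no response.
        return 0.0
    return (m1 * m2) / flicker

# Bilinearity under contrast reversal: flipping the sign of one
# amplitude inverts the response (reverse-phi-like behavior);
# flipping both signs restores the original response.
r = normalized_correlator(0.8, 0.5)
assert normalized_correlator(-0.8, 0.5) == -r
assert normalized_correlator(-0.8, -0.5) == r
```

Note that this idealized model predicts equal gain for positive and negative motion energy; the gain asymmetries observed for H1 are outside its scope.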

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 2, 284–299.

*Vision Research*, 10, 1411–1430.

*Perception*, 14, 167–179.

*Journal of Physiology*, 178, 477–504.

*Biological Cybernetics*, 56, 209–215.

*Trends in Neurosciences*, 12, 297–306.

*Proceedings of the National Academy of Sciences, USA*, 87, 9363–9367.

*The Fourier transform and its applications* (3rd ed.). Boston, MA: McGraw-Hill.

*Vision Research*, 14, 519–527.

*Experimental Brain Research*, 3, 271–298.

*Neural Computation*, 12, 1531–1552.

*Journal of Neuroscience*, 12, 4745–4765.

*Biological Cybernetics*, 24, 85–101.

*Neuron*, 70, 1165–1177.

*Progress in Neurobiology*, 68, 409–437.

*Statistical adaptation and optimal estimation in movement computation by the blowfly visual system*. Paper presented at the IEEE International Conference on Systems, Man, and Cybernetics (Humans, Information, and Technology), San Antonio, TX.

*International Journal of Neural Systems*, 7, 437–444.

*Biological Cybernetics*, 54, 223–236.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 18, 241–252.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 6, 116–127.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 6, 1070–1087.

*Nature*, 412, 787–792.

*Photoreceptor optics* (pp. 98–125). Berlin, Germany: Springer-Verlag.

*Kybernetik*, 2 (2), 77–91.

*Bibliotheca Ophthalmologica*, 82, 251–259.

*Proceedings of the National Academy of Sciences, USA*, 101, 16333–16338.

*Vision Research*, 39, 2603–2613.

*Neuron*, 28 (2), 595–606.

*Chlorophanus* [Translation: System-theoretical analysis of the evaluation of time, sequence and sign in movement perception by the weevil *Chlorophanus*]. *Zeitschrift für Naturforschung B*, 11 (9–10), 513–524.

*Calliphora erythrocephala*. *Zeitschrift für Naturforschung C*, 31 (9–10), 629–633.

*Facets of vision* (pp. 391–424). Berlin, Germany: Springer.

*Drosophila melanogaster*. *Journal of Comparative Physiology*, 117, 127–162.

*Journal of Comparative Physiology*, 154, 707–718.

*Journal of General Physiology*, 104, 593–621.

*Vision Research*, 37, 225–234.

*Annual Review of Entomology*, 42, 147–177.

*Proceedings of the Royal Society Series B: Biological Sciences*, 225, 251–275.

*Vision Research*, 20, 431–435.

*Journal of Experimental Biology*, 214, 4000–4009.

*Transactions of the American Institute of Electrical Engineers*, 47, 617–644.

*Nature*, 290, 415–416.

*Musca domestica*. *Journal of Comparative Physiology*, 134, 45–54.

*Kybernetik*, 13 (4), 223–227.

*Journal De Physique I*, 4 (11), 1755–1775.

*Sensory communication* (pp. 303–318). New York: Wiley.

*Drosophila*. *Journal of Comparative Physiology A*, 198, 389–395.

*Proceedings of the Institute of Radio Engineers*, 37 (1), 10–21.

*Journal of Comparative Physiology A*, 155, 239–247.

*Journal of Comparative Physiology*, 116, 183–207.

*Vision Research*, 23, 659–663.

*Journal of Comparative Physiology A*, 189 (1), 1–17.

*The visual neurosciences* (Vol. 1, pp. 234–259). Cambridge, MA: MIT Press.

*Drosophila*. *Proceedings of the National Academy of Sciences, USA*, 108, 9685–9690.

*Neuron*, 82, 887–895.

*Vision Research*, 26, 797–810.

*Experimental Brain Research*, 45, 189–195.

*Experimental Brain Research*, 45, 179–188.

*Perception*, 14, 209–224.

*Journal of the Optical Society of America A*, 2, 300–321.

*Vision Research*, 29, 1309–1317.

*Vision Research*, 33, 2337–2338.

*Spatial Vision*, 10, 323–333.

*Journal of Neuroscience Methods*, 141, 1–7.