We have previously presented a model of how neurons in the primate middle temporal (MT/V5) area can develop selectivity for image speed by using common properties of the V1 neurons that precede them in the visual motion pathway (J. A. Perrone & A. Thiele, 2002). The motion sensor developed in this model is based on two broad classes of V1 complex neurons (sustained and transient). The S-type neuron has low-pass temporal frequency tuning, *p*(*ω*), and the T-type has band-pass temporal frequency tuning, *m*(*ω*). The outputs from the S and T neurons are combined in a special way (weighted intersection mechanism [WIM]) to generate a sensor tuned to a particular speed, *ν*. Here I go on to show that if the S and T temporal frequency tuning functions have a particular form (i.e., *p*(*ω*)/*m*(*ω*) = *k*/*ω*), then a motion sensor with variable speed tuning can be generated from just two V1 neurons. A simple scaling of the S- or T-type neuron output before it is incorporated into the WIM model produces a motion sensor that can be tuned to a wide continuous range of optimal speeds.

For a sensor to be tuned to a particular image speed, *ν*, it needs to respond maximally to combinations of spatial (*u*) and temporal (*ω*) frequencies that are related by the equation *ω* = −*νu* (Watson & Ahumada, 1983). It is well established that neurons in the MT area respond best to a particular edge or bar speed (Felleman & Kaas, 1984; Maunsell & Van Essen, 1983) and that some of them are capable of coding image speed independently of changes to the stimulus pattern (i.e., they follow the *ω* = −*νu* rule) (Perrone & Thiele, 2001; Priebe, Cassanello, & Lisberger, 2003). However, until recently, it was not clear how MT neurons could have acquired these abilities from the V1 neurons that provide their inputs. The V1 neurons are not speed tuned; their responses depend on the spatial frequency content of the stimulus, and they are broadly tuned for temporal frequency (Foster, Gaska, Nagler, & Pollen, 1985).
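The *ω* = −*νu* constraint can be illustrated numerically: a translating one-dimensional pattern has all of its spatiotemporal Fourier energy on that line. A minimal sketch (the grating parameters below are arbitrary choices for the demonstration, not values from the model):

```python
import numpy as np

# Numerical check of the w = -v*u constraint (Watson & Ahumada, 1983):
# a 1-D pattern translating at velocity v has its spatiotemporal Fourier
# energy confined to the line w = -v*u.
nx, nt = 128, 128                 # space and time samples
u0, v = 8 / nx, 2.0               # spatial frequency (cycles/sample), velocity
x = np.arange(nx)
t = np.arange(nt)[:, None]
stimulus = np.cos(2 * np.pi * u0 * (x - v * t))   # drifts at velocity v

spectrum = np.abs(np.fft.fft2(stimulus))          # axes: (temporal, spatial)
ti, xi = np.unravel_index(np.argmax(spectrum), spectrum.shape)
w_peak = np.fft.fftfreq(nt)[ti]   # temporal frequency of the energy peak
u_peak = np.fft.fftfreq(nx)[xi]   # spatial frequency of the energy peak

assert abs(w_peak - (-v * u_peak)) < 1e-12        # peak lies on w = -v*u
```

Either of the two conjugate-symmetric spectral peaks satisfies the same relation, so the check is independent of which one `argmax` returns.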

The model is based on two broad classes of V1 complex neuron: a sustained type (S) with low-pass temporal frequency tuning, *p*(*ω*), and a transient type (T) with band-pass temporal frequency tuning, *m*(*ω*) (see red and blue lines in Figure 1a).

The S spatial frequency (sf) tuning function, *f*(*u*), used in the model is based on actual V1 neuron data (Hawken & Parker, 1987) (see dashed red line in Figure 1b). The T sf function (blue line in Figure 1b), *f*′(*u*), differs from the S type by an amount determined by the shape of the temporal frequency tuning functions (see Equation 1 below). Let S(*u*, *ω*) represent the combined spatiotemporal frequency sensitivity function of the sustained V1 neuron (or equivalently, its spatiotemporal energy output) and T(*u*, *ω*) represent the transient neuron sensitivity [i.e., S(*u*, *ω*) = *f*(*u*)*p*(*ω*) and T(*u*, *ω*) = *f*′(*u*)*m*(*ω*)]. Note that this multiplication operation (and the steps that follow) assumes that the temporal function retains its shape as the spatial frequency changes and vice versa. There is evidence to support this "separability" assumption in monkey V1 (Foster et al., 1985) and cat (Tolhurst & Movshon, 1975) neurons. The issue of separability will be raised again in the Discussion.
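Separability has a simple linear-algebra signature: on any sampled (*u*, *ω*) grid, a separable sensitivity surface is an outer product and therefore a rank-1 matrix. A sketch with stand-in tuning shapes (the log-Gaussian and low-pass forms here are illustrative, not the model's fitted V1 functions):

```python
import numpy as np

# Separability means S(u, w) = f(u) * p(w): on a sampled (u, w) grid the
# sensitivity surface is an outer product, hence a rank-1 matrix.
# f and p are stand-in shapes, not the model's fitted V1 tuning curves.
f = lambda u: np.exp(-np.log(u) ** 2)          # stand-in spatial tuning
p = lambda w: 1.0 / (1.0 + (w / 4.0) ** 2)     # stand-in temporal tuning

u = np.linspace(0.1, 4.0, 32)
w = np.linspace(0.1, 16.0, 32)
S = np.outer(p(w), f(u))       # rows index temporal freq, cols spatial freq

# A separable surface has exactly one nonzero singular value.
rank = np.linalg.matrix_rank(S)
assert rank == 1
```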

Let *ν* be the optimal speed (velocity) that elicits a maximal response from a sensor made up from an S- and T-type V1 neuron. We have previously demonstrated that if

*f*′(*u*) = *f*(*u*) *p*(*νu*)/*m*(*νu*)  (Equation 1)

then S(*u*_{i}, *ω*_{i}) = T(*u*_{i}, *ω*_{i}) for all *u*_{i}, *ω*_{i} such that *ω*_{i}/*u*_{i} = −*ν* (Perrone & Thiele, 2002). In other words, if the sf tuning of the transient-type V1 neuron differs from the sustained sf tuning in the manner specified by Equation 1, then the two V1 neurons (S and T) will respond equally to a particular set of spatial and temporal frequencies corresponding to a stimulus speed *ν*.
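The intersection property set up by Equation 1 can be checked numerically with any choice of tuning shapes. In the sketch below, f, p, and m are illustrative stand-ins (not the fitted V1 functions), and frequency magnitudes are used, so the speed line *ω* = −*νu* appears with a positive slope:

```python
import numpy as np

# Check of the Equation 1 construction: with f'(u) = f(u)*p(v*u)/m(v*u),
# the separable surfaces S(u,w) = f(u)p(w) and T(u,w) = f'(u)m(w) agree
# exactly along the speed line w = v*u (frequency magnitudes are used).
# f, p, and m are illustrative stand-ins, not the model's fitted curves.
v = 2.0                                        # target speed (deg/s)
f = lambda u: np.exp(-np.log(u) ** 2)          # stand-in sustained sf tuning
p = lambda w: 1.0 / (1.0 + (w / 4.0) ** 2)     # stand-in low-pass tf tuning
m = lambda w: w * np.exp(-w / 4.0)             # stand-in band-pass tf tuning

f_prime = lambda u: f(u) * p(v * u) / m(v * u) # Equation 1
S = lambda u, w: f(u) * p(w)                   # sustained energy surface
T = lambda u, w: f_prime(u) * m(w)             # transient energy surface

u = np.linspace(0.1, 4.0, 50)
assert np.allclose(S(u, v * u), T(u, v * u))   # equal responses on the line
```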

Combining the S and T outputs with the weighted intersection mechanism (Equation 2) produces a sensor that has an oriented spectral receptive field in (*u*, *ω*) frequency space and which is maximally sensitive to a particular edge speed, *ν* (see Figure 1c). This is because, in frequency space, a moving edge has a Fourier spectrum that is oriented relative to the (*u*, *ω*) axes and which passes through the origin (i.e., the equation for the spectral line is given by *ω* = −*νu*) (Watson & Ahumada, 1983). For the particular temporal and spatial functions chosen in Figure 1a and 1b, the WIM sensor (Equation 2) has a spectral receptive field oriented such that it is maximally responsive to edges moving at 2 deg/s to the left.

A limitation of the original WIM model is that each sensor is tuned to just one speed, *ν*. For each optimum speed required in a WIM sensor tuned to a particular spatial frequency, *u*_{i}, separate matched pairs of S and T inputs are required: (S_{0}, T_{0}), (S_{1}, T_{1}), (S_{2}, T_{2}), etc. Given the multitude of speeds that need to be registered in a typical retinal image sequence, this is a resource-intensive mechanism for achieving speed tuning. It would be more efficient if we could use the same S–T pair for a range of speed tunings. It turns out that a judicious selection of the V1 temporal frequency tuning functions enables this economy to be achieved.

The sustained temporal frequency tuning function, *p*(*ω*), is specified by Equation 3, where *τ*_{1} and *τ*_{2} are time constants, measured in seconds. As can be seen from Figure 1a (red dashed line), a good fit to data such as those shown in Figure 2a can be obtained by setting (*τ*_{1}, *τ*_{2}) in Equation 3 to (0.0072, 0.0043).

In the original version of the model, the transient temporal frequency tuning function, *m*(*ω*), was derived from *p*(*ω*) using a parameter (*ζ*) that increases the degree of band-pass tuning (Perrone & Thiele, 2002). However, I have since discovered that a more useful function for the transient V1 neuron temporal frequency tuning is one given by the following equation:

*m*(*ω*) = *ω* *p*(*ω*)/*k*  (Equation 4)

where *k* is a constant (set to 4.0 for Figure 1a).

The ratio of the two temporal frequency tuning functions is therefore

*R* = *p*(*ω*)/*m*(*ω*) = *k*/*ω*  (Equation 5)

or, in logarithmic form, log *R* = −log *ω* + log *k* (see dotted line in Figure 1a, but note that it has been shifted upwards for clarity). This ratio function possesses a unique property: If *φ* is any real number, then from Equation 5,

*p*(*φω*)/*m*(*φω*) = *k*/(*φω*)  (Equation 6)

i.e., scaling the temporal frequency by *φ* scales the ratio by 1/*φ*. This property turns out to be very useful in the new speed tuning mechanism. Using Equation 5 again, we can rewrite Equation 6 as

*p*(*φω*)/*m*(*φω*) = (1/*φ*) *p*(*ω*)/*m*(*ω*).  (Equation 7)
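The scaling property holds for any temporal shape *p*, provided *m* is defined so that *p*/*m* = *k*/*ω*. A quick numerical sketch, using an arbitrary stand-in for *p*:

```python
import numpy as np

# Check of the ratio property: if p(w)/m(w) = k/w (Equation 5), then
# p(phi*w)/m(phi*w) = (1/phi) * p(w)/m(w) for any phi > 0 (Equation 7),
# regardless of the shape of p.  The p below is an arbitrary stand-in.
k = 4.0
p = lambda w: np.exp(-0.3 * w)          # any temporal tuning shape works
m = lambda w: w * p(w) / k              # enforces p/m = k/w (Equation 5)

w = np.linspace(0.5, 10.0, 40)
for phi in (0.3, 0.5, 2.0, 4.0):
    lhs = p(phi * w) / m(phi * w)
    rhs = (1.0 / phi) * (p(w) / m(w))
    assert np.allclose(lhs, rhs)        # Equation 7 holds for every phi
```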

For a WIM sensor tuned to speed *ν*_{1}, we require the following relationship to exist between the different spatial and temporal frequency functions (see Equation 1):

*f*_{1}′(*u*) = *f*_{1}(*u*) *p*(*ν*_{1}*u*)/*m*(*ν*_{1}*u*).  (Equation 8)

To generate a sensor tuned to a different speed, *ν*_{2}, using the current version of the WIM model (Perrone, 2004; Perrone & Thiele, 2002), it is necessary to incorporate a new transient-type V1 neuron (T_{2}) with new spatial frequency tuning, *f*_{2}′(*u*), also controlled by Equation 1, i.e.,

*f*_{2}′(*u*) = *f*_{1}(*u*) *p*(*ν*_{2}*u*)/*m*(*ν*_{2}*u*)  (Equation 9)

where *f*_{1}(*u*) is the sustained spatial frequency tuning function of the original WIM sensor, tuned to speed *ν*_{1}. If we let *ν*_{2} = *φν*_{1}, Equation 9 can be rewritten as

*f*_{2}′(*u*) = *f*_{1}(*u*) *p*(*φν*_{1}*u*)/*m*(*φν*_{1}*u*)  (Equation 10)

Using the result from Equation 7 gives

*f*_{2}′(*u*) = (1/*φ*) *f*_{1}(*u*) *p*(*ν*_{1}*u*)/*m*(*ν*_{1}*u*)  (Equation 11)

Combining this result with Equation 8 gives

*f*_{2}′(*u*) = (1/*φ*) *f*_{1}′(*u*).  (Equation 12)

Therefore we do not need a new transient neuron, with a different spatial frequency tuning function, *f*_{2}′(*u*), to generate speed tuning *ν*_{2}. We can simply scale the original transient neuron spatial frequency function. This is a powerful result, and it enables a great saving in the number of V1 neurons required to generate different speed tunings. Equation 12 shows that if we start with a *single* pair of complex V1 neurons, S and T, and scale the T output by a factor 1/*φ* prior to the WIM algorithm (Equation 2), we will produce a sensor tuned to speed *ν*_{2} = *φν*_{1}.
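This economy result can be checked end to end: scaling the transient output by 1/*φ* moves the S–T intersection line from slope *ν*_{1} to slope *φν*_{1}. A sketch using frequency magnitudes and the same kind of illustrative stand-in tuning curves as before:

```python
import numpy as np

# End-to-end check of Equation 12: starting from one S-T pair tuned to
# speed v1, scaling the transient output by 1/phi yields a sensor whose
# S-T intersection line has slope v2 = phi*v1.  The tuning curves are
# illustrative stand-ins; m obeys p(w)/m(w) = k/w (Equation 5).
k, v1 = 4.0, 2.0
f = lambda u: np.exp(-np.log(u) ** 2)               # stand-in sustained sf tuning
p = lambda w: 1.0 / (1.0 + (w / 4.0) ** 2)          # stand-in low-pass tf tuning
m = lambda w: w * p(w) / k                          # transient tf tuning (Eq. 5)
f1_prime = lambda u: f(u) * p(v1 * u) / m(v1 * u)   # Equation 8

S = lambda u, w: f(u) * p(w)                        # sustained energy surface
T = lambda u, w: f1_prime(u) * m(w)                 # transient energy surface

u = np.linspace(0.1, 4.0, 50)
for phi in (0.3, 0.5, 2.0, 4.0):
    v2 = phi * v1
    # The scaled transient output equals S along the new line w = v2*u,
    # so the S-T intersection (the WIM tuning line) now has slope v2.
    assert np.allclose((1.0 / phi) * T(u, v2 * u), S(u, v2 * u))
```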

The T output scaling was set to *φ* = 2 and 0.5 for Figures 3a and 3b, respectively. Figure 3c shows the speed tuning curves for the three different sensors. These were generated using a moving bar (20 pixels wide) and two-dimensional image-based versions of the WIM sensors (Perrone, 2004). By changing the size of the scaling parameter, *φ*, a wide continuous range of speed tuning values can be generated.

The animated sequence shows the S and T spectral receptive fields for values of *φ* that range from 0.3 to 4. Note how the two surfaces of the S and T functions intersect on a straight line in the (*u*, *ω*) plane. This is the basis of the WIM model, and it comes about because of the special way the transient spatial function, *f*′(*u*), is constructed (Equation 1). No other spatial function will generate a locus of intersection that is exactly straight and oriented in this manner. Notice also how the slope of this line changes with different values of *φ*. The locus of intersection remains straight for different values of *φ* only because of the special relationship between the *p*(*ω*) and *m*(*ω*) temporal functions (Equation 5). Other temporal functions without this property will not retain the exact linear intersection as *φ* changes.

The variable speed tuning mechanism depends on the V1 temporal frequency tuning functions obeying the *p*(*ω*)/*m*(*ω*) = *k*/*ω* rule (Equation 5). The currently available physiological data from V1 neurons (e.g., Figure 2) can certainly accommodate the functions required for the variable speed tuning mechanism to work.

The derivations above assume that the V1 neuron spatiotemporal frequency response functions are separable, i.e., that S(*u*, *ω*) = *f*(*u*)*p*(*ω*) and T(*u*, *ω*) = *f*′(*u*)*m*(*ω*). The data from some V1 neurons show that this assumption is not unreasonable (Foster et al., 1985; Tolhurst & Movshon, 1975). However, mathematical convenience should not be mistaken for biological practicality. In the end, the basic WIM mechanism requires only that the S and T neuron spatiotemporal frequency functions overlap along a line given by *ν* = −*ω*_{i}/*u*_{i}. One way of achieving this is to assume separability and to use Equation 1, but there are other options. Two inseparable functions S′(*u*, *ω*) and T′(*u*, *ω*) could also be made to intersect along the *ν* = −*ω*_{i}/*u*_{i} line by changing their overall shape. Similarly, the primate brain may have evolved S′ and T′ (non-separable) spatiotemporal frequency functions for its V1 neurons that enable the variable speed tuning mechanism to work. I have simply shown that if separability is a property of these neurons, then the theoretical ideal temporal frequency tuning curves for variable speed tuning will be ones based on the *p*(*ω*)/*m*(*ω*) = *k*/*ω* relationship (Equation 5). The WIM model and the variable speed tuning concept are not invalidated if further physiological studies reveal that the majority of V1 complex neurons are (one quadrant) inseparable in (*u*, *ω*) space.

The animation is not intended to depict an actual temporal process (each frame simply represents a sensor with a particular, fixed *φ* value). However, the animated sequence does raise the possibility of a dynamical system in which the speed tuning of the sensor could be altered rapidly in response to events occurring in other parts of the visual field or from extraretinal sources, such as eye movements.

Felleman, D. J., & Kaas, J. H. (1984). Receptive-field properties of neurons in middle temporal visual area (MT) of owl monkeys. *Journal of Neurophysiology*, 52(3), 488–513.

Foster, K. H., Gaska, J. P., Nagler, M., & Pollen, D. A. (1985). Spatial and temporal frequency selectivity of neurones in visual cortical areas V1 and V2 of the macaque monkey. *Journal of Physiology*, 365, 331–363.

Hawken, M. J., & Parker, A. J. (1987). Spatial properties of neurons in the monkey striate cortex. *Proceedings of the Royal Society of London B*, 231(1263), 251–288.

Hawken, M. J., Shapley, R. M., & Grosof, D. H. (1996). Temporal-frequency selectivity in monkey visual cortex. *Visual Neuroscience*, 13, 477–492.

Maunsell, J. H. R., & Van Essen, D. C. (1983). Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. *Journal of Neurophysiology*, 49(5), 1127–1147.

Perrone, J. A. (1992). Model for the computation of self-motion in biological systems. *Journal of the Optical Society of America A*, 9(2), 177–194.

Perrone, J. A. (2001). A closer look at the visual input to self-motion estimation. In J. M. Zanker & J. Zeil (Eds.), *Motion vision: Computational, neural, and ecological constraints* (pp. 169–179). Heidelberg: Springer-Verlag.

Perrone, J. A. (2004). A visual motion sensor based on the properties of V1 and MT neurons. *Vision Research*, 44(15), 1733–1755.

Perrone, J. A., & Stone, L. S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. *Vision Research*, 34(21), 2917–2938.

Perrone, J. A., & Thiele, A. (2001). Speed skills: Measuring the visual speed analyzing properties of primate MT neurons. *Nature Neuroscience*, 4(5), 526–532.

Perrone, J. A., & Thiele, A. (2002). A model of speed tuning in MT neurons. *Vision Research*, 42(8), 1035–1051.

Priebe, N. J., Cassanello, C. R., & Lisberger, S. G. (2003). The neural representation of speed in macaque area MT/V5. *Journal of Neuroscience*, 23(13), 5650–5661.

Simoncelli, E. P., & Heeger, D. J. (1998). A model of neuronal responses in visual area MT. *Vision Research*, 38(5), 743–761.

Tolhurst, D. J., & Movshon, J. A. (1975). Spatial and temporal contrast sensitivity of striate cortical neurones. *Nature*, 257(5528), 674–675.

Watson, A. B. (1986). Temporal sensitivity. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), *Handbook of perception and human performance* (Vol. 1, pp. 6.1–6.42). New York: Wiley.

Watson, A. B., & Ahumada, A. J. (1983). A look at motion in the frequency domain. In J. K. Tsotsos (Ed.), *Motion: Perception and representation* (pp. 1–10). New York: Association for Computing Machinery.