Review | December 2011

Advancement of motion psychophysics: Review 2001–2010

Shin'ya Nishida

Journal of Vision December 2011, Vol. 11, 11. doi:https://doi.org/10.1167/11.5.11
Abstract

This is a survey of psychophysical studies of motion perception carried out mainly in the last 10 years. It covers a wide range of topics, including the detection and interactions of local motion signals, motion integration across various dimensions for vector computation and global motion perception, second-order motion and feature tracking, motion aftereffects, motion-induced mislocalizations, timing of motion processing, cross-attribute interactions for object motion, motion-induced blindness, and biological motion. While traditional motion research has benefited from the notion of the independent “motion processing module,” recent research efforts have also been directed to aspects of motion processing in which interactions with other visual attributes play critical roles. This review tries to highlight the richness and diversity of this large research field and to clarify what has been done and what questions have been left unanswered.

Introduction
This paper provides an overview of psychophysical studies of visual motion processing. In comparison to a recent excellent review of similar topics (Burr & Thompson, 2011), this paper places the emphasis on the advancement of our functional understanding of visual motion processing over the last 10 years. According to my survey, about 300 papers published in Journal of Vision are related in some way to visual motion processing. The number increases more than fivefold when all motion-related papers published during the same period are counted. Although it is impossible to review all of them, this paper tries to cover the relevant topics as broadly as possible. My intention is to highlight the richness and diversity of this large research field and clarify what has been done and what has been left unanswered. 
This review is organized as follows. The Local motion detection section describes how local motion signals are detected and how they are affected by luminance contrast polarity and luminance level. The Local motion interactions section describes interactions of local motion signals between different directions, between spatial scales, and between center and surround. The Aperture problem section describes how two-dimensional (2D) motion signals are computed from one-dimensional (1D) motion signals and by other methods. The Global motion section addresses global random-dot motion, motion transparency, and complex motion. The Higher order motion section covers second-order motion and feature tracking. The Motion aftereffects section summarizes topics concerning motion aftereffects (MAEs). The Motion-induced position shift section describes several types of motion-induced mislocalization effects. The Temporal properties of motion processing section discusses topics related to perceptual latency of motion perception and discrete sampling. The Interactions with motor systems section briefly summarizes how visual motion interacts with eye movements and other motor systems. The Object motion and cross-attribute integration section describes the mechanisms for perception of objects in motion, some of which include interactions with form, color, and non-visual information. The Three-dimensional motion processing section describes three-dimensional (3D) motion processing, including biological motion. 
Local motion detection
Motion detection mechanisms
The front end of visual motion processing is a bank of direction-selective sensors sensitive to local luminance movements. To start with, as background for explaining more recent studies, I will briefly explain how local motion signals are detected (see also Burr & Thompson, 2011; Krekelberg, 2008 for detailed recent reviews on motion detection mechanisms). 
A motion trajectory (along the x-axis in space) can be described as a slanted pattern (orientation) in a 2D space–time (xt) plane or a slanted plane in a 3D xyt space. The basic concept of the motion energy model (Adelson & Bergen, 1985) is to regard motion sensing as the detection of space–time slants. A linear filter with a slanted kernel (receptive field) can be made from a quadrature pair of spatiotemporal band-pass filters (Watson & Ahumada, 1985). The behavior of a linear filter can also be understood in frequency space (Watson & Ahumada, 1985). The space–time plot of a drifting grating is a diagonal grating, whose amplitude spectrum is a pair of pulses located at its spatiotemporal frequency, with the direction determining the quadrants in which the pulses appear. By decomposing a moving pattern into drifting sine-wave components and specifying the location of each component in spatiotemporal frequency space, the system is able to estimate the parameters of motion. This is called the principle of Motion From Fourier Components (MFFC; Chubb & Sperling, 1988). Three famous motion models proposed in the mid-1980s—the motion sensor (Watson & Ahumada, 1985), the motion energy model (Adelson & Bergen, 1985), and the elaborated Reichardt detector (van Santen & Sperling, 1985)—are mathematically related to one another, and all follow the MFFC principle. The basic concept of these models is supported from the standpoints of both psychophysics and physiology. Gradient models are also sensitive to the same luminance flow, and an elaborated version that includes a non-linear operation for robust speed estimation, which does not exactly follow the MFFC principle, is able to detect translating patterns and motion classified as second-order motion (Johnston, McOwan, & Buxton, 1992; see also Relation between first-order motion and second-order motion subsection).
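To make this scheme concrete, the following sketch implements an opponent motion energy computation on an x–t image, in the spirit of Adelson and Bergen (1985). The Gabor space–time kernel shape and all parameter values are illustrative choices, not those of any published model; NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.signal import convolve2d

def quadrature_pair(size=31, sf=0.1, v=1.0):
    """Quadrature pair of space-time filters preferring velocity v (px/frame)."""
    x = np.arange(size) - size // 2
    X, T = np.meshgrid(x, x)                 # X varies along columns, T along rows
    envelope = np.exp(-(X**2 + T**2) / (2 * (size / 6.0) ** 2))
    phase = 2 * np.pi * sf * (X - v * T)     # kernel oriented along x = v*t
    return envelope * np.cos(phase), envelope * np.sin(phase)

def opponent_energy(stimulus_xt, v=1.0):
    """Rightward minus leftward motion energy for an x-t image."""
    def energy(vel):
        even, odd = quadrature_pair(v=vel)
        return (convolve2d(stimulus_xt, even, mode='same') ** 2 +
                convolve2d(stimulus_xt, odd, mode='same') ** 2)
    return energy(+v) - energy(-v)

# An x-t image of a grating drifting rightward at 1 px/frame
# (rows = time, columns = space).
t = np.arange(64)[:, None]
x = np.arange(64)[None, :]
grating = np.sin(2 * np.pi * 0.1 * (x - t))
print(opponent_energy(grating).mean() > 0)   # True: net rightward energy
```

Squaring and summing the quadrature pair makes the energy response invariant to stimulus phase, and subtracting leftward from rightward energy implements the opponent stage discussed later.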
Effects of contrast polarity
Motion illusions can be useful probes to test the mechanism of motion detection. 
When a pattern jumps with reversal of its luminance contrast polarity, one can see motion in the direction opposite to the physical jump. This effect is known as reversed phi (Anstis, 1970). By combining forward phi and reversed phi, one can make four-stroke apparent motion that gives an impression of continuous forward motion (Anstis & Rogers, 1986). 
In the early stages of the visual pathway, positive and negative luminance contrasts are separately represented by ON-center and OFF-center channels. On the other hand, the standard motion models, as well as the MFFC principle, assume that a motion detector can directly combine positive and negative luminance contrast signals to produce motion signals of the opposite sign. Perception of reversed phi seems to support this assumption, but this issue is still contentious, since reversed phi is not always perceived (Bours, Kroes, & Lankheet, 2007, 2009; Edwards & Badcock, 1994; Edwards & Metcalf, 2010; Edwards & Nishida, 2004; Mo & Koch, 2003). 
When an anti-Glass pattern consisting of local pairs of light and dark dots is presented briefly, one can see illusory motion in the direction from the dark to light dots. This is considered a variant of reversed phi, with the apparent temporal delay being created by the latency difference between light and dark dots (Brooks, van der Zwan, & Holden, 2003; Del Viva & Gori, 2008; Del Viva, Gori, & Burr, 2006).
When a pattern jumps without changing its contrast but a uniform field of the mean luminance of the pattern is presented during the interstimulus interval (ISI), motion is perceived in the opposite direction to the jump (Braddick, 1980). This is also considered a variant of reversed phi, with luminance contrast polarity of the first pattern being reversed by the biphasic contrast response of the visual system (Shioiri & Cavanagh, 1990). By combining this effect with the four-stroke apparent motion, Mather et al. (Challinor & Mather, 2010; Mather, 2006; Mather & Challinor, 2009) devised a two-stroke apparent motion display, in which repeated presentation of a two-frame pattern displacement followed by a brief interstimulus interval can create an impression of continuous forward motion (Figure 1). The ISI reversal effect can be used as a psychophysical tool for estimating the temporal impulse response of visual response, as described below. 
Figure 1
 
Two-stroke apparent motion sequence. Two pattern frames (Frames 1 and 2) are presented repeatedly. An interstimulus interval (ISI) intervenes at one of the two frame transitions. The Frame 1–Frame 2 transition in this example should generate a rightward motion signal in the visual system (arrows). The Frame 2–Frame 1 transition would normally generate a leftward motion signal, but the effect of the ISI reverses this signal, so the sequence appears unidirectionally rightward. Reproduced with permission from Mather and Challinor (2009).
Effects of luminance level
As the adapting light level decreases, the visual response becomes sluggish. The peak sensitivity of the band-pass temporal channel shifts to a lower temporal frequency (Snowden, Hess, & Waugh, 1995), and the negative lobe of the biphasic impulse response shrinks. In agreement with this change, the ISI reversal effect is reduced under low luminance levels (Mather & Challinor, 2009; Sheliga, Chen, FitzGibbon, & Miles, 2006; Takeuchi & De Valois, 1997). The perceived direction changes to the forward direction under scotopic vision, which is presumably due to the additional contribution of feature tracking (Takeuchi & De Valois, 2009). This technique also revealed that the temporal response function changes quickly (<1 s) in response to a sudden luminance increment or decrement (Takeuchi, De Valois, & Motoyoshi, 2001). In addition to motion detection, luminance changes have significant effects on the subsequent stages of motion processing, including speed perception (Hammett, Champion, Thompson, & Morland, 2007; Takeuchi & De Valois, 2000b; Vaziri-Pashkam & Cavanagh, 2008; see also Speed perception subsection), motion coherence detection (Lankheet, van Doorn, Bouman, & van de Grind, 2000; Lankheet, van Doorn, & van de Grind, 2002), heading, and biological motion (Billino, Bremmer, & Gegenfurtner, 2008). 
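The logic of the ISI reversal effect can be illustrated with a toy biphasic impulse response. The difference-of-gamma form and its parameters below are hypothetical descriptive choices, used only to show how a shrinking negative lobe (as at low light levels) weakens the contrast polarity reversal that drives the effect.

```python
import numpy as np
from math import gamma

def impulse_response(t_ms, transience=0.9):
    """Biphasic temporal impulse response as a difference of two gamma
    functions (illustrative parameters). Reducing `transience` shrinks the
    negative lobe, mimicking the more sluggish response at low light levels."""
    def gamma_pdf(t, n, tau):
        return (t / tau) ** (n - 1) * np.exp(-t / tau) / (tau * gamma(n))
    return gamma_pdf(t_ms, 5, 8.0) - transience * gamma_pdf(t_ms, 6, 10.0)

t = np.linspace(0.1, 200.0, 500)
print(impulse_response(t, 0.9).min())   # clearly negative lobe (photopic-like)
print(impulse_response(t, 0.2).min())   # much shallower lobe (low-light-like)
# Because the lingering response to the first frame swings negative during
# the ISI, its effective contrast polarity reverses, which is the proposed
# basis of the ISI reversal effect (Shioiri & Cavanagh, 1990).
```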
Local motion interactions
Motion detection by local motion sensors is similar to color detection by photoreceptors, in the sense that the activity of one sensor alone might yield the sensation of something, but is far from sufficient to produce a meaningful percept. Useful information is encoded in the distributed population activity, and the brain decodes the useful information through interactions of local motion signals in various dimensions, including space, orientation, and spatiotemporal frequency. Furthermore, in order to estimate the motion of an object, the brain has to integrate the motion signals relevant to the object movement and segregate the motion signals irrelevant to the object movement. 
There are many kinds of integration and segregation at multiple levels of visual motion processing. Among them, this section reviews psychophysical phenomena that are considered to mainly reflect local interactions among early motion sensors with different stimulus tunings. 
Interaction across different directions
Motion opponency is a local motion interaction between opposite directions. A counterphase grating, consisting of two oppositely drifting gratings, simultaneously activates motion sensors responsible for the two directions (Levinson & Sekuler, 1975; Qian & Andersen, 1994). As a result of the local motion opponency, however, the counterphase grating is dominantly perceived as a motionless flicker rather than as transparent motion of the two directions (Qian, Andersen, & Adelson, 1994a). This mechanism makes the visual system sensitive to the difference in the motion signal strength between opposing directions (Stromeyer, Kronauer, Madsen, & Klein, 1984). 
It has been suggested that opponent motion energy normalized by flicker energy (motion contrast), rather than opponent motion energy per se, is the best predictor of the human direction discrimination (Georgeson & Scott-Samuel, 1999). Follow-up studies suggest that the stimulus specificity (orientation, scale, space) of flicker normalization is not broad but narrow and similar to the specificity of motion detection (Rainville, Makous, & Scott-Samuel, 2005; Rainville, Scott-Samuel, & Makous, 2002). 
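As a minimal sketch, the flicker-normalized directional signal proposed by Georgeson and Scott-Samuel (1999) can be written as opponent energy divided by flicker energy; here e_right and e_left stand for the outputs of two directional energy units, as in the earlier motion energy sketch.

```python
def motion_contrast(e_right, e_left, eps=1e-9):
    """Opponent energy normalized by flicker energy: (R - L) / (R + L).
    Bounded in [-1, 1]; zero for a counterphase grating (R == L)."""
    return (e_right - e_left) / (e_right + e_left + eps)
```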
It is possible to consider local motion opponency as a special form of local motion pooling (see Aperture problem and Global motion sections), since pooling of opposite directions results in mutual cancellation. When orthogonal directions are locally paired, a diagonal motion is perceived, as predicted by the notion of directional pooling (Curran & Braddick, 2000). 
Interaction across different spatial scales
Image motion is detected in parallel at multiple scales by spatial frequency-selective motion sensors (Anderson & Burr, 1985). One type of interaction among different-scale motion signals is motion capture, in which motion at a coarse scale causes fine-scale textures to move together (Ramachandran & Cavanagh, 1987). Motion capture is an assimilation effect. It remains unclear whether this effect reflects an early interaction or late processing. 
Another type of cross-scale motion interaction is a contrast effect. The perceived direction of motion of a brief visual stimulus that contains fine features reverses when static coarser features are added to it (Derrington, Fine, & Henning, 1993; Derrington & Henning, 1987; Serrano-Pedraza & Derrington, 2010; Serrano-Pedraza, Goddard, & Derrington, 2007). Similar reversal effects were obtained when a high-frequency drifting grating was added to a low-frequency counterphase grating (Yanagi, Nishida, & Sato, 1995) and when cross-frequency interactions on motion direction perception were estimated by a psychophysical reverse correlation method in which a number of drifting gratings of random spatiotemporal frequency were presented simultaneously (Hayashi, Sugita, Nishida, & Kawano, 2010). This illusory reversal can be explained by suppressive interactions between fine and coarse motion signals. In particular, coarse motion signals are strongly suppressed by the presence of fine motion signals in the same direction. Our data (Yanagi & Nishida, unpublished) indicate that the inhibitory interaction in the opposite direction (from coarse to fine) also exists, but it tends to be masked by motion capture. Cross-frequency inhibition may contribute to the asymmetric spatial frequency tuning (peaking at about 1 octave below the test frequency) observed in investigations of the effect of a jittering mask on direction identification (Hutchinson & Ledgeway, 2007) as well as with static MAEs (Ledgeway & Hutchinson, 2009).
Cross-scale interactions have also been extensively studied using the ocular following response, a rapid and involuntary eye movement driven by retinal motion (Miles, Kawano, & Optican, 1986). This visuomotor response indicates the presence of winner-take-all inhibitory interactions across different spatial scales (Sheliga, Fitzgibbon, & Miles, 2008; Sheliga, Kodaka, FitzGibbon, & Miles, 2006). 
Center–surround interactions
It is often observed that the neural response to a visual motion stimulus is suppressed when the target stimulus is surrounded by another stimulus moving in the same direction. This center–surround antagonism in cortical visual motion processing has been linked with psychophysical phenomena concerning relative motion processing and contextual modulation (Golomb, Andersen, Nakayama, MacLeod, & Wong, 1985; Ido, Ohtani, & Ejima, 1997; Murakami & Shimojo, 1993, 1996; Sachtler & Zaidi, 1995; Shioiri, Ito, Sakurai, & Yaguchi, 2002; Shioiri, Ono, & Sato, 2002; Watson & Eckert, 1994). Surround suppression also affects a variety of visual motion phenomena, such as motion adaptation (Sachtler & Zaidi, 1995; Tadin, Lappin, Gilroy, & Blake, 2003; Tadin, Paffen, Blake, & Lappin, 2008), perceived speed (Baker & Graf, 2008, 2010a; van der Smagt, Verstraten, & Paffen, 2010), motion direction sensitivity (Takemura & Murakami, 2010), perceived direction of bistable motion stimuli (Baker & Graf, 2010b), and binocular rivalry (Baker & Graf, 2008; Paffen, Alais, & Verstraten, 2005; Paffen, Tadin, te Pas, Blake, & Verstraten, 2006; Paffen, van der Smagt, te Pas, & Verstraten, 2005). 
Among the many psychophysical correlates of the center–surround antagonism, a paradoxical size effect (Tadin & Lappin, 2005; Tadin et al., 2003) has attracted the broadest interest in the last decade. The effect is an increase in the minimum stimulus duration needed to identify the stimulus motion direction as the size of a moving pattern is increased (Figure 2). It is observed for high-contrast luminance stimuli but not for low-contrast luminance stimuli nor for equiluminant chromatic stimuli. The paradoxical size effect, like neural surround suppression, is evident for brief presentations (Churan, Khawaja, Tsui, & Pack, 2008). The magnitude of the effect is reduced for the elderly (Betts, Sekuler, & Bennett, 2009; Betts, Taylor, Sekuler, & Bennett, 2005; Tadin & Blake, 2005; see also Karas & McKendrick, 2009 for a related aging effect) and for schizophrenia patients (Tadin, Kim et al., 2006), possibly reflecting a reduction in cortical inhibition. Psychophysical reverse correlation analysis has been applied to reveal the temporal dynamics of center–surround interaction (Tadin, Lappin, & Blake, 2006). There is an ongoing debate on whether the paradoxical size effect can be ascribed to contrast sensitivity change with changing stimulus size (Aaen-Stockdale, Thompson, Huang, & Hess, 2009; Glasser & Tadin, 2010). The paradoxical size effect may not simply reflect low-level hard-wired center–surround antagonism, since it is affected by the perceived surface layout (surface depth relations; Tadin et al., 2008). Center–surround suppression may be present at multiple stages of visual motion processing, including, but not limited to, early local motion detection. 
Figure 2
 
Center–surround interaction indicated by effects of size and contrast on motion perception. (A) Duration thresholds of direction discrimination as a function of stimulus size at different contrasts. (B) Log threshold change relative to the optimal size at each contrast level. Reproduced with permission from Tadin and Lappin (2005).
Aperture problem
While the last section mainly focused on inhibitory interactions among local motion signals, this section considers a problem in which local motion integration plays a critical role. 
A goal of early visual motion processing is to estimate 2D motion vectors. This estimation has to resolve the ambiguity of local motion signals, which are detected by 1D motion sensors with spatially oriented receptive fields. Due to the aperture problem, a 1D motion signal cannot fully specify the true 2D motion vector (Fennema & Thompson, 1979). This is the case even when the moving image feature is 2D. How the 2D vectors are computed from outputs of local 1D motion sensors has been a major question for visual motion investigations.
The visual system seems to adopt multiple strategies to solve the aperture problem. One is to integrate 1D local motion signals across different orientations and different locations. Another is to directly compute the 2D direction of 2D features, such as terminators, and propagate that 2D direction to the connected 1D signals.
Cross-orientation integration of 1D motion signals
Coherent motion perception for plaid patterns has been extensively studied in order to reveal the mechanism of cross-orientation integration (Adelson & Movshon, 1982). Two different algorithms have been proposed. One computes the mathematically correct solution from the integration of 1D component motion signals across different orientations based on the intersection of constraints (IOC) rule (Adelson & Movshon, 1982). The other computes an approximate solution from the vector sum or vector average (VA) of the orthogonal vectors of the two components and of a second-order contrast modulation produced by the interaction of the components (Wilson, Ferrera, & Yo, 1992; Wilson & Kim, 1994). The IOC hypothesis was criticized on the grounds that the IOC appears to be computationally complex and therefore biologically less plausible than VA and that the perceived direction of type II plaids (whose pattern/IOC vector does not fall between the two orthogonal component vectors, unlike type I plaids) deviates significantly from the IOC prediction toward the VA prediction under some conditions. However, Heeger and Simoncelli (Heeger, 1987; Simoncelli & Heeger, 1998) proposed a simple and powerful model that computes an IOC solution through a connection from V1 to MT that selectively integrates 1D motion signals consistent with a given 2D vector, across different spatiotemporal frequencies. Subsequent physiological studies broadly support this cascade model (Perrone & Thiele, 2001; Rust, Mante, Simoncelli, & Movshon, 2006; but also see Priebe, Cassanello, & Lisberger, 2003). Furthermore, Weiss, Simoncelli, and Adelson (2002) proposed that the perceived bias for the type II plaid could be interpreted as resulting from Bayesian estimation with a prior favoring slow speeds. Note that the VA may yield an approximately correct 2D direction, but it does not yield the correct speed (Bradley & Goyal, 2008). Clarifying whether the rule for solving the aperture problem is IOC or VA thus bears on the essential question of how exact our perceptual computations are.
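The two rules are easy to contrast computationally. In the sketch below, each component grating i is summarized by its unit normal n_i and normal speed s_i; IOC solves the constraint equations v·n_i = s_i, while VA averages the component vectors. The formulation is standard, but the code itself is only an illustration.

```python
import numpy as np

def ioc(normals, speeds):
    """Intersection of constraints: solve v . n_i = s_i (least squares)."""
    A = np.asarray(normals, dtype=float)
    b = np.asarray(speeds, dtype=float)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

def vector_average(normals, speeds):
    """Mean of the component vectors s_i * n_i."""
    return np.mean([s * np.asarray(n, float)
                    for n, s in zip(normals, speeds)], axis=0)

# Type I plaid: components at +/-45 deg, equal normal speeds. Both rules
# agree in direction, but only IOC recovers the correct (faster) speed.
n1 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
n2 = np.array([np.cos(-np.pi / 4), np.sin(-np.pi / 4)])
print(ioc([n1, n2], [1.0, 1.0]))             # ~[1.414, 0.0]
print(vector_average([n1, n2], [1.0, 1.0]))  # ~[0.707, 0.0]
```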
In addition to cross-orientation integration, tracking of 2D features, such as blobs and contrast modulations, may also contribute to plaid motion perception (Alais, Wenderoth, & Burke, 1997; Bowns, 1996, 2006; Cox & Derrington, 1994; Derrington, Badcock, & Holroyd, 1992). An effective way to uniquely examine the mechanism of cross-orientation integration is to distribute non-overlapping 1D motion signals over space. Earlier studies using such stimuli (Mingolla, Todd, & Norman, 1992; Rubin & Hochstein, 1993) concluded that the integration rule is VA, not IOC. However, a recent study using global Gabor motion and global plaid motion indicates that both IOC and VA mechanisms operate in spatial motion pooling (Amano, Edwards, Badcock, & Nishida, 2009a). The global Gabor motion consists of numerous Gabor patches, each having a drifting sinusoidal grating carrier of random orientation and a stationary Gaussian envelope. The blurred element window, low-contrast presentation, and peripheral presentation minimize the contribution of terminator motion and facilitate spatial integration (Lorenceau & Boucart, 1995; Takeuchi, 1998). The carrier orientation is randomly determined, and the carrier drifting speeds are made consistent with a global target 2D vector. The resulting global Gabor motion is perceived to move coherently and rigidly in the direction and with a speed close to that of the target 2D vector, as predicted by IOC (see also Lorenceau, 1998, for a similar observation). If the global motion perception were produced by the VA of the local orthogonal motion vector of each Gabor patch, the perceived motion would be non-rigid and much slower. On the other hand, when each local motion patch is changed from a 1D Gabor to a 2D Gabor plaid (global plaid motion), the motion percept can be better explained by VA. Amano et al. (2009a) proposed the idea that the human visual system does not have a fixed strategy but adaptively switches between two types of motion pooling depending on the stimulus (Figure 3). One is 1D motion pooling, in which local 1D motion signals are integrated over orientation and space at the same time. The other is 2D pooling, in which local 1D motion signals are first integrated across different orientations at each location, and then the resulting local 2D vector signals are integrated over space. When local moving features have one orientation (e.g., lines, Gabors) and the aperture problem cannot be solved without pooling 1D signals over space, the visual system performs 1D motion pooling. It follows an integration principle similar to the IOC rule, but a non-rigid interpretation may be chosen in cases where doing so is more plausible. On the other hand, when local moving features have more than one orientation (e.g., dots, Gabor plaids), so that 2D motion vectors can be locally determined by cross-orientation integration, the visual system performs 2D motion pooling following an integration principle similar to the VA rule. The idea of cooperation between IOC and VA mechanisms has also been suggested for standard plaid perception. Bowns and Alais (2006) have shown that both the IOC and VA solutions can be computed for plaids and that adapting to one of the solutions switches the perceived direction to the other solution.
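The following sketch shows how such a global Gabor stimulus can be made globally consistent, along the lines of Amano et al. (2009a): each carrier drifts at the projection of a common target velocity onto its normal, so IOC-style pooling recovers the target vector while a vector average of the local orthogonal vectors does not. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
target_v = np.array([2.0, 0.0])                 # global 2D vector (deg/s)
normal_angles = rng.uniform(0, np.pi, 100)      # random carrier normals

# Each patch's carrier drifts at the projection of target_v onto its normal,
# so every 1D signal is consistent with the same global vector.
normals = np.stack([np.cos(normal_angles), np.sin(normal_angles)], axis=1)
drift_speeds = normals @ target_v

# IOC-style pooling (least squares over all constraints) recovers the target
# vector; the vector average of the local orthogonal vectors is slower.
recovered, *_ = np.linalg.lstsq(normals, drift_speeds, rcond=None)
va = np.mean(normals * drift_speeds[:, None], axis=0)
print(recovered, va)   # ~[2, 0] vs. a roughly halved vector
```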
Figure 3
 
Two types of spatial motion pooling (Amano et al., 2009a). (Left) One-dimensional motion pooling. When local motion elements are directionally ambiguous 1D patterns, as in the case of global Gabor motion, 1D local motion signals are integrated across orientation and space at the same time. IOC: intersection of constraints. (Right) Two-dimensional motion pooling. When local motion elements are 2D patterns, as in the case of global plaid motion, 1D local motion signals are first locally integrated across orientation (stage in red), and the resulting local 2D motion signals are integrated over space (stage in blue). VA: vector average. Modified with permission from Amano et al. (2009a).
Stimulus specificity of 1D motion integration
Stimulus differences between 1D local component motions affect 1D motion integration. For standard overlapping grating or plaid stimuli, when there is a significant difference in spatial frequency between components, the component motions are often seen separately in transparency without being bound into a coherent motion. This is not a general rule, however, since component motions of different spatial frequency can be integrated when they are similar in orientation and direction (Kim & Wilson, 1993). For non-overlapping stimuli, when there is a significant difference in spatial frequency, the component motions are rarely bound into a coherent motion, and this is so even when component motions are similar in orientation and direction (Alais & Lorenceau, 2002; Maruya, Amano, & Nishida, 2010). In apparent disagreement with this finding, however, noise masking occurs despite large differences in spatial frequency between signal and noise (Amano, Edwards, Badcock, & Nishida, 2009b; see also Bex & Dakin, 2002; Yang & Blake, 1994, for similar broadband masking for random-dot stimuli). 
There are at least two different interpretations of the stimulus specificity of motion integration (Stoner & Albright, 1993). One is that the stimulus specificity reflects the structure of processing channels. The other is that the stimulus specificity only indicates whether the visual system uses a given stimulus parameter as a motion segmentation cue. According to the former view, 1D motion pooling should only occur within narrow spatial frequency bands. The findings currently available, such as the effects of spatial frequency described in the previous paragraph, seem to be too complicated for a simple interpretation of this type. Furthermore, in agreement with the idea that spatial frequency is just one of many motion segmentation cues, weak 1D motion pooling is observed between widely separated spatial frequencies when spatial configurations are optimized (Maruya et al., 2010). The specificity of motion integration for other stimulus parameters, such as luminance contrast and color (Krauskopf & Farell, 1990), can be also interpreted as reflecting the effectiveness of image segmentation cues (Stoner & Albright, 1993). 
Like first-order motion, second-order motion signals are integrated across orientation and space into a global 2D motion. In addition, second-order motion signals integrate with first-order motion signals (Maruya & Nishida, 2010; Stoner & Albright, 1992a). These findings suggest that 1D motion pooling is, at least partially, cue-invariant, although it has also been shown that the cross-order integration is much weaker than integration within first-order or second-order motion signals (Cassanello, Edwards, Badcock, & Nishida, 2011; Victor & Conte, 1992). 
When two components of plaid motion are separately presented to different eyes, binocular rivalry suppresses perceptual integration into a plaid pattern, but, paradoxically, monocular motion signals are perceptually integrated into a global motion (Cobo-Lewis, Gilroy, & Smallwood, 2000; Saint-Amour, Walsh, Guillemot, Lassonde, & Lepore, 2005; Tailby, Majaj, & Movshon, 2010). No neural correlate for dichoptic motion integration has been found in MT (Tailby et al., 2010). 
Propagation of local 2D vector signals
It is known that an unambiguous 2D vector estimated from the movement of 2D features, such as a corner or a line terminator, propagates over a contour or surface and disambiguates the 1D motion signals attached to the object (Nakayama & Silverman, 1988; Shimojo, Silverman, & Nakayama, 1989). A typical pattern used for the investigation of this process is the barber pole stimulus, in which a drifting diagonal grating appears to move along the long side of a rectangular window. A drifting diagonal bar segment has also been used as a simplified version of the barber pole pattern. At the onset of these stimuli, the apparent motion direction is initially perpendicular to the orientation of the bar, and then it gradually rotates toward the terminators' direction over time. These temporal dynamics of the propagation process have been shown for perception, eye movements, and the neural response in MT (Lorenceau, Shiffrar, Wells, & Castet, 1993; Masson, Rybarczyk, Castet, & Mestre, 2000; Pack & Born, 2001), and a model of the dynamics has also been proposed (Tlapale, Masson, & Kornprobst, 2010).
Even though the 2D vector of a 2D feature is physically unambiguous, the aperture problem arises in the case of 1D motion sensors, and it should be solved through cross-orientation integration. Recent studies, however, suggest alternative methods that may allow the visual system to compute movements of 2D terminators more directly. First, physiological and modeling studies indicate that direction-selective end-stopped V1 neurons can isolate terminators and have broad but veridical 2D direction tunings to their motion (Pack, Livingstone, Duffy, & Born, 2003; Tsui, Hunter, Born, & Pack, 2010). This is a novel and promising mechanism of the initial stage of motion processing. Note, however, that terminator motion sensors do not work for other 2D features, such as blobs in plaid motion (Alais et al., 1997; Bowns, 1996, 2006). In addition, a single terminator motion sensor cannot directly specify the 2D vector of terminator motion (both direction and speed) unless the outputs of several sensors are properly integrated in a subsequent stage as in the case of 1D motion integration (Bradley & Goyal, 2008). Second, the orientation of the outer boundary of the field, or the motion streak generated by terminator motion, can provide 2D direction information, although it is unsigned (i.e., only specifies the axis of motion) and speed-independent (Badcock, McKendrick, & Ma-Wyatt, 2003). 
Interactions with form information
The notion that motion is processed separately from other visual attributes is no longer tenable. It is now recognized that form processing assists 2D vector estimation in at least three ways.
First, for cross-orientation integration, form information controls whether the component motions should be integrated or segmented. A transparency cue given by luminance relationships facilitates motion transparency (Stoner & Albright, 1992b). Image grouping cues also affect motion integration. For instance, motion binding is easier for a closed configuration than an open one (Lorenceau & Alais, 2001; Lorenceau & Lalanne, 2008). The effects of stimulus difference (e.g., spatial frequency) on motion integration can also be interpreted as an effect of form information. Many models of motion integration have already included interactions with form mechanisms (Beck & Neumann, 2010; Berzhanskaya, Grossberg, & Mingolla, 2007; Grossberg, Mingolla, & Viswanathan, 2001; Lidén & Pack, 1999; Mingolla, 2003; Tlapale et al., 2010). Not all form information seems to be available to the motion system, however. For instance, multiple-window viewing of quasi-natural pattern movement indicates little contribution of second-order statistics (i.e., connectability across windows; Kane, Bex, & Dakin, 2009). 
Second, by modulating the border ownership, form information controls whether a terminator in motion should be included in or excluded from motion integration. Perceived occlusion affects whether terminators' 2D motion disambiguates the inner 1D motion signals of barber pole patterns (Anstis, 1990; Duncan, Albright, & Stoner, 2000; Lorenceau & Shiffrar, 1992; Shimojo et al., 1989). The effect of occlusion on motion integration is controlled not only by local junctions but also by global contextual configurations (McDermott & Adelson, 2004a, 2004b; McDermott, Weiss, & Adelson, 2001). 
Third, as noted in the Propagation of local 2D vector signals subsection, orientation information is used to estimate motion direction (for a detailed review, see Burr & Thompson, 2011). Motion streaks or speed lines produced by moving dots are used to judge 2D motion direction when motion speed is high (Apthorp & Alais, 2009; Apthorp, Cass, & Alais, 2010; Apthorp, Wenderoth, & Alais, 2009; Burr, 2000; Edwards & Crane, 2007; Geisler, 1999). Local orientation information of dynamic Glass patterns produces illusory global motion along the flow of local orientations in the absence of corresponding motion energy (Badcock & Dickinson, 2009; Ross, Badcock, & Hayes, 2000). It should be noted that the use of orientation signals for direction judgment is consistent with the notion of IOC, since 1D motion whose edge orientation is parallel to the true motion vector should have zero speed. Albright (1984) addressed this point when he reported that a significant proportion of MT neurons had an orientation preference to a stationary bar roughly parallel to the preferred motion direction. 
Speed perception
Psychophysical studies of speed perception, in particular those on the effect of motion adaptation (Thompson, 1981) and stimulus contrast (Stone & Thompson, 1992), have led to the idea that the perceived speed is computed from a comparison of a few temporal frequency channels (Hammett, Champion, Morland, & Thompson, 2005; Smith & Edgar, 1994). Integration across different spatial frequencies is also a critical component of the processing for speed perception (otherwise, it should be called temporal frequency perception). The broadband spatial frequency tuning of speed encoding is supported by the stimulus specificity of speed aftereffects (Thompson, 1981) and flicker MAE (Ashida & Osaka, 1995). Whether MT is a neural correlate of this representation is a matter of ongoing debate (Perrone & Thiele, 2001; Priebe et al., 2003). One may regard this mechanism as a part of the local motion integration mechanism for a solution of the aperture problem (Simoncelli & Heeger, 1998). It has been suggested that the reduction of perceived speed at low luminance contrast (Stone & Thompson, 1992) is consistent with Bayesian estimation with the assumption of a prior preferring slow speeds (Stocker & Simoncelli, 2006; Weiss et al., 2002). An effect apparently inconsistent with this argument is that fast motion becomes faster at low contrasts (Thompson, Brooks, & Hammett, 2006), but this effect may not be very robust (Gegenfurtner & Hawken, 1996). 
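A toy version of this Bayesian account, assuming conjugate Gaussians for simplicity (not the authors' fitted model), shows how a slow-speed prior pulls the estimate toward zero more strongly when a low-contrast measurement is noisy.

```python
def perceived_speed(measured, contrast, sigma0=1.0, prior_sigma=2.0):
    """Posterior mean under a Gaussian likelihood centered on the measured
    speed (width grows as contrast falls) and a zero-mean Gaussian prior
    favoring slow speeds. All numbers are illustrative."""
    sigma_like = sigma0 / contrast          # low contrast -> noisy measurement
    w = prior_sigma**2 / (prior_sigma**2 + sigma_like**2)
    return w * measured                     # shrinkage toward zero speed

print(perceived_speed(4.0, contrast=0.8))   # ~2.9: near veridical
print(perceived_speed(4.0, contrast=0.1))   # ~0.15: strongly slowed
```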
A horizontal gray bar that drifts horizontally across a surround of black and white vertical stripes appears to stop and start as it crosses each stripe. This footstep illusion was ascribed to an effect of contrast on perceived speed (Anstis, 2001, 2004), but a subsequent study proposed an alternative account based on the contrast-weighted speed average (Howe, Thompson, Anstis, Sagreiya, & Livingstone, 2006). The luminance contrast dependency of apparent speed explains the mismatch in apparent speed between luminance edges and equiluminant color and texture edges (Arnold & Johnston, 2003; Carlson, Schrater, & He, 2006). 
Under low luminance levels (in the mesopic and photopic range), the perceived speed of fast-moving patterns is overestimated (Hammett et al., 2007; Vaziri-Pashkam & Cavanagh, 2008). This is in curious contrast to the reduction of the perceived velocity of rod-mediated stimuli relative to cone-mediated stimuli (Gegenfurtner, Mayser, & Sharpe, 2000). 
The orientation of the moving element affects apparent speed. An element collinear to the motion path appears to be faster than one orthogonal to the path (Seriès, Georges, Lorenceau, & Frégnac, 2002). The authors ascribed this effect to V1 horizontal connections. 
Since a vector consists of 2D direction and speed, correct speed estimation is a part of the aperture problem. I therefore include the topic of speed perception in this section about 2D vector estimation. However, speed perception (for 1D motion patterns) has been studied almost independently of 2D direction perception. Some studies further suggest that speed and direction may be separately processed in the brain. The evidence includes dissociations in the effects of axis of motion (Matthews & Qian, 1999) and TMS (Matthews, Luber, Qian, & Lisanby, 2001) on speed and direction discriminations, perceptual learning specific to speed and direction tasks (Saffell & Matthews, 2003), a speed–direction dissociation in hitting action (Brouwer, Middelburg, Smeets, & Brenner, 2003), and a dissociation in a dynamic MAE (Curran & Benton, 2006). 
Many studies show that the human visual system is poor at processing the rate of speed change, i.e., acceleration, regardless of whether the sensitivity is evaluated in terms of the accuracy of perceptual judgments (Gottsdanker, 1956; Werkhoven, Snippe, & Toet, 1992) or in terms of the accuracy of eye movements or other visuomotor tasks (Brouwer, Brenner, & Smeets, 2002; Watamaniuk & Heinen, 2003). Although some neurons show acceleration sensitivity, this does not necessarily imply acceleration tuning; it could be a result of neural adaptation (Price, Crowder, Hietanen, & Ibbotson, 2006). However, it may be too much to say that acceleration is not processed at all, since some studies suggest effective use of acceleration information for target interception (Dubrowski & Carnahan, 2002), ball catching (Fink, Foo, & Warren, 2009), estimation of time to contact (Capelli, Berthoz, & Vidal, 2010; Kerzel, Hecht, & Kim, 2001), and perception of the walking direction of a point-light walker (Chang & Troje, 2009a). 
Global motion
Global random-dot motion
Global random-dot motion is one of the most popular motion stimuli in current vision research. A typical stimulus consists of signal dots that move in one direction and noise dots that move in random directions (Newsome & Paré, 1988; Williams & Sekuler, 1984). A local motion detector as found in V1 can only detect the motion of one or a small number of dots. For the perception of global coherent motion, the local motion signals should be integrated over space. There are neurons in MT that respond in proportion to the strength of the coherent motion signal, which suggests that local motion integration takes place somewhere between V1 and MT (McCool & Britten, 2008).
Human observers can detect the signal direction even when the motion coherence is fairly low (e.g., 5%), although the absolute detection threshold is dependent on the algorithm that generates the global motion (Pilly & Seitz, 2009). As described in the Cross-orientation integration of 1D motion signals subsection, the spatial pooling for random-dot global motion is considered to be 2D pooling (integration of local 2D vectors, each computed by local integration of 1D signals) rather than 1D pooling (direct integration of local 1D signals over space), because the motion of a single dot provides a directionally unambiguous 2D vector, and the rule of motion integration is not IOC. The perceived global motion direction is approximately the VA of local dot motion. The purpose of 2D motion pooling for the visual system may be to produce an ensemble representation (e.g., average) of a crowd of local movements (Alvarez, 2011). 
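A minimal generator of such a stimulus, with a vector average readout, might look as follows; the dot counts and coherence levels are arbitrary choices.

```python
import numpy as np

def dot_directions(n_dots=200, coherence=0.05, signal_dir=0.0, rng=None):
    """Directions (radians) for one frame step: a `coherence` fraction of
    dots move in signal_dir, the rest in uniformly random directions."""
    rng = rng or np.random.default_rng()
    n_signal = int(round(coherence * n_dots))
    dirs = rng.uniform(0, 2 * np.pi, n_dots)
    dirs[:n_signal] = signal_dir
    return dirs

dirs = dot_directions(coherence=0.3, signal_dir=np.pi / 2)
vx, vy = np.cos(dirs).mean(), np.sin(dirs).mean()
print(np.degrees(np.arctan2(vy, vx)))   # near 90 deg for this sample
```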
Note, however, that VA may not be the best description of the integration rule for 2D local motion. Webb, Ledgeway, and McGraw (2007) showed that for asymmetric direction distributions, a maximum likelihood decoder of direction-selective neurons predicts the perceived direction better than other measures, including VA. Jazayeri and Movshon also suggested that task-dependent neural decoding might play a critical role in global random-dot motion perception. They developed a model of optimal decoding of sensory information (Jazayeri & Movshon, 2006), which correctly explains a change in the critical motion directions between coarse and fine direction discrimination tasks. That is, observers are most sensitive to the directions around the two targets for coarse direction discriminations but to directions slightly away from the two target directions for fine direction discriminations (Jazayeri & Movshon, 2007a, 2007b).
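A schematic version of likelihood-based decoding, assuming von Mises tuning curves and Poisson spiking (illustrative choices in the spirit of Jazayeri & Movshon, 2006, not the published model), is sketched below: the log likelihood of each candidate direction is a spike-count-weighted sum of log tuning curves, and its peak is the maximum likelihood estimate.

```python
import numpy as np

prefs = np.linspace(0, 2 * np.pi, 36, endpoint=False)   # preferred directions

def tuning(theta, gain=10.0, kappa=2.0):
    """von Mises tuning: expected spike count of each unit for direction theta."""
    return gain * np.exp(kappa * (np.cos(theta - prefs) - 1))

rng = np.random.default_rng(1)
spikes = rng.poisson(tuning(np.pi / 3))    # responses to a 60-deg stimulus

# Poisson log likelihood of each candidate direction, summed over units.
candidates = np.linspace(0, 2 * np.pi, 360, endpoint=False)
loglik = np.array([np.sum(spikes * np.log(tuning(c)) - tuning(c))
                   for c in candidates])
print(np.degrees(candidates[np.argmax(loglik)]))   # ~60
```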
The spatial integration range of global motion pooling is fairly large, having been estimated to be at least 9 deg in terms of the diameter of a circular summation area (63 deg² in terms of the area; Watamaniuk & Sekuler, 1992). Effective ideal spatial signal summation is observed up to 30–70 deg (Morrone, Burr, & Vaina, 1995), with a slightly larger pooling range for expansion and rotation than for translation (Burr, Morrone, & Vaina, 1998). This large-field integration is not compulsory, since human observers can combine motion signals from cued regions or patches in an optimal manner (Burr, Baldassi, Morrone, & Verghese, 2009). Effective integration of speed signals is larger in the direction of motion than in the orthogonal direction (Vreven & Verghese, 2002).
Temporal integration duration is also fairly long, with estimates ranging from 100–200 ms (Lee & Lu, 2010) or ∼500 ms (Watamaniuk & Sekuler, 1992) to 2–3 s (Burr & Santoro, 2001). The longest integration time was comparable to that of biological motion (Neri, Morrone, & Burr, 1998). Integration time of the order of seconds is long enough to tap intrasaccadic motion integration (Melcher & Morrone, 2003), but it is dramatically reduced when attention is directed to a concurrent task (Melcher, Crespi, Bruno, & Morrone, 2004), and to some extent, it may reflect non-perceptual decision processes (Morris et al., 2010). 
Several studies attempted to characterize global motion processing independent of initial local motion processing. The results suggest that global motion processing is binocular (Hess, Hutchinson, Ledgeway, & Mansouri, 2007), invariant with retinal eccentricity (Hess & Aaen-Stockdale, 2008), invariant with mean luminance (Hess & Zaharia, 2010), and broadband in spatial frequency tuning (Bex & Dakin, 2002; Yang & Blake, 1994). Equivalent noise analysis could be a useful tool for separately assessing local and global limitations on direction integration (Dakin, Mareschal, & Bex, 2005). 
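The equivalent noise logic can be summarized in one schematic equation: the observed threshold reflects internal noise and external direction variance, attenuated by the number of pooled samples. The functional form below is the standard one; the parameter values are made up.

```python
import numpy as np

def threshold(sigma_ext, sigma_int, n_samples):
    """Equivalent noise model: observed direction threshold as a function of
    external direction SD, internal noise SD, and pooled sample count."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

sigma_ext = np.array([0.0, 2.0, 8.0, 32.0])   # external direction SD (deg)
print(threshold(sigma_ext, sigma_int=4.0, n_samples=16))
# Flat while sigma_ext << sigma_int (internal-noise-limited), then rising in
# proportion to sigma_ext (sampling-limited); the two regimes let one
# estimate sigma_int and n_samples separately from threshold data.
```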
Motion coherence thresholds are reduced when signal and noise dots have different colors (Croner & Albright, 1997). This is presumably not because global motion pooling is color selective but because color acts as a cue for signal-dot segmentation (Edwards & Badcock, 1996; Li & Kingdom, 2001; Snowden & Edmunds, 1999). It has recently been shown that global motion perception with equiluminant chromatic stimuli is mediated by luminance-sensitive motion mechanisms (Michna & Mullen, 2008; see also Equiluminant chromatic motion subsection). Speed selectivity of noise masking suggests the presence of multiple speed-tuned channels in visual processing (Edwards, Badcock, & Smith, 1998; Khuu & Badcock, 2002; van Boxtel & Erkelens, 2006). In addition, binocular disparity helps segmentation of global motion pooling (Grigo & Lappe, 1998; Hibbard, Bradshaw, & DeBruyn, 1999; Khuu, Li, & Hayes, 2006; Poom & Börjesson, 2005; Snowden & Rossiter, 1999). 
Motion transparency
There are two types of motion transparency. One is seen with plaid stimuli, where the transparency is simply taken as the failure of integration. The other, which is considered here, is motion transparency seen with random dots. Many studies have investigated motion transparency of this type. 
For perception of transparent motion, local dots moving in different directions should be separated in space (Qian, Andersen, & Adelson, 1994b); otherwise, they are averaged into a single vector (Curran & Braddick, 2000). This explains why transparent motion induces a unidirectional MAE, except when separate speed mechanisms are driven (Snowden & Verstraten, 1999; van der Smagt, Verstraten, & van de Grind, 1999). On the other hand, for perception of transparent motion, different directions should not be separated in time. Asynchronous direction changes produce a perception of two layers, while synchronous ones do not (Kanai, Paffen, Gerbino, & Verstraten, 2004; Watamaniuk, Flinn, & Stohr, 2003) unless alternation is very rapid (van Doorn & Koenderink, 1982). 
When transparent motion is defined purely by direction differences, no more than two signal directions can be detected simultaneously. This limit can be ascribed to signal intensity: a signal intensity of about 42% is required for each direction to be perceived in a bidirectional transparent motion stimulus (Edwards & Greenwood, 2005), so a third purely direction-defined signal would exceed the total available. Adding differences in speed and binocular disparity between component motions enables observers to simultaneously perceive three signal directions but not four (Greenwood & Edwards, 2006a, 2006b). This limit may reflect a higher order perceptual cost of seeing motion transparency (Suzuki & Watanabe, 2009; Wallace & Mamassian, 2003).
Perception of motion transparency includes the reorganization of perceptual representations. The formation of surfaces affects how motion information is combined with other visual attributes (Clifford, Spehar, & Pearson, 2004; Moradi & Shimojo, 2004). 
Direction repulsion is the overestimation of angles between two motion directions in motion transparency (Marshak & Sekuler, 1979). It is considered to reflect repulsive interactions between two directions (Wilson & Kim, 1994) or functional computation of target motion relative to the background motion (Dakin & Mareschal, 2000). Although direction repulsion was reported to survive under dichoptic presentation (Marshak & Sekuler, 1979), subsequent studies suggest that it is suppressed by binocular rivalry (Chen, Matthews, & Qian, 2001) and that it is primarily a monocular effect (Grunewald, 2004; Wiese & Wenderoth, 2007, 2010). With regard to spatial frequency selectivity, direction repulsion between 1D component motions in plaid motion is spatial frequency selective (Kim & Wilson, 1996), while direction repulsion between motions of band-pass 2D patterns is not (Lindsey, 2001). Direction repulsion is modulated by attention (Chen, Meng, Matthews, & Qian, 2005; Tzvetanov, Womelsdorf, Niebergall, & Treue, 2006). It has been suggested that it takes place at the global motion level where local motion information is integrated by broadband speed channels (Benton & Curran, 2003; Curran & Benton, 2003). While there is ongoing debate as to the neural origins of direction repulsion and the direction aftereffect, the two effects are suggested to occur at different levels of motion processing (Curran, Clifford, & Benton, 2006; Wiese & Wenderoth, 2007). 
Complex global motion
There are several lines of psychophysical evidence that visual motion processing includes mechanisms that are sensitive to complex global motion patterns, such as circular motion (rotation) and radial motion (expansion and contraction), in addition to global translation. One is a phantom MAE, in which adaptation to two segments that contain upward and downward motion induces the perception of leftward and rightward motion in another part of the visual field (Snowden & Milne, 1997). Likewise, a motion assimilation effect induces global circular and radial motion (Ohtani, Tanigawa, & Ejima, 1998). The presence of complex motion mechanisms is also indicated by the findings that detection sensitivity is higher (Freeman & Harris, 1992; Lee & Lu, 2010), the MAE is stronger (Bex, Metha, & Makous, 1999), and crowding is stronger (Bex & Dakin, 2005) for circular and radial motion than for translation. 
Whereas monkey physiology indicates a special role for MSTd in complex motion processing (Duffy & Wurtz, 1991; Graziano, Andersen, & Snowden, 1994; Tanaka & Saito, 1989), recent human imaging studies suggest the contribution of a wide range of cortical areas to various stages of optic flow processing (Holliday & Meese, 2008; Koyama et al., 2005; Morrone et al., 2000; Wall, Lingnau, Ashida, & Smith, 2008; Wall & Smith, 2008). 
It is possible to consider circular and radial motion as two cardinal directions of an optic-flow coordinate space, with variations of spirals corresponding to intermediate directions (Graziano et al., 1994). Sensitivity tuning functions and masking effects suggest that optic flow detectors are tuned to these two cardinal directions (Burr, Badcock, & Ross, 2001; Morrone, Burr, Di Pietro, & Stefanelli, 1999), although this hypothesis is not perfectly in agreement with subthreshold summation data (Meese & Anderson, 2002) and the physiology of MST (Graziano et al., 1994). 
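This coordinate space has a simple parametric form. The sketch below writes the local flow as a weighted mix of the radial and circular cardinal components, with the mixing angle phi selecting expansion, rotation, or an intermediate spiral; the parameterization is standard, the code only illustrative.

```python
import numpy as np

def spiral_flow(x, y, phi, rate=1.0):
    """Local velocity at (x, y): phi = 0 gives pure expansion,
    phi = pi/2 pure rotation, intermediate phi gives spirals."""
    radial = np.stack([x, y])        # expansion/contraction component
    circular = np.stack([-y, x])     # rotation component
    return rate * (np.cos(phi) * radial + np.sin(phi) * circular)

vx, vy = spiral_flow(np.array(1.0), np.array(0.0), phi=np.pi / 4)
print(vx, vy)   # equal radial and tangential parts: a 45-deg spiral
```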
It has been reported that humans can precisely estimate parameters of optic flow components, such as the angular velocity of a circular motion and the rate of expansion of a radial motion (Barraza & Grzywacz, 2002, 2003, 2005; Wurfel, Barraza, & Grzywacz, 2005). Under some conditions, however, apparent rotation speed is affected by the stimulus configuration (Caplovitz, Hsieh, & Tse, 2006; Kohler, Caplovitz, & Tse, 2009). 
Perception of expansion is not necessarily preceded by the detection of local motion flow, since it arises for stochastic texture stimuli in which the scale of image elements increases gradually over time, with no local correlations between successive images (Schrater, Knill, & Simoncelli, 2001). These pure scale changes can produce an MAE. A similar idea leads to a global rotation display without local motion (“fractal rotation”; Benton, O'Brien, & Curran, 2007; Lagacé-Nadon, Allard, & Faubert, 2009). 
Higher order motion
Definition of second-order motion
While first-order motion is the movement of luminance-defined patterns, second-order motion is the movement of high-level features defined by such properties as contrast modulation and temporal modulation (Anstis, 1980; Badcock & Derrington, 1985, 1987, 1989; Cavanagh & Mather, 1989; Chubb & Sperling, 1988; Derrington & Badcock, 1985; Sperling, 1976). It is known that second-order motion is visible to a wide range of species, including zebrafish (Orger, Smear, Anstis, & Baier, 2000) and flies (Theobald, Duistermars, Ringach, & Frye, 2008). 
According to a strict definition, first-order motion is the movement of luminance-defined patterns detectable by the standard Fourier motion analyzer, such as the motion energy model. In other words, first-order motion is predicted by the MFFC principle (Chubb & Sperling, 1988), on which the standard motion analysis is based. According to this definition, one can easily understand why a shift of the same luminance-defined pattern can change from first-order to second-order depending on the jump size. Consider a jump of a luminance-defined Gabor patch (a sinusoidal carrier grating modulated by a Gaussian envelope). When the jump size is smaller than a half-cycle of the carrier, the apparent motion seen in the jump direction is (dominantly) a first-order motion, since it can be explained by Fourier motion analysis. However, when the jump size is much larger than that, the apparent motion seen in the jump direction is likely to reflect a non-first-order motion carried by the movement of the contrast envelope. Drift-balanced motion is a pure second-order motion for which it is mathematically impossible for any mechanism following the MFFC principle to consistently detect the motion direction (Chubb & Sperling, 1988). This distinction between first-order motion and second-order motion is theoretically clear, but whether it is meaningful depends on how valid the assumptions are.
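For concreteness, here is one standard way to construct such a stimulus: a contrast-modulated dynamic noise pattern whose envelope drifts while the expected luminance stays constant everywhere. Frame counts and modulation parameters below are arbitrary.

```python
import numpy as np

def second_order_frames(n_frames=60, width=256, env_sf=4 / 256,
                        env_speed=2.0, rng=None):
    """1D frames of zero-mean dynamic noise whose contrast envelope drifts
    at env_speed px/frame; the carrier is refreshed every frame."""
    rng = rng or np.random.default_rng(0)
    x = np.arange(width)
    frames = []
    for t in range(n_frames):
        carrier = rng.uniform(-1, 1, width)        # fresh noise each frame
        envelope = 0.5 + 0.5 * np.sin(2 * np.pi * env_sf * (x - env_speed * t))
        frames.append(carrier * envelope)          # contrast modulation
    return np.array(frames)

frames = second_order_frames()
# Expected luminance is constant everywhere; only the contrast envelope
# drifts, so a plain luminance-based motion energy detector sees no
# consistent direction, while a filter-rectify-filter scheme does.
```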
Relation between first-order motion and second-order motion
With different assumptions about the non-linear components involved in first-order motion detection, motion detectors for first-order motion could be sensitive to some types of second-order motion (Benton & Johnston, 2001; Benton, 2004; Benton, Johnston, & McOwan, 1997, 2000; Johnston & Clifford, 1995; Taub, Victor, & Conte, 1997). For instance, Benton and Johnston (2001) have shown mathematically that correct information about the movement of a contrast modulation is present in the local spatial and temporal luminance gradients within the low-contrast regions of a contrast-modulated sine wave and can be detected by a luminance-sensitive gradient-type motion sensor. Psychophysical evidence for the use of this information by the human visual system comes from the match between computational predictions from the model and measurements of the perceived speed of the envelope motion (Johnston & Clifford, 1995).
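The core gradient computation is simple to state: for a translating pattern I(x, t) = f(x − vt), the derivatives satisfy I_t = −v I_x, so v = −I_t/I_x wherever the spatial gradient is reliable. The toy estimate below illustrates this relation; it is a bare-bones gradient scheme, not the elaborated model of Johnston et al. (1992).

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
v_true = 3.0
frame0 = np.sin(x)
frame1 = np.sin(x - v_true * 0.01)        # pattern after dt = 0.01

I_x = np.gradient(frame0, x)              # spatial derivative
I_t = (frame1 - frame0) / 0.01            # temporal derivative
mask = np.abs(I_x) > 0.5                  # avoid low-gradient regions
print(np.mean(-I_t[mask] / I_x[mask]))    # ~3.0
```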
Several lines of behavioral evidence, however, indicate that first-order motion and second-order motion are, at least partially, processed separately. Motion adaptation phenomena, such as direction-selective sensitivity reduction (Ashida, Lingnau, Wall, & Smith, 2007; Nishida, Ledgeway, & Edwards, 1997) and flicker MAEs (Pavan, Campana, Guerreschi, Manassi, & Casco, 2009; Schofield, Ledgeway, & Hutchinson, 2007), are weak between first-order (luminance-defined) motion and second-order (contrast-defined) motion, in particular in the direction from second-order to first-order motion. Cross-order masking effects (Hutchinson & Ledgeway, 2004) and motion priming effects (Pavan et al., 2009) are also weak. When opposing first-order and second-order gratings are superimposed on each other, one sees motion transparency (Goutcher & Loffler, 2009). The lack of an ocular following response for second-order motion (Hayashi, Miura, Tabata, & Kawano, 2008) provides further support for segregated processing.
A detailed architecture of first-order and second-order motion processing has been psychophysically revealed through the analysis of two types of MAE (Mather, Pavan, Campana, & Casco, 2008; Nishida & Sato, 1995). One is the static MAE measured with stationary tests, and the other is the flicker MAE measured with counterphase tests. [Although the flicker MAE is often identified with the dynamic MAE measured with random-dot motion tests (Blake & Hiris, 1993), it remains unclear whether second-order motion adaptation effects are the same when measured with dynamic and flicker tests.] A static MAE is induced by first-order motion adaptation but not by second-order motion adaptation (Derrington & Badcock, 1985; Nishida & Sato, 1992). After adaptation to a compound grating motion (2f + 3f motion) in which first-order and second-order components are moving in opposite directions, a static MAE is induced in the direction opposite the first-order direction even when the dominant perception during adaptation is second-order motion (positive MAEs; Mather, Cavanagh, & Anstis, 1985; Nishida & Sato, 1992). On the other hand, a flicker MAE is induced by second-order motion adaptation (Ledgeway & Smith, 1994; Nishida & Sato, 1995). After adaptation to the 2f + 3f motion, a flicker MAE is primarily induced in the direction opposite to the second-order motion and can be stronger in magnitude for an interocular condition than for a monocular condition (over 100% interocular transfer; Nishida & Ashida, 2001). These findings indicate an architecture of visual motion processing in which low-level parallel processing for first-order and second-order motion is followed by high-level integrative processing (Nishida & Ashida, 2000; Nishida & Sato, 1995; Wilson & Kim, 1994). Static MAEs reflect adaptation in the low-level first-order system, and flicker MAEs reflect adaptation in all three systems, i.e., the low-level first-order and second-order systems and the high-level integrative system. This functional structure, however, does not necessarily have a direct large-scale anatomical correlate. Neuroimaging studies indicate significant overlap and possible partial segregation of first-order and second-order processing (Ashida et al., 2007; Dumoulin, Baker, Hess, & Evans, 2003; Nishida, Sasaki, Murakami, Watanabe, & Tootell, 2003; Seiffert, Somers, Dale, & Tootell, 2003; Smith, Greenlee, Singh, Kraemer, & Hennig, 1998). 
It is also known that second-order motion interacts with first-order motion in a number of ways. While some of them seem to suggest interactions at early motion detection levels (Allard & Faubert, 2008; Barraclough, Tinsley, Webb, Vincent, & Derrington, 2006; Hock & Gilroy, 2005), most of them can be interpreted as cross-order interactions at post-detection stages. Motion detection masking indicates separate processing at low temporal frequencies but common processing at high temporal frequencies (Allard & Faubert, 2008). Adaptation effects indicate an asymmetric transfer between first-order motion and second-order motion such that adaptation to first-order motion affects second-order motion perception but not vice versa (Nishida, Ledgeway et al., 1997; Schofield et al., 2007). Perceptual learning also indicates an asymmetric transfer, but the direction is opposite—perceptual learning with second-order motion affects first-order motion perception but not vice versa (Petrov & Hayes, 2010; Zanker, 1999). Second-order motion can be integrated with first-order motion when local 1D motion signals are integrated into a global 2D motion (Maruya & Nishida, 2010; Stoner & Albright, 1992a), but this cross-order integration is not very effective (Victor & Conte, 1992), and noise masking obtained with denser motion patterns, such as global Gabor motion, is consistent with separate 1D pooling and 2D pooling of first-order motion and second-order motion (Cassanello et al., 2011; Edwards & Badcock, 1995). The infinite regress illusion (Tse & Hsieh, 2006) can be ascribed to faulty integration of envelope second-order motion and carrier first-order motion. 
Characteristics of second-order motion
Adaptation and masking studies show that, like first-order motion detection, second-order motion detection is spatial frequency selective (Hutchinson & Ledgeway, 2004; Nishida, Ledgeway et al., 1997). Temporal frequency tuning is predominantly low-pass for second-order motion, whereas it is band-pass for first-order motion (Hutchinson & Ledgeway, 2006). For second-order motion, thresholds for direction identification are consistently higher than those for spatial orientation identification, unlike for first-order gratings, for which the two thresholds are typically the same (Ledgeway & Hutchinson, 2005). With regard to spatial summation characteristics, the image sizes at which direction identification performance reaches the asymptote are larger for first-order motion than for second-order motion (Hutchinson & Ledgeway, 2010). Both latencies of visual-evoked potentials (VEPs; Ellemberg et al., 2003) and behavioral reaction times for direction identification (Ledgeway & Hutchinson, 2008) are generally longer for second-order motion than for first-order motion even when the sensitivity difference is taken into account. 
It has been suggested that second-order motion contributes to a variety of high-level motion functions, such as 1D motion pooling (Maruya & Nishida, 2010), optic flow processing (Aaen-Stockdale, Ledgeway, & Hess, 2007; Bertone & Faubert, 2003), structure from motion (Aaen-Stockdale, Farivar, & Hess, 2010; Landy, Dosher, & Sperling, 1991), and biological motion perception (Aaen-Stockdale, Thompson, Hess, & Troje, 2008; Gurnsey & Troje, 2010; Mather, Radford, & West, 1992). However, it remains in dispute whether the contribution of second-order motion is as effective as that of first-order motion when scaled appropriately in intensity and spatial scale and whether first-order and second-order motion contribute together in a cue-invariant manner. 
Mechanisms of second-order motion detection
There are two possible mechanisms responsible for second-order motion detection—a low-level second-order motion sensor and a high-level feature-tracking mechanism (see Feature tracking subsection). According to a popular view, low-level second-order motion detection has a structure similar to a first-order motion sensor but has non-linear preprocessing to extract second-order features, such as filter–rectify–filter stages (Chubb & Sperling, 1988; Ledgeway & Hess, 2000; Wilson et al., 1992). An alternative view is that a gradient-type first-order motion sensor (Johnston et al., 1992) contributes to low-level second-order motion detection. Johnston, McOwan, and Benton (1999) argue that induced motion in a static carrier in the opposite direction to second-order motion is difficult to explain either by filter–rectify–filter models or feature tracking. 
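The filter–rectify–filter idea can be sketched as follows (Python with SciPy; the filter scales are arbitrary assumptions chosen only to separate carrier and envelope). After carrier-scale bandpass filtering and full-wave rectification, the output varies with the contrast envelope, so standard motion analysis applied to successive outputs can recover second-order motion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def frf_front_end(frame, env_sigma=20.0):
    """Filter-rectify-filter preprocessing for second-order motion."""
    band = gaussian_filter1d(frame, 1.0) - gaussian_filter1d(frame, 4.0)  # carrier-scale bandpass
    rect = np.abs(band)                        # full-wave rectification
    return gaussian_filter1d(rect, env_sigma)  # coarse (envelope-scale) lowpass

x = np.arange(512)
carrier = np.random.default_rng(0).uniform(-1, 1, 512)  # static noise carrier
for t in range(3):
    envelope = 0.5 + 0.5 * np.cos(2 * np.pi * (x - 100 - 30 * t) / 512)  # drifts rightward
    print(np.argmax(frf_front_end(carrier * envelope)))  # peak roughly tracks the envelope
```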
The evidence currently available indicates that low-level second-order sensors operate in combination with feature-tracking mechanisms, and which mechanism predominates is dependent on the stimulus and task. One test of low-level motion detection is pedestal immunity, which examines whether motion detection is affected by addition of a static pedestal pattern that masks feature tracking (Lu & Sperling, 1995b). Second-order motion detection passes this test at high contrasts but not at low contrasts (Lu & Sperling, 2001; Ukkonen & Derrington, 2000). The minimum motion threshold for second-order motion detection is position-based (which suggests feature tracking) at low contrasts and low speeds, while it is velocity-based (which suggests low-level motion detection) otherwise (Seiffert & Cavanagh, 1998, 1999). Monitoring multiple motion signals in parallel is much harder for second-order motion than for first-order motion (Allen & Ledgeway, 2003; Ashida, Seiffert, & Osaka, 2001; Lu, Liu, & Dosher, 2000), which suggests the contribution of attention-limited feature-tracking mechanisms. This capacity limitation is evident at low speeds but less so at high speeds (Allen & Ledgeway, 2003; Ashida et al., 2001). These results indicate that low-level mechanisms are responsible for second-order motion perception at least at fast speeds or at high contrasts. In addition, the involvement of low-level second-order motion detection is supported by spatial frequency-selective motion detection (see Characteristics of second-order motion subsection), as well as by MAE induction by second-order adaptation motion even when attention is distracted (Nishida & Ashida, 2000) and even without awareness of motion (Harp, Bressler, & Whitney, 2007; Whitney & Bressler, 2007). 
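The logic of the pedestal test can be illustrated with a toy stimulus (a sketch with arbitrary amplitudes and frequencies): adding a static pedestal leaves the drifting component's Fourier energy untouched, because a static pattern contributes only zero-temporal-frequency energy, but the luminance peaks no longer translate, so a feature tracker fails while an energy-based detector does not.

```python
import numpy as np

x = np.arange(200)
sf = 0.05  # grating spatial frequency (period 20 px)
for pedestal in (0.0, 2.0):  # static pedestal amplitude (0 = absent)
    peaks = []
    for t in range(6):
        drifting = np.cos(2 * np.pi * (sf * x - 0.1 * t))  # drifts 2 px/frame
        static = pedestal * np.cos(2 * np.pi * sf * x)     # static pedestal
        peaks.append(int(np.argmax(drifting + static)))
    print(pedestal, peaks)
# Without the pedestal, the luminance peak advances steadily (0, 2, 4, ...);
# with it, the peak only wobbles around the pedestal's own peak, masking
# feature tracking while the drifting Fourier component is unchanged.
```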
Second-order motion can be produced by movements of various high-level image features. It remains an open question whether different types of second-order motion are detected by a common mechanism, though it has been suggested that contrast-defined and orientation-defined motion may be separately detected (Blaser & Sperling, 2008). 
Feature tracking
The notion of high-level motion detection has a long history (Anstis, 1980; Braddick, 1974), but it remained only crudely specified until two concrete notions of feature tracking were proposed. 
One is what Lu and Sperling (1995a, 1995b, 2001) called the third-order motion mechanism. It detects movements in a saliency map by standard motion analysis. Since the saliency map integrates salient location information from various feature processing subsystems, this universal mechanism can detect motion between any salient events even when they are defined by different attributes (Cavanagh, Arguin, & von Grünau, 1989; Lu & Sperling, 1995a). This is an attentive motion mechanism in that attention exerts strong control over stimulus saliency (Lu & Sperling, 1995a; Tseng, Gobell, & Sperling, 2004). Lu and Sperling proposed that motion-defined motion (Zanker, 1993) and stereo-defined motion are third-order motion, although there is a counterargument that stereo-defined motion shows various low-level characteristics (Patterson, 2002). 
The second type of feature-tracking mechanism is attentive tracking proposed by Cavanagh (1992, 1994). According to his view, it is an active shift of attention, not passive detection of stimulus-driven motion signals, that produces motion sensation. Although this mechanism can operate during perception of standard motion stimuli, including classical long-range apparent motion, an experimental manipulation that is considered to exclusively drive this mechanism is to ask the observer to voluntarily track one of two directions of physically ambiguous motion stimuli, such as a radial counterphase grating. The active motion sensation produced in this way is accompanied by a smooth shift of the peak of improvements in contrast sensitivity measured with a probe presented along the tracking path (Shioiri, Yamamoto, Kageyama, & Yaguchi, 2002). It is also capable of inducing flicker MAE (Culham, Verstraten, Ashida, & Cavanagh, 2000) and positional MAE (Shim & Cavanagh, 2005). 
Note that third-order motion and attentive tracking are not exclusive concepts. They may exist as separate mechanisms in human visual motion processing or may only emphasize different aspects of the same complex high-level motion system. 
Feature tracking is the most powerful mechanism of motion perception in that it is able to detect movements of nearly any kind, but it cannot operate rapidly. The temporal limit of seeing third-order motion is suggested to be ∼3 Hz (Lu & Sperling, 2001). This may be a general temporal limit of high-level super-modal processing, since it is comparable to the temporal binding limit across different sensory attributes and modalities (Fujisaki & Nishida, 2010; Holcombe, 2009; Holcombe & Cavanagh, 2001; Nishida & Johnston, 2010). The temporal limit of attentive tracking of ambiguous motion is reported to be 4–8 Hz (Verstraten, Cavanagh, & Labianca, 2000). 
The high-level tracking mechanisms described above should not be identified with a recently proposed low-level terminator tracking mechanism (Pack, Conway, Born, & Livingstone, 2006; Tsui et al., 2010). In addition, it is unclear how the high-level tracking mechanisms are related to the feature-tracking mechanisms proposed for 2D vector estimation of plaid motion (Alais et al., 1997; Bowns, 1996; Derrington et al., 1992; Pack et al., 2006; Tsui et al., 2010). It should also be noted that third-order motion could have a different, stimulus-based meaning, i.e., the movement of features defined by third-order statistical properties. A recent study has reported a class of motion stimuli characterized by their third- and fourth-order correlations, yet the stimuli are likely to be seen by low-level motion processors rather than feature tracking (Hu & Victor, 2010). Another recent study failed to find perception of motion of an order higher than third, i.e., semantics-based motion (Blaser & Sperling, 2008). 
Multiple object tracking provides an alternative paradigm with which to examine the attentive tracking mechanism (Pylyshyn & Storm, 1988). There are numerous studies based on this task (see Cavanagh & Alvarez, 2005; Scholl, 2009 for review), but most of them are out of the scope of the current review, since their interests were mainly placed on cognitive processing, not on motion perception. It is reported that tracking performance is worse when the texture within an object moves in the opposite direction of the object than when the texture moves in the same direction as the object (St Clair, Huff, & Seiffert, 2010). 
Equiluminant chromatic motion
As a physical stimulus, equiluminant chromatic motion is first order, in the sense that color is a property defined by a single point as is luminance. In terms of visual processing, however, chromatic motion may be primarily detected by high-level feature-tracking (third-order) mechanisms. This is because the perception of chromatic motion is significantly affected by the relative saliency of component colors (Lu, Lesmes, & Sperling, 1999a, 1999b). On the other hand, a contribution from low-level chromatic mechanisms to color motion is also suggested by the performance of motion detection in the presence of a static mask (Cropper, 2006) and by lack of effects of attention and saliency on chromatic MAEs (Dobkins, Rezec, & Krekelberg, 2007). Since feature tracking is unable to effectively detect noisy global motion, the contribution of chromatic signals (both L–M and S) to global motion perception (Michna & Mullen, 2008; Michna, Yoshizawa, & Mullen, 2007) also indicates the existence of a low-level color-sensitive motion mechanism, although it is also shown that this mechanism is luminance-sensitive (Michna & Mullen, 2008; see also Cropper & Wuerger, 2005; Dobkins & Albright, 2004; Lu & Sperling, 2001, for review on this topic). 
Motion aftereffects
This section presents a brief overview of recent MAE studies. Note that topics concerning MAEs are also addressed in other sections of this review (see, e.g., Relation between first-order motion and second-order motion and Flash-lag effect subsections). Excellent summaries of MAE studies can also be found in Mather et al. (2008) and Mather, Verstraten, and Anstis (1998). 
MAEs have been and still are used as useful psychophysical probes to analyze visual motion processing. Different aspects of motion processing can be assessed by manipulating an adaptation stimulus (e.g., translation or expansion), test stimulus (e.g., static or dynamic), presentation style (e.g., monocular or interocular presentation), and task (e.g., with or without an attention-distracting task). MAEs have shown the internal structure of visual motion processing, such as second-order motion processing (see Relation between first-order motion and second-order motion subsection). Speed selectivities of MAEs indicate the structure of multiple speed-tuned channels (Alais, Verstraten, & Burr, 2005; Anstis, 2009; Shioiri & Matsumiya, 2009; Tao, Lankheet, van de Grind, & van Wezel, 2003; van de Grind, Verstraten, & van der Smagt, 2003; van der Smagt et al., 1999). MAEs have also revealed where and how visual motion processing interacts with other systems, such as attention control (Arman, Ciaramitaro, & Boynton, 2006; Mukai & Watanabe, 2001), non-visual sensory modalities (Freeman & Driver, 2008; Konkle, Wang, Hayward, & Moore, 2009), and cognitive processing (Blaser & Shepard, 2009; Dils & Boroditsky, 2010). 
Several studies used MAEs to examine how awareness is related to visual motion processing. When adaptation stimuli are made invisible by binocular rivalry, MAEs survive, but their magnitude is reduced at least at low adaptation contrasts (Blake, Tadin, Sobel, Raissian, & Chong, 2006). Although this was found when the aftereffect was measured for both static and dynamic tests (Blake et al., 2006), it has also been reported that a high-level component of the MAE (interocular component of flicker MAE) does not result from adaptation to motion made invisible by flash suppression (Maruya, Watanabe, & Watanabe, 2008). On the other hand, when the awareness of adaptation motion is suppressed by crowding, MAEs are induced by high-level motion stimuli such as non-local rotation (Aghdaee, 2005) and second-order motion (Whitney & Bressler, 2007). 
MAEs can occur beyond retinotopically adapted locations. One example is the phantom MAE induced by rotating motion (Snowden & Milne, 1997; see Complex global motion subsection). Rotating motion also produces flicker or dynamic MAEs even when the center of rotation shifts between adaptation and test (Culham et al., 2000; Meng, Mazzoni, & Qian, 2006). Adaptation to motion close to fixation induces flicker MAEs that propagate centrifugally across the visual field (McGraw & Roach, 2008). Another form of non-retinotopic aftereffect is the spatiotopic one observed at the environmental location of adaptation despite movements of the eye (Melcher, 2005, 2008; Melcher & Colby, 2008). With one exception (Ezzati, Golzar, & Afraz, 2008), the existence of spatiotopic MAEs has not been supported (Cavanagh, Hunt, Afraz, & Rolfs, 2010; Knapen, Rolfs, & Cavanagh, 2009; Wenderoth & Wiese, 2008). It should be noted that the modulation of retinotopic aftereffects by the gaze direction (Nishida, Motoyoshi, Andersen, & Shimojo, 2003) is distinct from the spatiotopic aftereffects. There is also an extraretinal MAE directly induced by pursuit eye movements (Chaudhuri, 1991; Freeman, 2007b; Freeman, Sumnall, & Snowden, 2003). 
Static MAEs introduce illusory motion that is incompatible with the spatial pattern of the test field. One outcome of this motion–space incompatibility is positional MAEs (see Backward shift induced by the motion aftereffect (positional MAE) subsection), in which illusory motion alters space and form perception. Another outcome is the modulation of MAEs by the test spatial pattern. Specifically, MAEs are suppressed more strongly when the test stimulus contains strong form information that goes against illusory motion, such as sharp edges (Fang & He, 2004), and the spatial alignment of the test field elements with surround elements (Harris, Sullivan, & Oakley, 2008). The perceived motion in MAEs is also affected by the test spatial location (López-Moliner, Smeets, & Brenner, 2004) or by the test depth structure (van der Smagt & Stoner, 2002). These effects are ascribed to the context-dependent interpretation of early illusory motion signals by the subsequent motion processing. 
A variety of models have been proposed to explain the mechanism of the MAE. van de Grind et al. (van de Grind, Lankheet, & Tao, 2003; van de Grind, van der Smagt, & Verstraten, 2004) showed the effectiveness of a simple gain control model. Morgan, Chubb, and Solomon (2006) proposed that MAEs are predictable from adaptation-induced sensitivity loss. Stocker and Simoncelli (2009) proposed a model that includes two isomorphic adaptation mechanisms, one non-directional and one directional (see also a review by Clifford, 2002, for his computational analysis of the mechanism of MAEs). 
When an unambiguous motion is followed by a directionally ambiguous test stimulus, such as a counterphase grating, a negative (flicker) MAE is observed when the motion duration is long. However, when the motion duration is short (say, <200 ms), the induction effect goes the other way, in the positive direction (motion priming; Kanai & Verstraten, 2005; Pavan, Cuturi, Maniglia, Casco, & Campana, 2010; Piehler & Pantle, 2001; Pinkus & Pantle, 1997; Ramachandran & Anstis, 1983). A possible mechanism of motion priming is temporal integration of the motion energy signals (Pinkus & Pantle, 1997). However, since motion priming nearly disappears under low retinal illumination, where temporal integration is enhanced, a contribution of higher order feature tracking has also been suggested (Takeuchi, Tuladhar, & Yoshimoto, 2011). 
Motion-induced position shift
By using uniform motion fields, such as drifting gratings and random-dot motion fields, conventional motion studies have tried to treat motion as a location-independent attribute. Given that motion is a temporal change of position, however, motion and position are inseparable attributes. Since the 1990s, motion–position interactions have been one of the major topics of visual motion research. There are several cases where motion affects apparent position, including a forward shift induced by internal motion (MIPS), a forward shift induced by external motion (motion drag), a backward shift induced by MAEs (positional MAEs), and a mislocalization of flash relative to continuous motion (flash lag). 
Forward shift induced by internal motion (MIPS)
When the internal texture of a stationary object is moving, the object location is apparently shifted in the direction of motion (Ramachandran & Anstis, 1990). This apparent shift is observed when the boundary between the object and its background is not abrupt or well localized. A popular example of this illusion is the apparent shift of a stationary Gabor containing a drifting carrier (De Valois & De Valois, 1991). This phenomenon is often called motion-induced position shift (MIPS). I will use this somewhat general term only to refer to this specific type of position shift. I will use different terms to refer to the other types that I will describe in the following subsections. 
Presumably due to a similar mechanism, internal radial motion induces a size change (Whitaker, McGraw, & Pearson, 1999), and motion in depth induces a shift in depth (Edwards & Badcock, 2003; Tsui, Khuu, & Hayes, 2007a). MIPS occurs not only for first-order motion but also for second-order motion defined by contrast modulations (Bressler & Whitney, 2006; Pavan & Mather, 2008) and interocular correlations (Murakami & Kashiwabara, 2009). The shift magnitude, however, is smaller than that obtained with first-order motion (Pavan & Mather, 2008) and reduced still more when the relative position shift is measured between first-order and second-order motion (Pavan & Mather, 2008). 
MIPS is observed even at a very short duration, and the shift magnitude monotonically increases as the duration is increased (Arnold, Thompson, & Johnston, 2007; Chung, Patel, Bedell, & Yilmaz, 2007). Exceptionally, at fast speeds, the shift magnitude initially increases and then decreases before reaching a steady-state value at longer durations (Chung et al., 2007). MIPS affects other spatial pattern processing, such as contour integration (Bex, Simmers, & Dakin, 2001; Hayes, 2000) and global form perception (Dickinson, Han, Bell, & Badcock, 2010; Rainville & Wilson, 2005). 
The mechanism responsible for MIPS remains unclear. One hypothesis ascribes it to a direction-dependent shift of the receptive field of neurons in early visual cortex (Fu, Shen, Gao, & Dan, 2004), but this hypothesis is inconsistent with the findings that MIPS can be produced by plaid motion and global Gabor motion, which implies that the position shift is based on 2D global motion produced by 1D motion pooling, not on early local 1D motion (Hisakata & Murakami, 2009; Mather & Pavan, 2009; Rider, McOwan, & Johnston, 2009). Furthermore, fMRI BOLD activity in V1 does not show a shift of retinotopy consistent with MIPS (Liu, Ashida, Smith, & Wandell, 2006; Whitney, Goltz et al., 2003). Another hypothesis assumes a direction-dependent contrast modulation such that contrast appears to be higher at the leading edge than at the trailing edge (Arnold et al., 2007; Chung et al., 2007; Tsui, Khuu, & Hayes, 2007b). A recent study further shows that the enhanced detection at the leading edge is phase dependent, which may provide evidence of forward prediction of spatial patterns by the visual system (Roach, McGraw, & Johnston, 2011). Although the asymmetric contrast modulation is an interesting finding, whether it can explain MIPS is debatable, since the magnitude of MIPS could be too large for contrast modulation to explain (Hisakata & Murakami, 2009; Rider et al., 2009). 
It has been reported that the magnitude of MIPS is dependent on motion direction (toward or away from the fovea), but the pattern of anisotropy is not always consistent (Fan & Harris, 2008; Linares & Holcombe, 2008). 
Forward shift induced by external motion (motion drag)
The position of a stationary flash appears to shift in the same direction as the motion field in the neighborhood (Whitney & Cavanagh, 2000). In this review, I use the term “motion drag” (Scarfe & Johnston, 2010) to refer to this forward shift induced by external motion, although it has also been called flash drag (Eagleman & Sejnowski, 2007), motion-induced mislocalization (Tse, Whitney, Anstis, & Cavanagh, 2011), and position capture (Watanabe, Sato, & Shimojo, 2003; note a different use of this term by Murakami & Shimojo, 1993). While classical induced motion (Duncker, 1929) is a contrast effect, motion drag is an assimilation effect. In this regard, motion drag is similar to the induction of forward motion of an ambiguous motion by moving surrounds (Nishida, Edwards, & Sato, 1997; Ohtani, Ido, & Ejima, 1995). Note, however, that motion drag is not accompanied by the perception of induced motion in a flash, and it therefore cannot be ascribed to the position shift induced by its own motion described in the Forward shift induced by internal motion (MIPS) subsection. 
Motion drag can occur at large spatial scales. A moving inducer presented in the central visual field can change the position of a flash presented in the far periphery (Whitney & Cavanagh, 2000), as in the case of positional MAEs (see Backward shift induced by the motion aftereffect (positional MAE) subsection). Strong motion drag is observed when the test duration is brief, possibly because at long durations, accurate localization of the flash may suppress the mislocalization. The magnitude of motion drag peaks not when the target flash is closest in space and time to the motion but when the target flash is slightly ahead of the motion (Durant & Johnston, 2004; Watanabe, 2005a; Watanabe et al., 2003). The magnitude of motion drag as a function of the time lag between motion onset and the flash can be interpreted as reflecting the dynamics of population neural responses to that motion, which have slow adaptation components (Roach & McGraw, 2009). The position shift of a target can be measured not only by perceptual judgments but also by motor responses. It has been suggested that the time course of a target position shift can be estimated from the curved trajectory of a target-reaching hand movement (Whitney, Westwood, & Goodale, 2003). It remains controversial, however, whether this hand movement reflects unconscious dynamics of target position change or the manual following response, i.e., a direct, rapid, and involuntary modulation of hand position by the motion field (Saijo, Murakami, Nishida, & Gomi, 2005; Whitney et al., 2007). 
Motion drag may be a high-level effect, since it can be induced by a variety of high-level motion. Induction by global Gabor motion indicates the involvement of the motion signal produced after global motion pooling (Scarfe & Johnston, 2010). Motion drag is induced by object motion seen only through a narrow slit (Watanabe, Nijhawan, & Shimojo, 2002) and by object motion rendered invisible by occlusion (Watanabe et al., 2003). Induction of motion drag by the perceived or attentively tracked motion of ambiguous motion stimuli indicates the contribution of an attentive tracking mechanism to motion drag (Shim & Cavanagh, 2004, 2005). A recent study demonstrates strong modulation by voluntary attention (Tse et al., 2011). Motion drag requires awareness of motion, since suppression of motion from awareness by binocular rivalry removes the effect (Watanabe, 2005b). 
A possible mechanism of motion drag is that the flash location is encoded in relation to the moving context; because the appearance of the flash is apparently delayed, the motion context has already shifted forward by the apparent time of the flash. As a result, the flash location, being linked with the motion, also appears to shift forward in the external frame. 
A potential linkage between motion drag and perisaccadic mislocalization (Ross, Morrone, & Burr, 1997) has also been suggested. That is, when a flashed object is presented beyond the end point of apparent motion, it is mislocalized not in the same direction as but in the opposite direction to the apparent motion, just like saccadic compression toward the saccade target (Shim & Cavanagh, 2006). 
Backward shift induced by the motion aftereffect (positional MAE)
Motion adaptation causes a backward position shift at adapted and non-adapted locations (Nishida & Johnston, 1999; Snowden, 1998). A process similar to this positional MAE can produce an apparent size change (Whitaker et al., 1999). 
Positional MAEs occur even when the adaptation and test stimuli are in significantly different spatial locations (McGraw & Roach, 2008; Whitney & Cavanagh, 2003) or have different orientation, spatial frequency, contrast (McGraw, Whitaker, Skillen, & Chung, 2002), and chromatic composition (McKeefry, Laviers, & McGraw, 2006). This implies that a positional MAE is mediated, at least partially, by a mechanism distinct from the mechanism underlying the conventional static MAE. Further evidence of dissociation of the static MAE and the position shift is given by the spatial frequency-contingent aftereffect (Bulakowski, Koldewyn, & Whitney, 2007). It remains an open question whether the positional MAE and the flicker MAE share a common mechanism; at least, they are similar with regard to the remote induction effect (McGraw & Roach, 2008). In this regard, the positional MAE may have two components: a transient component that appears from the onset and a sustained component that develops over time (Nishida & Johnston, 1999). 
A positional MAE occurs even when motion during adaptation is excluded from awareness by crowding (Whitney, 2005). This is found even when the adaptation and test have orthogonal orientations (Whitney, 2005) or when they are second-order motion (Harp et al., 2007). These findings imply that the adaptation occurring at early levels can induce positional MAEs, but they do not necessarily imply that the position shift itself occurs at an early level. A recent study reports that adaptation to implied motion from static photographs induces a position shift (Pavan et al., 2010). The positional MAE is greatly reduced when TMS is delivered to MT/V5 (McGraw, Walsh, & Barrett, 2004). 
Flash-lag effect
The position of a flashed object appears to lag behind a moving object. Although this illusion was first reported in the 1920s (Hazelhoff & Wiersma, 1924; Nijhawan, 2002), a number of studies have investigated the flash-lag effect since its rediscovery in 1994 (Nijhawan, 1994), and several reviews have already been published (Eagleman & Sejnowski, 2007; Krekelberg, 2003; Nijhawan, 2002; Whitney, 2002). 
Flash lag occurs under a variety of conditions, including motion from eye movements (Nijhawan, 2001), motion in depth (Harris, Duke, & Kopinska, 2006; Ishii, Seekkuarachchi, Tamura, & Tang, 2004; Lee, Khuu, Li, & Hayes, 2008), and movements in other attributes (Sheth, Nijhawan, & Shimojo, 2000) and modalities (Alais & Burr, 2003; Arrighi, Alais, & Burr, 2005). It has also been shown that the perceived position of a flash is not uniformly displaced, but instead shifts toward a single point of convergence that follows the moving object from behind at a fixed distance (Watanabe & Yokoi, 2006, 2007, 2008). 
Flash lag and motion drag are similar in that the stimulus consists of a combination of flash and motion, but their shift directions are opposite and their conditions are different in many respects. In the case of flash lag, motion is presented as a moving object, and the flash position is judged relative to the moving object. The flash object is often perceptually segregated from the moving object. In the case of motion drag, on the other hand, motion can be presented as a texture movement within a stationary field, and the flash position is judged relative to another flash or an external reference. The flash object is often perceptually grouped with the moving field. It should be noted that human judgments about different aspects of perceptual space are not always consistent with one another, since they are based on the measurements of specific local relationships not on a globally coherent spatial representation in a common spatial coordinate. It has also been suggested that the flash and motion mutually interact with each other (Linares, López-Moliner, & Johnston, 2007). 
Flash lag is caused by a spatial shift or a temporal shift, or by both. While the spatial shift implies a spatial shift of the moving object as found in motion-induced mislocalizations (see above), the temporal shift implies an apparent delay of the flash object relative to the moving object (Murakami, 2001a, 2001b; Whitney & Murakami, 1998; Whitney, Murakami, & Cavanagh, 2000). 
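Under a pure latency-difference (temporal shift) account, the expected spatial lag is simply the object speed multiplied by the differential delay. With illustrative numbers (assumed for the sake of the arithmetic, not measurements from any cited study):

$$\Delta x = v\,\Delta t, \qquad \text{e.g., } v = 10\ \text{deg/s},\ \Delta t = 80\ \text{ms} \;\Rightarrow\; \Delta x = 0.8\ \text{deg}.$$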
The nature of this apparent delay remains controversial. The points in dispute are whether the apparent flash delay implies long onset latency or long persistency, whether the delay reflects early signal processing or late object processing, and whether the apparent delay reflects the actual time course of neural processing or is instead a subjective interpretation of the event that does not necessarily reflect the time course of neural processing. 
On the one hand, there is a suggestion that the relative flash delay reflects a shorter latency of early neural responses for a predictive moving stimulus than for an abrupt flash, due to preactivation by the motion stimulus before it reaches the center of the receptive field of the neuron (Berry, Brivanlou, Jordan, & Meister, 1999). Manipulation of neural delay by using equiluminant stimuli and luminance noise can modulate the magnitude of flash lag (Chappell & Mullen, 2010). On the other hand, it has been reported that the magnitudes of within- and cross-modal flash lags are incompatible with a simple latency difference account (Alais & Burr, 2003; Arrighi et al., 2005) and that temporal tuning of the tilt illusion suggests the flash delay relative to motion is too small to explain the flash lag (Arnold, Durant, & Johnston, 2003). 
A promising hypothesis about the apparent timing difference between flash and motion is that motion deblurring may reduce visual persistence of moving objects, but not flashed objects (Moore & Enns, 2004; see Figure 4). In agreement with this persistency account, position judgments focusing on flash onset are reported to abolish flash lag (Gauch & Kerzel, 2009). A classical notion of the mechanism of visual persistence is that it reflects a sluggish passive response of early sensors, but a more recent view is that visual persistence is a product of, or an interpretation by, active high-level processes acting to preserve object continuity (Dixon & Di Lollo, 1994; Moore & Enns, 2004; Moore, Mordkoff, & Enns, 2007). The finding that the apparent delay is larger for new object appearances than for property changes also suggests the involvement of object-level processing on apparent flash delay (Kanai, Carlson, Verstraten, & Walsh, 2009). 
Figure 4. Contribution of object updating to flash-lag effect (Moore & Enns, 2004). (Top) Standard flash-lag effect is observed with continued motion. (Bottom) When the moving disk makes an abrupt size change at the timing of the flash, the smaller disk appears to persist at that position and is accurately judged as being aligned with the flash.
Other phenomena
The onset position of a moving object appears to shift forward (Fröhlich effect) or backward (onset repulsion effect) depending on the stimulus condition (Kerzel & Gegenfurtner, 2004; Müsseler & Kerzel, 2004; Thornton, 2002). An orthogonal turning point of a motion trajectory appears to shift backward relative to the subsequent motion direction (Nieman, Sheth, & Shimojo, 2010). 
In an asynchronous binding effect, a bar gradually changes size as it moves horizontally. Somewhere along its trajectory, it also changes to a different color for one frame. Observers report that the new color is perceptually assigned to a different sized bar at a new spatial location (Cai & Schlag, 2001; Sundberg, Fallah, & Reynolds, 2006). This striking illusion indicates that the apparent delay of an abrupt change relative to a continuous change not only leads to apparent position misalignment (i.e., flash lag) but also to attribute misbinding. 
Temporal properties of motion processing
Perceptual latency and apparent timing
Perception is not instantaneous, since the transmission and processing of sensory information by neural mechanisms takes time. Broadly speaking, two approaches have been taken to understand the perceptual timing of visual motion. One investigates when a visual event appears to occur to the observer, and the other investigates when a visual event is recognized by the observer. The first question concerns subjective event time, and the second question concerns the objective time of neural processing (brain time). These two do not necessarily have to be directly related to each other (Dennett & Kinsbourne, 1992; Johnston & Nishida, 2001; Nishida & Johnston, 2010). 
Next to flash lag, the subjective timing illusion that has attracted a great deal of attention in vision science is color–motion asynchrony. In a typical presentation, a green pattern moving upward and a red pattern moving downward are alternated at the rate of 1–2 Hz. Most observers find it difficult to tell which direction is associated with which color. In addition, when the direction change occurs about 60–100 ms earlier than the color change, the observers reliably bind the two attributes, confidently reporting that the events appear simultaneous (Moutoussis & Zeki, 1997). This effect is observed even when perceived color changes are dissociated from spectral changes (Zeki & Moutoussis, 1997). The magnitude of apparent asynchrony is affected by various factors including attention (Paul & Schyns, 2003) and stimulus saliency (Adams & Mamassian, 2004). It is significantly reduced when the direction change angle is changed from 180 to 90 deg (Arnold & Clifford, 2002; Bedell, Chung, Ogmen, & Patel, 2003). The apparent asynchrony is consistent with how the magnitude of a color-contingent MAE changes with the relative timing of color and motion (Arnold, Clifford, & Wenderoth, 2001). The apparent synchrony obtained with binding judgments for repetitive changes diminishes or disappears for temporal-order judgments between a single color change and a single direction change (Aymoz & Viviani, 2004; Bedell et al., 2003; Nishida & Johnston, 2002; Viviani & Aymoz, 2001; see also Amano, Johnston, & Nishida, 2007; Linares & López-Moliner, 2006). One interpretation of the color–motion asynchrony is that it reflects asynchronous awareness of color and motion, i.e., perceptual latency is longer for motion than for color (Moutoussis & Zeki, 1997; Zeki, 2003; Zeki & Bartels, 1999). This is a brain time account. Alternatively, the illusion may be caused by an error in generating proper neural codes to represent subjective time. According to the time marker hypothesis (Nishida & Johnston, 2002), color–motion asynchrony results from matching inappropriate time markers (salient features), with a color change being matched with a position change (motion) rather than with a motion direction change. This is because color change is a first-order temporal change (first-order temporal derivative of color), while motion direction change is a less-salient second-order property, a change in the direction of change. This hypothesis does not exclude the possibility that processing latency differences affect perceptual asynchrony when they affect time markers. In agreement with this hypothesis, color–motion asynchrony is not accompanied by a corresponding difference in perceptual latency when the latency is estimated from cortical responses or behavioral reaction time (Amano et al., 2007; Nishida & Johnston, 2002), and second-order temporal changes appear delayed relative to first-order temporal changes regardless of the stimulus attributes involved (Nishida & Johnston, 2002). The time marker hypothesis, however, cannot account for the small asynchrony between color and orientation (Zeki, 2003; Zeki & Moutoussis, 1997). Furthermore, the neural correlates of time marker processing remain unspecified. 
One can more directly estimate the objective timing of neural processing by measuring response latencies than by asking observers to judge the relative timing of events. Nowadays, we have several invasive and non-invasive methods to accurately measure the time course of cortical responses to a visual input, but perceptual latency is not easy to estimate from neural responses alone without knowing which cortical activity corresponds to a given perceptual decision. Although behavioral reaction time is likely to be correlated with perceptual latency, it also includes an unknown duration for post-decision processing. The limitations of neural and behavioral latencies can be overcome by correlating the two latencies measured at the same time (Hanes & Schall, 1996). On the basis of this idea, it has been shown that the way perceptual latency to the onset of coherent motion varies with coherence level can be explained by leaky integration models applied to neural responses in extrastriate cortical areas (Amano et al., 2006; Cook & Maunsell, 2002; Ditterich, 2006; Roitman & Shadlen, 2002). These findings support the diffusion model for reaction times (Ratcliff, 2006). While human behavioral reaction time to visual motion is about 200–400 ms for voluntary responses (Amano et al., 2006), it is about 100 ms for involuntary ocular or manual following responses, which is comparable to the peak latency of motion-evoked MEG responses (Amano, Kimura, Nishida, Takeda, & Gomi, 2008). Simple and choice reaction times to changes in visual motion direction and speed can be described as functions of velocity changes in the two orthogonal directions of the initial vector (see Mateeff, Genova, & Hohnsbein, 2005, for a review). It has also been shown that reaction time to judge the motion direction of bistable stimuli is affected little by the magnitude of ambiguity (Takei & Nishida, 2010). 
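The leaky-integration account can be sketched with a simple simulation (Python; all parameter values are illustrative assumptions, not fitted to any of the cited data). Noisy evidence with drift proportional to motion coherence is accumulated, with leak, until a bound is crossed; mean reaction time then falls as coherence rises.

```python
import numpy as np

def leaky_accumulator_rt(coherence, leak=0.005, noise=1.0, bound=30.0,
                         non_decision_ms=300.0, rng=None):
    """One simulated trial: integrate noisy motion evidence to a bound."""
    rng = np.random.default_rng() if rng is None else rng
    v = 0.0
    for t_ms in range(1, 10001):  # 1-ms steps, capped at 10 s
        v += coherence - leak * v + noise * rng.standard_normal()
        if abs(v) >= bound:
            return t_ms + non_decision_ms  # decision time + residual processes
    return 10000 + non_decision_ms

rng = np.random.default_rng(0)
for coh in (0.2, 0.5, 1.0):  # higher coherence -> steeper drift -> faster RT
    rts = [leaky_accumulator_rt(coh, rng=rng) for _ in range(200)]
    print(coh, round(float(np.mean(rts))))
```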
Discrete motion processing
When an object having both luminance edges and equiluminant color edges is continuously moving, an illusory jitter is perceived (Arnold & Johnston, 2003). This illusion may reflect a process in which the relative position inconsistency caused by the apparent speed mismatch between the two types of edges is built up and resolved at a given rate (Arnold & Johnston, 2003, 2005). It has also been suggested that the discrete update of visual representation may be related to synchronous neural activity at alpha band (∼10 Hz; Amano, Arnold, Takeda, & Johnston, 2008). 
In stroboscopic conditions, rotating objects may appear to rotate in the reverse direction due to undersampling. A seemingly similar phenomenon occurs in constant light (Purves, Paydarfar, & Andrews, 1996). There is an ongoing debate about whether this continuous Wagon Wheel illusion is caused by discrete sampling of motion information by the visual system at a rate between 10 and 15 Hz (Andrews & Purves, 2005; Andrews, Purves, Simpson, & VanRullen, 2005; Purves et al., 1996; VanRullen, 2006, 2007; VanRullen, Pascual-Leone, & Battelli, 2008; VanRullen, Reddy, & Koch, 2005, 2006) or by a different process, such as perceptual rivalry between forward motion and adaptation-induced backward motion (Holcombe, Clifford, Eagleman, & Pakarian, 2005; Holcombe & Seizova-Cajic, 2008; Kline & Eagleman, 2008; Kline, Holcombe, & Eagleman, 2004). The reversal does not occur globally (Kline et al., 2004); it occurs locally in the object of the observer's attention (VanRullen, 2006). It is enabled, but may not be exclusively explained, by motion adaptation (VanRullen, 2007). The illusion is observed also with non-visual stimuli (Holcombe & Seizova-Cajic, 2008). Although discrete sampling by peripheral motion detectors seems unlikely to cause the continuous Wagon Wheel illusion, whether discrete sampling by high-level motion processing exists and contributes to the illusion remains an open question. 
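The aliasing arithmetic behind the discrete-sampling account is simple and can be sketched as follows (Python; the 13-Hz sampling rate is an assumption chosen from within the 10–15 Hz range under debate). Whenever the rotation sampled per frame exceeds half the wheel's symmetry period, the nearest-match correspondence is backward.

```python
def apparent_step_deg(n_fold, rot_hz, sample_hz):
    """Apparent rotation per sample for an n-fold symmetric wheel."""
    period = 360.0 / n_fold                       # deg between identical views
    step = (360.0 * rot_hz / sample_hz) % period  # true step, folded into one period
    return step - period if step > period / 2 else step  # alias to the nearest match

# A 12-spoke wheel rotating at 1 Hz, sampled at 13 Hz, steps 27.7 deg per
# sample; the nearest identical view is -2.3 deg away, so it appears reversed.
print(apparent_step_deg(12, 1.0, 13.0))  # about -2.3 (backward)
print(apparent_step_deg(12, 0.2, 13.0))  # about +5.5 (forward)
```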
Interactions with motor systems
Visual motion information is used to control involuntary and voluntary motor responses of the eyes, hands, and other parts of the body. Numerous studies have examined how motion information is used to control voluntary motor responses, such as pursuit eye movements (Ilg, 2008), saccadic eye movements (Etchells, Benton, Ludwig, & Gilchrist, 2010), and interception (Merchant & Georgopoulos, 2006). The extent to which motion processing for pursuit is common to that for perception is extensively reviewed in an article (Spering & Montagnini, 2011) included in a recent special issue of Vision Research on perception and action. 
Large-field motion produces an involuntary and rapid eye movement (Miles et al., 1986). This ocular following response has been used as a behavioral tool to analyze a subsystem of “vision for motor control” tuned to fast first-order motion (Hayashi et al., 2008, 2010; Masson, Busettini, Yang, & Miles, 2001; Masson & Castet, 2002; Masson et al., 2000; Masson, Yang, & Miles, 2002; Sheliga, Chen, Fitzgibbon, & Miles, 2005; Sheliga, Chen et al., 2006; Sheliga et al., 2008; Sheliga, Fitzgibbon, & Miles, 2009; Sheliga, Kodaka et al., 2006; Yang & Miles, 2003). A similar motion-induced response is observed for reaching hand movements (manual following response; Amano, Kimura et al., 2008; Gomi, Abekawa, & Nishida, 2006; Saijo et al., 2005; Whitney et al., 2007). 
On the other hand, observers' actions, in particular eye movements, exert a significant influence on motion perception. 
For the estimation of motion in the environment, retinal motion signals should be combined with extraretinal signals about movements of the eyes and body (see Angelaki, Gu, & DeAngelis, 2009; Freeman, 2007a, for review). Once the retinal motion signal is bound with the eye movement signal during smooth pursuit, the observer has no direct access to retinal image motion (Freeman, Champion, Sumnall, & Snowden, 2009). Illusory motion perception during pursuit can be ascribed to underestimation of extraretinal motion signals. This may be the result of optimal estimation (Freeman, Champion, & Warren, 2010). 
To see a stable visual world, the visual system discounts the effect of involuntary jitter of the eyes by being insensitive to large-field uniform motion (Martinez-Conde, Macknik, & Hubel, 2004). One can reveal the operation of this stabilization mechanism by adapting local motion sensors with dynamic noise or by flickering the surround area; the observers are then able to see the image jitter caused by their own eye movements (Murakami, 2003; Murakami & Cavanagh, 1998, 2001; Sasaki, Murakami, Cavanagh, & Tootell, 2002). Involuntary eye jitter impairs detection of small motion (Murakami, 2004), whereas it can improve fine pattern perception (Rucci, Iovin, Poletti, & Santini, 2007). In addition, the positive correlation between fixation stability and the magnitude of illusory motion in a static display (“Rotating Snakes”) suggests the contribution of involuntary eye jitter to this powerful illusion (Murakami, Kitaoka, & Ashida, 2006; see also Backus & Oruç, 2005; Conway, Kitaoka, Yazdanbakhsh, Pack, & Livingstone, 2005; Hisakata & Murakami, 2008; Kuriki, Ashida, Murakami, & Kitaoka, 2008, for possible mechanisms of this illusion, and Burr & Thompson, 2011, for a review on illusory motion from stationary pictures). 
Finally, perception of retinal motion is dynamically and anisotropically modulated at the time of saccades (Lee & Lee, 2005; Park, Lee, & Lee, 2001). Apparent motion is perceived as a coherent event across saccades (Cavanagh et al., 2010; Fracasso, Caramazza, & Melcher, 2010). 
Object motion and cross-attribute integration
Early visual processing estimates a retinotopic map of motion vectors, but the observer eventually perceives the movements of objects in world coordinates. The visual system has a variety of mechanisms for perception of object movements. Some of them include interactions with other sensory modules. 
Vector analysis
Integration of retinal motion signal with extraretinal signal about eye and body movements, addressed in the Interactions with motor systems section, is one mechanism contributing to coordinate transformation from retinotopic to non-retinotopic motion. Even without eye movements, retinotopic motion vectors of multiple moving elements, which appear to belong to a common object or framework, are perceptually decomposed into a global component (a common vector over the elements, or the motion of the framework, possibly computed by the vector average of element motion) and local components (residual relative motion among the elements within the framework). This vector analysis (Johansson, 1973) is a crucial mechanism for extracting meaningful object movements and for recognizing natural dynamic events, such as biological motion (Johansson, 1973; Troje, 2002; see Biological motion subsection). Vector analysis also affects the motion discrimination performance with complex motion stimuli (Tadin, Lappin, Blake, & Grossman, 2002). However, the neural processing underlying vector analysis remains poorly understood. 
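A minimal sketch of such a decomposition is given below (Python; it assumes, as one of the possibilities noted above, that the common component is the vector average of the element velocities).

```python
import numpy as np

def vector_analysis(velocities):
    """Split element velocities into a common (global) vector and residuals."""
    v = np.asarray(velocities, dtype=float)  # shape (n_elements, 2)
    common = v.mean(axis=0)                  # global/framework motion
    return common, v - common                # residual relative motion

# Instantaneous velocities of points on a wheel rolling rightward at 2 px/frame:
hub, rim_top, rim_bottom = [2, 0], [4, 0], [0, 0]
common, residual = vector_analysis([hub, rim_top, rim_bottom])
print(common)    # [2. 0.]  -> common translation of the wheel
print(residual)  # [[0. 0.] [2. 0.] [-2. 0.]] -> residual rotation about the hub
```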
Perceptual organization
Motion perception of a scene depends not only on retinotopically extracted motion signals but also on how those motion signals are assigned to objects. In agreement with this view, form information controls local motion integration, as was pointed out in previous sections. In addition, it is known that perceptual grouping and figure–ground segregation exert considerable influence on various aspects of motion perception. Integration of surface contours moving behind occluders is affected by luminance contrast polarity and color (Su, He, & Ooi, 2010a, 2010b). Speed discrimination is improved when the number of elements is increased but remains unchanged when the area of a single stimulus is increased (Verghese & Stone, 1995, 1996). It is phenomenal segregation, rather than physical separation, that controls this effect (Verghese & Stone, 1997). Speed discrimination across a border is impaired when motion appears to cross the border, and the two regions separated by the border appear to be grouped into a single region (Verghese & McKee, 2006). Speed discrimination between two elements is impaired when one of the elements is seen on a different phenomenal depth plane because of illusory contours (Bertamini, Bruno, & Mosca, 2004). In figure–ground assignment, an object is more likely to be seen moving in front (i.e., as a figure to which the motion signal is assigned) when its contour is advancing rather than receding (Barenholtz & Tarr, 2009) and when its contour segment is convex rather than concave (Barenholtz, 2010). 
Trajectory integration
For moving objects, we see the properties of the objects, such as form, color, and position, in addition to their movements. We have already seen how motion signals affect the object position, but motion signals also affect processing of the form and color of moving objects. 
When a pattern moves behind stationary narrow slits, the shape of the pattern becomes clearly visible (Burr & Ross, 2004; Fahle & Poggio, 1981; Morgan, Findlay, & Watt, 1982; Nishida, 2004). A mechanism suggested to be responsible for this motion-enhanced pattern perception is spatiotemporal integration of form information along the trajectory of motion (trajectory integration) rather than at the same retinal locations (Burr & Ross, 1986; Nishida, 2004). 
When a moving object changes its color (e.g., between red and green), the observer perceives the mixed color (yellow) even when the two colors are not mixed on the retina (Kanai, Sheth, & Shimojo, 2007; Nishida, Watanabe, Kuriki, & Tokimoto, 2007). On the basis of this principle, a change in perceived motion path can alter apparent color (Figure 5A). Again, a mechanism suggested to be responsible for this motion-induced color mixture is spatiotemporal integration of color information along the trajectory of motion. 
Figure 5. Motion-based integration of object properties. (A) Trajectory integration of color. Space–time plots of multipath displays in which integration of color signals along a rightward color-alternating path results in color mixing, whereas integration along a leftward color-keeping path results in color segregation. When the path-length ratio of the color-keeping path is 1 (left), the color-keeping path predominates in motion perception. When the path-length ratio is 4 (right), the color-alternating path predominates. In accordance with this direction change, apparent color also changes. Reproduced with permission from Watanabe and Nishida (2007). (B) Mobile computing. In each patch, color alternates between red and green and motion alternates between inward and outward. The task is to report the direction of the red dots while fixating the central cross. When the observers attend to one location, they cannot judge the binding between color and direction when the alternation rate is fast (say 4 Hz). However, when the observers are shown a guide ring that allows them to attentively track a specific combination of color and motion over space and time, they can perform the binding task due to spatiotemporal integration of object features. Modified with permission from Cavanagh et al. (2008).
Trajectory integration can account for shifts, misattributions, and non-retinotopic mixtures of visual features, such as vernier offset, during apparent motion (Boi, Oğmen, Krummenacher, Otto, & Herzog, 2009; Enns, 2002; Kawabe, 2008; Öğmen, 2007; Otto, Oğmen, & Herzog, 2006, 2008; Shimozaki, Eckstein, & Thomas, 1999). It may also be related to impaired detection of a probe presented on the path of apparent motion (Hogendoorn, Carlson, & Verstraten, 2008; Yantis & Nakama, 1998). 
Temporal integration can improve the signal-to-noise ratio, but temporal integration at fixed retinal locations would induce motion blur for moving inputs. Trajectory integration is a useful mechanism for improving the signal-to-noise ratio without introducing motion blur (Burr, 1980; Burr & Ross, 1986). Indeed, it makes the temporal resolution of color perception, evaluated in terms of retinal temporal frequency, higher for moving patterns than for stationary flickering patterns (Watanabe & Nishida, 2007). The mechanism of motion deblur is known to be modulated by eye movements (Bedell & Lott, 1996; Bedell, Tong, & Aydin, 2010), and a similar modulation is also observed for the temporal resolution enhancement produced by trajectory color integration (Terao, Watanabe, Yagi, & Nishida, 2010). 
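The computational advantage of trajectory integration can be conveyed by a minimal simulation. The Python sketch below is purely illustrative: the noise level is arbitrary, and the trajectory is assumed to be known in advance, whereas the visual system must of course estimate it from the input. It compares averaging noisy frames at fixed retinal locations with averaging after compensating for the motion; both reduce noise, but only the latter preserves the sharpness of the moving edge.

```python
# Minimal sketch of trajectory integration. The trajectory (speed v) is
# assumed known here; the visual system must estimate it from the input.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames, v = 256, 8, 4              # pixels, frames, speed (pix/frame)
edge = (np.arange(n_pix) > n_pix // 2).astype(float)  # sharp luminance edge

# Noisy frames of the edge translating rightward at v pixels per frame
frames = np.stack([np.roll(edge, v * t) + rng.normal(0, 0.3, n_pix)
                   for t in range(n_frames)])

retinal = frames.mean(axis=0)               # average at fixed retinal locations
trajectory = np.stack([np.roll(frames[t], -v * t)   # undo the motion first
                       for t in range(n_frames)]).mean(axis=0)

# Both averages cut the noise by sqrt(n_frames), but fixed-location averaging
# smears the edge over v * (n_frames - 1) pixels, whereas the trajectory
# average keeps the one-pixel transition, so its peak gradient stays large.
print(np.abs(np.diff(retinal)).max(), np.abs(np.diff(trajectory)).max())
```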
Trajectory integration effects are produced not only by the motion of the objects themselves but also by the motion of a cue that guides the observer's attentive tracking (Cavanagh, Holcombe, & Chou, 2008; Holcombe & Cavanagh, 2008; Figure 5B). These novel techniques pave the way for investigating non-retinotopic “mobile computing” (Boi et al., 2009; Cavanagh et al., 2008). 
Motion sharpening
Trajectory integration is one mechanism for motion deblurring, but it cannot explain why blurred edges look sharper when they are moving than when stationary (Bex, Edgar, & Smith, 1995; Ramachandran, Rao, & Vidyasagar, 1974). A possible mechanism of this motion sharpening is the application of a compressive non-linear contrast response to dynamic inputs (Hammett, 1997; Hammett, Georgeson, & Barbieri-Hesse, 2003; Hammett, Georgeson, & Gorea, 1998). In comparison with other explanations, such as linear filtering by a biphasic visual response (Pääkkönen & Morgan, 2001), the notion of compressive non-linearity provides a better account of the motion sharpening observed with stationary stimuli surrounded by motion (Takeuchi & De Valois, 2000a) or presented briefly (Georgeson & Hammett, 2002). 
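The intuition behind the compressive account can be conveyed by a toy computation: passing a blurred edge profile through a compressive contrast response steepens its transition. The exponent below is an arbitrary illustrative choice, not a parameter fitted by Hammett, Georgeson, and colleagues.

```python
# Toy illustration: a compressive contrast response steepens a blurred edge.
# The exponent p is arbitrary and purely illustrative.
import numpy as np

x = np.linspace(-1, 1, 201)                 # position across the edge
blurred = np.tanh(x / 0.4)                  # blurred edge, contrast in [-1, 1]
p = 0.4                                     # compressive exponent (p < 1)
sharpened = np.sign(blurred) * np.abs(blurred) ** p

def transition_width(profile):
    """Width of the central region where |profile| < 90% of its maximum."""
    high = x[np.abs(profile) >= 0.9 * np.abs(profile).max()]
    return 2 * high[high > 0].min()         # profile is symmetric about x = 0

print(transition_width(blurred), transition_width(sharpened))  # narrower after compression
```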
Motion standstill
In the history of vision research, it was once emphasized that visual motion processing is separate from color and form processing. Later studies revealed a number of cross-attribute interactions, such as form-based motion integration and trajectory integration of form and color information, as reviewed in this paper. Nevertheless, a basic segregation in the processing of different attributes is still suggested by various psychophysical phenomena. Under conditions where motion signals are expected to be nulled, a quickly moving object appears to stand still, while its details (colors and textures) remain clearly visible (Lu et al., 1999a, 1999b). This motion standstill suggests that the color and form of moving objects can be perceived independently of motion processing. 
Multisensory object motion
The movement of an object can be detected non-visually. Auditory and tactile motion signals can be combined with visual motion signals to yield multisensory object motion perception. In multisensory combination, motion signals from different modalities are mixed with appropriate weights, or the motion signal of a stronger modality captures the others (Arrighi, Marini, & Burr, 2009; Harrison, Wuerger, & Meyer, 2010; López-Moliner & Soto-Faraco, 2007). In addition to cross-modal data fusion, non-visual (auditory) information can affect visual motion through apparent timing modulation (Freeman & Driver, 2008; Kafaligonul & Stoner, 2010; Kawabe, Miura, & Yamada, 2008; Kawabe et al., 2010). It has also been reported that MAEs occur cross-modally (Deas, Roach, & Mcgraw, 2008; Jain, Sally, & Papathomas, 2008; Kitagawa & Ichihara, 2002; see Alais, Newell, & Mamassian, 2010; Burr & Thompson, 2011, for detailed reviews of this topic). 
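Although the cited studies differ in detail, “appropriate weights” are commonly formalized with the standard reliability-weighted (maximum-likelihood) combination rule; the notation below is generic rather than taken from any one of these papers.

```latex
% Reliability-weighted combination of visual and auditory motion estimates;
% sigma_V and sigma_A denote the noise of each modality's estimate.
\hat{s} = w_V \hat{s}_V + w_A \hat{s}_A,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2},
\qquad
w_A = 1 - w_V .
```

On this formulation, capture by the stronger modality is simply the limiting case in which one modality's weight approaches 1 as the other modality's noise grows.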
Motion-induced blindness
Visual motion is capable not only of altering the appearance of objects but also of erasing them from awareness altogether. In motion-induced blindness (MIB; Bonneh, Cooperman, & Sagi, 2001), when a global moving pattern is superimposed on high-contrast stationary or slowly moving stimuli, the latter disappear and reappear alternately for periods of several seconds. Several hypotheses about the mechanism of MIB have been proposed: competition for attention (Bonneh et al., 2001), interhemispheric rivalry (Funk & Pettigrew, 2003), surface completion (Graf, Adams, & Lages, 2002; Lages, Adams, & Graf, 2009), perceptual filling-in (Hsu, Yeh, & Kramer, 2004, 2006), perceptual scotoma (New & Scholl, 2008), simultaneous changes in sensitivity and decision criterion (Caetta, Gorea, & Bonneh, 2007), adaptation (Gorea & Caetta, 2009), and motion streak suppression (Wallis & Arnold, 2009). 
The question most relevant to the current review is how much motion processing contributes to MIB. Several findings suggest that motion does not play a critical role: MIB is tuned to temporal frequency, not to speed (Wallis & Arnold, 2008), and a similar blindness effect can be produced by non-moving flicker (Kanai, Moradi, Shimojo, & Verstraten, 2005; Kawabe & Miura, 2007; Wallis & Arnold, 2009). On the other hand, involvement of motion processing is suggested by recent findings that MIB is stronger at the trailing edges of movement than at the leading edges (Wallis & Arnold, 2009) and that MIB is induced by the MAE (Lages et al., 2009). 
Three-dimensional motion processing
Depth perception from motion
Visual motion processing contributes to 3D perception in a variety of ways. 
First, there are two potential binocular cues for motion in depth—a change in horizontal binocular disparity and an interocular velocity difference. Although these cues are redundant under natural conditions, recent studies separately analyzed their contributions by controlling interocular and temporal correlations of the stimuli and showed that the interocular velocity difference, in addition to the change in horizontal disparity, plays a considerable role in perception of motion in depth (Brooks & Stone, 2004, 2006; Fernandez & Farell, 2006; Rokers, Cormack, & Huk, 2008; Shioiri, Kakehi, Tashiro, & Yaguchi, 2009; Shioiri, Saisho, & Yaguchi, 2000). With regard to the change in horizontal disparity, researchers exploiting the Pulfrich phenomena have been investigating whether binocular disparity and motion information are jointly encoded or not (Anzai, Ohzawa, & Freeman, 2001; Qian & Andersen, 1997; Qian & Freeman, 2009; Read & Cumming, 2005a, 2005b; Sohn & Lee, 2009). 
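The logic of the cue-isolation experiments above is easier to see when the two cues are written out. For a single tracked point with monocular image positions α_L and α_R (generic notation, not taken from the cited papers), the two cues are numerically identical but are computed in a different order:

```latex
% delta: horizontal disparity; alpha_L, alpha_R: monocular image positions.
\delta(t) = \alpha_L(t) - \alpha_R(t)
\quad\Longrightarrow\quad
\underbrace{\frac{d\delta}{dt}}_{\text{change of disparity}}
= \underbrace{\dot{\alpha}_L - \dot{\alpha}_R}_{\text{interocular velocity difference}} .
```

The change of disparity requires binocular matching before temporal differentiation, whereas the interocular velocity difference differentiates each eye's signal first; decorrelating the two eyes' images therefore silences the former while sparing the latter, which is how the cited studies separated the contributions of the two cues.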
Second, a large field of translational, radial, or circular global motion is an optic flow pattern that carries information about the observer's own 3D movement in the stationary environment. Detailed reviews of optic flow processing can be found in Duffy (2003) and Warren (2003, 2008). Topics of recent research on optic flow processing include flow parsing of motion due to self-movement from that due to object movement (Warren & Rushton, 2007); the mechanism of the optic flow illusion, in which the focus of a radially expanding pattern of moving dots appears shifted when another pattern of translating dots is transparently superimposed (Duffy & Wurtz, 1993; Duijnhouwer, van Wezel, & van den Berg, 2008; Hanada, 2005; Lappe & Duffy, 1999; Royden & Conti, 2003); estimation of travel distance from optic flow (Frenz, Bremmer, & Lappe, 2003; Frenz & Lappe, 2005); and cross-modal integration of self-motion information with vestibular and proprioceptive signals (Butler, Smith, Campos, & Bülthoff, 2010; Gu, Deangelis, & Angelaki, 2007; Gu, Fetsch, Adeyemo, DeAngelis, & Angelaki, 2010; Nardini, Jones, Bedford, & Braddick, 2008; Shaikh et al., 2005). 
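For reference, the structure that optic flow analysis must exploit can be stated with the standard instantaneous flow equations (a textbook formulation, not specific to the studies cited above). For a pinhole eye of unit focal length translating with velocity (T_x, T_y, T_z) through a rigid scene of depth Z(x, y), the image velocity at image point (x, y) is

```latex
% Rotation is omitted for brevity.
\dot{x} = \frac{x\,T_z - T_x}{Z(x,y)},
\qquad
\dot{y} = \frac{y\,T_z - T_y}{Z(x,y)} .
```

For forward translation (T_z > 0), the flow radiates from a focus of expansion at (T_x/T_z, T_y/T_z), whose location is independent of scene depth; rotations and independently moving objects break this simple pattern, which is what motivates the flow-parsing work cited above.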
Third, motion information is used to perceive 3D spatiotemporal structures, such as depth from motion parallax (McKee & Taylor, 2010; Nawrot, 2003; Rauschecker, Solomon, & Glennerster, 2006; Svarverud, Gilson, & Glennerster, 2010) and structure from motion (perception of 3D object structure from the motion gradient field; Aaen-Stockdale et al., 2010; Fernandez & Farell, 2007, 2009). Estimation of 3D structure from motion includes biological motion perception, which has probably been the most extensively studied topic of 3D motion processing in the last decade. 
Biological motion
From a small number of point lights attached to a human walker (the point-light walker), people can obtain a vivid impression of a human figure as well as a variety of information about the walker (Johansson, 1973). Similarly, from motion-based information of the head and face alone, people can discriminate individuals and gender (Hill & Johnston, 2001). To generate effective stimuli for psychophysical experiments, models based on decomposition of biological motion into multiple components have been used to visualize and exaggerate differences in action style (Pollick, Fidopiastis, & Braden, 2001), in facial expression (Pollick, Hill, Calder, & Paterson, 2003), and in male and female walking patterns (Troje, 2002; see Blake & Shiffrar, 2007; Troje, 2008, for more detailed reviews of biological motion). 
Characterizing biological motion perception is not easy, since many stages of visual motion processing are involved in this phenomenon (Troje, 2008). It is often claimed that humans are particularly sensitive to biological motion, but it has also been suggested that sensitivity to biological motion is comparable to sensitivity to structured non-biological motion (Hiris, 2007). The long integration time of biological motion (Neri et al., 1998) may reflect a general property of global motion processing (Burr & Santoro, 2001). Biological motion can be seen in the periphery, but there are mixed results about whether size scaling is sufficient (Gurnsey, Roddy, Ouhnana, & Troje, 2008; Thompson, Hansen, Hess, & Troje, 2007) or insufficient (Ikeda, Blake, & Watanabe, 2005) to equate discrimination and identification of point-light walkers across the visual field. The disagreement might be ascribable to task differences. It has been reported that biological motion perception is cue-invariant (Aaen-Stockdale et al., 2008), but at least under some conditions, second-order motion is less effective than first-order motion (Gurnsey & Troje, 2010). Biological motion may enhance cross-modal binding (Arrighi et al., 2009; Saygin, Driver, & de Sa, 2008) through learning (Petrini, Holt, & Pollick, 2010). 
There are two types of information for biological motion: local motion and dynamic global form. One can perform some biological motion tasks, such as backward–forward discrimination, from dynamic global form information alone (Beintema, Georg, & Lappe, 2006; Beintema & Lappe, 2002; Lange, Georg, & Lappe, 2006), and biological motion perception is impaired when global form perception is impaired (Hunt & Halper, 2008; Lu, 2010; Wittinghofer, De Lussanet, & Lappe, 2010). On the other hand, there are cases where local motion information alone is sufficient to perform the task (Casile & Giese, 2005; Chang & Troje, 2008; Westhoff & Troje, 2007). The recent consensus seems to be that both motion and form contribute to biological motion perception, with their weights depending on the required task (Chang & Troje, 2009b; Garcia & Grossman, 2008; Thirkettle, Benton, & Scott-Samuel, 2009). It has been suggested that we should distinguish among the different processing levels involved in biological motion perception, such as life detection, structure from motion, action recognition, and style recognition (Troje, 2008). 
Biological motion perception is significantly impaired by upside-down inversion of the stimulus. It has been suggested that the reference frame of this inversion effect is primarily egocentric (Troje, 2003), with an additional contribution of gravity (Chang, Harris, & Troje, 2010) and little contribution of prior knowledge about display orientation (Pavlova & Sokolov, 2003). Even when global form information is entirely disrupted, biological motion perceived from the accelerations of local foot motion is still subject to a considerable inversion effect (Chang & Troje, 2009a; Troje & Westhoff, 2006). 
Search and dual-task paradigms indicate that biological motion perception is attention-demanding (Cavanagh, Labianca, & Thornton, 2001; Thornton, Rensink, & Shiffrar, 2002), but it also includes components that are processed automatically, without attention, since peripheral task-irrelevant walkers can affect the processing of a central target walker (Thornton & Vuong, 2004). 
There are adaptation aftereffects (Jordan, Fallah, & Stoner, 2006; Troje, Sadr, Geyer, & Nakayama, 2006) and correlated changes with orientation (Brooks et al., 2008) for the perception of gender from the style of point-light walking. Objects moving in the forward direction, including walking people, induce backward motion in the dynamic background (backscroll illusion; Fujimoto, 2003; Fujimoto & Sato, 2006; Fujimoto & Yagi, 2008). Biological motion affects smooth eye movements (Coppe, de Xivry, Missal, & Lefèvre, 2010; Orban de Xivry, Coppe, Lefèvre, & Missal, 2010). Motor learning has a direct and highly selective influence on visual action recognition that is not mediated by visual learning (Casile & Giese, 2006). 
Concluding remarks
Motion perception is one of the most successful research areas in vision science, owing to the discovery and invention of useful stimuli that can psychophysically isolate the motion “module,” such as apparent motion (Kolers, 1972; Wertheimer, 1912), MAEs (Mather et al., 1998; Wohlgemuth, 1911), induced motion (Duncker, 1929), low-contrast drifting gratings (Burr & Ross, 1982; Levinson & Sekuler, 1975; Watson & Robson, 1981), random-dot kinematograms (Braddick, 1974; Julesz, 1971; Newsome & Paré, 1988; Williams, Phillips, & Sekuler, 1986), plaids (Adelson & Movshon, 1982), and optic flow patterns (Gibson, 1977). It is also fortunate that the neural correlates of the perception of these stimuli have been identified primarily in the so-called motion processing pathway, including V1, MT, and MST (Born & Bradley, 2005; McCool & Britten, 2008; Pack & Born, 2008). Some topics reviewed in this paper, such as local motion detection, local motion interactions, 2D vector estimation, and global motion perception, concern processing stages within the motion “module.” With good probes and well-defined target processes, motion research has attained a reasonably good understanding of the basic mechanisms. However, as reviewed here, recent research has revealed that motion processing is more complex than previously thought, including tight interactions with the processing of other attributes. 
Let me summarize recent advances in motion research in relation to two computational goals of visual motion processing. One is to estimate the pattern of retinal motion vectors from the image. The other is to generate a representation of moving objects. It is possible to regard the first goal as a subgoal of the second, although not all the mechanisms serving the first goal necessarily contribute to the second goal, nor do they all precede the mechanisms for the second goal in the processing hierarchy. The interest of psychophysical vision research has extended from the mechanisms contributing to the first goal to those contributing to the second goal. 
The mechanisms for the first goal, retinal motion estimation, can be localized mainly within the “visual motion module.” The major advances made in the last decade concern the computation of 2D vectors. A model that computes 2D vectors in MT from speed-selective integration of the outputs of motion energy sensors in V1 (Simoncelli & Heeger, 1998) was tested physiologically and psychophysically and became the standard view of the core mechanism of 2D vector computation (see Cross-orientation integration of 1D motion signals subsection). Vector estimation errors in noisy conditions were interpreted as resulting from statistically optimal estimation (see Cross-orientation integration of 1D motion signals and Speed perception subsections). In addition to this core process, it was shown that neural mechanisms sensitive to motion streaks or terminator motion also contribute to 2D vector computation (see Propagation of local 2D vector signals and Interactions with form information subsections). It was also shown that the motion integration process develops dynamically and flexibly changes how integration operates depending on the type of local motion (1D or 2D) and on form constraints (see Interactions with form information subsection). 
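To make the first stage of this standard view concrete, the sketch below implements a single Adelson–Bergen-style motion energy sensor of the kind the Simoncelli–Heeger model pools over. It is a minimal illustration with arbitrary filter parameters, not a reconstruction of either model.

```python
# Minimal Adelson-Bergen-style motion energy sensor: a quadrature pair of
# space-time oriented filters is squared and summed, and opponent energy
# (rightward minus leftward) signals direction. Parameters are arbitrary.
import numpy as np

def opponent_energy(stimulus, fx=0.1, ft=0.1, sigma=8.0):
    """stimulus: 2D array indexed [t, x]; returns rightward minus leftward energy."""
    t = np.arange(stimulus.shape[0])[:, None] - stimulus.shape[0] / 2
    x = np.arange(stimulus.shape[1])[None, :] - stimulus.shape[1] / 2
    envelope = np.exp(-(x**2 + t**2) / (2 * sigma**2))
    energies = []
    for direction in (+1, -1):                    # rightward, leftward
        phase = 2 * np.pi * (fx * x - direction * ft * t)
        even = np.sum(stimulus * envelope * np.cos(phase))  # quadrature pair
        odd = np.sum(stimulus * envelope * np.sin(phase))
        energies.append(even**2 + odd**2)         # phase-invariant energy
    return energies[0] - energies[1]

t = np.arange(64)[:, None]
x = np.arange(64)[None, :]
rightward_grating = np.cos(2 * np.pi * 0.1 * (x - t))   # drifts rightward
print(opponent_energy(rightward_grating) > 0,           # True
      opponent_energy(rightward_grating[:, ::-1]) < 0)  # True: mirror reverses it
```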
Research on the mechanisms underlying the second computational goal, object motion estimation, has made significant progress over the last decade. The involvement of cross-attribute interactions is a characteristic of these mechanisms. Motion-induced mislocalization effects (see Motion-induced position shift section) show that motion, position, and form are inseparable attributes in perception. Trajectory integration of form and color information (see Trajectory integration section) reveals that motion information plays a critical role in the perception of multiple-attribute objects in motion. Cross-modal interactions (see Multisensory object motion subsection) indicate convergence of visual and non-visual motion signals into object representations. Biological motion studies (see Biological motion subsection) have shown that motion information and dynamic form information jointly contribute to the recognition of complex object movements. Early motion processing also involves processing for object motion estimation. Inhibitory interactions among motion signals in space (center–surround suppression, see Center–surround interactions subsection), spatial frequency (see Interaction across different spatial scales subsection), and direction (motion transparency and direction repulsion, see Motion transparency subsection) are initial steps for segmenting motion signals that are likely to belong to separate objects, and they can be affected by form-based perceptual organization (see Center–surround interactions subsection). It was also shown that motion vectors assigned to objects are determined not by pure motion analysis but through tight interactions with the processing of object form information, such as border ownership and spatial configuration (see Stimulus specificity of 1D motion integration, Propagation of local 2D vector signals, and Interactions with form information subsections). 
Following these advances, what are the next challenges? 
While visual motion processing has been studied from a variety of perspectives, the linkage among different topics is not necessarily clear. This is in part because research topics are classified in terms of stimulus and task rather than the computations and mechanisms involved. Since motion processing consists of multiple stages and parallel routes, it is often difficult to fully predict how a given stimulus is processed by the whole system. To acquire a coherent understanding of this diversity of phenomena, we should attempt to organize a wide range of knowledge into a single model. 
This is a realistic challenge for the mechanisms contributing to the first computational goal, retinal vector estimation. The model is expected to consist of early visual responses before motion extraction, first-order and second-order motion detection, local motion interactions, and mechanisms for 2D vector estimation, including local motion pooling, motion streaks, and form-based modulations. The performance of the model can be compared with actual psychophysical data by including stages for neural response decoding and perceptual decision making, along with voluntary and involuntary motor control. Such an integrative model, if successfully built, would provide a standard framework for considering numerous psychophysical findings on visual motion perception, regardless of whether the input stimulus is made of dots, lines, gratings, or Gabors and whether the task is detection, discrimination, or rating. The model could include mechanisms for the rapid and slow dynamic changes produced by ambiguous input stimuli, luminance level, exogenous and endogenous attention, motion adaptation, and perceptual learning. The model would also help us specify the neural correlates of motion awareness. To account for psychophysical findings, the model should be primarily a functional one. Of course, it should be consistent with the latest knowledge about neural mechanisms, but paying too much attention to the details of neural processing could blur the computational meaning of the model. Understanding a system at multiple levels is critical for vision research (Marr, 1982). 
In theory, on top of the model for retinal vector estimation, we could develop a model for object motion representation. However, this is probably not a realistic challenge at present, since our understanding of the mechanisms for the second computational goal is still immature. Methodologically, it is not easy to investigate object-level processing as rigorously as is done in low-level visual psychophysics, since it lies beyond modular processing (Fodor, 1983): techniques for isolating the target mechanism and silencing the others cannot be used. Conceptually, “object” remains a vague term, with no precise definition that strict psychophysicists would accept. As a result of these limitations, many studies of object motion perception remain phenomenological, cognitive, or speculative. 
To bridge the gap between retinal motion vector estimation and object motion representation, we should have better understanding of the following three mechanisms: coordinate transformation, motion pattern analysis, and object representation. 
Object motion is represented in non-retinotopic coordinates defined relative to such references as the observer's body, the surrounding environment, and the framework to which the object belongs. One mechanism for coordinate transformation is vector analysis/decomposition (see Vector analysis subsection). Despite being an old notion, it remains poorly understood. Another mechanism for coordinate transformation is the integration of retinal motion vectors with extraretinal motion signals about eye and body movements (see Interactions with motor systems subsection). The neural mechanisms underlying this process have been extensively investigated. In addition, it is becoming recognized that signals about eye and body movements are not only passively integrated with retinal motion signals but also actively modulate visual sensory processing (see, for example, Trajectory integration subsection). Not only physical body movements but also active movements of attention play crucial roles in motion perception (see Feature tracking and Trajectory integration subsections). Motion processing and coordinate transformation in freely moving observers will be an important target of future research. 
By motion pattern analysis, I mean spatiotemporal analysis of motion vectors. It is analogous to spatial pattern analysis, which computes global shapes from local orientation measurements. Motion pattern analysis is included in optic flow processing and biological motion processing, but it must play a more general role in dynamic scene perception. For example, motion patterns inform us about a variety of object properties, such as the weight of a falling object and the viscosity of a liquid. Vector decomposition for coordinate transformation is another example of motion pattern analysis. With regard to the computational algorithm, motion pattern analysis is presumably hierarchical, starting with encoding of the relationships among small numbers of local motion vectors, just like angle coding in spatial pattern analysis (Ito & Komatsu, 2004), and ending with global motion pattern recognition. Few studies have considered this processing hierarchy. A notable exception is a study of precise encoding of coherent motion (Lappin, Donnelly, & Kojima, 2001; Lappin, Tadin, & Whittier, 2002). 
Finally, there is no established idea of how the brain represents dynamic objects. Specifically, two major questions remain unsolved. One is how the brain represents an object's location in space and time. The other is how the brain represents an object to which multiple attributes are bound. The two questions are tightly related to each other, since coincidence in space and/or time is a critical condition for attribute binding (Treisman, 1996). The neural representation of attribute binding has been widely recognized as a hard problem and has been extensively investigated (see, e.g., Seymour, Clifford, Logothetis, & Bartels, 2009). 

Here, I would like to emphasize that how to represent space and time in the brain is also a fundamental and hard question. The currently popular view is that spatial location is represented as position in a retinotopic map, and temporal location is determined by the physical time of the corresponding neural responses. This is a good assumption to use in the search for neural correlates of apparent distortions of space and time. I would not say this is an incorrect view, but in these forms, spatiotemporal positions are only implicitly represented. They have to be encoded as explicit representations of position for subsequent processes to recognize and use. A nice example of explicit representation of (a relationship between) spatiotemporal positions is a motion sensor with a spatiotemporally slanted receptive field, which encodes a local position change (Adelson & Bergen, 1985). A similar idea may apply to the representation of moving objects as well. When an object traverses the visual field, motion sensors along the trajectory will be sequentially activated. Since this is a mixture of explicit and implicit position representations, I expect that the whole object motion is somehow explicitly represented at a subsequent stage by integrating local motion representations into a global trajectory representation. Assuming that the position of a moving object is read out from such a non-retinotopic abstract representation, it would not be surprising that the apparent position of a moving object is not easily compared with the apparent position of another object that has a different spatiotemporal structure, as in the flash-lag effect. I believe motion-induced mislocalizations (see Motion-induced position shift section) provide useful hints about how an object's position is explicitly encoded in the brain, and likewise, temporal illusions (see, for example, Perceptual latency and apparent timing subsection) provide hints about how event timing is explicitly represented. In other words, unless we understand the nature of spatiotemporal position representations, we will not be able to fully understand mislocalization phenomena or temporal illusions. Furthermore, without knowing how the space and time of an object are explicitly represented in the brain, we will not be able to fully understand the mechanisms contributing to the second computational goal. 
In sum, we have reached a reasonable understanding of the mechanisms for retinal vector estimation. The next challenge will be to consolidate this knowledge into an integrated model. Our understanding of the mechanisms for moving object representation has also greatly advanced. To move on to the next step, we should clarify coordinate transformation, motion pattern analysis, and object representation per se. 
Acknowledgments
I am grateful to K. Amano, H. Ashida, C. B. Benton, W. Curran, M. Edwards, A. Johnston, T. Kawabe, D. Linares, K. Maruya, I. Motoyoshi, I. Murakami, S. Shioiri, T. Takeuchi, and anonymous reviewers for comments on the manuscript. This work was supported by KAKENHI (Grant-in-Aid for Scientific Research on Innovative Areas No. 22135004). 
Commercial relationships: none. 
Corresponding author: Shin'ya Nishida. 
Email: shinyanishida@me.com. 
Address: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Morinosato Wakamiya 3-1, Atsugi, Kanagawa 243-0198, Japan. 
Footnotes
1  In the current context, 2D implies spatially 2D. The time dimension is not considered. A dot or a corner is a 2D spatial pattern whose location on the image plane is specified by two parameters (e.g., x and y coordinates). When a 2D pattern moves, a unique motion vector can be determined. A motion vector is 2D, since it is specified by two parameters on the image plane (horizontal and vertical speeds in xy coordinates or vector direction and length in polar coordinates). In contrast, a line or a straight contour is a 1D spatial pattern whose spatial location is defined only along the axis orthogonal to the line orientation. When a 1D pattern moves, only the speed component orthogonal to its axis of orientation can be defined (a 1D motion signal). A motion sensor with a spatially oriented receptive field is a 1D motion sensor. It responds only to specific orientation components in the input pattern that match its receptive field. The sensor's output is a 1D motion signal even when the input pattern is spatially 2D.
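The footnote's distinction can be stated compactly: a 1D sensor whose preferred orientation has unit normal n measures only the normal component of the 2D velocity v, so a single measurement leaves a whole line of candidate vectors, and two measurements with non-parallel normals suffice to recover v (the intersection-of-constraints solution). In generic notation:

```latex
% One 1D measurement constrains v to a line of candidate 2D vectors:
\mathbf{v} \cdot \mathbf{n} = s .
% Two measurements with non-parallel normals determine v uniquely:
\begin{pmatrix} \mathbf{n}_1^{\mathsf{T}} \\ \mathbf{n}_2^{\mathsf{T}} \end{pmatrix}
\mathbf{v}
= \begin{pmatrix} s_1 \\ s_2 \end{pmatrix} .
```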
2  One may prefer to use “Fourier” and “non-Fourier,” instead of “first-order” and “second-order,” to describe this distinction.
References
Aaen-Stockdale C. Ledgeway T. Hess R. F. (2007). Second-order optic flow processing. Vision Research, 47, 1798–1808. [CrossRef] [PubMed]
Aaen-Stockdale C. Thompson B. Hess R. F. Troje N. F. (2008). Biological motion perception is cue-invariant. Journal of Vision, 8(8):6, 1–11, http://www.journalofvision.org/content/8/8/6, doi:10.1167/8.8.6. [PubMed] [Article] [CrossRef] [PubMed]
Aaen-Stockdale C. R. Farivar R. Hess R. F. (2010). Co-operative interactions between first- and second-order mechanisms in the processing of structure from motion. Journal of Vision, 10(13):6, 1–9, http://www.journalofvision.org/content/10/13/6, doi:10.1167/10.13.6. [PubMed] [Article] [CrossRef] [PubMed]
Aaen-Stockdale C. R. Thompson B. Huang P.-C. Hess R. F. (2009). Low-level mechanisms may contribute to paradoxical motion percepts. Journal of Vision, 9(5):9, 1–14, http://www.journalofvision.org/content/9/5/9, doi:10.1167/9.5.9. [PubMed] [Article] [CrossRef] [PubMed]
Adams W. J. Mamassian P. (2004). The effects of task and saliency on latencies for colour and motion processing. Proceedings of the Royal Society of London B: Biological Sciences, 271, 139–146. [CrossRef]
Adelson E. H. Bergen J. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2, 284–299. [CrossRef]
Adelson E. H. Movshon J. (1982). Phenomenal coherence of moving visual patterns. Nature, 300, 523–525. [CrossRef] [PubMed]
Aghdaee S. M. (2005). Adaptation to spiral motion in crowding condition. Perception, 34, 155–162. [CrossRef] [PubMed]
Alais D. Burr D. C. (2003). The “Flash-Lag” effect occurs in audition and cross-modally. Current Biology, 13, 59–63. [CrossRef] [PubMed]
Alais D. Lorenceau J. (2002). Perceptual grouping in the Ternus display: Evidence for an 'association field' in apparent motion. Vision Research, 42, 1005–1016. [CrossRef] [PubMed]
Alais D. Newell F. N. Mamassian P. (2010). Multisensory processing in review: From physiology to behaviour. Seeing Perceiving, 23, 3–38. [CrossRef] [PubMed]
Alais D. Verstraten F. A. J. Burr D. C. (2005). The motion aftereffect of transparent motion: Two temporal channels account for perceived direction. Vision Research, 45, 403–412. [CrossRef] [PubMed]
Alais D. Wenderoth P. Burke D. (1997). The size and number of plaid blobs mediate the misperception of type-II plaid direction. Vision Research, 37, 143–150. [CrossRef] [PubMed]
Albright T. D. (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. Journal of Neurophysiology, 52, 1106–1130. [PubMed]
Allard R. Faubert J. (2008). First- and second-order motion mechanisms are distinct at low but common at high temporal frequencies. Journal of Vision, 8(2):12, 1–17, http://www.journalofvision.org/content/8/2/12, doi:10.1167/8.2.12. [PubMed] [Article] [CrossRef] [PubMed]
Allen H. A. Ledgeway T. (2003). Attentional modulation of threshold sensitivity to first-order motion and second-order motion patterns. Vision Research, 43, 2927–2936. [CrossRef] [PubMed]
Alvarez G. A. (2011). Representing multiple objects as an ensemble enhances visual cognition. Trends in Cognitive Sciences, 15, 122–131. [CrossRef] [PubMed]
Amano K. Arnold D. H. Takeda T. Johnston A. (2008). Alpha band amplification during illusory jitter perception. Journal of Vision, 8(10):3, 1–8, http://www.journalofvision.org/content/8/10/3, doi:10.1167/8.10.3. [PubMed] [Article] [CrossRef] [PubMed]
Amano K. Edwards M. Badcock D. R. Nishida S. (2009a). Adaptive pooling of visual motion signals by the human visual system revealed with a novel multi-element stimulus. Journal of Vision, 9(3):4, 1–25, http://www.journalofvision.org/content/9/3/4, doi:10.1167/9.3.4. [PubMed] [Article] [CrossRef]
Amano K. Edwards M. Badcock D. R. Nishida S. (2009b). Spatial-frequency tuning in the pooling of one- and two-dimensional motion signals. Vision Research, 49, 2862–2869. [CrossRef]
Amano K. Goda N. Nishida S. Ejima Y. Takeda T. Ohtani Y. (2006). Estimation of the timing of human visual perception from magnetoencephalography. Journal of Neuroscience, 26, 3981–3991. [CrossRef] [PubMed]
Amano K. Johnston A. Nishida S. (2007). Two mechanisms underlying the effect of angle of motion direction change on colour-motion asynchrony. Vision Research, 47, 687–705. [CrossRef] [PubMed]
Amano K. Kimura T. Nishida S. Takeda T. Gomi H. (2008). Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion. Journal of Neurophysiology, 101, 888–897. [CrossRef] [PubMed]
Anderson S. J. Burr D. C. (1985). Spatial and temporal selectivity of the human motion detection system. Vision Research, 25, 1147–1154. [CrossRef] [PubMed]
Andrews T. Purves D. (2005). The wagon-wheel illusion in continuous light. Trends in Cognitive Sciences, 9, 261–263. [CrossRef] [PubMed]
Andrews T. Purves D. Simpson W. (2005). The wheels keep turning. Trends in Cognitive Sciences, 9, 560–561. [CrossRef]
Angelaki D. E. Gu Y. DeAngelis G. C. (2009). Multisensory integration: Psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19, 452–458. [CrossRef] [PubMed]
Anstis S. (1990). Imperceptible intersections: The chopstick illusion. In Blake A. Troscianko T. (Eds.), AI and the Eye (pp. 105–117). London: John Wiley & Sons Inc.
Anstis S. (2001). Footsteps and inchworms: Illusions show that contrast affects apparent speed. Perception, 30, 785–794. [CrossRef] [PubMed]
Anstis S. (2004). Factors affecting footsteps: Contrast can change the apparent speed, amplitude and direction of motion. Vision Research, 44, 2171–2178. [CrossRef] [PubMed]
Anstis S. (2009). ‘Zigzag motion’ goes in unexpected directions. Journal of Vision, 9(4):17, 1–13, http://www.journalofvision.org/content/9/4/17, doi:10.1167/9.4.17. [PubMed] [Article] [CrossRef] [PubMed]
Anstis S. M. (1970). Phi movement as a subtraction process. Vision Research, 10, 1411–1430. [CrossRef] [PubMed]
Anstis S. M. (1980). The perception of apparent movement. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 290, 153–168. [CrossRef]
Anstis S. M. Rogers B. J. (1986). Illusory continuous motion from oscillating positive–negative patterns: Implications for motion perception. Perception, 15, 627–640. [CrossRef] [PubMed]
Anzai A. Ohzawa I. Freeman R. D. (2001). Joint-encoding of motion and depth by visual cortical neurons: Neural basis of the Pulfrich effect. Nature Neuroscience, 4, 513–518. [PubMed]
Apthorp D. Alais D. (2009). Tilt aftereffects and tilt illusions induced by fast translational motion: Evidence for motion streaks. Journal of Vision, 9(1):27, 1–11, http://www.journalofvision.org/content/9/1/27, doi:10.1167/9.1.27. [PubMed] [Article] [CrossRef] [PubMed]
Apthorp D. Cass J. Alais D. (2010). Orientation tuning of contrast masking caused by motion streaks. Journal of Vision, 10(10):11, 1–13, http://www.journalofvision.org/content/10/10/11, doi:10.1167/10.10.11. [PubMed] [Article] [CrossRef] [PubMed]
Apthorp D. Wenderoth P. Alais D. (2009). Motion streaks in fast motion rivalry cause orientation-selective suppression. Journal of Vision, 9(5):10, 1–14, http://www.journalofvision.org/content/9/5/10, doi:10.1167/9.5.10. [PubMed] [Article] [CrossRef] [PubMed]
Arman A. C. Ciaramitaro V. M. Boynton G. M. (2006). Effects of feature-based attention on the motion aftereffect at remote locations. Vision Research, 46, 2968–2976. [CrossRef] [PubMed]
Arnold D. H. Clifford C. W. Wenderoth P. (2001). Asynchronous processing in vision: Color leads motion. Current Biology, 11, 596–600. [CrossRef] [PubMed]
Arnold D. H. Clifford C. W. G. (2002). Determinants of asynchronous processing in vision. Proceedings of the Royal Society of London B: Biological Sciences, 269, 579–583. [CrossRef]
Arnold D. H. Durant S. Johnston A. (2003). Latency differences and the flash-lag effect. Vision Research, 43, 1829–1835. [CrossRef] [PubMed]
Arnold D. H. Johnston A. (2003). Motion-induced spatial conflict. Nature, 425, 181–184. [CrossRef] [PubMed]
Arnold D. H. Johnston A. (2005). Motion induced spatial conflict following binocular integration. Vision Research, 45, 2934–2942. [CrossRef] [PubMed]
Arnold D. H. Thompson M. Johnston A. (2007). Motion and position coding. Vision Research, 47, 2403–2410. [CrossRef] [PubMed]
Arrighi R. Alais D. Burr D. C. (2005). Neural latencies do not explain the auditory and audio-visual flash-lag effect. Vision Research, 45, 2917–2925. [CrossRef] [PubMed]
Arrighi R. Marini F. Burr D. C. (2009). Meaningful auditory information enhances perception of visual biological motion. Journal of Vision, 9(4):25, 1–7, http://www.journalofvision.org/content/9/4/25, doi:10.1167/9.4.25. [PubMed] [Article] [CrossRef] [PubMed]
Ashida H. Lingnau A. Wall M. B. Smith A. T. (2007). FMRI adaptation reveals separate mechanisms for first-order and second-order motion. Journal of Neurophysiology, 97, 1319–1325. [CrossRef] [PubMed]
Ashida H. Osaka N. (1995). Motion aftereffect with flickering test stimuli depends on adapting velocity. Vision Research, 35, 1825–1833. [CrossRef] [PubMed]
Ashida H. Seiffert A. E. Osaka N. (2001). Inefficient visual search for second-order motion. Journal of the Optical Society of America A, 18, 2255–2266. [CrossRef]
Aymoz C. Viviani P. (2004). Perceptual asynchronies for biological and non-biological visual events. Vision Research, 44, 1547–1563. [CrossRef] [PubMed]
Backus B. T. Oruç I. (2005). Illusory motion from change over time in the response to contrast and luminance. Journal of Vision, 5(11):10, 1055–1069, http://www.journalofvision.org/content/5/11/10, doi:10.1167/5.11.10. [PubMed] [Article] [CrossRef]
Badcock D. R. Derrington A. M. (1985). Detecting the displacement of periodic patterns. Vision Research, 25, 1253–1258. [CrossRef] [PubMed]
Badcock D. R. Derrington A. M. (1987). Detecting the displacements of spatial beats: A monocular capability. Vision Research, 27, 793–797. [CrossRef] [PubMed]
Badcock D. R. Derrington A. M. (1989). Detecting the displacements of spatial beats: No role for distortion products. Vision Research, 29, 731–739. [CrossRef] [PubMed]
Badcock D. R. Dickinson J. E. (2009). Second-order orientation cues to the axis of motion. Vision Research, 49, 407–415. [CrossRef] [PubMed]
Badcock D. R. McKendrick A. M. Ma-Wyatt A. (2003). Pattern cues disambiguate perceived direction in simple moving stimuli. Vision Research, 43, 2291–2301. [CrossRef] [PubMed]
Baker D. H. Graf E. W. (2008). Equivalence of physical and perceived speed in binocular rivalry. Journal of Vision, 8(4):26, 1–12, http://www.journalofvision.org/content/8/4/26, doi:10.1167/8.4.26. [PubMed] [Article] [CrossRef] [PubMed]
Baker D. H. Graf E. W. (2010a). Contextual effects in speed perception may occur at an early stage of processing. Vision Research, 50, 193–201. [CrossRef]
Baker D. H. Graf E. W. (2010b). Extrinsic factors in the perception of bistable motion stimuli. Vision Research, 50, 1257–1265. [CrossRef]
Barenholtz E. (2010). Convexities move because they contain matter. Journal of Vision, 10(11):19, 1–12, http://www.journalofvision.org/content/10/11/19, doi:10.1167/10.11.19. [PubMed] [Article] [CrossRef] [PubMed]
Barenholtz E. Tarr M. J. (2009). Figure–ground assignment to a translating contour: A preference for advancing vs receding motion. Journal of Vision, 9(5):27, 1–9, http://www.journalofvision.org/content/9/5/27, doi:10.1167/9.5.27. [PubMed] [Article] [CrossRef] [PubMed]
Barraclough N. Tinsley C. Webb B. Vincent C. Derrington A. (2006). Processing of first-order motion in marmoset visual cortex is influenced by second-order motion. Visual Neuroscience, 23, 815–824. [CrossRef] [PubMed]
Barraza J. F. Grzywacz N. M. (2002). Measurement of angular velocity in the perception of rotation. Vision Research, 42, 2457–2462. [CrossRef] [PubMed]
Barraza J. F. Grzywacz N. M. (2003). Local computation of angular velocity in rotational visual motion. Journal of the Optical Society of America A, 20, 1382–1390. [CrossRef]
Barraza J. F. Grzywacz N. M. (2005). Parametric decomposition of optic flow by humans. Vision Research, 45, 2481–2491. [CrossRef] [PubMed]
Beck C. Neumann H. (2010). Interactions of motion and form in visual cortex—A neural model. The Journal of Physiology, 104, 61–70.
Bedell H. E. Chung S. T. L. Ogmen H. Patel S. S. (2003). Color and motion: Which is the tortoise and which is the hare? Vision Research, 43, 2403–2412. [CrossRef] [PubMed]
Bedell H. E. Lott L. A. (1996). Suppression of motion-produced smear during smooth pursuit eye movements. Current Biology, 6, 1032–1034. [CrossRef] [PubMed]
Bedell H. E. Tong J. Aydin M. (2010). The perception of motion smear during eye and head movements. Vision Research, 50, 2692–2701. [CrossRef] [PubMed]
Beintema J. A. Georg K. Lappe M. (2006). Perception of biological motion from limited-lifetime stimuli. Perception & Psychophysics, 68, 613–624. [CrossRef] [PubMed]
Beintema J. A. Lappe M. (2002). Perception of biological motion without local image motion. Proceedings of the National Academy of Sciences of the United States of America, 99, 5661–5663. [CrossRef] [PubMed]
Benton C. Johnston A. (2001). A new approach to analysing texture-defined motion. Proceedings of the Royal Society of London B: Biological Sciences, 268, 2435. [CrossRef]
Benton C. P. (2004). A role for contrast-normalisation in second-order motion perception. Vision Research, 44, 91–98. [CrossRef] [PubMed]
Benton C. P. Curran W. (2003). Direction repulsion goes global. Current Biology, 13, 767–771. [CrossRef] [PubMed]
Benton C. P. Johnston A. McOwan P. W. (1997). Perception of motion direction in luminance- and contrast-defined reversed-phi motion sequences. Vision Research, 37, 2381–2399. [CrossRef] [PubMed]
Benton C. P. Johnston A. McOwan P. W. (2000). Computational modelling of interleaved first- and second-order motion sequences and translating 3f + 4f beat patterns. Vision Research, 40, 1135–1142. [CrossRef] [PubMed]
Benton C. P. O'Brien J. Curran W. (2007). Fractal rotation isolates mechanisms for form-dependent motion in human vision. Biology Letters, 3, 306. [CrossRef] [PubMed]
Berry M. J. Brivanlou I. H. Jordan T. A. Meister M. (1999). Anticipation of moving stimuli by the retina. Nature, 398, 334–338. [CrossRef] [PubMed]
Bertamini M. Bruno N. Mosca F. (2004). Illusory surfaces affect the integration of local motion signals. Vision Research, 44, 297–308. [CrossRef] [PubMed]
Bertone A. Faubert J. (2003). How is complex second-order motion processed? Vision Research, 43, 2591–2601. [CrossRef] [PubMed]
Berzhanskaya J. Grossberg S. Mingolla E. (2007). Laminar cortical dynamics of visual form and motion interactions during coherent object motion perception. Spatial Vision, 20, 337–395. [CrossRef] [PubMed]
Betts L. R. Sekuler A. B. Bennett P. J. (2009). Spatial characteristics of center–surround antagonism in younger and older adults. Journal of Vision, 9(1):25, 1–15, http://www.journalofvision.org/content/9/1/25, doi:10.1167/9.1.25. [PubMed] [Article] [CrossRef] [PubMed]
Betts L. R. Taylor C. P. Sekuler A. B. Bennett P. J. (2005). Aging reduces center–surround antagonism in visual motion processing. Neuron, 45, 361–366. [CrossRef] [PubMed]
Bex P. Edgar G. Smith A. (1995). Sharpening of drifting, blurred images. Vision Research, 35, 2539–2546. [CrossRef] [PubMed]
Bex P. J. Dakin S. C. (2002). Comparison of the spatial-frequency selectivity of local and global motion detectors. Journal of the Optical Society of America A, 19, 670–677. [CrossRef]
Bex P. J. Dakin S. C. (2005). Spatial interference among moving targets. Vision Research, 45, 1385–1398. [CrossRef] [PubMed]
Bex P. J. Metha A. B. Makous W. (1999). Enhanced motion aftereffect for complex motions. Vision Research, 39, 2229–2238. [CrossRef] [PubMed]
Bex P. J. Simmers A. J. Dakin S. C. (2001). Snakes and ladders: The role of temporal modulation in visual contour integration. Vision Research, 41, 3775–3782. [CrossRef] [PubMed]
Billino J. Bremmer F. Gegenfurtner K. R. (2008). Motion processing at low light levels: Differential effects on the perception of specific motion types. Journal of Vision, 8(3):14, 1–10, http://www.journalofvision.org/content/8/3/14, doi:10.1167/8.3.14. [PubMed] [Article] [CrossRef] [PubMed]
Blake R. Hiris E. (1993). Another means for measuring the motion aftereffect. Vision Research, 33, 1589–1592. [CrossRef] [PubMed]
Blake R. Shiffrar M. (2007). Perception of human motion. Annual Review of Psychology, 58, 47–73. [CrossRef] [PubMed]
Blake R. Tadin D. Sobel K. V. Raissian T. A. Chong S. C. (2006). Strength of early visual adaptation depends on visual awareness. Proceedings of the National Academy of Sciences of the United States of America, 103, 4783–4788. [CrossRef] [PubMed]
Blaser E. Shepard T. (2009). Maximal motion aftereffects in spite of diverted awareness. Vision Research, 49, 1174–1181. [CrossRef] [PubMed]
Blaser E. Sperling G. (2008). When is motion ‘motion’? Perception, 37, 624–627. [CrossRef] [PubMed]
Boi M. Oğmen H. Krummenacher J. Otto T. U. Herzog M. H. (2009). A (fascinating) litmus test for human retino- vs non-retinotopic processing. Journal of Vision, 9(13):5, 1–11, http://www.journalofvision.org/content/9/13/5, doi:10.1167/9.13.5. [PubMed] [Article] [CrossRef] [PubMed]
Bonneh Y. S. Cooperman A. Sagi D. (2001). Motion-induced blindness in normal observers. Nature, 411, 798–801. [CrossRef] [PubMed]
Born R. T. Bradley D. C. (2005). Structure and function of visual area MT. Annual Reviews in Neuroscience, 28, 157–189. [CrossRef]
Bours R. J. E. Kroes M. C. W. Lankheet M. J. (2009). Sensitivity for reverse-phi motion. Vision Research, 49, 1–9. [CrossRef] [PubMed]
Bours R. J. E. Kroes M. C. W. Lankheet M. J. M. (2007). The parallel between reverse-phi and motion aftereffects. Journal of Vision, 7(11):8, 1–10, http://www.journalofvision.org/content/7/11/8, doi:10.1167/7.11.8. [PubMed] [Article] [CrossRef] [PubMed]
Bowns L. (1996). Evidence for a feature tracking explanation of why type II plaids move in the vector sum direction at short durations. Vision Research, 36, 3685–3694. [CrossRef] [PubMed]
Bowns L. (2006). ‘Squaring’ is better at predicting plaid motion than the vector average or intersection of constraints. Perception, 35, 469–481. [CrossRef] [PubMed]
Bowns L. Alais D. (2006). Large shifts in perceived motion direction reveal multiple global motion solutions. Vision Research, 46, 1170–1177. [CrossRef] [PubMed]
Braddick O. J. (1974). A short-range process in apparent motion. Vision Research, 14, 519–527. [CrossRef] [PubMed]
Braddick O. J. (1980). Low-level and high-level processes in apparent motion. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 290, 137–151. [CrossRef]
Bradley D. Goyal M. (2008). Velocity computation in the primate visual system. Nature Reviews Neuroscience, 9, 686–695. [CrossRef] [PubMed]
Bressler D. W. Whitney D. (2006). Second-order motion shifts perceived position. Vision Research, 46, 1120–1128. [CrossRef] [PubMed]
Brooks A. Schouten B. Troje N. F. Verfaillie K. Blanke O. van der Zwan R. (2008). Correlated changes in perceptions of the gender and orientation of ambiguous biological motion figures. Current Biology, 18, R728–R729. [CrossRef] [PubMed]
Brooks A. van der Zwan R. Holden J. (2003). An illusion of coherent global motion arising from single brief presentations of a stationary stimulus. Vision Research, 43, 2387–2392. [CrossRef] [PubMed]
Brooks K. R. Stone L. S. (2004). Stereomotion speed perception: Contributions from both changing disparity and interocular velocity difference over a range of relative disparities. Journal of Vision, 4(12):6, 1061–1079, http://www.journalofvision.org/content/4/12/6, doi:10.1167/4.12.6. [PubMed] [Article] [CrossRef]
Brooks K. R. Stone L. S. (2006). Spatial scale of stereomotion speed processing. Journal of Vision, 6(11):9, 1257–1266, http://www.journalofvision.org/content/6/11/9, doi:10.1167/6.11.9. [PubMed] [Article] [CrossRef]
Brouwer A.-M. Brenner E. Smeets J. B. J. (2002). Perception of acceleration with short presentation times: Can acceleration be used in interception? Perception & Psychophysics, 64, 1160–1168. [CrossRef] [PubMed]
Brouwer A.-M. Middelburg T. Smeets J. B. J. Brenner E. (2003). Hitting moving targets: A dissociation between the use of the target's speed and direction of motion. Experimental Brain Research, 152, 368–375. [CrossRef] [PubMed]
Bulakowski P. F. Koldewyn K. Whitney D. (2007). Independent coding of object motion and position revealed by distinct contingent aftereffects. Vision Research, 47, 810–817. [CrossRef] [PubMed]
Burr D. C. (1980). Motion smear. Nature, 284, 164–165. [CrossRef] [PubMed]
Burr D. C. (2000). Motion vision: Are ‘speed lines’ used in human visual motion? Current Biology, 10, R440–R443. [CrossRef] [PubMed]
Burr D. C. Badcock D. R. Ross J. (2001). Cardinal axes for radial and circular motion, revealed by summation and by masking. Vision Research, 41, 473–481. [CrossRef] [PubMed]
Burr D. C. Baldassi S. Morrone M. C. Verghese P. (2009). Pooling and segmenting motion signals. Vision Research, 49, 1065–1072. [CrossRef] [PubMed]
Burr D. C. Morrone M. C. Vaina L. M. (1998). Large receptive fields for optic flow detection in humans. Vision Research, 38, 1731–1743. [CrossRef] [PubMed]
Burr D. C. Ross J. (1982). Contrast sensitivity at high velocities. Vision Research, 22, 479–484. [CrossRef] [PubMed]
Burr D. C. Ross J. (1986). Visual processing of motion. Trends in Neurosciences, 9, 304–307. [CrossRef]
Burr D. C. Ross J. (2004). Vision: The world through picket fences. Current Biology, 14, R381–382. [CrossRef] [PubMed]
Burr D. C. Santoro L. (2001). Temporal integration of optic flow, measured by contrast and coherence thresholds. Vision Research, 41, 1891–1899. [CrossRef] [PubMed]
Burr D. C. Thompson P. (2011). Motion psychophysics: 1985–2010. Vision Research, 51, 1431–1456. [CrossRef] [PubMed]
Butler J. S. Smith S. T. Campos J. L. Bülthoff H. H. (2010). Bayesian integration of visual and vestibular signals for heading. Journal of Vision, 10(11):23, 1–13, http://www.journalofvision.org/content/10/11/23, doi:10.1167/10.11.23. [PubMed] [Article] [CrossRef] [PubMed]
Caetta F. Gorea A. Bonneh Y. (2007). Sensory and decisional factors in motion-induced blindness. Journal of Vision, 7(7):4, 1–12, http://www.journalofvision.org/content/7/7/4, doi:10.1167/7.7.4. [PubMed] [Article] [CrossRef] [PubMed]
Cai R. Schlag J. (2001). A new form of illusory conjunction between color and shape [Abstract]. Journal of Vision, 1(3):127, 127a, http://www.journalofvision.org/content/1/3/127, doi:10.1167/1.3.127. [CrossRef]
Capelli A. Berthoz A. Vidal M. (2010). Estimating the time-to-passage of visual self-motion: Is the second order motion information processed? Vision Research, 50, 914–923. [CrossRef] [PubMed]
Caplovitz G. P. Hsieh P.-J. Tse P. U. (2006). Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46, 2877–2893. [CrossRef] [PubMed]
Carlson T. A. Schrater P. He S. (2006). Floating square illusion: Perceptual uncoupling of static and dynamic objects in motion. Journal of Vision, 6(2):4, 132–144, http://www.journalofvision.org/content/6/2/4, doi:10.1167/6.2.4. [PubMed] [Article] [CrossRef]
Casile A. Giese M. A. (2005). Critical features for the recognition of biological motion. Journal of Vision, 5(4):6, 348–360, http://www.journalofvision.org/content/5/4/6, doi:10.1167/5.4.6. [PubMed] [Article] [CrossRef]
Casile A. Giese M. A. (2006). Nonvisual motor training influences biological motion perception. Current Biology, 16, 69–74. [CrossRef] [PubMed]
Cassanello C. R. Edwards M. Badcock D. R. Nishida S. (2011). No interaction of first- and second-order signals in the extraction of global-motion and optic-flow. Vision Research, 51, 352–361. [CrossRef] [PubMed]
Cavanagh P. (1992). Attention-based motion perception. Science, 257, 1563–1565. [CrossRef] [PubMed]
Cavanagh P. (1994). Is there low-level motion processing for non-luminance-based stimuli? In Papathomas P. V. Chubb C. Gorea A. Kowler E. (Eds.), Early vision and beyond (pp. 113–120). Cambridge, MA: MIT Press.
Cavanagh P. Alvarez G. (2005). Tracking multiple targets with multifocal attention. Trends in Cognitive Sciences, 9, 349–354.
Cavanagh P. Arguin M. von Grünau M. (1989). Interattribute apparent motion. Vision Research, 29, 1197–1204.
Cavanagh P. Holcombe A. O. Chou W. (2008). Mobile computation: Spatiotemporal integration of the properties of objects in motion. Journal of Vision, 8(12):1, 1–23, http://www.journalofvision.org/content/8/12/1, doi:10.1167/8.12.1.
Cavanagh P. Hunt A. R. Afraz A. Rolfs M. (2010). Visual stability based on remapping of attention pointers. Trends in Cognitive Sciences, 14, 147–153.
Cavanagh P. Labianca A. T. Thornton I. M. (2001). Attention-based visual routines: Sprites. Cognition, 80, 47–60.
Cavanagh P. Mather G. (1989). Motion: The long and short of it. Spatial Vision, 4, 103–129.
Challinor K. L. Mather G. (2010). A motion-energy model predicts the direction discrimination and MAE duration of two-stroke apparent motion at high and low retinal illuminance. Vision Research, 50, 1109–1116.
Chang D. H. F. Harris L. R. Troje N. F. (2010). Frames of reference for biological motion and face perception. Journal of Vision, 10(6):22, 1–11, http://www.journalofvision.org/content/10/6/22, doi:10.1167/10.6.22.
Chang D. H. F. Troje N. F. (2008). Perception of animacy and direction from local biological motion signals. Journal of Vision, 8(5):3, 1–10, http://www.journalofvision.org/content/8/5/3, doi:10.1167/8.5.3.
Chang D. H. F. Troje N. F. (2009a). Acceleration carries the local inversion effect in biological motion perception. Journal of Vision, 9(1):19, 1–17, http://www.journalofvision.org/content/9/1/19, doi:10.1167/9.1.19.
Chang D. H. F. Troje N. F. (2009b). Characterizing global and local mechanisms in biological motion perception. Journal of Vision, 9(5):8, 1–10, http://www.journalofvision.org/content/9/5/8, doi:10.1167/9.5.8.
Chappell M. Mullen K. T. (2010). The magnocellular visual pathway and the flash-lag illusion. Journal of Vision, 10(11):24, 1–10, http://www.journalofvision.org/content/10/11/24, doi:10.1167/10.11.24.
Chaudhuri A. (1991). Eye movements and the motion aftereffect: Alternatives to the induced motion hypothesis. Vision Research, 31, 1639–1645.
Chen Y. Matthews N. Qian N. (2001). Motion rivalry impairs motion repulsion. Vision Research, 41, 3639–3647.
Chen Y. Meng X. Matthews N. Qian N. (2005). Effects of attention on motion repulsion. Vision Research, 45, 1329–1339.
Chubb C. Sperling G. (1988). Drift-balanced random stimuli: A general basis for studying non-Fourier motion perception. Journal of the Optical Society of America A, 5, 1986–2007.
Chung S. T. L. Patel S. S. Bedell H. E. Yilmaz O. (2007). Spatial and temporal properties of the illusory motion-induced position shift for drifting stimuli. Vision Research, 47, 231–243.
Churan J. Khawaja F. A. Tsui J. M. G. Pack C. C. (2008). Brief motion stimuli preferentially activate surround-suppressed neurons in macaque visual area MT. Current Biology, 18, R1051–R1052.
Clifford C. W. G. (2002). Perceptual adaptation: Motion parallels orientation. Trends in Cognitive Sciences, 6, 136–143.
Clifford C. W. G. Spehar B. Pearson J. (2004). Motion transparency promotes synchronous perceptual binding. Vision Research, 44, 3073–3080.
Cobo-Lewis A. B. Gilroy L. A. Smallwood T. B. (2000). Dichoptic plaids may rival, but their motions can integrate. Spatial Vision, 13, 415–429.
Conway B. R. Kitaoka A. Yazdanbakhsh A. Pack C. C. Livingstone M. S. (2005). Neural basis for a powerful static motion illusion. Journal of Neuroscience, 25, 5651–5656.
Cook E. P. Maunsell J. H. R. (2002). Dynamics of neuronal responses in macaque MT and VIP during motion detection. Nature Neuroscience, 5, 985–994.
Coppe S. Orban de Xivry J.-J. Missal M. Lefèvre P. (2010). Biological motion influences the visuomotor transformation for smooth pursuit eye movements. Vision Research, 50, 2721–2728.
Cox M. J. Derrington A. M. (1994). The analysis of motion of two-dimensional patterns: Do Fourier components provide the first stage? Vision Research, 34, 59–72.
Croner L. J. Albright T. D. (1997). Image segmentation enhances discrimination of motion in visual noise. Vision Research, 37, 1415–1427.
Cropper S. J. (2006). The detection of motion in chromatic stimuli: Pedestals and masks. Vision Research, 46, 724–738.
Cropper S. J. Wuerger S. M. (2005). The perception of motion in chromatic stimuli. Behavioral and Cognitive Neuroscience Reviews, 4, 192–217.
Culham J. C. Verstraten F. A. Ashida H. Cavanagh P. (2000). Independent aftereffects of attention and motion. Neuron, 28, 607–615.
Curran W. Benton C. P. (2003). Speed tuning of direction repulsion describes an inverted U-function. Vision Research, 43, 1847–1853.
Curran W. Benton C. P. (2006). Test stimulus characteristics determine the perceived speed of the dynamic motion aftereffect. Vision Research, 46, 3284–3290.
Curran W. Braddick O. J. (2000). Speed and direction of locally-paired dot patterns. Vision Research, 40, 2115–2124.
Curran W. Clifford C. W. G. Benton C. P. (2006). The direction aftereffect is driven by adaptation of local motion detectors. Vision Research, 46, 4270–4278.
Dakin S. C. Mareschal I. (2000). The role of relative motion computation in ‘direction repulsion’. Vision Research, 40, 833–841.
Dakin S. C. Mareschal I. Bex P. J. (2005). Local and global limitations on direction integration assessed using equivalent noise analysis. Vision Research, 45, 3027–3049.
Deas R. W. Roach N. W. McGraw P. V. (2008). Distortions of perceived auditory and visual space following adaptation to motion. Experimental Brain Research, 191, 473–485.
Del Viva M. M. Gori M. (2008). Anti-Glass patterns and real motion perception: Same or different mechanisms? Journal of Vision, 8(2):1, 1–15, http://www.journalofvision.org/content/8/2/1, doi:10.1167/8.2.1.
Del Viva M. M. Gori M. Burr D. C. (2006). Powerful motion illusion caused by temporal asymmetries in ON and OFF visual pathways. Journal of Neurophysiology, 95, 3928.
Dennett D. Kinsbourne M. (1992). Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences, 15, 183–247.
Derrington A. M. Badcock D. R. (1985). Separate detectors for simple and complex grating patterns? Vision Research, 25, 1869–1878.
Derrington A. M. Badcock D. R. Holroyd S. A. (1992). Analysis of the motion of 2-dimensional patterns: Evidence for a second-order process. Vision Research, 32, 699–707.
Derrington A. M. Fine I. Henning G. B. (1993). Errors in direction-of-motion discrimination with dichoptically viewed stimuli. Vision Research, 33, 1491–1494.
Derrington A. M. Henning G. B. (1987). Errors in direction-of-motion discrimination with complex stimuli. Vision Research, 27, 61–75.
De Valois R. L. De Valois K. K. (1991). Vernier acuity with stationary moving Gabors. Vision Research, 31, 1619–1626.
Dickinson J. E. Han L. Bell J. Badcock D. R. (2010). Local motion effects on form in radial frequency patterns. Journal of Vision, 10(3):20, 1–15, http://www.journalofvision.org/content/10/3/20, doi:10.1167/10.3.20.
Dils A. T. Boroditsky L. (2010). Visual motion aftereffect from understanding motion language. Proceedings of the National Academy of Sciences of the United States of America, 107, 16396–16400.
Ditterich J. (2006). Stochastic models of decisions about motion direction: Behavior and physiology. Neural Networks, 19, 981–1012.
Dixon P. Di Lollo V. (1994). Beyond visible persistence: An alternative account of temporal integration and segregation in visual processing. Cognitive Psychology, 26, 33–63.
Dobkins K. R. Albright T. D. (2004). Merging processing streams: Color cues for motion detection and interpretation. In Chalupa L. M. Werner J. S. (Eds.), The visual neurosciences (pp. 1217–1228). Cambridge, MA: The MIT Press.
Dobkins K. R. Rezec A. A. Krekelberg B. (2007). Effects of spatial attention and salience cues on chromatic and achromatic motion processing. Vision Research, 47, 1893–1906.
Dubrowski A. Carnahan H. (2002). Action-perception dissociation in response to target acceleration. Vision Research, 42, 1465–1473.
Duffy C. J. (2003). The cortical analysis of optic flow. In Chalupa L. M. Werner J. S. (Eds.), The visual neurosciences (pp. 1260–1283). Cambridge, MA: The MIT Press.
Duffy C. J. Wurtz R. H. (1991). Sensitivity of MST neurons to optic flow stimuli: I. A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology, 65, 1329–1345.
Duffy C. J. Wurtz R. H. (1993). An illusory transformation of optic flow fields. Vision Research, 33, 1481–1490.
Duijnhouwer J. van Wezel R. J. A. van den Berg A. V. (2008). The role of motion capture in an illusory transformation of optic flow fields. Journal of Vision, 8(4):27, 1–18, http://www.journalofvision.org/content/8/4/27, doi:10.1167/8.4.27.
Dumoulin S. O. Baker C. L. Hess R. F. Evans A. C. (2003). Cortical specialization for processing first- and second-order motion. Cerebral Cortex, 13, 1375–1385.
Duncan R. O. Albright T. D. Stoner G. R. (2000). Occlusion and the interpretation of visual motion: Perceptual and neuronal effects of context. Journal of Neuroscience, 20, 5885–5897.
Duncker K. (1929). Über induzierte Bewegung [On induced motion]. Psychologische Forschung, 12, 180–259.
Durant S. Johnston A. (2004). Temporal dependence of local motion induced shifts in perceived position. Vision Research, 44, 357–366.
Eagleman D. M. Sejnowski T. J. (2007). Motion signals bias localization judgments: A unified explanation for the flash-lag, flash-drag, flash-jump, and Frohlich illusions. Journal of Vision, 7(4):3, 1–12, http://www.journalofvision.org/content/7/4/3, doi:10.1167/7.4.3.
Edwards M. Badcock D. R. (1994). Global motion perception: Interaction of the ON and OFF pathways. Vision Research, 34, 2849–2858.
Edwards M. Badcock D. R. (1995). Global motion perception: No interaction between the first- and second-order motion pathways. Vision Research, 35, 2589–2602.
Edwards M. Badcock D. R. (1996). Global-motion perception: Interaction of chromatic and luminance signals. Vision Research, 36, 2423–2431.
Edwards M. Badcock D. R. (2003). Motion distorts perceived depth. Vision Research, 43, 1799–1804.
Edwards M. Badcock D. R. Smith A. T. (1998). Independent speed-tuned global-motion systems. Vision Research, 38, 1573–1580.
Edwards M. Crane M. F. (2007). Motion streaks improve motion detection. Vision Research, 47, 828–833.
Edwards M. Greenwood J. A. (2005). The perception of motion transparency: A signal-to-noise limit. Vision Research, 45, 1877–1884.
Edwards M. Metcalf O. (2010). Independence in the processing of first- and second-order motion signals at the local-motion-pooling level. Vision Research, 50, 261–270.
Edwards M. Nishida S. (2004). Contrast-reversing global-motion stimuli reveal local interactions between first- and second-order motion signals. Vision Research, 44, 1941–1950.
Ellemberg D. Lavoie K. Lewis T. L. Maurer D. Lepore F. Guillemot J.-P. (2003). Longer VEP latencies and slower reaction times to the onset of second-order motion than to the onset of first-order motion. Vision Research, 43, 651–658.
Enns J. T. (2002). Visual binding in the standing wave illusion. Psychonomic Bulletin & Review, 9, 489–496.
Etchells P. J. Benton C. P. Ludwig C. J. H. Gilchrist I. D. (2010). The target velocity integration function for saccades. Journal of Vision, 10(6):7, 1–14, http://www.journalofvision.org/content/10/6/7, doi:10.1167/10.6.7.
Ezzati A. Golzar A. Afraz A. S. R. (2008). Topography of the motion aftereffect with and without eye movements. Journal of Vision, 8(14):23, 1–16, http://www.journalofvision.org/content/8/14/23, doi:10.1167/8.14.23.
Fahle M. Poggio T. (1981). Visual hyperacuity: Spatiotemporal interpolation in human vision. Proceedings of the Royal Society of London B: Biological Sciences, 213, 451–477.
Fan Z. Harris J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48, 2793–2804.
Fang F. He S. (2004). Strong influence of test patterns on the perception of motion aftereffect and position. Journal of Vision, 4(7):9, 637–642, http://www.journalofvision.org/content/4/7/9, doi:10.1167/4.7.9.
Fennema C. L. Thompson W. B. (1979). Velocity determination in scenes containing several moving objects. Computer Graphics and Image Processing, 9, 301–315.
Fernandez J. M. Farell B. (2006). Motion in depth from interocular velocity differences revealed by differential motion aftereffect. Vision Research, 46, 1307–1317.
Fernandez J. M. Farell B. (2007). Shape constancy and depth-order violations in structure from motion: A look at non-frontoparallel axes of rotation. Journal of Vision, 7(7):3, 1–18, http://www.journalofvision.org/content/7/7/3, doi:10.1167/7.7.3.
Fernandez J. M. Farell B. (2009). A new theory of structure-from-motion perception. Journal of Vision, 9(11):23, 1–20, http://www.journalofvision.org/content/9/11/23, doi:10.1167/9.11.23.
Fink P. W. Foo P. S. Warren W. H. (2009). Catching fly balls in virtual reality: A critical test of the outfielder problem. Journal of Vision, 9(13):14, 1–8, http://www.journalofvision.org/content/9/13/14, doi:10.1167/9.13.14.
Fodor J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Fracasso A. Caramazza A. Melcher D. (2010). Continuous perception of motion and shape across saccadic eye movements. Journal of Vision, 10(13):14, 1–17, http://www.journalofvision.org/content/10/13/14, doi:10.1167/10.13.14.
Freeman E. Driver J. (2008). Direction of visual apparent motion driven solely by timing of a static sound. Current Biology, 18, 1262–1266.
Freeman T. Harris M. (1992). Human sensitivity to expanding and rotating motion: Effects of complementary masking and directional structure. Vision Research, 32, 81–87.
Freeman T. C. A. (2007a). Extra-retinal vision: Firing at will. Current Biology, 17, R99–R101.
Freeman T. C. A. (2007b). Simultaneous adaptation of retinal and extra-retinal motion signals. Vision Research, 47, 3373–3384.
Freeman T. C. A. Champion R. A. Sumnall J. H. Snowden R. J. (2009). Do we have direct access to retinal image motion during smooth pursuit eye movements? Journal of Vision, 9(1):33, 1–11, http://www.journalofvision.org/content/9/1/33, doi:10.1167/9.1.33.
Freeman T. C. A. Champion R. A. Warren P. A. (2010). A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement. Current Biology, 20, 757–762.
Freeman T. C. A. Sumnall J. H. Snowden R. J. (2003). The extra-retinal motion aftereffect. Journal of Vision, 3(11):11, 771–779, http://www.journalofvision.org/content/3/11/11, doi:10.1167/3.11.11.
Frenz H. Bremmer F. Lappe M. (2003). Discrimination of travel distances from ‘situated’ optic flow. Vision Research, 43, 2173–2183.
Frenz H. Lappe M. (2005). Absolute travel distance from optic flow. Vision Research, 45, 1679–1692.
Fu Y.-X. Shen Y. Gao H. Dan Y. (2004). Asymmetry in visual cortical circuits underlying motion-induced perceptual mislocalization. Journal of Neuroscience, 24, 2165–2171.
Fujimoto K. (2003). Motion induction from biological motion. Perception, 32, 1273–1277.
Fujimoto K. Sato T. (2006). Backscroll illusion: Apparent motion in the background of locomotive objects. Vision Research, 46, 14–25.
Fujimoto K. Yagi A. (2008). Biological motion alters coherent motion perception. Perception, 37, 1783–1789.
Fujisaki W. Nishida S. (2010). A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities. Proceedings of the Royal Society of London B: Biological Sciences, 277, 2281–2290.
Funk A. P. Pettigrew J. D. (2003). Does interhemispheric competition mediate motion-induced blindness? A transcranial magnetic stimulation study. Perception, 32, 1325–1338.
Garcia J. O. Grossman E. D. (2008). Necessary but not sufficient: Motion perception is required for perceiving biological motion. Vision Research, 48, 1144–1149.
Gauch A. Kerzel D. (2009). Contributions of visible persistence and perceptual set to the flash-lag effect: Focusing on flash onset abolishes the illusion. Vision Research, 49, 2983–2991.
Gegenfurtner K. R. Hawken M. J. (1996). Perceived velocity of luminance, chromatic and non-Fourier stimuli: Influence of contrast and temporal frequency. Vision Research, 36, 1281–1290.
Gegenfurtner K. R. Mayser H. M. Sharpe L. T. (2000). Motion perception at scotopic light levels. Journal of the Optical Society of America A, 17, 1505–1515.
Geisler W. S. (1999). Motion streaks provide a spatial code for motion direction. Nature, 400, 65–69.
Georgeson M. A. Hammett S. T. (2002). Seeing blur: ‘Motion sharpening’ without motion. Proceedings of the Royal Society of London B: Biological Sciences, 269, 1429–1434.
Georgeson M. A. Scott-Samuel N. E. (1999). Motion contrast: A new metric for direction discrimination. Vision Research, 39, 4393–4402.
Gibson J. J. (1977). On the analysis of change in the optic array. Scandinavian Journal of Psychology, 18, 161–163.
Glasser D. M. Tadin D. (2010). Low-level mechanisms do not explain paradoxical motion percepts. Journal of Vision, 10(4):20, 1–9, http://www.journalofvision.org/content/10/4/20, doi:10.1167/10.4.20.
Golomb B. Andersen R. A. Nakayama K. MacLeod D. I. Wong A. (1985). Visual thresholds for shearing motion in monkey and man. Vision Research, 25, 813–820.
Gomi H. Abekawa N. Nishida S. (2006). Spatiotemporal tuning of rapid interactions between visual-motion analysis and reaching movement. Journal of Neuroscience, 26, 5301–5308.
Gorea A. Caetta F. (2009). Adaptation and prolonged inhibition as a main cause of motion-induced blindness. Journal of Vision, 9(6):16, 1–17, http://www.journalofvision.org/content/9/6/16, doi:10.1167/9.6.16.
Gottsdanker R. (1956). The ability of human operators to detect acceleration of target motion. Psychological Bulletin, 53, 477–487.
Goutcher R. Loffler G. (2009). Motion transparency from opposing luminance modulated and contrast modulated gratings. Vision Research, 49, 660–670.
Graf E. W. Adams W. J. Lages M. (2002). Modulating motion-induced blindness with depth ordering and surface completion. Vision Research, 42, 2731–2735.
Graziano M. Andersen R. A. Snowden R. J. (1994). Tuning of MST neurons to spiral motions. Journal of Neuroscience, 14, 54–67.
Greenwood J. A. Edwards M. (2006a). An extension of the transparent-motion detection limit using speed-tuned global-motion systems. Vision Research, 46, 1440–1449.
Greenwood J. A. Edwards M. (2006b). Pushing the limits of transparent-motion detection with binocular disparity. Vision Research, 46, 2615–2624.
Grigo A. Lappe M. (1998). Interaction of stereo vision and optic flow processing revealed by an illusory stimulus. Vision Research, 38, 281–290.
Grossberg S. Mingolla E. Viswanathan L. (2001). Neural dynamics of motion integration and segmentation within and across apertures. Vision Research, 41, 2521–2553.
Grunewald A. (2004). Motion repulsion is monocular. Vision Research, 44, 959–962.
Gu Y. DeAngelis G. C. Angelaki D. E. (2007). A functional link between area MSTd and heading perception based on vestibular signals. Nature Neuroscience, 10, 1038–1047.
Gu Y. Fetsch C. R. Adeyemo B. DeAngelis G. C. Angelaki D. E. (2010). Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron, 66, 596–609.
Gurnsey R. Roddy G. Ouhnana M. Troje N. F. (2008). Stimulus magnification equates identification and discrimination of biological motion across the visual field. Vision Research, 48, 2827–2834.
Gurnsey R. Troje N. F. (2010). Peripheral sensitivity to biological motion conveyed by first and second-order signals. Vision Research, 50, 127–135.
Hammett S. T. (1997). Motion blur and motion sharpening in the human visual system. Vision Research, 37, 2505–2510.
Hammett S. T. Champion R. A. Morland A. B. Thompson P. G. (2005). A ratio model of perceived speed in the human visual system. Proceedings of the Royal Society of London B: Biological Sciences, 272, 2351–2356.
Hammett S. T. Champion R. A. Thompson P. G. Morland A. B. (2007). Perceptual distortions of speed at low luminance: Evidence inconsistent with a Bayesian account of speed encoding. Vision Research, 47, 564–568.
Hammett S. T. Georgeson M. A. Barbieri-Hesse G. S. (2003). Motion, flash, and flicker: A unified spatiotemporal model of perceived edge sharpening. Perception, 32, 1221–1232.
Hammett S. T. Georgeson M. A. Gorea A. (1998). Motion blur and motion sharpening: Temporal smear and local contrast non-linearity. Vision Research, 38, 2099–2108.
Hanada M. (2005). Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Research, 45, 749–758.
Hanes D. P. Schall J. D. (1996). Neural control of voluntary movement initiation. Science, 274, 427–430.
Harp T. D. Bressler D. W. Whitney D. (2007). Position shifts following crowded second-order motion adaptation reveal processing of local and global motion without awareness. Journal of Vision, 7(2):15, 1–13, http://www.journalofvision.org/content/7/2/15, doi:10.1167/7.2.15.
Harris J. Sullivan D. Oakley M. (2008). Spatial offset of test field elements from surround elements affects the strength of motion aftereffects. Perception, 37, 1010–1021.
Harris L. R. Duke P. A. Kopinska A. (2006). Flash lag in depth. Vision Research, 46, 2735–2742.
Harrison N. R. Wuerger S. M. Meyer G. F. (2010). Reaction time facilitation for horizontally moving auditory–visual stimuli. Journal of Vision, 10(14):16, 1–21, http://www.journalofvision.org/content/10/14/16, doi:10.1167/10.14.16.
Hayashi R. Miura K. Tabata H. Kawano K. (2008). Eye movements in response to dichoptic motion: Evidence for a parallel-hierarchical structure of visual motion processing in primates. Journal of Neurophysiology, 99, 2329–2346.
Hayashi R. Sugita Y. Nishida S. Kawano K. (2010). How motion signals are integrated across frequencies: Study on motion perception and ocular following responses using multiple-slit stimuli. Journal of Neurophysiology, 103, 230–243.
Hayes A. (2000). Apparent position governs contour-element binding by the visual system. Proceedings of the Royal Society of London B: Biological Sciences, 267, 1341–1345.
Hazelhoff F. Wiersma H. (1924). Die Wahrnehmungszeit [The time of perception]. Zeitschrift für Psychologie, 96, 171–188.
Heeger D. J. (1987). Model for the extraction of image flow. Journal of the Optical Society of America A, 4, 1455–1471.
Hess R. Hutchinson C. Ledgeway T. Mansouri B. (2007). Binocular influences on global motion processing in the human visual system. Vision Research, 47, 1682–1692.
Hess R. F. Aaen-Stockdale C. (2008). Global motion processing: The effect of spatial scale and eccentricity. Journal of Vision, 8(4):11, 1–11, http://www.journalofvision.org/content/8/4/11, doi:10.1167/8.4.11.
Hess R. F. Zaharia A. G. (2010). Global motion processing: Invariance with mean luminance. Journal of Vision, 10(13):22, 1–10, http://www.journalofvision.org/content/10/13/22, doi:10.1167/10.13.22.
Hibbard P. B. Bradshaw M. F. DeBruyn B. (1999). Global motion processing is not tuned for binocular disparity. Vision Research, 39, 961–974.
Hill H. Johnston A. (2001). Categorizing sex and identity from the biological motion of faces. Current Biology, 11, 880–885.
Hiris E. (2007). Detection of biological and nonbiological motion. Journal of Vision, 7(12):4, 1–16, http://www.journalofvision.org/content/7/12/4, doi:10.1167/7.12.4.