Human visual systems are remarkably accurate at discriminating fine differences in motion direction, with directional resolution generally reported to be in the region of 5 degrees (Ball & Sekuler, 1987; Gros, Blake, & Hiris, 1998; Krukowski, Pirog, Beutter, Brooks, & Stone, 2003). This directional acuity is achieved despite the fact that motion-sensitive neurons are tuned to a rather broad range of directions, both at the level of primary visual cortex (V1) (Sincich & Horton, 2005) and in more specialized motion-sensitive areas such as hMT+ (V5) (Born & Bradley, 2005). Single-unit studies show that the directional tuning curves of hMT+ neurons have a full width at half maximum of about 100° (Snowden, Treue, & Andersen, 1992), so individual motion units have relatively poor directional resolution. In response to this observation, most models of motion perception assume that direction of motion is computed at the level of the population response, possibly through a vector-averaging process whereby the pooled responses of many broadly tuned motion-sensitive neurons provide the information from which motion direction can be extracted (Adelson & Movshon, 1982; Georgeson & Scott-Samuel, 1999; Koch, Wang, & Mathur, 1989; Reichardt & Schlögl, 1988; Snowden & Braddick, 1989; Wilson & Kim, 1994). These population models assume that this pooling occurs among motion-specialized neurons in a stream that is functionally distinct from the pathways processing other visual properties such as color and orientation (Goodale & Milner, 1992; Livingstone & Hubel, 1988; Mishkin, Ungerleider, & Macko, 1983). However, in the years since parallel and largely independent pathways for motion and for color and form were first proposed, evidence has accumulated to suggest that the form and motion pathways engage in significant interactions (Giese, 1999; Kourtzi, Krekelberg, & van Wezel, 2008; Lorenceau & Alais, 2001; Murray, Olshausen, & Woods, 2002; Ross, Badcock, & Hayes, 2000).
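To illustrate how a vector-averaging readout can recover fine direction from broadly tuned units, the following is a minimal sketch, not taken from any of the cited models: each unit is assumed to have a circular Gaussian tuning curve with a 100° full width at half maximum (as reported for hMT+ neurons above), and preferred directions are assumed to be evenly spaced. Each unit contributes a unit vector along its preferred direction, weighted by its response, and the stimulus direction is read out as the angle of the vector sum.

```python
import math

def population_vector_direction(preferred_dirs, responses):
    """Vector average: each neuron votes with a unit vector along its
    preferred direction, weighted by its response."""
    x = sum(r * math.cos(math.radians(d)) for d, r in zip(preferred_dirs, responses))
    y = sum(r * math.sin(math.radians(d)) for d, r in zip(preferred_dirs, responses))
    return math.degrees(math.atan2(y, x)) % 360

# Assumed tuning: Gaussian with ~100 deg full width at half maximum
FWHM = 100.0
SIGMA = FWHM / (2 * math.sqrt(2 * math.log(2)))  # ~42.5 deg

def tuning_response(preferred, stimulus_dir):
    # Circular difference wrapped to [-180, 180)
    delta = (stimulus_dir - preferred + 180) % 360 - 180
    return math.exp(-0.5 * (delta / SIGMA) ** 2)

# Hypothetical population: 24 units with preferred directions 15 deg apart
preferred = [i * 15.0 for i in range(24)]
stimulus_dir = 37.0
responses = [tuning_response(p, stimulus_dir) for p in preferred]

estimate = population_vector_direction(preferred, responses)
# With symmetric tuning and even spacing, the estimate recovers the
# stimulus direction far more precisely than any single unit's ~100 deg
# tuning width would suggest.
```

In this noiseless, evenly sampled case the readout is essentially exact; in practice, response noise limits the achievable precision, which is the setting in which the behavioral ~5° resolution figure becomes informative.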