The question remains, therefore, whether people use the motion information available during the moment-to-moment tracking of visible objects. To investigate this, we asked participants to track textured objects and varied the motion of the texture independently of the motion of the objects. The texture either remained static or moved relative to the object. If people use only location information to track multiple objects, as suggested by some previous work (Keane & Pylyshyn,
2006), tracking accuracy should be the same in all texture conditions. However, if people use local motion to track objects, tracking accuracy should vary across conditions. We assume that the motion of the texture will be integrated with the motion of the object's edges, producing an estimate of local motion (Lorenceau,
1996; Mingolla, Todd, & Norman,
1992; Qian, Andersen, & Adelson,
1994; Weiss, Simoncelli, & Adelson,
2002). This is the same type of local averaging that explains induced motion (Brosgole,
1968; Duncker,
1929; Johnston, Benton, & McOwan,
1999; Rock, Auster, Schiffman, & Wheeler,
1980). It has also been suggested that the coherent percept of a moving plaid occurs through combining the motion signals of the two individual gratings that make up the plaid (Adelson & Movshon,
1982; Burke, Alais, & Wenderoth,
1994; Derrington & Suero,
1991). Consequently, we predict that texture moving in a direction different from that of the objects will produce a local estimate of motion that does not match the true object motion and will lead to tracking errors. Furthermore, if people use the direction of motion to predict the future location of targets, tracking accuracy should decrease as the direction of the texture motion deviates further from the target's motion.
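The integration assumption above can be illustrated with a toy sketch: treating the local motion estimate as a weighted vector average of the edge motion and the texture motion. This is only an illustrative simplification, not the model proposed in the cited work; the equal weights and the specific vectors are arbitrary assumptions.

```python
import math

def integrate_motion(edge_motion, texture_motion, w_edge=0.5, w_texture=0.5):
    """Combine edge and texture motion vectors by a weighted vector average.

    The equal default weights are an arbitrary illustrative assumption.
    """
    ex, ey = edge_motion
    tx, ty = texture_motion
    return (w_edge * ex + w_texture * tx, w_edge * ey + w_texture * ty)

def direction_deg(vector):
    """Direction of a 2-D motion vector in degrees."""
    return math.degrees(math.atan2(vector[1], vector[0]))

# Object edges move rightward; the texture drifts upward relative to the object.
edge = (1.0, 0.0)
texture = (1.0, 1.0)  # object motion plus an upward relative drift

estimate = integrate_motion(edge, texture)
print(direction_deg(edge))      # 0.0 (true object direction)
print(direction_deg(estimate))  # ~26.6, biased toward the texture direction
```

Under this sketch, the integrated estimate deviates from the true object direction by an amount that grows with the angular offset of the texture motion, matching the prediction that tracking errors should increase as texture direction departs further from the target's direction.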