To interact successfully with the world, we must encode, organize, and use spatial information. Such information can be represented in two fundamentally different ways: egocentric information is defined relative to the self, whereas allocentric information describes object-to-object relations independent of the self. This distinction is believed to be critical for the way in which the visual system is organized. A ventral visual pathway that is primarily involved in tasks requiring persistent relationships, such as recognizing people or objects, is proposed to organize visual input allocentrically, while a dorsal visual pathway primarily guides ongoing actions using instantaneous egocentric spatial information (Goodale & Milner, 1992).
Although the distinction is often referred to as one between perception and action, not all actions have a straightforward place within this scheme. Memory-guided actions, for instance, are movements in which the target object is removed from view before the motor response. Such movements are guided by remembered target positions, so they presumably depend, at least to some extent, on the ventral system that stores persistent information (Goodale, 2008). This contrasts with actions toward visible targets, which might rely exclusively on the dorsal system, in which ongoing movements are updated online according to moment-to-moment information about the location of the target object.
Westwood and Goodale (2003) proposed that the kind of information that is used depends on when the target is visible, suggesting that the target does not need to be visible throughout the entire movement for the action to be guided by dorsal, egocentric information. Moreover, evidence is accumulating against a strict separation between the two pathways in terms of goals and representations (Schenk & McIntosh, 2010). Thus, it is not inconceivable that allocentric information could guide ongoing movements under certain circumstances.
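To make this proposal concrete, the sketch below (our paraphrase of the classification logic; the function name, arguments, and labels are hypothetical illustrations, not Westwood and Goodale's own formulation) expresses the real-time hypothesis as a simple decision rule:

```python
def guidance_mode(target_visible_at_onset, occlusion_to_cue_delay_s=0.0):
    """Classify an action under the real-time hypothesis.

    Our paraphrase, for illustration only: an action is programmed from
    real-time (dorsal, egocentric) information whenever the target is
    visible at movement onset; otherwise it relies on stored (ventral)
    information, with or without an additional memory delay.
    """
    if target_visible_at_onset:
        return "online (dorsal, egocentric)"
    if occlusion_to_cue_delay_s > 0.0:
        return "memory-guided, delayed (ventral, stored)"
    return "memory-guided (ventral, stored)"

print(guidance_mode(True))                                  # online
print(guidance_mode(False, occlusion_to_cue_delay_s=2.0))   # delayed
```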
Lu and Fiehler (2020) recently adapted their well-established paradigm (e.g., Fiehler, Wolf, Klinghammer, & Blohm, 2014; Klinghammer, Blohm, & Fiehler, 2015, 2017) by administering an air-puff to the right eye of participants to force an eye-blink. During the blink, the target item disappeared, and on the critical trials, other objects (i.e., landmarks) were shifted. The puffs were presented at various times relative to movement onset, and trials were accordingly classified as memory-guided, memory-guided delayed, or online according to Westwood and Goodale's (2003) real-time hypothesis. Irrespective of the time of the eye-blink, and therefore of the object displacements, reaching movements were corrected in accordance with the updated location of the landmarks, indicating the use of allocentric information. The authors interpret this result as the first evidence for the use of allocentric information in real-time reaching. A limitation of Lu and Fiehler's (2020) work is that, after the eye-blink, there was no visual information regarding the target because the target was removed from the test scene. This is, of course, a requirement of the experimental paradigm: the target had to be removed to allow for a modification of allocentric but not egocentric information, which was necessary to determine the weighting of these two types of spatial information. To search for evidence of the use of allocentric information in an ongoing, visually guided action, we looked for a task in which it could be advantageous for participants to rely on allocentric information despite the target remaining visible.
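The logic behind inferring allocentric use from landmark shifts can be made explicit with a small sketch (our illustration, not Lu and Fiehler's actual model; the positions and the weight are hypothetical placeholders):

```python
# A minimal sketch of how a weighted combination of egocentric and
# allocentric information predicts reach endpoints after a landmark
# shift. Positions are in centimetres along one hypothetical axis.

def predicted_endpoint(target, landmark, landmark_shift, w_allo=0.4):
    """Predict the reach endpoint when the landmark is displaced.

    Egocentric coding remembers the target position itself; allocentric
    coding remembers the target's offset from the landmark, so shifting
    the landmark drags the allocentrically coded position along with it.
    """
    ego_estimate = target                                   # self-to-target
    allo_estimate = (landmark + landmark_shift) + (target - landmark)
    return (1 - w_allo) * ego_estimate + w_allo * allo_estimate

# A 2-cm landmark shift should shift the endpoint by w_allo * 2 cm.
print(predicted_endpoint(target=10.0, landmark=6.0, landmark_shift=2.0))  # 10.8
```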
It is generally accepted that using a tool changes the relationship between the “self” and the “surroundings” to some extent (see Holmes, 2012, for a review). Several types of tools exist. Some tools, when gripped by our hand, can be considered extensions of that hand. This is, for instance, the case when using a stick to intercept a ball drifting along a canal. In this example, we get tactile feedback from the stick and watch our arm perform a movement that logically leads to the interception. For other tools, the relationship between the task and how the hand moves is less straightforward. This is so, for instance, when turning the steering wheel of a car or using a cursor to intercept a virtual ball drifting along a virtual canal on a computer screen. In the latter example, we do not get tactile information about the interception and cannot directly perceive the relationship between how our arm moves and how the cursor moves to intercept the ball. Indeed, when moving a cursor, the hand normally moves forward to move the cursor upward on the screen, and the extents of the hand and cursor movements can be quite different. It is not even obvious where the origin of the egocentric reference would be when moving a cursor across a screen. It is therefore reasonable to assume that such a tool increases the extent to which one relies upon allocentric visual information to guide the action. We therefore used such a tool to examine whether ongoing movements can be guided by allocentric spatial information.
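As an illustration of why the hand-to-cursor mapping obscures any simple egocentric reference, consider the following toy mapping (our own example; the axis remapping and gain are hypothetical, not the mapping used in our setup):

```python
# Toy hand-to-cursor mapping, for illustration only: forward hand motion
# on the desk moves the cursor upward on the screen, and a gain scales
# the movement, so equal cursor displacements correspond to quite
# different hand displacements.

GAIN = 2.5  # cm of cursor motion per cm of hand motion (illustrative)

def hand_to_cursor(hand_xy):
    """Map a hand displacement on the desk to a cursor displacement.

    hand_xy: (rightward, forward) displacement of the hand in cm.
    Returns: (rightward, upward) displacement of the cursor in cm.
    """
    rightward, forward = hand_xy
    return (GAIN * rightward, GAIN * forward)  # forward becomes upward

# A 4-cm forward hand movement yields a 10-cm upward cursor movement.
print(hand_to_cursor((0.0, 4.0)))  # (0.0, 10.0)
```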
Perturbation paradigms have commonly been used to explore how sudden, unpredictable changes in the visual input influence goal-directed movements. The hand has been reported to deviate from its path in the direction of a target perturbation (e.g., Franklin, Reichenbach, Franklin, & Diedrichsen, 2016; Georgopoulos, Kalaska, & Massey, 1981; Reichenbach, Franklin, Zatka-Haas, & Diedrichsen, 2014; Soechting & Lacquaniti, 1983). This response occurs approximately 100–150 ms after the perturbation (Brenner & Smeets, 1997; Day & Lyon, 2000; Gritsenko, Yakovenko, & Kalaska, 2009; Oostwoud Wijdenes, Brenner, & Smeets, 2011; Prablanc & Martin, 1992). Research has also shown that the hand deviates in the direction opposite to a cursor perturbation with a similar latency (Brenner & Smeets, 2003; Brière & Proteau, 2010; Cross, Cluff, Takei, & Scott, 2019; Franklin et al., 2016; Proteau, Roujoula, & Messier, 2009; Reichenbach et al., 2014; Sarlegna et al., 2004; Sarlegna & Blouin, 2010; Saunders & Knill, 2004; Veyrat-Masson, Brière, & Proteau, 2010). Importantly for the current study, the hand has also been reported to deviate in the direction of sudden, unexpected background motion with a latency of approximately 110–160 ms (Gomi, Abekawa, & Nishida, 2006; Gomi, Abekawa, & Shimojo, 2013; Saijo, Murakami, Nishida, & Gomi, 2005; Whitney, Westwood, & Goodale, 2003), even when the target remains visible (Brenner & Smeets, 1997).
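For readers unfamiliar with how such correction latencies are typically quantified, the following sketch (a generic approach under assumed data structures, not the specific method of any of the cited studies) estimates the latency as the first moment at which the mean lateral velocity on perturbed trials deviates reliably from that on unperturbed trials:

```python
import numpy as np

def correction_latency(perturbed, unperturbed, dt=0.001, n_sd=5.0):
    """Estimate the latency of an online movement correction.

    perturbed, unperturbed: arrays of shape (trials, samples) holding
    the lateral hand velocity on each trial, aligned to perturbation
    onset. dt: sampling interval in seconds (1 kHz assumed here).
    n_sd: detection threshold in baseline standard deviations.
    """
    diff = perturbed.mean(axis=0) - unperturbed.mean(axis=0)
    baseline_sd = diff[: int(0.05 / dt)].std()  # first 50 ms as baseline
    beyond = np.nonzero(np.abs(diff) > n_sd * baseline_sd)[0]
    if beyond.size == 0:
        return None  # no detectable correction
    return beyond[0] * dt * 1000.0  # convert samples to milliseconds

# Synthetic example: a correction that begins 120 ms after the perturbation.
t = np.arange(0.0, 0.5, 0.001)
unpert = np.random.randn(20, t.size) * 0.01
pert = np.random.randn(20, t.size) * 0.01 + np.where(t > 0.12, 0.2, 0.0)
print(correction_latency(pert, unpert))  # approximately 120 ms
```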
The existing literature thus shows that corrective responses to target, cursor, and background perturbations are robust and occur with similar latencies. This study aimed to explore whether moving a cursor relies upon allocentric information by applying perturbations to these different components of an interception task (i.e., the target, the cursor, and the background) either independently or simultaneously. When all the components of the task move simultaneously, the spatial relations between the target, cursor, and background remain constant, whereas the spatial relations between the observer and the task components change. Therefore, if only allocentric information is used to guide the cursor in this task, we would not expect any corrective response to the simultaneous perturbation of all three task components. In a first experiment, we examined whether participants responded to such a simultaneous perturbation and, if so, whether their responses were consistent with the responses to the separate components. In a second experiment, we examined whether participants would learn not to respond to simultaneous perturbations if such perturbations were presented repeatedly.
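To spell out this prediction, the sketch below (our illustration; the response gains are hypothetical sign-only placeholders, not measured values) tabulates the corrective response expected from an idealized egocentric controller and from an idealized allocentric controller for each perturbation type:

```python
# Expected initial corrective responses for each perturbation type under
# two idealized controllers. Entries are hypothetical unit shifts of each
# component; outputs are illustrative signed magnitudes, where positive
# means a correction in the direction of the shift and 0 means none.

perturbations = {
    #                      target  cursor  background
    "target only":        (1,      0,      0),
    "cursor only":        (0,      1,      0),
    "background only":    (0,      0,      1),
    "all simultaneously": (1,      1,      1),
}

def egocentric_response(target, cursor, background):
    # Responds to each component separately, as in the cited literature:
    # toward the target shift, away from the cursor shift, and with the
    # background motion.
    return target - cursor + background

def allocentric_response(target, cursor, background):
    # Responds only to changes in the relative positions of the
    # components; a common shift leaves all relations unchanged, so the
    # simultaneous perturbation predicts no correction at all.
    return (target - cursor) + (background - cursor)

for name, shifts in perturbations.items():
    print(f"{name:20s} ego: {egocentric_response(*shifts):+d}  "
          f"allo: {allocentric_response(*shifts):+d}")
```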