The results of Experiment 1 show that we encode reach targets relative to other task-relevant objects (among other reference frames), as reflected in the allocentric weights. Surprisingly, we seem to do so preferentially in the Visual Space condition, for which we found higher allocentric weights than in the two Pictorial Space conditions. This suggests that, in the Pictorial Space conditions, participants represented target objects more strongly relative to entities other than the task-relevant objects. Such entities could be either the observer themselves (i.e., an egocentric reference frame) or other, more stable task-irrelevant allocentric cues, for example, the frame of the monitor or the left and right table edges. Given the ill-defined location of the observer in pictorial space, a stronger reliance on egocentric information in pictorial space seems unlikely. Therefore, in Experiment 2, we controlled for other potential allocentric cues that participants could have used to represent the task-relevant objects. To this end, we substantially increased the width of the table and the monitor to reduce the possibility that their vertical edges were used as allocentric cues. If the vertical monitor edges, as one of the task-irrelevant allocentric cues, were responsible for the differences in spatial coding between Visual Space and Pictorial Space, we would expect these differences to be less pronounced. In contrast, if these allocentric cues did not play an important role in encoding the target position, we would expect to replicate the results of Experiment 1.
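As a point of reference for the weights reported below, allocentric weights of this kind are commonly expressed as the proportion of the object shift that carries over into the reaching endpoint, for example w_allo = Δx_endpoint / Δx_object (or, equivalently, the slope of a regression of endpoint deviations on object shifts), so that w_allo = 0 indicates purely egocentric coding and w_allo = 1 indicates full reliance on the shifted allocentric cues. This formulation is given only for illustration; the exact estimator used here is the one described in the Methods.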
The results of Experiment 2 are depicted in Figure 3. We found that reaching endpoints systematically deviated in the direction of object shift (Figure 3B), with allocentric weights significantly greater than zero, that is, greater than expected if the object shift had induced no reaching error (all tests against zero: p < 0.001). Similar to the results of Experiment 1, we found a significant effect of presentation mode, F(2, 30) = 40.863, p < 0.001, ηp² = 0.731, with the highest allocentric weights in the Visual Space condition (M = 0.497, SD = 0.147), followed by the Pictorial Large (M = 0.395, SD = 0.096) and then the Pictorial Small condition (M = 0.279, SD = 0.078). All pairwise comparisons were significant (Visual vs. Pictorial Small: t(15) = 7.927, p < 0.001, dz = 1.982; Visual vs. Pictorial Large: t(15) = 3.767, p = 0.002, dz = 0.942; Pictorial Small vs. Pictorial Large: t(15) = –7.327, p < 0.001, dz = 1.832). Our results support the findings of Experiment 1, suggesting that participants make more use of allocentric information in visual compared with pictorial space.
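For readers who wish to reproduce this style of analysis, the sketch below illustrates how per-participant allocentric weights could be tested: one-sample t-tests against zero within each presentation mode, a repeated-measures ANOVA with presentation mode as the within-subject factor, and pairwise paired t-tests with Cohen's dz. This is a minimal sketch under stated assumptions, not the authors' analysis code; the file name, column names, and condition labels are hypothetical.

```python
# Minimal sketch of the analysis reported above; NOT the authors' analysis code.
# Assumed long-format data: one allocentric weight per participant and presentation
# mode, with hypothetical columns 'participant', 'mode', and 'allo_weight'.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("allocentric_weights_long.csv")  # hypothetical file name

# One-sample t-tests of allocentric weights against zero, per presentation mode.
for mode, grp in df.groupby("mode"):
    t, p = stats.ttest_1samp(grp["allo_weight"], popmean=0.0)
    print(f"{mode}: t = {t:.3f}, p = {p:.4f}")

# Repeated-measures ANOVA with presentation mode as within-subject factor.
anova = AnovaRM(df, depvar="allo_weight", subject="participant", within=["mode"]).fit()
print(anova)

# Pairwise paired t-tests with Cohen's dz (mean of differences / SD of differences).
def paired_comparison(a, b):
    diff = np.asarray(a) - np.asarray(b)
    t, p = stats.ttest_rel(a, b)
    dz = diff.mean() / diff.std(ddof=1)
    return t, p, dz

wide = df.pivot(index="participant", columns="mode", values="allo_weight")
pairs = [("Visual", "PictorialSmall"),
         ("Visual", "PictorialLarge"),
         ("PictorialSmall", "PictorialLarge")]  # hypothetical condition labels
for m1, m2 in pairs:
    t, p, dz = paired_comparison(wide[m1], wide[m2])
    print(f"{m1} vs. {m2}: t({len(wide) - 1}) = {t:.3f}, p = {p:.4f}, dz = {dz:.3f}")
```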