Third, we determined the allocentric weight by comparing the observed baseline-corrected reaching errors with the maximal expected reaching error (MERE). The MERE was estimated under the assumption that, if participants relied solely on allocentric information to localize objects in space, the reach endpoint errors would equal the physical displacement of the objects; it was calculated for each image by averaging the displacements of the shifted objects (Klinghammer et al., 2015). For example, if three out of the five table objects were shifted by 3 cm to the left, the MERE is the sum of the displacements divided by the number of shifted objects, i.e., 3 cm left of the original reach target position; if all five table objects were shifted by 3 cm to the left, the MERE is likewise 3 cm left of the original reach target position. The allocentric weight is then defined as the slope of a linear regression calculated for each participant, with the MERE as the independent variable and the observed baseline-corrected horizontal reaching error as the dependent variable. A slope of one would indicate that the baseline-corrected reaching error equals the MERE, i.e., that participants relied completely on the allocentric information provided by the shifted objects, while a slope of zero would indicate no use of the allocentric information of the shifted objects (equal to baseline).
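To make these two computations concrete, the following minimal sketch implements them in Python; the function names, the sign convention (negative = leftward), and the example data are ours for illustration and are not part of the original analysis.

```python
import numpy as np
from scipy import stats

def mere(displacements_cm):
    """Maximal expected reaching error for one image: the mean
    displacement of the shifted objects (negative = leftward)."""
    shifted = [d for d in displacements_cm if d != 0]
    return float(np.mean(shifted))

# Example from the text: three of five table objects shifted 3 cm left
print(mere([-3, -3, -3, 0, 0]))  # -3.0, i.e., 3 cm left of the target

def allocentric_weight(mere_values, reach_errors):
    """Per-participant slope of the regression of baseline-corrected
    horizontal reaching error (dependent) on MERE (independent)."""
    result = stats.linregress(mere_values, reach_errors)
    return result.slope
```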
First, we tested whether the allocentric weights in each condition and group differed significantly from zero using two-sided, one-sample t tests. If allocentric information is used for memory-guided reaching, allocentric weights should be significantly greater than zero (baseline), i.e., reach endpoints should systematically deviate in the direction of the object shifts.
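A sketch of this test, assuming a hypothetical vector of per-participant allocentric weights for one cell of the design:

```python
from scipy import stats

# Hypothetical per-participant allocentric weights for one condition
weights = [0.21, 0.35, 0.18, 0.42, 0.29, 0.33]

# Two-sided, one-sample t test against zero
t, p = stats.ttest_1samp(weights, popmean=0.0)
print(f"t = {t:.2f}, p = {p:.4f}")
```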
In order to assess how gaze and prior knowledge influence allocentric coding of reach targets, we conducted a 3 × 2 × 2 mixed ANOVA with allocentric weight (= slope) as the dependent variable, shift number (1, 3, 5) and gaze (fixation vs. free-view) as within-subject factors, and prior knowledge (nonpreview vs. preview) as a between-subjects factor.
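As an illustration only: common Python ANOVA helpers do not directly handle two within-subject factors plus a between-subjects factor, so the sketch below approximates the mixed ANOVA with a linear mixed-effects model (random intercept per participant); the data frame and all column names are hypothetical, not from the original study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: one allocentric weight per participant
# x shift number x gaze cell; prior knowledge varies between participants.
rows = [dict(participant=pid,
             prior="preview" if pid < 12 else "nonpreview",
             shift=shift, gaze=gaze,
             weight=rng.normal(0.3, 0.1))
        for pid in range(24)
        for shift in (1, 3, 5)
        for gaze in ("fixation", "free_view")]
df = pd.DataFrame(rows)

# Random intercept per participant stands in for the repeated-measures
# structure; fixed effects give the three main effects and interactions.
model = smf.mixedlm("weight ~ C(shift) * gaze * prior",
                    df, groups=df["participant"]).fit()
print(model.summary())
```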
As we found significant differences in the encoding time for gaze and prior knowledge (see Results), we further controlled for the influence of encoding time by adding it as a between-group covariate to the three-way ANOVA.
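Continuing the hypothetical sketch above, the covariate amounts to one additional term in the model formula (the encoding_time column is likewise assumed, not from the original data):

```python
# Hypothetical encoding times added to the data frame from the sketch above
df["encoding_time"] = rng.normal(3.0, 0.5, len(df))

# Encoding time enters as a covariate alongside the design factors
model_cov = smf.mixedlm("weight ~ C(shift) * gaze * prior + encoding_time",
                        df, groups=df["participant"]).fit()
```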
In line with our previous studies (e.g., Fiehler et al., 2014), we expect allocentric weights to increase with an increasing number of object shifts. According to our hypotheses on the effects of gaze and prior knowledge, we expect higher allocentric weights in the free-view than in the fixation condition, as no stable retinal reference point is available. Moreover, allocentric weights should be higher in the nonpreview than in the preview group, as the reach target is unknown and all table objects need to be spatially encoded. This result pattern should be reflected in a main effect of shift number, a main effect of gaze, and a main effect of prior knowledge.