Open Access
Article  |   January 2023
Psychophysical evidence for the involvement of head/body-centered reference frames in egocentric visuospatial memory: A whole-body roll tilt paradigm
Author Affiliations
  • Keisuke Tani
    Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
    Faculty of Psychology, Otemon Gakuin University, Osaka, Japan
    keisuketani.pt@gmail.com
  • Shintaro Uehara
    Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
    shintaro.uehara@gmail.com
  • Satoshi Tanaka
    Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
    tanakas@hama-med.ac.jp
Journal of Vision January 2023, Vol. 23, 16. doi: https://doi.org/10.1167/jov.23.1.16
Abstract

Accurate memory regarding the location of an object with respect to one's own body, termed egocentric visuospatial memory, is essential for action directed toward the object. Although researchers have suggested that the brain stores information related to egocentric visuospatial memory not only in the eye-centered reference frame but also in other egocentric (i.e., head- and/or body-centered) reference frames, experimental evidence is scarce. Here, we tested this possibility by exploiting the perceptual distortion of head/body-centered coordinates induced by whole-body tilt relative to gravity. We hypothesized that if the head/body-centered reference frames are involved in storing the egocentric representation of a target in memory, then reproduction would be affected by this perceptual distortion. In two experiments, we asked participants to reproduce the remembered location of a visual target relative to their head/body. Using intervening whole-body roll rotations, we manipulated the initial (target presentation) and final (reproduction of the remembered location) body orientations in space and evaluated the effect on the reproduced location. Our results showed significant biases of the reproduced target location and of the perceived head/body longitudinal axis in the direction of the intervening body rotation. Importantly, the magnitudes of these two biases were correlated across participants. These results provide experimental evidence for the neural encoding and storage of information related to egocentric visuospatial memory in the head/body-centered reference frames.

Introduction
The ability to remember spatial relationships between one's own body and objects in space is an important determinant of action directed toward objects. When reaching for an object that was initially visible but was then made invisible as a result of occlusion, visuospatial memory regarding the object location with respect to one's own body or body parts, termed egocentric visuospatial memory, is essential for successful action. 
How does the brain store the egocentric representations of visual space in memory? Visual information is received by the retina and is initially coded in eye-centered (retinotopic) coordinates (Goldberg & Bruce, 1990; Duhamel, Colby, & Goldberg, 1992). Therefore it is reasonable to assume that visuospatial information is stored with respect to the eyes. Indeed, gaze shifting to the periphery after memorizing a target location was found to induce a systematic bias in the reproduced location (Henriques, Klier, Smith, Lowy, & Crawford, 1998; Golomb & Kanwisher, 2012; Shafer-Skelton & Golomb, 2018; Tanaka, 2005; Smith & Crawford, 2001; Sorrento & Henriques, 2008; Thompson & Henriques, 2008; Baker, Harper, & Snyder, 2003). If a target location were stored entirely in stable head- or body-centered reference frames by combining retinal and extraretinal (e.g., eye position in orbit or head position relative to the trunk or gravity) signals, reproduction would be accurate regardless of intervening saccades. Hence, these gaze-dependent errors signal the dominance of an eye-centered storage mechanism for visuospatial memory (see Thompson & Henriques, 2011 for a review). 
Nevertheless, the head/body-centered reference frames may be partially involved in the storage and retrieval of egocentric visuospatial memory, together with the eye-centered reference frame. This is because, even in such a case, errors in eye-centered spatial updating caused by peripheral saccades could still produce the gaze-dependent reproduction errors described above. Baker et al. (2003) reported that reproduction of a gaze-fixed target location after intervening eye movements or whole-body rotation was better accounted for by a model that nonlinearly combined the eye-centered (i.e., retinal) and head-centered (i.e., eye position) signals than by a model that included only the eye-centered signal. This finding suggests that the head-centered reference frame may also be used for egocentric visuospatial memory. However, these results may alternatively support the presence of a flexible eye-centered storage mechanism, in which the eye-centered representation of space is remapped based on the eye-position signal (Colby & Goldberg, 1999; Goldberg & Bruce, 1990; Zipser & Andersen, 1988). Accordingly, whether the head/body-centered reference frames are directly involved in egocentric visuospatial memory is still unclear. 
To address this, we conducted two psychophysical experiments involving an egocentric visuospatial memory task in which participants were asked to memorize and reproduce a visual target location with respect to their own body. In these experiments, the initial (presentation of a visual target) and final (reproduction of the remembered location) orientations of the body in space were manipulated via an intervening whole-body roll rotation, as in a previous study (Van Pelt, Van Gisbergen, & Medendorp, 2005). We evaluated the effect of the body orientation manipulation on egocentric visuospatial memory task performance and on the perceived head/body-centered coordinates. Previous studies have shown that when the whole body is tilted sideways, the subjective direction of the head/body longitudinal axis exhibits a large bias in the tilted direction (Barra, Benaim, Chauvineau, Ohlmann, Gresty, & Pérennou, 2008; Bauermeister, Werner, & Wapner, 1964; Ceyte, Cian, Nougier, Olivier, & Roux, 2006; Ceyte, Cian, Nougier, Olivier, & Trousselard, 2007; Ceyte, Cian, Trousselard, & Barraud, 2009; McFarland & Clarkson, 1966; Tani, Shiraki, Yamamoto, Kodaka, & Kushiro, 2018; Tani & Tanaka, 2021; Tamura, Wada, Inui, & Shiotani, 2017; Wood, Paloski, & Reschke, 1998). This phenomenon indicates the perceptual distortion of head/body-centered coordinates, which likely occurs because of a disturbance in the process of constructing the head/body-centered representation of space after body tilt-related changes in somatosensory and vestibular inputs (Ceyte et al., 2007). A head roll tilt relative to gravity can also distort the eye-centered and gravity-based coordinates. Specifically, when the head is tilted relative to gravity, the torsional eye position relative to the head and the perceived gravitational vertical are biased in the opposite direction of the head tilt (see Kheradmand & Winnick, 2017 for a review). However, the amplitude of these distortions is minor during a small head/body roll tilt (Bockisch & Haslwanter, 2001; Tani, Uehara, & Tanaka, 2022). Therefore, if the egocentric location of a visual target is stored in the head/body-centered reference frames, the reproduced location should shift toward the perceptually distorted head/body axis (i.e., the direction of the intervening whole-body rotation; see Figure 1 for details). Alternatively, if the target location is stored entirely in the eye-centered or other (i.e., gravity-based) reference frames, the reproduction error should be negligible. 
Figure 1.
 
The head/body-centered coding model for egocentric visuospatial memory. In this example, an intervening body rotation occurs in the clockwise (CW) direction. When the target is presented at angle θ relative to the head longitudinal axis in the left-side-down (LSD) position, the perceived head axis is biased leftward by αh. Therefore the memorized target angle (θm) is θ + αh. When the target location is reproduced in the right-side-down (RSD) position, the memorized angle with respect to the perceived head axis is biased rightward by αhʹ (θr shows the reproduced angle). This leads to a reproduction error of αh + αhʹ to the right relative to the true location. If the intervening body rotation occurs in the counterclockwise (CCW) direction, the reproduced target location will shift to the left relative to the true location. In other words, it is assumed that the reproduction error (αh + αhʹ) will occur in the direction of the intervening body rotation.
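Stated compactly, and using the symbols defined in the caption (with rightward, i.e., CW, angles taken as positive), the model in Figure 1 predicts:

θm = θ + αh (encoding at the initial, LSD orientation)
θr = θm + αhʹ = θ + αh + αhʹ (reproduction at the final, RSD orientation)
θr − θ = αh + αhʹ (predicted reproduction error, in the direction of the CW rotation)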
Methods
Participants
The present study was approved by the Ethical Committee of Hamamatsu University School of Medicine and was conducted in accordance with the Declaration of Helsinki (2013). Twenty healthy volunteers (aged 21–33 years, 4 women) participated in both Experiments 1 and 2. All participants had normal vision and no neurological, vestibular, or cognitive disorders. Each participant provided written informed consent before the experiments began. The number of participants (n = 20) was determined by a sample size calculation conducted using G*Power (version 3.1.9.2, Heinrich-Heine-Universität, Düsseldorf, Germany). The effect size was set at r = 0.60 for intersubject correlation analyses (α = 0.05, 1 − β = 0.8), in reference to a previous study (Ceyte et al., 2009) on human perception of head and body tilt. The drop-out rate was set at 0.15, assuming that some participants would be unable to complete both experiments. To increase the reliability and transparency of this study (Nosek, Ebersole, DeHaven, & Mellor, 2018), the study protocol was pre-registered in the University Hospital Medical Information Network (UMIN; registration number: UMIN000039163). 
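For reference, the required sample size for detecting an inter-subject correlation of r = 0.60 can be approximated with the standard Fisher z formula; the short sketch below is for illustration only (it is not the authors' G*Power computation, whose exact output may differ slightly):

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size for a two-tailed test of a Pearson correlation,
    using the Fisher z transform: n = ((z_(1-alpha/2) + z_power) / atanh(r))**2 + 3."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(((z_alpha + z_power) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.60))  # ~20, in line with the 20 volunteers recruited
```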
Apparatus
Participants were seated on a motorized tilting chair (SP-PS100-Z, Pair support, Japan) capable of rotating in the frontal plane around a rotating axis located 3 cm below the center of the seat. The trunk and legs of each participant were tightly restrained using a seatbelt (Clubman 70, Sabelt Japan, Japan) and straps. The participant's head was also secured to the chair in a natural upright position via a band and lateral and posterior plates fixed to the backrest. This enabled safe and comfortable whole-body rotation. The chair rotated with a constant angular velocity (peak velocity: 4.48°/s, peak acceleration/deceleration: 2.61°/s²). The duration of the intervening body rotation in each tilt condition (except the U-U condition) was approximately five seconds. A monitor (LQ079L1SX02, Sharp, Japan; width: 12.6 cm, height: 17.1 cm) was mounted onto the chair via a metal frame (Green Frame, SUS, Japan) such that it was positioned 25 cm in front of the participants' head. The height of the monitor was adjusted so that its center was at eye level for each participant. A black cylinder (26 cm in diameter), one side of which was covered by a black board with a hole (11.0 cm in diameter), was firmly positioned between the participant's head and the monitor. This prevented the participants from utilizing any visual cues (e.g., edges of the monitor) when performing the task. During the visuospatial memory and subjective visual head axis (SVHA) tasks, the participants used a digital controller (F310r; Logitech, Lausanne, Switzerland). An anti-aliasing mode was used for the projection to limit the presence of cues related to the pixel alignment. During the experiment, white noise was presented to the participants via earphones to limit the use of surrounding auditory cues. 
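As a rough consistency check on the reported duration, a symmetric trapezoidal velocity profile with the stated peak velocity and acceleration yields roughly five seconds for a 16° rotation (the trapezoidal profile is an assumption; the exact chair trajectory was not specified):

```python
def rotation_duration(angle_deg, v_peak=4.48, accel=2.61):
    """Duration (s) of a chair rotation through angle_deg degrees, assuming a
    symmetric accelerate-cruise-decelerate (trapezoidal) velocity profile."""
    ramp_angle = v_peak ** 2 / accel          # angle covered while speeding up and slowing down
    if angle_deg <= ramp_angle:               # peak velocity never reached (triangular profile)
        return 2 * (angle_deg / accel) ** 0.5
    return 2 * v_peak / accel + (angle_deg - ramp_angle) / v_peak

print(rotation_duration(16.0))  # ~5.3 s, consistent with the ~5 s stated above
```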
Egocentric visuospatial memory task
Figure 2 shows the sequence of events in one trial of the egocentric visuospatial memory task. Participants were seated on the tilting chair in a semi-dark room. After the examiner signaled the beginning of the trial, the participants’ body was either maintained in an upright position or tilted to left- or right-side-down positions of 8° or 16° (termed the initial body orientation). When the participant had reached the initial body orientation, a fixation point was presented at the center of the monitor. The participants were instructed to fixate on this point whenever it was presented. Two seconds later, the peripheral memory target (0.2 cm in diameter) appeared for three seconds. The participants were asked to memorize its location using egocentric coordinates (i.e., the location relative to their head and/or body) and not allocentric coordinates (i.e., the location relative to external space and objects). The memory target could be presented at one of eight angles (15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°) in a clockwise (CW) direction with respect to the head longitudinal axis on an arc of radius 2.8 cm, centered on the fixation point. Immediately after the visual target was extinguished, the participants’ body was either held upright or rotated on the frontal plane to a new body orientation (termed the final body orientation). At the final body orientation, the visual cursor (0.2 cm in diameter) was presented in a different location from the memory target. The participant was then expected to move the cursor to the remembered location by manipulating the controller. There was no time limit for reproducing the target. The instructions to the participants were as follows: “Regardless of whether your body is upright or tilted, memorize and reproduce the target location with respect to yourself as accurately as possible.” In this study, we focused on the “directional” deviation of the reproduced target position on the frontal plane. Therefore we set the cursor so that it could only be moved on an arc of radius 2.8 cm, centered on the fixation point. The participants were required to memorize and recall only the direction of the target location, which enabled them to quickly reproduce the target position. The interval between the disappearance of the visual target and the appearance of the cursor was set at 15 seconds. After adjusting the cursor, the participants were rotated back to an upright position, where they were given time (10–30 seconds) to prepare for the next trial. The participants were not given feedback about their task performance. In the present study, the fixation point was continuously presented throughout each trial to prevent unintended eye position shifts that could affect reproduction performance. Considering that the participants memorized and reproduced the target “direction” but not the displacement of the target, we determined that the fixation point could not provide a direct allocentric cue for the task. 
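For illustration, the eight possible target positions can be generated as points on the 2.8 cm arc around the fixation point. The sketch below assumes that, because the monitor was chair-fixed, angles measured clockwise from the head longitudinal axis map directly onto display coordinates; it is not the authors' stimulus code:

```python
import math

RADIUS_CM = 2.8
TARGET_ANGLES_DEG = [15, 60, 105, 150, 195, 240, 285, 330]  # CW from the head longitudinal axis

def target_position(angle_cw_deg, radius=RADIUS_CM):
    """Display coordinates (cm) of a target relative to the fixation point.
    0 deg points along the head longitudinal axis ('up' on the chair-fixed monitor);
    positive angles run clockwise, so x grows toward the participant's right."""
    a = math.radians(angle_cw_deg)
    return radius * math.sin(a), radius * math.cos(a)

positions = {angle: target_position(angle) for angle in TARGET_ANGLES_DEG}
```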
Figure 2.
 
Time course of an experimental trial for the egocentric visuospatial memory task. As an example, the T-To condition in Experiment 1 for the clockwise group (see Participant allocation for details) is shown. At the initial body orientation, the memory target was presented and the participants were expected to memorize its location relative to their own body while fixating on the central fixation point. Then, the participants were rotated to the final body orientation, where they were expected to move a visual cursor (open circle) to the remembered target location (dashed circle). For clarity, the display vertical axis is shown via gray dotted lines.
Participant allocation
The participants were pseudo-randomly allocated to either the CW (10 participants) or counter-clockwise (CCW; 10 participants) group. Participants in the CW and CCW groups were tilted in a CW (i.e., rightward) or CCW (i.e., leftward) direction, respectively, in the frontal plane between the initial (target presentation) and final (reproduction) body orientations in Experiments 1 and 2 (see Experimental conditions for details). This allowed us to evaluate performance on the visuospatial memory task under the targeted body tilt conditions without a lengthy experimental session for each participant. 
Experimental conditions
To investigate the relationship between the perceived head/body-centered coordinates and egocentric visuospatial memory, we manipulated the initial and final body orientations via whole-body rotation in the frontal plane. We then evaluated the effect of this rotation on performance (accuracy) on the egocentric visuospatial memory task. The participants completed the two experiments on the same day or on separate days. 
Although it has been shown that the perceived head/body axis typically shifts towards the direction of body tilt, the amplitude of this bias varies widely across healthy individuals (Tani & Tanaka, 2021; Tani et al., 2018). Therefore we sought to confirm whether perceptual distortion of the head/body axis was present in the current participant group. In addition, evaluating the distortion of the head/body axis orientation individually enabled us to verify the inter-subject correlation between the perceptual distortion of head/body-centered coordinates and visuospatial memory performance. To quantify the perceptual distortion of head/body-centered coordinates at each tilted position, the participants additionally performed the SVHA task after each experiment. Although a whole-body tilt relative to gravity can also induce the distortion of eye-centered and gravity-based coordinates, we expected these distortions to be negligible during a near upright body tilt (Bockisch & Haslwanter, 2001; Tani et al., 2022). Therefore we assessed neither the eye rotation angles nor the perceived direction of gravity in the present study. 
Experiment 1
Participants performed the visuospatial memory task in one of the three following body tilt conditions: Tilt-Tilt in the same direction (T-Ts), Tilt-Tilt in the opposite direction (T-To), and Upright-Upright (U-U). In the T-Ts condition, the initial and final body orientations were the same (i.e., 8° on either the left or right side). After the memory target disappeared, the participant's body was rotated to the upright position and then returned to the original tilted position. In the T-To condition, the initial and final body orientations were at 8° as in the T-Ts condition, but their directions were opposite (i.e., left side vs. right side). Specifically, the participant's body was rotated to the tilted position on the opposite side after the memory target disappeared. Comparing performance between the T-Ts and T-To conditions enabled us to evaluate whether reproduction errors were caused by the body rotation itself or by the spatial inconsistency between the initial and final body orientations. In the U-U condition, which functioned as a control, both the initial and final body orientations were upright, and no body rotation occurred during the trial. 
After a few practice trials, each participant performed one trial for each body tilt condition and each memory target location, for a total of 24 trials (3 body tilt conditions × 8 memory target locations × 1 trial) of the visuospatial memory task. The order of the body tilt conditions and memory target locations was pseudorandomized across participants. A 10-minute break was given after every 10 trials to prevent fatigue. The total duration of the experiment was approximately one hour. 
Experiment 2
Experiment 2 was conducted to determine whether performance on the egocentric visuospatial memory task depended on the body tilt orientation in space during target presentation or reproduction. As in Experiment 1, each participant completed the egocentric visuospatial memory task under three conditions. The U-U condition was applied as in Experiment 1, and the Up-Tilt (U-T) and Tilt-Up (T-U) conditions were presented instead of the T-Ts and T-To conditions. In the U-T condition, the initial and final body orientations were upright (0°) and 16° to the right or left side, respectively, and vice versa for the T-U condition. Note that the angle of body rotation between the initial and final body orientations in these two conditions was 16°, as in the T-To condition in Experiment 1. The participants performed one trial for each body tilt condition (U-U, U-T, and T-U) and each memory target location (i.e., 24 trials in total) with a 10-minute break every 10 trials. 
SVHA task
We evaluated perceptual distortion of the head/body-centered coordinates at the initial and final tilt orientations during the egocentric visuospatial memory task. Each participant performed the SVHA task after completing all trials of the egocentric visuospatial memory task. The participants were instructed to adjust a visual line (4.6 cm in length) presented at the center of the monitor along the perceived head longitudinal axis by manipulating the controller while in an upright or tilted position. The initial angle of the line was randomly set at ±45°, ±60°, or 90° with respect to the head axis. After 10 trials, the participants were tilted back to an upright position. The body orientations in the SVHA task were 0° and 8° to the right or left side for Experiment 1, and 0° and 16° to the right or left side for Experiment 2. The initial body orientation in each experiment was 0°, and the other orientations were presented in a randomized order. 
Data analysis
For the egocentric visuospatial memory task, we computed the angular deviation between the vector from the fixation point to the (true) memory target location and the vector from the fixation point to the reported cursor location, and defined this deviation as the reproduction error (RE) in each trial. Then, the positive and negative signs of the RE were reversed for the CCW group participants. This enabled us to calculate the rotational-side reproduction error (rRE), where a positive error indicated a bias in the direction of the intervening body rotation. 
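A minimal sketch of this computation (with hypothetical variable names; target and cursor positions are expressed relative to the fixation point, and positive angles are taken as clockwise):

```python
import math

def polar_angle_cw_deg(x, y):
    """Angle of a point around the fixation point, measured clockwise from the
    upward (head-axis-aligned) direction of the chair-fixed display."""
    return math.degrees(math.atan2(x, y)) % 360.0

def reproduction_error(target_xy, cursor_xy):
    """RE: signed angular deviation (deg) of the reported cursor from the true
    target, wrapped to [-180, 180); positive = clockwise (rightward) error."""
    diff = polar_angle_cw_deg(*cursor_xy) - polar_angle_cw_deg(*target_xy)
    return (diff + 180.0) % 360.0 - 180.0

def rotational_side_error(re, group):
    """rRE: flip the sign for the CCW group so that positive values always
    indicate a bias in the direction of the intervening body rotation."""
    return re if group == "CW" else -re
```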
In our statistical analyses, we first checked whether the rRE depended on the target angle or group assignment (CW or CCW). For the target angle, a one-way repeated-measures analysis of variance (ANOVA) was applied to the rRE data for the eight target angles in each tilt condition. To examine the effect of group, we conducted t-tests comparing the rRE, averaged across the eight targets, between the CW and CCW groups. If no significant effects of target angle and group were found, the rRE values were pooled and averaged across the eight target angles and two groups for each tilt condition, and a one-way repeated-measures ANOVA was then conducted to compare the rRE values among the three tilt conditions in each experiment. The Holm correction was used for post hoc tests. 
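The reported statistics were computed in JASP (see below); purely for illustration, an equivalent pipeline could look like the following sketch (column names are hypothetical):

```python
import itertools
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

def compare_tilt_conditions(df):
    """df: pandas DataFrame with one row per subject x tilt condition and columns
    'subject', 'condition', and 'rRE' (already averaged across the eight targets)."""
    # One-way repeated-measures ANOVA across the tilt conditions.
    print(AnovaRM(df, depvar="rRE", subject="subject", within=["condition"]).fit())

    # Holm-corrected paired t-tests as post hoc comparisons.
    pairs = list(itertools.combinations(sorted(df["condition"].unique()), 2))
    pvals = [
        stats.ttest_rel(
            df.loc[df["condition"] == a].sort_values("subject")["rRE"],
            df.loc[df["condition"] == b].sort_values("subject")["rRE"],
        ).pvalue
        for a, b in pairs
    ]
    reject, p_holm, _, _ = multipletests(pvals, method="holm")
    for (a, b), p, sig in zip(pairs, p_holm, reject):
        print(f"{a} vs {b}: p_holm = {p:.3f}, significant = {bool(sig)}")
```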
For the SVHA task, the SVHA angle was computed as the angular deviation between the actual and subjective (self-reported) directions of the head longitudinal axis in each trial and was averaged across the 10 trials for each body orientation. Then, the ΔSVHA for each body tilt condition was calculated by subtracting the SVHA angle at the initial body orientation from that at the final body orientation. Next, as for the RE, the ΔSVHA was converted to the rotational-side ΔSVHA (rΔSVHA) by reversing the positive and negative signs of the ΔSVHA for the CCW group participants. Note that the rΔSVHA was by definition zero in the U-U and T-Ts conditions, where the initial and final body orientations were identical. For each of the other conditions (T-To, T-U, and U-T), one-sample t-tests were conducted to assess whether the rΔSVHA values significantly differed from zero. Finally, we conducted a simple regression analysis for each body tilt condition (T-To, U-T, and T-U) to evaluate the inter-subject correlation between performance on the egocentric visuospatial memory task (i.e., rRE) and that on the SVHA task (i.e., rΔSVHA). 
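A corresponding sketch for the SVHA-based measures and the inter-subject regression (again with hypothetical variable names):

```python
import numpy as np
from scipy import stats

def delta_svha(svha_by_orientation, initial, final):
    """SVHA angles are averaged over the 10 trials per body orientation;
    the change score is the final-orientation mean minus the initial-orientation mean."""
    return np.mean(svha_by_orientation[final]) - np.mean(svha_by_orientation[initial])

def rotational_side(value, group):
    """Flip the sign for the CCW group, as for the rRE."""
    return value if group == "CW" else -value

def analyze_condition(r_delta_svha, r_re):
    """One-sample t-test of the rdelta-SVHA against zero, plus the simple
    regression of the rRE on the rdelta-SVHA across participants."""
    t_res = stats.ttest_1samp(r_delta_svha, 0.0)
    reg = stats.linregress(r_delta_svha, r_re)
    return t_res.pvalue, reg.slope, reg.rvalue, reg.pvalue
```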
JASP software version 0.11.1 (Amsterdam, the Netherlands) was used for statistical analyses. The significance level for all comparisons was set to 0.05 (two-tailed). 
Results
Two participants were excluded from the analysis for the following reasons. One participant could not complete both experiments as a result of sleepiness. For the other participant, the data in both experiments were not correctly acquired because of a failure in the experimental set-up (display). Therefore the data from 18 participants (nine each for the CW and CCW groups) were included in the analysis in Experiments 1 and 2. 
Performance on the egocentric visuospatial memory task
First, we determined whether the rRE was dependent on the target angle or group assignment. For the target angle, ANOVAs showed no significant main effects of the target angle on the rRE in each condition of Experiment 1 (U-U, F(7, 119) = 0.54, p = 0.80, partial η² = 0.03; T-Ts, F(7, 119) = 1.10, p = 0.36, partial η² = 0.06; T-To, F(7, 119) = 0.78, p = 0.61, partial η² = 0.04) and Experiment 2 (U-U, F(7, 119) = 0.59, p = 0.76, partial η² = 0.03; U-T, F(7, 119) = 1.21, p = 0.31, partial η² = 0.06; T-U, F(7, 119) = 0.82, p = 0.58, partial η² = 0.05). For participant group assignment, t-tests revealed no significant differences in the rRE between the CW and CCW groups for each tilt condition in Experiment 1 (U-U, t(16) = 1.43, p = 0.17, Cohen's d = 0.67; T-Ts, t(16) = 1.41, p = 0.18, Cohen's d = 0.67; T-To, t(16) = −0.31, p = 0.76, Cohen's d = −0.15) and Experiment 2 (U-U, t(16) = 1.41, p = 0.18, Cohen's d = 0.63; U-T, t(16) = 0.26, p = 0.80, Cohen's d = 0.12; T-U, t(16) = −0.79, p = 0.44, Cohen's d = −0.37). These results indicate that the target angle and participant group did not strongly influence performance in the egocentric visuospatial memory task. Accordingly, we pooled and averaged the rRE values across the eight target angles and two groups in each tilt condition for further analyses. 
Figure 3 shows the individual (dots) and overall mean (bars) rRE values in each tilt condition for Experiments 1 and 2. Positive values represent a bias in the direction of the intervening body rotation. We first assessed whether the 15-second retention interval itself affected the rRE. The rRE values in the U-U condition in Experiments 1 (0.25° ± 0.35°) and 2 (0.35° ± 0.24°) were not significantly different from 0 (one-sample t-tests, t(17) = 0.69, p = 0.50, Cohen's d = 0.16 for Experiment 1; t(17) = 0.27, p = 0.79, Cohen's d = 0.06 for Experiment 2). These results indicate that the participants were able to accurately memorize the target location, even with a retention interval of 15 seconds, when maintaining an upright posture. 
Figure 3.
 
The rRE in each tilt condition. The gray- and orange-colored lines represent the mean rRE values for each participant and across all participants, respectively. * p < 0.05; ** p < 0.01.
Next, we evaluated the influence of body rotation during memorization of the target location, and of the spatial inconsistency between the initial and final body orientations, on reproduction accuracy by comparing the rRE values among the three tilt conditions in each experiment. For Experiment 1, an ANOVA revealed a significant main effect of body tilt condition (F(2, 34) = 6.93, p = 0.003, partial η² = 0.29). Post hoc tests showed that the rRE was significantly larger (i.e., the reported cursor was biased in the direction of the intervening body rotation) in the T-To condition (mean ± SD; 6.24° ± 8.25°) than in the U-U (0.25° ± 1.50°; p = 0.036, Cohen's d = 1.17) and T-Ts conditions (1.03° ± 1.85°; p = 0.044, Cohen's d = 1.01), whereas there was no significant difference in the rRE between the T-Ts and U-U conditions (p = 0.21, Cohen's d = 0.15). 
For Experiment 2, a significant main effect of body tilt condition was found (F(2, 34) = 10.01, p < 0.001, partial η² = 0.37). Post hoc tests revealed that the rRE was significantly larger in both the U-T (4.16° ± 4.83°; p = 0.005, Cohen's d = 1.01) and T-U conditions (4.64° ± 4.92°; p = 0.005, Cohen's d = 1.13) compared with the U-U condition (0.35° ± 1.04°). No significant difference in the rRE was observed between the U-T and T-U conditions (p = 0.63, Cohen's d = 0.12). Note that in the T-Ts condition, a whole-body rotation was performed as in the T-To, U-T, and T-U conditions, but the initial and final body orientations were identical. The nonsignificant difference in the rRE between the T-Ts and U-U conditions therefore rules out the possibility that the intervening body rotation itself biased the reproduced location by disturbing the attentional processes engaged in storing visuospatial information (Israel, Ventre-Dominey, & Denise, 1999; Gnadt, Bracewell, & Andersen, 1991). These results indicate that the bias in the reproduced location was caused by the spatial inconsistency between the initial and final body orientations resulting from the body rotation. 
Performance on the SVHA task
Figure 4A shows the mean SVHA angles at each body orientation (from LSD 16° to RSD 16°) for each participant and across all participants. Positive values represent a rightward bias. Although there were interindividual differences, the SVHA angles were overall close to 0 (no error) when the body was upright and were biased in the direction of the body tilt when the body was tilted. 
Figure 4.
 
(A) The SVHA for each body orientation. Gray and black lines represent the mean SVHAs for each participant and across all participants, respectively. (B) The rΔSVHA in each tilt condition. Gray-colored dots and green-colored squares represent the mean rΔSVHA for each participant and across all participants, respectively. ** p < 0.01; *** p < 0.001.
Figure 4B shows the mean rΔSVHA in the T-To condition (9.67° ± 9.28°) for Experiment 1 and in the U-T (6.08° ± 7.22°) and T-U (7.68° ± 8.47°) conditions for Experiment 2, calculated from the SVHA angles at the initial and final body orientations. Positive values represent a bias in the direction of the intervening body rotation. The rΔSVHA in all three conditions was significantly larger than 0 (T-To, t(17) = 4.42, p < 0.001, Cohen's d = 1.04; U-T, t(17) = 3.57, p = 0.002, Cohen's d = 0.84; T-U, t(17) = 3.84, p < 0.001, Cohen's d = 0.91). These results indicate that the perceived head/body-centered coordinates were tilted in the direction of the intervening body rotation, as was the rRE in the visuospatial memory task. 
Relationship between performance on the visuospatial memory and SVHA tasks
If the target location were stored in the head/body-centered reference frames, then we would expect the perceptual distortion of the head/body axis (rΔSVHA) and of the reproduced target location (rRE) to be correlated across individuals. As expected, the simple regression analyses revealed significant positive correlations (i.e., the rRE increased with the rΔSVHA) for the T-To (r = 0.64, p = 0.004) and T-U conditions (r = 0.68, p = 0.002). In contrast, no significant correlation was observed for the U-T condition (r = 0.39, p = 0.11; Figure 5). The slope coefficients were 0.59 (95% confidence interval [CI]: 0.21–0.96) for the T-To condition, 0.26 (95% CI: −0.06 to 0.59) for the U-T condition, and 0.39 (95% CI: 0.17–0.62) for the T-U condition, and all were significantly smaller than 1 (all p < 0.05). 
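For completeness, one way to test whether a regression slope differs from 1 is a t-test on the slope using its standard error; a minimal sketch is below (the reported confidence intervals may have been taken directly from the regression output in JASP):

```python
from scipy import stats

def slope_differs_from_one(x, y):
    """Two-tailed t-test of H0: slope = 1 for a simple regression of y on x."""
    res = stats.linregress(x, y)
    t_stat = (res.slope - 1.0) / res.stderr
    p = 2.0 * stats.t.sf(abs(t_stat), df=len(x) - 2)
    return res.slope, t_stat, p
```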
Figure 5.
 
The intersubject relationship between the rRE and rΔSVHA in each tilt condition. Black dots and green lines show the individual plots and regression lines fitted to the data, respectively. Gray dotted lines represent a slope of 1. ** p < 0.01.
In addition, we compared the regression slopes in the T-To, U-T, and T-U conditions using an analysis of covariance (ANCOVA) in which the rΔSVHA and body tilt condition were entered as a covariate and a categorical variable, respectively. The ANCOVA revealed a significant effect of the rΔSVHA on the rRE (p < 0.001) but no significant rΔSVHA × condition interaction (p = 0.32). This result indicates that the slopes did not differ significantly among the three tilt conditions. 
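The ANCOVA was likewise run in JASP; for illustration, an equivalent model with an interaction term could be specified as follows (column names are hypothetical):

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def compare_regression_slopes(df):
    """df: pandas DataFrame with one row per participant x tilt condition
    (T-To, U-T, T-U) and columns 'rRE', 'rdSVHA', and 'condition'."""
    model = smf.ols("rRE ~ rdSVHA * C(condition)", data=df).fit()
    # The rdSVHA:condition interaction tests whether the slopes differ between conditions.
    return anova_lm(model, typ=2)
```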
Discussion
The present study aimed to investigate whether the head/body-centered reference frames are involved in egocentric visuospatial memory by exploiting the perceptual distortion of head/body-centered coordinates induced by a whole-body roll tilt relative to gravity. In two experiments, we manipulated the initial (when a visual target was presented) and final (when participants reproduced the remembered location) body orientations using a whole-body roll rotation, and evaluated its influence on the reproduced target location. Our results showed significant biases of the reproduced location in the direction of the perceptually distorted head/body axis. These results suggest that the head/body-centered reference frames are involved in egocentric visuospatial memory. 
We initially hypothesized that if the head/body-centered reference frames were engaged in egocentric visuospatial memory, the reproduced target location would be shifted in the direction of the intervening whole-body rotation, together with the perceived longitudinal axis of the head/body (see Figure 1). This hypothesis was supported by our finding that the reproduced location (rRE) was clearly biased in the direction of the intervening whole-body rotation (T-To, U-T, T-U conditions; Figure 3). The perceived head/body axis (rΔSVHA) was also shifted in the direction of whole-body rotation, consistent with previous literature (Figure 4; Barra et al., 2008; Ceyte et al., 2006; Ceyte et al., 2007; Ceyte et al., 2009; McFarland & Clarkson, 1966; Tani et al., 2018; Tani & Tanaka, 2021; Tamura et al., 2017; Wood et al., 1998). Notably, the bias of the reproduced location was significantly correlated with that of the perceived head/body axis across participants, especially when the target was presented at the tilted position (T-To and T-U conditions; Figure 5). The use of an eye-centered storage mechanism would lead to mostly accurate reproduction performance in the context of the present study, where the eye position relative to the head was constant throughout the trial, except for the eye torsion in the frontal plane. Therefore the observed biases in the reproduced location go against the notion that visuospatial information is maintained only in eye-centered coordinates (Goldberg & Bruce, 1990). Rather, our data provide experimental evidence that the brain relies at least partially on the head/body-centered reference frames for egocentric visuospatial memory. 
In the T-To and T-U conditions, the slopes relating the rRE to the rΔSVHA were less than 1 (Figure 5); that is, the errors in the visuospatial memory task tended to be small compared with the bias in the head/body-centered coordinates. These results indicate that the eye-centered reference frame, which would be less influenced by the head/body tilt, could also be involved in storing egocentric visuospatial information. Indeed, previous neurophysiological and psychophysical studies have shown that visuospatial information is encoded and stored not in a single reference frame, but in parallel in multiple reference frames (Tramper & Medendorp, 2015; Niehof, Tramper, Doeller, & Medendorp, 2017; Mullette-Gillman, Cohen, & Groh, 2005; Mullette-Gillman, Cohen, & Groh, 2009; Caruso, Pages, Sommer, & Groh, 2021). Tramper & Medendorp (2015) showed that the bias in a reproduced world-fixed target location (not body-fixed as in the present study) caused by intervening whole-body translation was better explained by a model in which the eye-centered and head/body-centered reference frames were combined than by a model in which each reference frame was used alone. The brain likely stores egocentric visuospatial information in both the eye-centered and head/body-centered reference frames and then integrates them, possibly weighting them based on the reliability of each type of information (Tramper & Medendorp, 2015; McGuire & Sabes, 2009; Burns & Blohm, 2010). This strategy could reduce visuospatial memory inaccuracies induced by the perceptual distortion of head/body-centered reference frames during head/body tilt. 
The observed bias of SVHAs in the direction of head/body tilt relative to gravity (Figure 4) does not indicate misperception of the line orientation on the retina (i.e., perceptual distortion of eye-centered coordinates) during head/body tilt. Performance on tasks involving the SVHA differs substantially from that on the subjective visual vertical (SVV) task, which is assessed by adjusting a visual line along the direction of gravity. In a recent study (Tani & Tanaka, 2021), we found that while both the SVHA and SVV shifted in response to a lateral body tilt of 10°, the magnitude of the bias was much larger for the SVHA than for the SVV (approximately 10 times larger). If the body tilt-induced SVHA bias were derived from the misperception of line orientation on the retina, then a similar degree of error would be observed in the SVV task, because both tasks require visual alignment of a line in a desired direction. These data indicate that an SVHA bias during whole-body tilt reflects distorted internal estimates of head/body-centered, not eye-centered, coordinates. This distortion is likely attributable to the difficulty in constructing an egocentric representation of space according to body tilt-related changes in somatosensory and vestibular inputs (Ceyte et al., 2007; Tarnutzer et al., 2012). 
It is possible to argue that the observed reproduction errors in the egocentric visuospatial memory task were attributable to the perceptual distortion of gravity-centered coordinates induced by body tilt. As noted in the introduction, the bias in the perceived direction of gravity depends on the head tilt angle relative to gravity (see Kheradmand & Winnick, 2017 for a review), but its amplitude is quite small (under 1°; Tani et al., 2022) within a limited (< 10°) head tilt range. Thus, changes in the gravity-based coordinates could not account for the large reproduction errors observed here (average errors of 4°–6° in the T-To, U-T, and T-U conditions). Furthermore, Baker et al. (2003) showed that a model with a world-centered reference frame could not explain the reproduction of remembered locations of gaze-fixed (egocentric) targets in terms of performance precision. Therefore, we speculate that the gravity-based reference frame is not directly involved in egocentric visuospatial memory. 
Several previous studies have shown that the gaze-fixed location of a visual target can be accurately reproduced regardless of the intervening body rotation (Baker et al., 2003; Israel et al., 1999), which contrasts with our results showing significant biases in the reproduced location. This discrepancy is likely related to the plane of the whole-body rotation. In previous studies, the intervening body rotation was in the horizontal plane, whereas in the present study it was in the frontal plane. When the head/body is rotated in the horizontal plane, i.e., around the yaw axis parallel to gravity, perceptual distortions of the head/body-centered coordinates caused by gravitational cues (somatosensory and vestibular signals dependent on the tilt angles of the head and body with respect to gravity) cannot occur. This could explain why these studies found no apparent effect of intervening whole-body rotation on the accuracy of egocentric visuospatial memory. 
In the U-T condition, we found no significant inter-subject correlation between the rRE and rΔSVHA values (Figure 5, middle), although the reproduced target location was biased in the direction of the perceived head/body axis at the group level, as in the T-To and T-U conditions (Figure 3). These results indicate that there were large individual differences in the influence of the perceived head/body-centered coordinates on egocentric visuospatial memory when the target location was encoded with the head upright. Although the eye-centered coordinates are more stable (i.e., less variable) overall when the head is upright compared with tilted (Tarnutzer, Bockisch, & Straumann, 2009), stability, which is assessed according to the degree of microsaccades and ocular drifts, varies widely across individuals (Cherici, Kuang, Poletti, & Rucci, 2012). As mentioned earlier, the brain may determine the involvement of eye-centered and head/body-centered reference frames when storing visuospatial information based on their reliability. Therefore the individual stability of eye-centered coordinates might be responsible for interindividual differences in the influence of perceived head/body-centered coordinates in egocentric visuospatial memory. 
Head or body tilt relative to gravity adds noise to various sensorimotor systems, such as the vestibular and somatosensory systems (Tarnutzer et al., 2009; Alberts, Selen, Bertolini, Straumann, Medendorp, & Tarnutzer, 2016). This can influence the precision of motor or perceptual/cognitive performance (Burns, Nashed, & Blohm, 2011; Abedi Khoozani & Blohm, 2018). Burns et al. (2011) demonstrated that the variability of perceptual judgments of proprioceptive hand position relative to a visual target increased during a head roll tilt compared with when the head was upright. This tilt-dependent sensory noise cannot explain the observed systematic bias in the reproduced target position, although it would have had some influence on the reproduction performance (especially its precision). Unfortunately, in the present study, we used a limited number of trials (only one trial for each target location in each tilt condition) to avoid participant fatigue. This prevented us from analyzing the performance precision. A future study with more trials would be helpful in evaluating the effects of body tilt/rotation on visuospatial memory precision. 
As shown in a previous study (Tramper & Medendorp, 2015), the head/body-centered reference frames can also play a role in spatiotopic memory, in which individuals reproduce the remembered location of a world-fixed (not head/body-fixed as in the present study) target independent of body movements in space. Van Pelt et al. (2005) reported that when a whole-body rotation was inserted after target presentation, the reproduced location of a world-fixed target was biased in the direction of the intervening body rotation. These biases were strongly correlated with errors in the perceived direction of "gravity." This finding supports the idea that the brain encodes and stores visuospatial information about world-fixed targets predominantly in an allocentric (gravity-based) reference frame (Van Pelt et al., 2005; Klier, Angelaki, & Hess, 2005; Klier, Hess, & Angelaki, 2006; Medendorp, Smith, Tweed, & Crawford, 2002). However, the above-mentioned study did not evaluate the perceptual distortion of head/body-centered coordinates caused by whole-body tilt, leaving open the possibility that this may have affected reproduction performance. In future work, we hope to evaluate this possibility while controlling for other factors that affect visuospatial memory performance, such as the modality used for reproduction and/or the tilt angles for the initial and final head/body orientations. 
In the present study, a whole-body rotation including the head was applied. Therefore we cannot determine whether the visual target location was encoded and stored in the head-centered reference frame, the body-centered reference frame, or both. To address this limitation, further experiments are required in which performance on visuospatial memory tasks is assessed after head rotation that is independent of the body. 
Conclusion
The present study shows that the perceptual distortion of head/body-centered coordinates induced by whole-body tilt leads to biases in the reproduction of remembered target locations with respect to one's own body. Our results support the idea that egocentric visuospatial memory relies on head/body-centered frames of reference, at least in situations where the whole body is tilted relative to gravity. It is likely that the brain flexibly determines which reference frames to use for encoding and storing spatial information, depending on various factors such as task demands (e.g., egocentric vs. allocentric visuospatial memory) and body condition (e.g., eye/body movements or position) (see Battaglia-Mayer, Caminiti, Lacquaniti, & Zago, 2003 for a review). Therefore further studies in which these factors are manipulated are needed to better understand the involvement of head/body-centered reference frames in visuospatial memory. 
Acknowledgments
Supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (20K19305) and Grant-in-Aid from Hamamatsu University School of Medicine and Otemon Gakuin University. All grants were awarded to K.T. 
Commercial relationships: none. 
Corresponding author: Keisuke Tani. 
Email: keisuketani.pt@gmail.com. 
Address: Faculty of Psychology, Otemon Gakuin University, Osaka, Japan. 
References
Abedi Khoozani, P., & Blohm, G. (2018). Neck muscle spindle noise biases reaches in a multisensory integration task. Journal of Neurophysiology, 120(3), 893–909, doi: 10.1152/jn.00643.2017.
Alberts, B. B., Selen, L. P., Bertolini, G., Straumann, D., Medendorp, W. P., & Tarnutzer, A. A. (2016). Dissociating vestibular and somatosensory contributions to spatial orientation. Journal of Neurophysiology, 116(1), 30–40, doi: 10.1152/jn.00056.2016.
Baker, J. T., Harper, T. M., & Snyder, L. H. (2003). Spatial memory following shifts of gaze. I. Saccades to memorized world-fixed and gaze-fixed targets. Journal of Neurophysiology, 89(5), 2564–2576, doi: 10.1152/jn.00610.2002.
Barra, J., Benaim, C., Chauvineau, V., Ohlmann, T., Gresty, M., & Pérennou, D. (2008). Are rotations in perceived visual vertical and body axis after stroke caused by the same mechanism? Stroke, 39(11), 3099–3101, doi: 10.1161/STROKEAHA.108.515247.
Battaglia-Mayer, A., Caminiti, R., Lacquaniti, F., & Zago, M. (2003). Multiple levels of representation of reaching in the parieto-frontal network. Cerebral Cortex, 13(10), 1009–1022, doi: 10.1093/cercor/13.10.1009.
Bauermeister, M., Werner, H., & Wapner, S. (1964). The effect of body tilt on tactual-kinesthetic perception of verticality. The American Journal of Psychology, 77, 451–456.
Bockisch, C. J., & Haslwanter, T. (2001). Three-dimensional eye position during static roll and pitch in humans. Vision Research, 41(16), 2127–2137, doi: 10.1016/s0042-6989(01)00094-3.
Burns, J. K., & Blohm, G. (2010). Multi-sensory weights depend on contextual noise in reference frame transformations. Frontiers in Human Neuroscience, 4, 221, doi: 10.3389/fnhum.2010.00221.
Burns, J. K., Nashed, J. Y., & Blohm, G. (2011). Head roll influences perceived hand position. Journal of Vision, 11(9), 3, doi: 10.1167/11.9.3.
Caruso, V. C., Pages, D. S., Sommer, M. A., & Groh, J. M. (2021). Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. Journal of Neurophysiology, 126(1), 82–94, doi: 10.1152/jn.00385.2020.
Ceyte, H., Cian, C., Nougier, V., Olivier, I., & Roux, A. (2006). Effects of neck muscles vibration on the perception of the head and trunk midline position. Experimental Brain Research, 170(1), 136–140, doi: 10.1007/s00221-006-0389-7.
Ceyte, H., Cian, C., Nougier, V., Olivier, I., & Trousselard, M. (2007). Role of gravity-based information on the orientation and localization of the perceived body midline. Experimental Brain Research, 176(3), 504–509, doi: 10.1007/s00221-006-0764-4.
Ceyte, H., Cian, C., Trousselard, M., & Barraud, P. A. (2009). Influence of perceived egocentric coordinates on the subjective visual vertical. Neuroscience Letters, 462(1), 85–88, doi: 10.1016/j.neulet.2009.06.048.
Cherici, C., Kuang, X., Poletti, M., & Rucci, M. (2012). Precision of sustained fixation in trained and untrained observers. Journal of Vision, 12(6), 31, 1–16, doi: 10.1167/12.6.31.
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22, 319–349, doi: 10.1146/annurev.neuro.22.1.319.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255(5040), 90–92, doi: 10.1126/science.1553535.
Gnadt, J. W., Bracewell, R. M., & Andersen, R. A. (1991). Sensorimotor transformation during eye movements to remembered visual targets. Vision Research, 31(4), 693–715, doi: 10.1016/0042-6989(91)90010-3.
Goldberg, M. E., & Bruce, C. J. (1990). Primate frontal eye fields. III. Maintenance of a spatially accurate saccade signal. Journal of Neurophysiology, 64(2), 489–508, doi: 10.1152/jn.1990.64.2.489.
Golomb, J. D., & Kanwisher, N. (2012). Retinotopic memory is more precise than spatiotopic memory. Proceedings of the National Academy of Sciences of the United States of America, 109(5), 1796–1801, doi: 10.1073/pnas.1113168109.
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18(4), 1583–1594, doi: 10.1523/JNEUROSCI.18-04-01583.1998.
Israel, I., Ventre-Dominey, J., & Denise, P. (1999). Vestibular information contributes to update retinotopic maps. Neuroreport, 10(17), 3479–3483, doi: 10.1097/00001756-199911260-00003.
Kheradmand, A., & Winnick, A. (2017). Perception of upright: Multisensory convergence and the role of temporo-parietal cortex. Frontiers in Neurology, 8, 552, doi: 10.3389/fneur.2017.00552.
Klier, E. M., Angelaki, D. E., & Hess, B. J. (2005). Roles of gravitational cues and efference copy signals in the rotational updating of memory saccades. Journal of Neurophysiology, 94(1), 468–478, doi: 10.1152/jn.00700.2004.
Klier, E. M., Hess, B. J., & Angelaki, D. E. (2006). Differences in the accuracy of human visuospatial memory after yaw and roll rotations. Journal of Neurophysiology, 95(4), 2692–2697, doi: 10.1152/jn.01017.2005.
McFarland, J. H., & Clarkson, F. (1966). Perception of orientation: adaptation to lateral body-tilt. The American Journal of Psychology, 79(2), 265–271.
McGuire, L. M., & Sabes, P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12(8), 1056–1061, doi: 10.1038/nn.2357.
Medendorp, W. P., Smith, M. A., Tweed, D. B., & Crawford, J. D. (2002). Rotational remapping in human spatial memory during eye and head motion. Journal of Neuroscience, 22(1), RC196.
Mullette-Gillman, O. A., Cohen, Y. E., & Groh, J. M. (2005). Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. Journal of Neurophysiology, 94(4), 2331–2352, doi: 10.1152/jn.00021.2005.
Mullette-Gillman, O. A., Cohen, Y. E., & Groh, J. M. (2009). Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cerebral Cortex, 19(8), 1761–1775, doi: 10.1093/cercor/bhn207.
Niehof, N., Tramper, J. J., Doeller, C. F., & Medendorp, W. P. (2017). Updating of visual orientation in a gravity-based reference frame. Journal of Vision, 17(12):4, 1–10, doi: 10.1167/17.12.4.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2600–2606, doi: 10.1073/pnas.1708274114.
Shafer-Skelton, A., & Golomb, J. D. (2018). Memory for retinotopic locations is more accurate than memory for spatiotopic locations, even for visually guided reaching. Psychonomic Bulletin & Review, 25(4), 1388–1398, doi: 10.3758/s13423-017-1401-x.
Smith, M. A., & Crawford, J. D. (2001). Implications of ocular kinematics for the internal updating of visual space. Journal of Neurophysiology, 86(4), 2112–2117, doi: 10.1152/jn.2001.86.4.2112.
Sorrento, G. U., & Henriques, D. Y. (2008). Reference frame conversions for repeated arm movements. Journal of Neurophysiology, 99(6), 2968–2984, doi: 10.1152/jn.90225.2008.
Tamura, A., Wada, Y., Inui, T., & Shiotani, A. (2017). Perceived direction of gravity and the body-axis during static whole body roll-tilt in healthy subjects. Acta Oto-Laryngologica, 137(10), 1057–1062, doi: 10.1080/00016489.2017.1328744.
Tanaka, M. (2005). Effects of eye position on estimates of eye displacement for spatial updating. Neuroreport, 16(12), 1261–1265, doi: 10.1097/01.wnr.0000176518.04100.e7.
Tani, K., Shiraki, Y., Yamamoto, S., Kodaka, Y., & Kushiro, K. (2018). Whole-body roll tilt influences goal-directed upper limb movements through the perceptual tilt of egocentric reference frame. Frontiers in Psychology, 9, 84, doi: 10.3389/fpsyg.2018.00084.
Tani, K., & Tanaka, S. (2021). Neuroanatomical correlates of the perception of body axis orientation during body tilt: a voxel-based morphometry study. Scientific Reports, 11(1), 14659, doi: 10.1038/s41598-021-93961-8.
Tani, K., Uehara, S., & Tanaka, S. (2022, March 20). Association between body-tilt and egocentric estimates near upright, doi: 10.31234/osf.io/nuymz.
Tarnutzer, A. A., Bockisch, C. J., Olasagasti, I., & Straumann, D. (2012). Egocentric and allocentric alignment tasks are affected by otolith input. Journal of Neurophysiology, 107(11), 3095–3106, doi: 10.1152/jn.00724.2010.
Tarnutzer, A. A., Bockisch, C. J., & Straumann, D. (2009). Head roll dependent variability of subjective visual vertical and ocular counterroll. Experimental Brain Research, 195(4), 621–626, doi: 10.1007/s00221-009-1823-4.
Thompson, A. A., & Henriques, D. Y. (2008). Updating visual memory across eye movements for ocular and arm motor control. Journal of Neurophysiology, 100(5), 2507–2514, doi: 10.1152/jn.90599.2008.
Thompson, A. A., & Henriques, D. Y. (2011). The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vision Research, 51(8), 819–826, doi: 10.1016/j.visres.2011.01.006.
Tramper, J. J., & Medendorp, W. P. (2015). Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion. Journal of Neurophysiology, 114(6), 3211–3219, doi: 10.1152/jn.00576.2015.
Van Pelt, S., Van Gisbergen, J. A. M., & Medendorp, W. P. (2005). Visuospatial memory computations during whole-body rotations in roll. Journal of Neurophysiology, 94(2), 1432–1442, doi: 10.1152/jn.00018.2005.
Wood, S. J., Paloski, W. H., & Reschke, M. F. (1998). Spatial coding of eye movements relative to perceived earth and head orientations during static roll tilt. Experimental Brain Research, 121(1), 51–58, doi: 10.1007/s002210050436.
Zipser, D., & Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679–684.