Article | August 2011

Head roll influences perceived hand position

Jessica K. Burns, Joseph Y. Nashed, Gunnar Blohm
Jessica K. Burns and Joseph Y. Nashed contributed equally to this work.

Journal of Vision, August 2011, Vol. 11(9):3. doi:10.1167/11.9.3

Citation: Jessica K. Burns, Joseph Y. Nashed, Gunnar Blohm; Head roll influences perceived hand position. Journal of Vision 2011;11(9):3. https://doi.org/10.1167/11.9.3.
Abstract

Visual and proprioceptive sensory inputs are naturally coded in different reference frames, i.e., eye-centered and body-centered, respectively. Using these signals in conjunction for motor planning or perception ultimately requires converting them into a common frame of reference, based on estimates of the relative orientation of the eyes, head, and body. Here, we examine whether extraretinal signals (specifically head roll) alter multisensory perception through noisy reference frame transformations. To do so, we examine the accuracy of visual localization relative to proprioceptive hand position for different head roll orientations. Subjects were required to judge whether a visual target was located closer or further than, and left or right of, their unseen hand (4-alternative forced-choice task). This was done for three different head roll orientations (−30, 0, and 30 deg). We show that eccentric head roll increased the variability in the subjects' ability to discriminate target location relative to the fingertip. We conclude that sensory perception is sensitive to body-geometry-dependent noise affecting the coordinate matching transformations of sensory data.

Introduction
How we perceive our environment crucially affects the way we interact with it. Being able to determine where objects of interest are in the environment relative to us is a necessity for spatial perception and action planning. Vision provides the strongest sensory input for building an internal representation of ourselves and the outside world, which can be used in both action and perception (Goodale, 2011). In addition, proprioception creates a position sense and allows us to move within our environment even in the absence of vision (Fuentes & Bastian, 2010; Goodwin, McCloskey, & Matthews, 1972).
Performing actions on the environment relies on accurate internal representations of ourselves and our surroundings. Such internal representations can be computed from the statistically optimal integration of visual and proprioceptive information (Ernst & Banks, 2002; Knill & Pouget, 2004; Körding & Wolpert, 2006; Sober & Sabes, 2003; van Beers, Sittig, & Gon, 1999). To do so, it is believed that all sensory signals have to be converted into a common frame of reference (Buneo & Andersen, 2006; Cohen & Andersen, 2002; Engel, Flanders, & Soechting, 2002; Knudsen, du Lac, & Esterly, 1987; Lacquaniti & Caminiti, 1998; Soechting & Flanders, 1989). However, reference frame transformations can induce noise that depends on the size of the transformation required (Blohm & Crawford, 2007; Burns & Blohm, 2010; Soechting & Flanders, 1989; Tagliabue & McIntyre, 2011; van Beers, Baraduc, & Wolpert, 2002). 
Previous studies have analyzed the precision of visual and proprioceptive position estimates in a perceptual position matching task (Clark, Larwood, Davis, & Deffenbacher, 1995; Fuentes & Bastian, 2010; Scott & Loeb, 1994; van Beers, Sittig, & Denier van der Gon, 1998). For the proprioceptive system, it is known that these estimates are not invariants of our sensory systems but can depend on the body geometry (Fuentes & Bastian, 2010; van Beers et al., 1998; Wilson, Wong, & Gribble, 2010). Here, we investigate whether the perceptual system is affected by noise in reference frame conversions. This noise can arise through at least two sources: (1) variability in the sensory signals used in the reference frame transformation (e.g., eye and head orientations) and (2) noise arising in the neural computation of the transformation itself. While the former could potentially introduce signal-dependent variability (see Discussion section), the latter is believed to be constant (Sober & Sabes, 2003). 
By incorporating both visual and proprioceptive information, our experiment set out to determine how postural sensory signals, such as extraretinal head roll, affect the ability to perceive ourselves and the environment. This study was motivated by recent findings within the motor planning literature suggesting that reference frame transformations add noise to the transformed signal (Sober & Sabes, 2003) and that this noise might depend on the size of the required transformation (Blohm & Crawford, 2007; Burns & Blohm, 2010). If this is true, then changing the size of the coordinate matching transformation required for perception should affect perceptual variability in terms of the just noticeable difference (JND).
To test this hypothesis, head roll was introduced into a situation where a visual target position was compared to the proprioceptive localization of the fingertip. In order to compare the target and hand locations, these representations needed to be converted into a common frame of reference so that a spatial relationship between them could be formed (Soechting & Flanders, 1992). We predicted that head-roll-dependent variability, increasing the variability of either sensory signal (vision, proprioception, or both), should affect the precision of the perceptual comparison between the hand and target. We demonstrate that this was the case and discuss how this affects our understanding of brain function.
Materials and methods
Rationale
Figure 1 illustrates how head orientation might affect the transformation of visual and proprioceptive information. We hypothesize that visual and/or proprioceptive information first have to undergo a reference frame transformation before they can be compared to form a perceptual judgment. Note that we do not specify whether visual information gets transformed into proprioceptive coordinates or vice versa or whether both signals get transformed into an intermediate reference frame. Importantly, we predict that noisy transformations add variability to the representation of hand and/or target position resulting in a less reliable estimation of the hand–target distance. From previous studies (Blohm & Crawford, 2007; Burns & Blohm, 2010), we hypothesize that the estimated eye–head orientation signals are more corrupted by noise when the eye/head is at an eccentric position compared to when it is upright, similar to Weber's law (see Discussion section for details). As a result, the hand–target distance judgment should be more variable when the head is in eccentric positions, leading to shallower psychometric functions. 
Figure 1
 
Schematic of the experimental hypothesis: how head roll might influence coordinate transformations. To generate a perceptual judgment, hand and target positions have to be transformed into a common frame of reference. The required coordinate transformations are head orientation dependent.
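To make this prediction concrete, the toy simulation below (our illustration, not the authors' model; the noise parameters sd0 and k are assumptions) back-rotates a retinal target estimate into body coordinates using a noisy head-roll estimate whose standard deviation grows with roll eccentricity, Weber-style. The recovered target position scatters more at 30 deg of head roll than at 0 deg, which is exactly the shallower-psychometric-function prediction.

```python
# Toy simulation of the hypothesis (illustrative sketch; parameter values are assumed).
import numpy as np

def rotate(xy, deg):
    """Rotate 2D point(s) by deg degrees about the origin (rows are points)."""
    r = np.deg2rad(deg)
    R = np.array([[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]])
    return xy @ R.T

def perceived_targets(target, head_roll, sd0=1.0, k=0.05, n=5000, seed=0):
    """Back-rotate the retinal target into body coordinates with a noisy roll
    estimate. Assumed Weber-like noise: SD(roll_hat) = sd0 + k * |head_roll| (deg)."""
    rng = np.random.default_rng(seed)
    retinal = rotate(target, head_roll)            # target as imaged on the retina
    roll_hat = head_roll + rng.normal(0.0, sd0 + k * abs(head_roll), n)
    return np.stack([rotate(retinal, -r) for r in roll_hat])   # per-trial estimates

for roll in (0, 30):
    est = perceived_targets(np.array([3.0, 0.0]), roll)   # target 3 cm from the hand
    print(f"head roll {roll:+d} deg: SD of recovered position = {est.std(axis=0)}")
```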
Participants
Seven subjects between the ages of 23 and 26 (3 females) participated in this study. All had normal or corrected-to-normal vision and performed the task with their dominant right hand. Subjects provided written informed consent approved by the Queen's University General Board of Ethics. 
Apparatus
Subjects performed the experiment on a robotic exoskeleton (KINARM, BKIN Technologies, Kingston, ON, Canada) that allows both flexion and extension at the shoulder and elbow joints. The KINARM, which restricts movements to the horizontal plane, can record kinematic information of the joints, as well as apply independent mechanical loads to the shoulder and/or elbow. Participants' heads were securely positioned using a mounted bite bar that could be adjusted vertically (up or down), tilted forward and backward (head pitch), and rotated left or right (head roll). An overhead projector and semitransparent mirror were used to display visual feedback of both targets and hand positions. Visual information about hand position was occluded by an opaque screen but could be displayed through the use of a representative visual target. 
Task design
Subjects actively aligned their right fingertip with a start position, displayed as a crosshair (2 cm tall/wide) located 40 cm straight ahead of the body, at the midline between the shoulders. When the hand was within 4 cm of the start position, a visual cue representing the right fingertip was displayed (1-cm-diameter white circle) and subjects were asked to align the fingertip with the cross (Figure 2). After 500 ms, hand position feedback was removed and the right hand was passively moved by the KINARM from the start position to a location anywhere within an invisible circle of 6-cm radius, whose center was located 11 cm in front of the cross. The right fingertip was held there until the end of the trial. After 250 ms, a visual target (1-cm white dot) appeared for 500 ms at a position randomly sampled from a list of possible target locations (Figure 2); the target grid was centered on the right fingertip position.
Figure 2
 
Experimental display. Subjects began each trial by aligning their finger to the right initial hand position cross (r-IHP), which was aligned to the midline of the body. The right hand was represented by a visual target only when it was within the black dotted circle (4-cm diameter) surrounding the r-IHP cross at the beginning of each trial. The subject's arm was then passively moved by the KINARM to a location within the red dotted circle (12-cm diameter), indicated by the solid red dot. On each trial, one of the visual targets (black dots) within one of the four quadrants appeared at random. The numbers within each target represent the number of times that target appeared within a block. To the left of the target display, four response boxes (black-outlined boxes) were positioned, and the subject's left hand was maintained by the KINARM at the center of this response box at the left initial hand position (l-IHP). Subjects indicated where they had perceived the visual target relative to the right hand by moving their left hand to the associated response box. Text within these boxes is for schematic purposes and did not appear during testing.
Each quadrant of the target grid contained sixteen target positions, located 1, 3, 6, and 10 cm from the fingertip position along both the x- and y-axes. The number of times each target was displayed within each block is given by the numbers inside the black target circles in Figure 2, totaling 116 trials per block. After the visual target was presented, subjects responded with their left hand, which was passively held at the center of the virtual response box (Figure 2, left) throughout the experiment. Subjects performed a 4-alternative forced-choice task and had to judge (2-s timeout) which quadrant the target appeared in relative to their right fingertip location.
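For concreteness, the 64-position grid can be reconstructed as follows (a sketch from the geometry stated above; the per-target repetition counts that make up the 116 trials are given in Figure 2 and are not reproduced here).

```python
# Reconstruction of the target grid: in each of the four quadrants, the 16
# combinations of 1, 3, 6, and 10 cm offsets on x and y around the fingertip.
import itertools
import numpy as np

offsets_cm = (1.0, 3.0, 6.0, 10.0)
grid = np.array([(sx * dx, sy * dy)
                 for sx, sy in itertools.product((-1, 1), repeat=2)  # 4 quadrants
                 for dx in offsets_cm for dy in offsets_cm])         # 16 per quadrant
assert grid.shape == (64, 2)   # 64 candidate positions in fingertip coordinates
```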
Subjects completed this task with three different head roll orientations, i.e., −30 deg (toward the left shoulder), 0 deg (no head tilt), and 30 deg (toward the right shoulder). Subjects performed a total of 12 blocks (4 blocks per head roll condition), with head orientation randomized for each block (but fixed within a block). Subjects completed 464 trials for each head roll condition, totaling 1392 trials per subject. 
Data analysis
Angular positions of the shoulder and elbow joints were sampled at 1000 Hz and low-pass filtered (25 Hz, two-pass, sixth-order Butterworth). Offline analyses were performed in MATLAB (The MathWorks, Natick, MA). We ran a psychometric function fitting algorithm (Wichmann & Hill, 2001a, 2001b) using a bootstrap method to estimate performance variability. The 4-alternative forced-choice responses were fitted with a standard Weibull psychometric function. We computed the just noticeable difference (JND, the inverse of the slope of the psychometric function) and the point of subjective equality (PSE, the bias of the psychometric function) separately for each subject (Wichmann & Hill, 2001a, 2001b). We collapsed the data across the x (left–right) and y (close–far) dimensions in order to view separately how head roll affected perception along the y and x directions, respectively.
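A minimal sketch of this pipeline in Python (the original analysis was done in MATLAB with the Weibull-based fitting toolbox of Wichmann & Hill; here a cumulative Gaussian stands in for the psychometric function, "two-pass, sixth-order" is read as a sixth-order design applied forward and backward, and the JND is taken as the distance from the PSE to the 75% point):

```python
# Sketch of the analysis pipeline (assumed re-implementation, not the authors' code).
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import butter, filtfilt
from scipy.stats import norm

# Zero-phase ("two-pass") sixth-order low-pass Butterworth, 25 Hz cutoff at 1000 Hz.
b, a = butter(6, 25 / (1000 / 2))
# joint_angles_filt = filtfilt(b, a, joint_angles)  # joint_angles: raw 1000-Hz samples

def psychometric(x, pse, sigma):
    """P('right' or 'far' response) as a function of signed hand-target distance x (cm)."""
    return norm.cdf(x, loc=pse, scale=sigma)

def fit_pse_jnd(distances, responses, n_boot=1000, seed=0):
    """Fit PSE and JND for one subject/condition; bootstrap trials for variability."""
    distances = np.asarray(distances, float)
    responses = np.asarray(responses, float)   # binary choices coded 0/1
    rng = np.random.default_rng(seed)
    (pse, sigma), _ = curve_fit(psychometric, distances, responses, p0=(0.0, 2.0))
    jnd = sigma * norm.ppf(0.75)               # PSE-to-75%-point convention
    boot = []
    for _ in range(n_boot):
        i = rng.integers(0, len(distances), len(distances))  # resample with replacement
        try:
            p, _ = curve_fit(psychometric, distances[i], responses[i], p0=(pse, sigma))
            boot.append(p)
        except RuntimeError:
            continue                           # skip non-converging resamples
    return pse, jnd, np.std(boot, axis=0)      # estimates plus bootstrap SDs
```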
Results
Of the 9744 trials collected (7 subjects × 1392 trials), 114 were excluded because subjects failed to respond. The goal of this experiment was to determine the accuracy with which subjects could perceive the position of their fingertip in space. The task required subjects to indicate whether they perceived the target to the right or left of the fingertip, as well as whether it was closer or further with respect to the fingertip.
We plotted subjects' choice probabilities as a function of the target distance for left–right (x) and close–far (y) axes and fitted a psychometric function to the data. Responses are plotted for the left–right direction (Figure 3A) across all subjects for each head roll position (dotted red: −30 deg; solid blue: 0 deg; dashed green: 30 deg). 
Figure 3
 
Performance, threshold, and slope for x-data. (A) Psychometric functions representing left–right performance data (pooled across all subjects). This panel shows the percentage of right responses (y-axis) for each target position (x-axis), under all three head roll conditions: −30 deg (dashed, red), 0 deg (solid, blue), 30 deg (dotted, green). Error bars are bootstrapping estimates of choice variability (SD). (B) PSEs (i.e., x-axis values at chance performance) are shown for each head roll angle (−30 deg, 0 deg, 30 deg). (C) Slopes and JND from the psychometric function are represented for each head roll angle (−30 deg, 0 deg, 30 deg). Means and error bars (between subject SEMs) in (B) and (C) were computed based on individual subjects' PSEs and slopes (JNDs).
Figure 3B displays the hand–target distance at which the transition from left to right responses took place (point of subjective equality, PSE) for each head roll condition (average and SEM of PSE across individual subject PSEs). A one-way repeated measures ANOVA on left–right PSE with head roll as a factor (−30, 0, and 30 deg) revealed no significant differences among head roll conditions (F(2,12) = 0.89, p > 0.05). Figure 3C represents the slope of the psychometric function (and thus the just noticeable difference, JND) for each head roll condition, an indicator of the variability of the sensory signals underlying the perceptual decision (average and SEM of slope across individual subject slopes). A one-way repeated measures ANOVA on left–right slope with head roll as a factor (−30, 0, and 30 deg) revealed significant differences among head roll conditions (F(2,12) = 4.469, p < 0.05). Holm–Bonferroni corrected post-hoc paired t-tests revealed significant differences between the 0 and 30 deg and between the 0 and −30 deg head roll conditions (p < 0.05). When the head remained upright (0 deg head roll), the JND was smallest, indicating more certain discrimination of the hand–target distance.
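For reference, this comparison can be reproduced with statsmodels' repeated-measures ANOVA helper; the sketch below runs it on a toy long-format table (illustrative JND values, not the paper's data). With 7 subjects and 3 head-roll levels it returns an F statistic on (2, 12) degrees of freedom, matching the values reported here.

```python
# One-way repeated-measures ANOVA on per-subject JNDs (toy data for illustration).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subject":   np.repeat(np.arange(7), 3),         # 7 subjects
    "head_roll": np.tile([-30, 0, 30], 7),           # 3 within-subject conditions
    "jnd":       rng.normal(np.tile([2.0, 1.5, 2.0], 7), 0.3),  # fabricated JNDs (cm)
})
print(AnovaRM(data=df, depvar="jnd", subject="subject", within=["head_roll"]).fit())
```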
Figure 4A plots the psychometric functions for the close–far direction (data pooled across all subjects). Here, the x-axis represents the target positions closer or further in space relative to the fingertip location, with positive values representing spatial locations further away from the fingertip. For each head roll condition (dotted red: −30 deg; solid blue: 0 deg; dashed green: 30 deg), we plotted the percentage of far responses (i.e., the visual cue appeared further away from the fingertip) as a function of hand–target distance. 
Figure 4
 
Performance, threshold, and slope for y-data. Performance along the close–far axis; same representation as in Figure 3. (A) Psychometric functions for all head rolls. (B) PSE. (C) Slopes and JND.
PSE values for the close–far direction are shown in Figure 4B (average and SEM of PSE across individual subject PSEs). A one-way repeated measures ANOVA on y PSEs with head roll as a factor (−30, 0, and 30 deg) revealed no significant differences among head roll conditions (F(2,12) = 1.183, p > 0.05). Figure 4C depicts slopes at the 50% threshold point (and JND) for the three head roll conditions (average and SEM of slope across individual subject slopes). A one-way repeated measures ANOVA with head roll as a factor revealed significant differences among head roll conditions (F(2,12) = 4.369, p < 0.05). Holm–Bonferroni corrected post-hoc paired t-tests revealed significant differences between the 0 and 30 deg and between the 0 and −30 deg head roll conditions (p < 0.05). This suggests that when the head was straight, subjects were more precise in their judgment of the target position relative to the fingertip.
The observed changes in JND with head roll might, however, have been caused by an erroneous reference frame transformation rather than by added head-roll-dependent noise. An erroneous transformation would mean that the angle of rotation used to match visual and proprioceptive coordinates is incorrect (either too big or too small) compared to the actual head roll. An error in the required rotation angle could result from a misestimation of head roll and/or ocular counter-roll. Therefore, if the reference frame transformation was incorrect, the perceptual system would compare a rotated version of the visual targets to the location of the hand, with the rotation used in the transformation being either greater than or less than the actual head roll.
To investigate whether there was such a visual–spatial misalignment in our data, we rotated the visual display to compensate for potential visual–spatial misalignments. The rationale was that if visual–spatial misalignments existed, compensating for them should improve discrimination performance, as reflected in steeper slopes (smaller JNDs). Figures 5A and 5B show the slopes and JNDs of the psychometric functions in x and y, respectively, for each head roll condition (dotted red: −30 deg; solid blue: 0 deg; dashed green: 30 deg) across different values of potential visuospatial misalignment (slopes and JNDs calculated from data pooled across all subjects). To plot this graph, we rotated the visual target array in 1 deg steps and then performed the psychometric function fits using the new, rotated x and y coordinates of the visual targets. We found that the slopes (JNDs) generally showed a maximum (minimum) at non-zero visuospatial misalignments. (Note that the difference in slope values between Figures 3 and 4 and Figure 5 results from within- versus across-subject psychometric function fitting, respectively.) For the zero head roll condition (solid blue lines in Figure 5), the slope maximum across all subjects was at −2 deg along the x dimension (range across subjects: −5 deg to 2 deg) and −1 deg along the y dimension (range across subjects: −6 deg to 2 deg). In the −30 deg head roll condition (dotted red lines in Figure 5), the slope maximum across all subjects was at −7 deg along the x dimension (range across subjects: −9 deg to 0 deg) and −4 deg along the y dimension (range across subjects: −7 deg to 0 deg). Finally, in the 30 deg head roll condition (dashed green lines in Figure 5), the slope maximum across all subjects was at 3 deg along the x dimension (range across subjects: −4 deg to 10 deg) and 2 deg along the y dimension (range across subjects: −4 deg to 7 deg). On average, the display had to be rotated in the direction opposite to the head roll by a few degrees to result in smaller JNDs. This indicates that head roll was overestimated in the coordinate matching transformation (and/or ocular counter-roll was not taken into account).
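This misalignment scan can be sketched as follows, reusing the rotate() and fit_pse_jnd() helpers from the earlier sketches (an assumed re-implementation of the procedure, not the authors' code):

```python
# Scan candidate visuospatial misalignments: rotate the target coordinates about
# the fingertip in 1-deg steps, refit the psychometric function at each rotation,
# and locate the rotation that minimizes the JND (i.e., maximizes the slope).
import numpy as np

def misalignment_scan(targets_xy, hand_xy, responses, axis=0,
                      angles=np.arange(-15, 16)):
    """targets_xy: (n, 2) target positions; responses: (n,) binary choices;
    axis: 0 for left-right (x), 1 for close-far (y)."""
    jnds = []
    for a in angles:
        rotated = rotate(targets_xy - hand_xy, a) + hand_xy  # display rotated by a deg
        signed = (rotated - hand_xy)[:, axis]                # signed distance on axis
        _, jnd, _ = fit_pse_jnd(signed, responses)
        jnds.append(jnd)
    best = angles[int(np.argmin(jnds))]
    return best, np.asarray(jnds)   # JND-minimizing rotation and the full JND curve
```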
Figure 5
 
Analysis of visuospatial misalignment. (A) X (left–right) slopes (JND) of the psychometric functions are shown as a function of the visuospatial misalignment (data pooled across all subjects). This misalignment corresponds to a rotation of the visual display before fitting the psychometric functions. The rationale was that if the reference frame matching transformations misestimated head roll, then most precise discrimination should be observed at non-zero visuospatial misalignment. This is what is shown by the curves for the head straight (solid, blue line), −30 deg head roll (dashed, red line), and 30 deg head roll (dotted, green line). Vertical dotted lines indicate the location of the maxima of the slopes (minima of JND). (B) Same analysis for the slopes and JND along the y (close–far) axis.
The latter analysis raises the question of whether our main finding (Figures 3 and 4), namely that slopes are shallower (JNDs higher) for eccentric head roll than for the head straight, is an artifact. Instead of resulting from additional head-roll-dependent noise in the reference frame transformation, this effect could be due to visuospatial misalignments caused by errors in the reference frame transformation. However, as can be observed in Figure 5, the eccentric head roll slopes are consistently shallower than the slope for the head straight (0 deg head roll), regardless of whether they are measured at the maximum of the curve or at zero visuospatial misalignment (as done in Figures 3 and 4). Indeed, when carrying out the statistical analyses using the slope values at the maxima of the curves (vertical lines) in Figure 5, the results remained qualitatively the same as our analysis at zero visuospatial misalignment. Therefore, the results of Figures 3 and 4 hold even if there was an over- or undercompensation of head roll (and/or ocular counter-roll) in the reference frame transformation.
Discussion
This study showed that the proprioceptive localization of the hand relative to a visual target can be affected by head roll. We found significant differences in slopes (and JNDs) between head straight and head roll conditions, with head roll inducing increased variability in perception. We suggest that head-roll-induced signal-dependent noise could account for these findings; as the head moves away from an upright position, the estimation of head roll (and/or ocular counter-roll—see below) becomes increasingly variable, adding noise to the reference frame transformation and resulting in less precise perceptual discriminations of hand–target distance. 
When the head was rolled toward the left or right shoulder (−30 or 30 deg), the perceptual precision (JND) with which the visual stimuli were located relative to the fingertip changed. Head straight conditions yielded significantly smaller JNDs than both the −30 deg and 30 deg head roll conditions. A smaller JND indicates a steeper transition in the psychometric curve; thus, the ability to distinguish the target appearing either to the left or the right of the hand improved in the head straight conditions. The effect of head roll during whole body rotation on verticality perception has been studied previously, with the most precise performance occurring when the head was upright (Tarnutzer, Bockisch, Straumann, & Olasagasti, 2009; Tarnutzer, Bockisch, & Straumann, 2010; Vingerhoets, De Vrijer, Van Gisbergen, & Medendorp, 2009). Those results are indicative of larger noise associated with eccentric head roll orientations, in agreement with our hypothesis. Our data thus provide evidence for sensory signal-dependent noise in the coordinate matching transformations underlying perception.
Our analysis in Figure 5 also showed that the head roll angle used by the perceptual system during the reference frame transformation was overestimated. This is in agreement with previous findings reporting an overestimation of head roll in the reference frame transformations that occur during reach planning (Blohm & Crawford, 2007; Burns & Blohm, 2010). However, the rationale for this overestimation of head roll remains unclear, and it is contrary to the literature on perceived visual vertical (Dyde, Jenkin, & Harris, 2006; Guerraz, Luyat, Poquin, & Ohlmann, 2000). Alternatively, the observed overcompensation could be due to an underestimation of ocular counter-roll (or to ocular counter-roll simply not being used). However, this would be contrary to previous observations indicating accurate use of ocular counter-roll for reference frame transformations and spatial remapping (Blohm & Crawford, 2007; Blohm & Lefevre, 2010; Klier, Angelaki, & Hess, 2005; Klier & Crawford, 1998; Medendorp, Smith, Tweed, & Crawford, 2002).
There is evidence in the literature for signal-dependent eye and head orientation noise (Blohm & Crawford, 2007; Burns & Blohm, 2010; Van Beuzekom & Van Gisbergen, 2000; Wade & Curthoys, 1997). For head orientation, this noise may be coming from the vestibular system or muscle spindles in the neck signaling head orientation (Faisal, Selen, & Wolpert, 2008; Guerraz et al., 2000; Lechner-Steinleitner, 1978; Sadeghi, Chacron, Taylor, & Cullen, 2007; Scott & Loeb, 1994). For eye orientation, it is unclear whether this noise is sensory from extraocular proprioception (Munuera, Morel, Duhamel, & Deneve, 2009; Wang, Zhang, Cohen, & Goldberg, 2007) or neural from efference copy signals (Lewis, Gaymard, & Tamargo, 1998; Munuera et al., 2009). In addition, there is also constant transformation noise that is added to the system (Burns & Blohm, 2010; McGuire & Sabes, 2009; Sober & Sabes, 2003, 2005); however, the latter would have been present across all head roll conditions and could not be isolated in this experiment. 
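One way to summarize this decomposition (our notation, not the authors'): the variance relevant to the hand–target judgment combines a roll-dependent sensory term with a roll-independent transformation term, for example

```latex
\sigma^{2}_{\mathrm{total}}(\theta)
  = \underbrace{\left(\sigma_{0} + k\,\lvert\theta\rvert\right)^{2}}_{\text{signal-dependent orientation noise}}
  + \underbrace{\sigma^{2}_{T}}_{\text{constant transformation noise}}
```

where theta is head roll, sigma_0 the baseline orientation noise, k a Weber-like scaling factor, and sigma_T the fixed transformation noise. Only the first term differs across our head roll conditions, which is why the constant term could not be isolated in this experiment.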
It is well accepted that reference frame transformations are required for action planning. However, the role of reference frame matching conversions in perception has, to our knowledge, not yet been explicitly considered. Here, we show for the first time that, similar to action planning, reference frame transformations can affect perceptual judgments. There are different ways the reference frame transformations for perception and action might interact. On the one hand, since different brain areas underlie perception and action, they might be governed by different transformations that optimize the outcome for each individual subsystem; while action planning requires specific reference frames related to the motor output, the perceptual system might be more flexible and less constrained. On the other hand, the perception and action systems are not completely independent, e.g., perception can influence motor actions. In this case, one would predict similarities between the reference frame transformations for perception and action. More research is needed to address these issues.
In summary, our results show that changes in body geometry can affect how we perceive our environment by making perception less precise. Could there be a behavioral benefit from this? It appears that being less precise also translates into being more fault-tolerant. For example, when feeling (proprioception) and seeing the hand at the same time, with both signals indicating different positions with high reliability, we might actually perceive the viewed hand as not being ours because of the sensory conflict (as in Alien Hand Syndrome). Therefore, in unusual or more complicated geometrical arrangements of the body, it could be advantageous to be less precise but more fault-tolerant, so that reference frame transformation errors do not lead to perceived sensory conflict.
Acknowledgments
We thank Dr. A.Z. Khan for helpful comments on the manuscript. This work was supported by NSERC (Canada), CFI (Canada), the Botterell Fund (Queen's University, Kingston, ON, Canada), and ORF (Canada). 
Jessica K. Burns and Joseph Y. Nashed contributed equally to this work.
Corresponding author: Dr. Gunnar Blohm. 
Email: gunnar.blohm@queensu.ca. 
Address: Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada. 
References
Blohm G. Crawford J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7(5):4, 1–22, http://www.journalofvision.org/content/7/5/4, doi:10.1167/7.5.4.
Blohm G. Lefevre P. (2010). Visuomotor velocity transformations for smooth pursuit eye movements. Journal of Neurophysiology, 104, 2103–2115.
Buneo C. A. Andersen R. A. (2006). The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44, 2594–2606.
Burns J. K. Blohm G. (2010). Multi-sensory weights depend on contextual noise in reference frame transformations. Frontiers in Human Neuroscience, 4, 221.
Clark F. J. Larwood K. J. Davis M. E. Deffenbacher K. A. (1995). A metric for assessing acuity in positioning joints and limbs. Experimental Brain Research, 107, 73–79.
Cohen Y. E. Andersen R. A. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3, 553–562.
Dyde R. T. Jenkin M. R. Harris L. R. (2006). The subjective visual vertical and the perceptual upright. Experimental Brain Research, 173, 612–622.
Engel K. C. Flanders M. Soechting J. F. (2002). Oculocentric frames of reference for limb movement. Archives Italiennes de Biologie, 140, 211–219.
Ernst M. O. Banks M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
Faisal A. A. Selen L. P. Wolpert D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9, 292–303.
Fuentes C. T. Bastian A. J. (2010). Where is your arm? Variations in proprioception across space and tasks. Journal of Neurophysiology, 103, 164–171.
Goodale M. A. (2011). Transforming vision into action. Vision Research, 51, 1567–1587.
Goodwin G. M. McCloskey D. I. Matthews P. B. (1972). The contribution of muscle afferents to kinaesthesia shown by vibration induced illusions of movement and by the effects of paralysing joint afferents. Brain, 95, 705–748.
Guerraz M. Luyat M. Poquin D. Ohlmann T. (2000). The role of neck afferents in subjective orientation in the visual and tactile sensory modalities. Acta Oto-Laryngologica, 120, 735–738.
Klier E. M. Angelaki D. E. Hess B. J. (2005). Roles of gravitational cues and efference copy signals in the rotational updating of memory saccades. Journal of Neurophysiology, 94, 468–478.
Klier E. M. Crawford J. D. (1998). Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. Journal of Neurophysiology, 80, 2274–2294.
Knill D. C. Pouget A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27, 712–719.
Knudsen E. I. du Lac S. Esterly S. D. (1987). Computational maps in the brain. Annual Reviews in Neuroscience, 10, 41–65.
Körding K. P. Wolpert D. M. (2006). Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10, 319–326.
Lacquaniti F. Caminiti R. (1998). Visuo-motor transformations for arm reaching. European Journal of Neuroscience, 10, 195–203.
Lechner-Steinleitner S. (1978). Interaction of labyrinthine and somatoreceptor inputs as determinants of the subjective vertical. Psychological Research, 40, 65–76.
Lewis R. F. Gaymard B. M. Tamargo R. J. (1998). Efference copy provides the eye position information required for visually guided reaching. Journal of Neurophysiology, 80, 1605–1608.
McGuire L. M. Sabes P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12, 1056–1061.
Medendorp W. P. Smith M. A. Tweed D. B. Crawford J. D. (2002). Rotational remapping in human spatial memory during eye and head motion. Journal of Neuroscience, 22, RC196.
Munuera J. Morel P. Duhamel J. R. Deneve S. (2009). Optimal sensorimotor control in eye movement sequences. Journal of Neuroscience, 29, 3026–3035.
Sadeghi S. G. Chacron M. J. Taylor M. C. Cullen K. E. (2007). Neural variability, detection thresholds, and information transmission in the vestibular system. Journal of Neuroscience, 27, 771–781.
Scott S. H. Loeb G. E. (1994). The computation of position sense from spindles in mono- and multiarticular muscles. Journal of Neuroscience, 14, 7529–7540.
Sober S. J. Sabes P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23, 6982–6992.
Sober S. J. Sabes P. N. (2005). Flexible strategies for sensory integration during motor planning. Nature Neuroscience, 8, 490–497.
Soechting J. F. Flanders M. (1989). Errors in pointing are due to approximations in sensorimotor transformations. Journal of Neurophysiology, 62, 595–608.
Soechting J. F. Flanders M. (1992). Moving in three-dimensional space: Frames of reference, vectors, and coordinate systems. Annual Reviews in Neuroscience, 15, 167–191.
Tagliabue M. McIntyre J. (2011). Necessity is the mother of invention: Reconstructing missing sensory information in multiple, concurrent reference frames for eye–hand coordination. Journal of Neuroscience, 31, 1397–1409.
Tarnutzer A. A. Bockisch C. Straumann D. Olasagasti I. (2009). Gravity dependence of subjective visual vertical variability. Journal of Neurophysiology, 102, 1657–1671.
Tarnutzer A. A. Bockisch C. J. Straumann D. (2010). Roll-dependent modulation of the subjective visual vertical: Contributions of head- and trunk-based signals. Journal of Neurophysiology, 103, 934–941.
van Beers R. J. Baraduc P. Wolpert D. M. (2002). Role of uncertainty in sensorimotor control. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 357, 1137–1145.
van Beers R. J. Sittig A. C. Denier van der Gon J. J. (1998). The precision of proprioceptive position sense. Experimental Brain Research, 122, 367–377.
van Beers R. J. Sittig A. C. Denier van der Gon J. J. (1999). Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology, 81, 1355–1364.
Van Beuzekom A. D. Van Gisbergen J. A. (2000). Properties of the internal representation of gravity inferred from spatial-direction and body-tilt estimates. Journal of Neurophysiology, 84, 11–27.
Vingerhoets R. A. De Vrijer M. Van Gisbergen J. A. Medendorp W. P. (2009). Fusion of visual and vestibular tilt cues in the perception of visual vertical. Journal of Neurophysiology, 101, 1321–1333.
Wade S. W. Curthoys I. S. (1997). The effect of ocular torsional position on perception of the roll-tilt of visual stimuli. Vision Research, 37, 1071–1078.
Wang X. Zhang M. Cohen I. S. Goldberg M. E. (2007). The proprioceptive representation of eye position in monkey primary somatosensory cortex. Nature Neuroscience, 10, 640–646.
Wichmann F. A. Hill N. J. (2001a). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293–1313.
Wichmann F. A. Hill N. J. (2001b). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63, 1314–1329.
Wilson E. T. Wong J. Gribble P. L. (2010). Mapping proprioception across a 2D horizontal workspace. PLoS One, 5, e11851.