How various aspects of motion parallax influence distance judgments, even when we think we are standing still
Author Affiliations
  • Cristina de la Malla
    Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    c.delamalla@vu.nl
  • Stijn Buiteman
    Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  • Wilmer Otters
    Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  • Jeroen B. J. Smeets
    Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
  • Eli Brenner
    Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Journal of Vision July 2016, Vol.16, 8. doi:https://doi.org/10.1167/16.9.8
Abstract

It is well known that when we intentionally make large head movements, the resulting motion parallax helps us judge objects' distances. The information about distance could be obtained in various ways: from the changes in the object's position with respect to ourselves, from the changes in its orientation relative to the line of sight, and from the relative retinal motion between the target's image and that of the background. We explore here whether these motion parallax cues are used when we think we are standing still. To answer this question we asked subjects to indicate the position of a virtual target with their unseen finger. The position and the size of the target changed across trials. There were pairs of trials in which the same target was presented at the same location, except that one or more of the three motion parallax cues indicated that the target was either 10 cm closer or 10 cm farther away than the ‘true’ distance. Any systematic difference between the positions indicated for the closer and further targets of such pairs indicates that the cues in question influence subjects' judgments. The results show that motion parallax cues have a detectable influence on our judgments, even when the head only moves a few millimeters. Relative retinal image motion has the clearest effect. Subjects did not move their head differently when we presented the targets to only one eye, even though moving more would then have increased the benefit of considering motion parallax.

Introduction
We live in a three-dimensional (3-D) world, so most of the tasks that people perform in daily life require judgments of distance as well as of elevation and azimuth. It has been shown that people consider various sources of information when judging objects' distances. Static observers consider binocular disparities (e.g., Rogers & Graham, 1982; Johnston, Cumming, & Landy, 1994; Bradshaw, Parton, & Eagle, 1998; Bradshaw, Parton, & Glennerster, 2000; Sousa, Brenner, & Smeets, 2010, 2011a), the object's retinal image size (e.g., Gillam, 1995; McIntosh & Lashley, 2008; Lugtigheid & Welchman, 2010; Sousa et al., 2010, 2011a, 2011b; Sousa, Smeets, & Brenner, 2012a, 2012b), accommodation (e.g., Wallach & Floor, 1971; Leibowitz & Moore, 1966), and vergence (e.g., Gogel, 1961, 1977; Brenner & van Damme, 1998). A moving observer can also consider information from motion parallax (e.g., J. J. Gibson, 1950, 1966; E. J. Gibson, Gibson, Smith, & Flock, 1959; Braunstein, 1966; Dees, 1966; Ferris, 1972; Gogel & Tietz, 1973, 1979; Rogers & Graham, 1979; Rogers, 2009). 
We use the term motion parallax to refer to any information about structures' distances that could be obtained by an observer changing his or her viewing position. As observers move around, they perceive objects from different vantage points. This sometimes even means that they see different parts of an object at different times. The extent to which the view changes depends on the object's distance, as well as on how much the observer moved. Consequently, the retinal images of objects at different distances move relative to each other. Changes in an object's orientation with respect to the line of sight and changes in two objects' relative retinal positions are equivalent in terms of the relative motion involved; they only differ in whether the comparison is made between parts of the same object or between separate objects. Nevertheless, they might be obtained differently (through a changing orientation or a changing relative position, respectively). Besides relying on such retinal cues to judge objects' distances from the change in vantage point, observers might also register changes in the object's position relative to themselves from the extent to which they have to turn their head and eyes in order to keep looking at the object. In the present study we independently manipulated the three above-mentioned cues. We only manipulated changes that occur when the observer moves laterally or vertically. The way in which an object's retinal image changes when the observer moves toward or away from an object might also provide information about the object's distance. However, since both retinal image size (e.g., Sousa et al., 2010, 2011a, 2011b) and change in image size (e.g., Brenner, van den Berg, & van Damme, 1996) are known to influence the perceived distance, it would be difficult to distinguish between such direct influences and the effects of changing size in the context of motion parallax. For this reason, we decided not to manipulate the changes in image size that occur when observers move backwards and forwards. 
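To give a sense of the magnitudes involved, the relative retinal displacement between a target and a more distant background that results from a small lateral translation of the eye can be approximated with simple viewing geometry. The sketch below (in R) is our own illustration rather than part of the original study; the 3 mm translation is an assumed value, and the 0.45 m and 0.75 m distances roughly correspond to the target and background distances used in the experiments described later. 

    # Small-angle geometry of motion parallax (illustration only, not from the paper).
    # A lateral eye translation 'h' changes the visual direction of a point at
    # distance 'd' by approximately atan(h / d), so the relative retinal displacement
    # between a target and a background depends on the difference of their inverse distances.
    relative_parallax <- function(h, d_target, d_background) {
      atan(h / d_target) - atan(h / d_background)   # radians; ~ h * (1/d_target - 1/d_background)
    }
    rad2deg <- function(a) a * 180 / pi

    # Assumed example: a 3 mm translation, target at 0.45 m, background at 0.75 m.
    rad2deg(relative_parallax(h = 0.003, d_target = 0.45, d_background = 0.75))   # about 0.15 deg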
Calculations have shown that the small eye and head movements that occur when fixating an object while standing still are enough to extract information from motion parallax for nearby objects (Aytekin & Rucci, 2012). However, direct tests of the use of motion parallax to judge distance have generally used much larger head movements. Moreover, in such tests the subjects were explicitly instructed to make head movements (e.g., Rogers & Graham, 1979; van Damme & van de Grind, 1996). An exception is the study of Louw, Smeets, and Brenner (2007), who showed that subjects used motion parallax to judge surface slant when they were not instructed to move their head. In that study, subjects moved their head 4–10 cm while placing an object on the surface in question. Thus, it is plausible but not certain that changes due to small unintentional postural sway influence depth judgments. 
Research on the role of head movements in grasping has shown that after losing one eye, patients make larger and faster horizontal and vertical head movements, presumably to increase the amount of information they get from motion parallax cues (e.g., Marotta, Perrot, Nicolle, & Goodale, 1995). Thus, some of the head movements when standing still might serve to improve depth judgments from motion parallax, rather than just resulting from failures to maintain balance. In the present study we therefore also examine whether temporarily removing binocular cues gives rise to different head movements. 
The aim of this paper is to answer three questions: whether motion parallax influences judgments of distance when people are free to move their head but are not instructed to do so (i.e., when standing still), which of the three motion parallax cues are used, and whether people move their head differently when the need for, and availability of, information from motion parallax differ. The three motion parallax cues were manipulated by having or not having a background, by using either spherical or cube-shaped target objects, and by displacing or not displacing the target object when the head moved. We manipulated the need to use motion parallax to judge distance by showing the target to both eyes or only to one eye (if head movements are directed at obtaining more information from motion parallax cues, they should be larger when the target is only presented to one eye). We performed two experiments. Each experiment had five conditions. In all cases subjects were asked to move their finger to indicate a target's position while standing still. The target was sometimes accompanied by a background consisting of four cubes, in an otherwise dark room. The subject's hand was invisible and it did not occlude the target or the background. In different sessions we manipulated one or more of the motion parallax cues to indicate that the target was either 10 cm farther away or 10 cm closer than the position indicated by other cues. If the manipulated motion parallax cues influence the judged target distance, the indicated target positions should be biased towards the positions indicated by these motion parallax cues. 
Methods
Subjects
A total of 34 subjects (19 females) took part in the experiments after giving written informed consent. Not all of them took part in all conditions: 12 subjects took part in each condition. None of the subjects was aware of the purpose of the experiment or of the manipulations. All subjects' stereo acuity was better than 60 s of arc (assessed with the Stereo Fly test), and none of them had evident motor abnormalities. Four of the subjects were left-handed (self-report). All subjects performed the task with their preferred hand. The study was part of a program that was approved by the local ethics committee. 
Apparatus
We used a setup that allowed us to create 3-D virtual stimuli (see Figure 1). In this setup, mirrors reflected the images of two CRT monitors (1096 × 686 pixels, 47.3 × 30.0 cm), positioned to the sides of the subject's head, to the two eyes. Subjects looked straight ahead at these mirrors and had the illusion that the 3-D virtual objects were in front of them. New images were created for each eye at the monitors' refresh rate (160 Hz). We recorded the position of the head and of the index finger of the preferred hand at 250 Hz using infrared emitting diodes (IREDs) and an Optotrak 3020 System (Northern Digital, Waterloo, ON, Canada). One IRED was attached to the nail of the index finger and three to a mouthpiece with a dental imprint. Subjects were allowed to move their head freely during the experiments (although the setup did not encourage large head movements, since subjects had to look into the mirrors). Tracking the head's position allowed us to adapt the images to movements of the head with a very short delay (about 20 ms). The positions of the subjects' eyes relative to the mouthpiece were determined in advance following the same calibration procedure as in Sousa et al. (2010). 
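To indicate what such head-coupled rendering involves, the sketch below shows one way in which an eye's momentary position could be recovered from the three mouthpiece markers and the calibrated marker-to-eye offsets. This is our own reconstruction under assumed conventions, not the authors' code; the function and argument names are hypothetical. 

    # Sketch (assumptions as stated above): recover an eye's position each frame
    # from the three mouthpiece IREDs, given that eye's offset expressed in the
    # mouthpiece's local coordinate frame (determined during the calibration).
    cross3 <- function(a, b) {                      # cross product of two 3-vectors
      c(a[2] * b[3] - a[3] * b[2],
        a[3] * b[1] - a[1] * b[3],
        a[1] * b[2] - a[2] * b[1])
    }
    unit <- function(v) v / sqrt(sum(v^2))

    mouthpiece_frame <- function(p1, p2, p3) {      # p1, p2, p3: marker positions (3-vectors)
      x <- unit(p2 - p1)
      z <- unit(cross3(x, p3 - p1))
      y <- cross3(z, x)
      cbind(x, y, z)                                # rotation matrix of the mouthpiece's local frame
    }

    eye_position <- function(p1, p2, p3, eye_offset_local) {
      as.vector(p1 + mouthpiece_frame(p1, p2, p3) %*% eye_offset_local)
    }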
Figure 1
 
Schematic top view of the setup. Two monitors' images were visible via two mirrors (one for each eye). Subjects indicated the position of the virtual target object (represented by the red cube) that was presented within a restricted region in front of them (indicated by the dashed rectangle) with their index finger. Hand and head movements were recorded by an Optotrak system (for details see Sousa et al., 2010).
Procedure
Subjects stood in front of the mirrors (except for one subject, who took part in three conditions of Experiment 1 and in one condition of Experiment 2, and who was too tall and had to perform the task sitting; since the amplitude and the peak velocity of his head movements were above average, his data were included in the analysis). Subjects were not instructed about their head movements. They were allowed to move their head, and they were specifically instructed not to lean with their forehead on the edges of the mirrors, which might otherwise have made the head movements negligible. The room was completely dark except for the images on the screen. To start each trial, subjects had to move their index finger near their body. Once they did so, a red target appeared (either a sphere or a cube, depending on the condition; see below) and they had to move their unseen index finger to the center of the target. They had to hold the finger at the indicated position until the target disappeared. This happened when the hand had moved less than 1 mm in 300 ms (and was within 30 cm of the center of the volume of possible target positions). At that moment, the finger's position was saved as the indicated position of the target. 
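For concreteness, the criterion for registering a response could be implemented roughly as follows. This is a sketch under our own assumptions about the data layout (an N × 3 matrix of finger positions in mm at 250 Hz), interpreting "moved less than 1 mm in 300 ms" as the net displacement over that window; it is not the authors' code. 

    # Sketch of the response criterion (assumptions as stated above).
    response_registered <- function(finger, centre,
                                    window_ms = 300, rate_hz = 250,
                                    move_tol_mm = 1, max_dist_mm = 300) {
      n <- round(window_ms / 1000 * rate_hz)                  # samples in the 300 ms window
      if (nrow(finger) < n) return(FALSE)
      recent <- finger[(nrow(finger) - n + 1):nrow(finger), ]
      moved <- sqrt(sum((recent[n, ] - recent[1, ])^2))       # net displacement over the window
      dist_to_centre <- sqrt(sum((recent[n, ] - centre)^2))   # distance from the volume's centre
      moved < move_tol_mm && dist_to_centre < max_dist_mm
    }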
The targets were positioned within a volume of space of about 15 × 15 × 25 cm (height × width × depth) that was centered about 45 cm from the subjects' eyes and oriented downwards by about 30° so that subjects pointed at a comfortable height while the space was elongated (in depth) along the line of sight. The position and size of the targets differed across trials, with pairs of trials in which the exact same target was presented at the same location, except that one or more motion parallax cues were manipulated to indicate that the target was either 10 cm farther away or 10 cm closer than the ‘true’ position. By the ‘true’ position we mean the position defined by binocular disparity (when available), the image size (considering the assigned target size), and any unmodified motion parallax cue. Accommodation was obviously always at a fixed distance (that of the actual screen surface). 
Experiments
There were two experiments. Each experiment consisted of five conditions that were investigated in separate sessions on separate days. Each condition consisted of 200 trials (100 pairs of targets) presented in random order. The five conditions differed in which motion parallax cues were manipulated to indicate a closer or further distance (as will be explained below). In Experiment 1, the targets were always presented binocularly. In Experiment 2, half of the trials (50 pairs of targets) were presented binocularly, in the same way as in Experiment 1, and the other half were only shown to one eye (25 pairs of targets were only shown to the left eye and the other 25 pairs were only shown to the right eye). We expected any use of motion parallax cues to become more evident when subjects could not rely on binocular vision. We were also interested in exploring whether subjects would move their heads differently when vision was monocular, given that they could obtain more information from motion parallax by moving their head more. 
Conditions
The five conditions differed in which motion parallax cues indicated that the targets were either 10 cm farther away or 10 cm closer than the ‘true’ position. The extent to which subjects systematically pointed further away or closer when the manipulated cue or cues indicated that the target was further away or closer respectively denotes how much the motion parallax cues in question contribute to the judged distance. Figure 2 illustrates the manipulations that we used. In this figure we illustrate one pair of trials of each condition (different columns), with the upper drawings representing how the motion parallax cues were manipulated to be consistent with a target that was 10 cm farther away, and the lower drawings representing how they were manipulated to be consistent with a target that was 10 cm closer. 
Figure 2
 
Illustration of the manipulation of the motion parallax cues in each condition. Each column represents a pair of trials as seen from above. The upper row shows the manipulation for a target 10 cm farther away and the lower row shows the manipulation for a target 10 cm closer. An initial situation in which the target happens to be straight in front of the subject is shown in green. The positions closer and further away are shown by dotted outlines. The simulated positions and orientations that correspond with the situation after the subject has moved to the right are shown in orange outlines. The red and gray squares and disks below each drawing represent the subject's view before (left) and after (right) the rightward movement.
In Figure 2, an initial condition with the target straight in front of the subject is represented in green (this situation is chosen because it makes it easier to follow the manipulations in the figure; in reality the initial position could be anywhere within the available range). The dotted outlines show a target located further away or closer to the subject. This target was not visible, but we use it to illustrate how the motion parallax cues involved would differ if the target were at that distance. The orange outlines show how the simulated environment was changed to give rise to the manipulated information from the motion parallax cue in question when the subject moved to the right (without changing the distance indicated by other cues). This therefore represents the scene that was visible to the subject after having moved to the right. The layout and extent of the movement are obviously not to scale; the extent of the movement has been exaggerated tremendously to make the differences clearer. Equivalent changes were made when subjects moved to the left, or up and down. Motion parallax information from moving forward or backward (expansion and contraction) was not manipulated, so such information would contribute to judging the ‘true’ distance. The red and gray symbols illustrate schematically what the subject would see from each vantage point (target in red and background in gray). 
In two of the conditions, the targets were red spheres with simulated diameters between 15 and 35 mm. In the other three conditions the targets were red cubes with simulated side lengths between 15 and 35 mm. When there was a background, it always consisted of four gray cubes with simulated side lengths of 20 mm, 15 cm behind the furthest possible target position (about 75 cm from the subject). The background cubes were arranged in a square with horizontal and vertical separations of 10 cm. 
In the five conditions the motion parallax cues were manipulated as follows: 
  • All cues (first column): In this condition subjects saw a target cube and four background cubes. Whenever the subject moved laterally or up and down, the simulated position of the target cube either moved in the same direction to produce the motion parallax that corresponds with a more distant object, or moved in the opposite direction to produce the motion parallax that corresponds with an object that is closer to the subject (both by 10 cm). By changing the simulated position of the target, we changed all three motion parallax cues: the target's egocentric position (to keep directing their gaze at the target, subjects had to turn their eyes and/or head as much as if they were looking at a target that was 10 cm farther away or 10 cm closer), the orientation of the target cube with respect to the vantage point (the orientation changes less for a target further away and more for a closer target than if the simulated target had not been moved; see how the dashed lines intersect the cubes slightly differently for the two distances), and the position of the object with respect to the background (a change in the alignment of the target with the background cubes). Note that only the three motion parallax cues were manipulated to indicate different distances for the two trials of each pair. All other cues were identical in the paired trials. The geometry of these displacements is sketched after this list.
  • Egocentric position only (second column): In this condition subjects only saw a target sphere. As in the All cues condition, the simulated target moved when subjects moved, so that its position with respect to the subject was consistent with a location 10 cm farther away or 10 cm closer than the ‘true’ distance of the sphere. In this condition the egocentric position was the only cue indicating that the target was further away or closer than the ‘true’ position, because an untextured sphere looks the same from different vantage points and there were no objects in the background that could be used to help detect a change in position.
  • Orientation only (third column): In this condition subjects only saw a target cube. The only manipulation was that the cube rotated when subjects moved, so that they received the view of the target that they would get if it were 10 cm farther away or 10 cm closer than the ‘true’ distance (note in Figure 2 how the orange line intersects the orange square in the same way as the dashed line intersects the dashed square). In this case there was no information from the relative motion between objects (because the target was the only object that was visible). Note that in this condition the egocentric position indicated that the target was at the ‘true’ position (because the target did not change its simulated lateral position; it only rotated).
  • Relative position only (fourth column): In this condition subjects saw a target sphere and four background cubes. The target sphere's position was not manipulated, so again the egocentric position was consistent with the ‘true’ distance of the target. In this condition, the background cubes moved in the direction opposite to the subject's movement to produce relative motion that was consistent with the sphere being 10 cm farther away, or moved in the same direction as the subject to produce relative motion that was consistent with the sphere being 10 cm closer than the ‘true’ position.
  • Retinal cues (last column): In this condition subjects saw a target cube and four background cubes. We combined the manipulations of the Orientation only and the Relative position only conditions (when subjects moved, the target cube rotated and the background cubes were displaced in accordance with the target being further away or closer). Again, the target was not displaced, so the egocentric position indicated that the target was at the ‘true’ position.
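The displacements and rotation needed to make a manipulated cue consistent with a target 10 cm farther away or closer follow from simple viewing geometry. The sketch below is our own reconstruction of one way to compute them, not the authors' rendering code; the 3 mm head movement in the example is an assumed value, while 0.45 m and 0.75 m correspond to the approximate target and background distances. 

    # Our reconstruction of the geometry (not the authors' code). 'h' is the lateral
    # (or vertical) head displacement within a trial, 'd' the 'true' target distance,
    # 'd_sim' the distance the manipulated cue should indicate (d + 0.10 or d - 0.10 m),
    # and 'd_bg' the distance of the background cubes.

    # All cues / Egocentric position only: displace the simulated target so that its
    # direction from the moving eye changes as it would for a target at d_sim.
    target_shift <- function(h, d, d_sim) h * (1 - d / d_sim)

    # Orientation only: rotate the cube so that its orientation relative to the line
    # of sight changes as it would for a cube at d_sim (sign depends on the rendering conventions).
    cube_rotation <- function(h, d, d_sim) atan(h / d) - atan(h / d_sim)   # radians

    # Relative position only: leave the target in place and displace the background so
    # that the target-background relative motion matches a target at d_sim
    # (small-angle approximation).
    background_shift <- function(h, d, d_sim, d_bg) h * d_bg * (1 / d_sim - 1 / d)

    # Example (assumed 3 mm head movement, simulating a target 10 cm farther away):
    target_shift(0.003, 0.45, 0.55)            # > 0: same direction as the head
    background_shift(0.003, 0.45, 0.55, 0.75)  # < 0: opposite to the head's movement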
In Experiment 1, the Retinal cues condition was performed first, followed by the All cues condition. Afterwards the three conditions in which only one of the cues was manipulated were performed in a random order. In Experiment 2, the All cues condition was performed first, followed by the three conditions in which only one cue was manipulated (in a random order) and finally the Retinal cues condition. Twelve of the 34 subjects took part in each condition. Within each experiment, the same 12 subjects took part in the three conditions in which only one cue was manipulated, but some of the subjects were different for the other two conditions (due to availability). Each condition took about 20 min to be completed. 
Data analysis
All analyses were performed using R Statistical Software (R Development Core Team, 2014). To quantify the effect of the manipulation we determined the difference between the mean pointing distance (measured from the position half way between the subjects' eyes at the moment that the pointing position was determined) for targets simulated to be far away and targets simulated to be near. Since the pairs of trials were matched in everything except for the manipulated motion parallax cues, any difference can be attributed to the manipulation. The more subjects relied on the manipulated motion parallax cue(s), the bigger the difference between pointing at the paired targets. 
To get an impression of the extent to which subjects moved their head (i.e., translated the eyes) and whether this depended on the manipulations, we quantified movements of the head by the displacement of the position midway between the eyes. We determined the peak speed at which the head moved (irrespective of direction), and the peak-to-peak amplitude of the lateral, vertical, and sagittal components of the head movement for each trial. These measures were determined for the interval of time from when the target appeared until when subjects finished the pointing movement. 
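These measures could be computed per trial roughly as follows (a sketch assuming an N × 3 matrix of head positions in mm, sampled at 250 Hz and restricted to the analysed interval; not the authors' code). 

    # Sketch of the head-movement measures for one trial (assumptions as above).
    head_measures <- function(head_pos, rate_hz = 250) {
      dt <- 1 / rate_hz
      speed <- sqrt(rowSums(diff(head_pos)^2)) / dt          # mm/s, irrespective of direction
      list(peak_speed   = max(speed),
           peak_to_peak = apply(head_pos, 2, function(p) diff(range(p))))   # per axis
    }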
We used one-tailed one-sample t tests (across subjects) to examine whether motion parallax influenced subjects' pointing endpoints in each condition (whether they pointed further away when motion parallax cues indicated that the target was further away). We also used paired, one-tailed t tests to determine whether the magnitudes of such influences (the differences between values for the ‘further away’ and the ‘closer’ condition) were larger in the monocular than in the binocular trials of Experiment 2. 
One might expect an increase in head movements due to monocular viewing. To test whether this occurred on a trial-by-trial basis, we used paired, one-tailed t tests to examine whether the speed and amplitude of the subjects' head movements were larger in the monocular than in the binocular trials of Experiment 2. To test whether the presence of monocular trials in Experiment 2 induced more head movements in that experiment in general, we used unpaired, one-tailed t tests to examine whether the speed and amplitude of the subjects' head movements were larger in the (binocular) conditions of Experiment 2 than in the same conditions of Experiment 1. To determine whether the head movement is primarily due to sway or a consequence of the pointing movement, we correlated the peak velocity of the head movements with the peak velocity of the hand movements (independently of the condition). 
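In R, the tests described above amount to calls of the following form; the function and variable names are our own, and the data layout is assumed (per-subject means for the ‘farther’ and ‘closer’ members of the trial pairs). 

    # Does the manipulation shift the indicated distance in the expected direction?
    manipulation_test <- function(far, near) {
      effect <- far - near                               # per-subject effect of the manipulation
      t.test(effect, mu = 0, alternative = "greater")    # one-tailed one-sample t test
    }

    # Is the effect larger with monocular than with binocular viewing (Experiment 2)?
    mono_vs_bino_test <- function(effect_mono, effect_bino) {
      t.test(effect_mono, effect_bino, paired = TRUE, alternative = "greater")
    }

    # Is the head movement part of the pointing movement, or mainly postural sway?
    head_hand_correlation <- function(peak_head_speed, peak_hand_speed) {
      cor.test(peak_head_speed, peak_hand_speed)
    }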
Results
In Figure 3 we show the pointing endpoints and some head movements of a representative subject in the All cues condition of Experiment 1. As in previous studies (e.g., Brenner & van Damme, 1999; Sousa et al., 2010), the pointed position in depth increased with increasing simulated distance, but there was considerable variability across trials, the range of distances was underestimated, and there were systematic idiosyncratic biases (in this example a tendency to point about 10 cm too nearby; e.g., Sousa et al., 2010; Kuling, Brenner, & Smeets, 2016). In the current study, each target was presented twice at the same simulated position. The small differences in ‘true’ distance between the two trials of each pair are due to the target's distance being measured from the position of the subject's head when the pointing movement ended, which was of course not always at exactly the same place. 
Figure 3
 
Indicated distances and selected head movements of a representative subject in the all cues condition of Experiment 1. (A) Distance of the endpoints of the subject's pointing movements as a function of the ‘true’ distance from the subject's head. The mean difference between the pointed positions in depth for the closer and further targets was 0.57 cm (SEM = 0.4 cm) for this subject in this condition (brown dots slightly above green ones). Five arbitrarily chosen pairs of settings are represented by larger symbols connected by lines. The difference in ‘true’ distance between the two targets of each pair is the result of the subject's head not being at precisely the same position throughout the session. Note that what subjects saw only changed in a manner that is consistent with the target being 10 cm farther away (brown dots) or 10 cm closer (green dots) when the subjects moved their heads. (B) Lateral, vertical, and sagittal displacement of the head (from its initial position on that trial) during the five selected pairs of trials. The type of line denotes paired trials. No effect of the manipulation is visible in the head movements.
The only difference between the two targets within a pair of trials was that when the subject moved his or her head, the motion parallax cues were either consistent with the target being 10 cm farther away (brown dots in Figure 3A) or with the target being 10 cm closer (green symbols). All other cues were consistent with the same distance for both targets of a pair. Five pairs of dots have been highlighted and connected by different types of lines to illustrate that the influence of motion parallax is small in comparison with the variability (so the subject often pointed further away for the target for which motion parallax indicated that it was nearer, as is for example the case for the leftmost highlighted pair of trials). To evaluate how manipulating the motion parallax influenced the estimated distance we therefore averaged the differences across all pairs of trials. 
In Figure 3B we show the head movements for the five pairs of trials that were highlighted in Figure 3A (green lines for the trials in which the target was simulated to be closer and brown lines for the trials in which it was simulated to be further away; paired trials share the same line type). The lateral, vertical, and sagittal components are shown in different panels. Positions are aligned with respect to the initial position of the head (which of course was different for each trial) to illustrate the fact that there was no evident net direction of motion. 
Figure 4 summarizes the influence of the various combinations of motion parallax cues. The values in this figure are the mean differences between pointing at paired targets in each condition (see examples of paired trials in Figure 3A). A value of 0 in this plot means that the manipulation (selected motion parallax cues indicating that the target is 10 cm farther away or 10 cm closer) had no effect. A value of 20 cm would indicate that subjects relied solely and perfectly on the manipulated motion parallax cues. The left part of Figure 4 shows the results of Experiment 1 and the right part shows the results of Experiment 2, distinguishing between pairs of trials in which the target was presented binocularly (red dots) and monocularly (blue dots). 
Figure 4
 
Mean difference between pointing at paired targets for all conditions of both experiments. Color differentiates between targets presented to both eyes (red) or only to one eye (blue). Error bars are standard errors of the mean across subjects. The * symbol indicates that the mean is significantly different from zero. For Experiment 2, the + symbol indicates that manipulating motion parallax cues had significantly more effect when vision was monocular than when it was binocular.
The results of Experiment 1 suggest that all the manipulated cues contributed modestly to the judged distance. Despite the similarity between the magnitudes, one-tailed t tests reveal that only the effects in the All cues and the Relative position only conditions are significantly different from 0 (t(11) = 1.88, p = 0.044 and t(11) = 1.86, p = 0.045, respectively). The similarity between the mean magnitude of the effect in the All cues condition and in the three single-cue conditions suggests that the combined effect of the three cues is not the sum of the effects that each of them has as an independent cue. When the target was presented binocularly in Experiment 2 (red dots), the only condition in which the effect is significant is the All cues condition (one-tailed t test; t(11) = 3.70, p = 0.002). 
The effect of the manipulation was clearly larger when the target was presented monocularly (blue dots in Figure 4) in three conditions of Experiment 2. One-tailed t tests indicate that the difference between pointing at paired targets is significantly different from 0 for the All cues condition (t(11) = 6.14, p < 0.001), the Retinal cues condition (t(11) = 7.99, p < 0.001), and the Relative position only condition (t(11) = 4.37, p < 0.001). The effects in these three conditions are not only different from 0; one-tailed paired t tests comparing the binocular and the monocular results reveal that the effects are also significantly larger in the monocular case than when images were presented to both eyes (All cues: t(11) = 5.03, p < 0.001; Retinal cues: t(11) = 5.83, p < 0.001; Relative position only: t(11) = 2.28, p = 0.02). 
An obvious reason for the modest influence of motion parallax cues is that subjects only obtained information from motion parallax cues if they moved their heads laterally or vertically (moving backwards or forwards gave rise to changes in the target's image that correspond with the simulated position, so this aspect of motion parallax was consistent with the ‘true’ distance in both targets of each pair). We determined several measures of how much subjects moved their heads. We considered the period from when the target appeared until when the subject finished the pointing movement. We did so separately for the different conditions to see whether having different cues influenced the way subjects moved their heads. Figure 5A shows that the head's peak speed was similar in all conditions of both experiments. Thus, we see no evidence of subjects systematically moving their heads faster to obtain more information from motion parallax cues in certain conditions. We did not find a correlation between the peak velocities of head and hand (r = −0.008; p = 0.22), indicating that the movements of the head are not part of the pointing movement, but are presumably mainly due to postural sway. 
Figure 5
 
(A) Mean peak speed of head movement (irrespective of direction). (B) Mean maximal amplitudes of lateral, vertical and sagittal head displacements. Details as in Figure 4.
Figure 5B shows the peak-to-peak amplitude of the head displacement. This measure is largest in the sagittal direction and smallest in the vertical direction. This is in line with the results reported by Aytekin and Rucci (2012), who found that in a task in which people just had to fixate an object located at eye level while standing as still as possible, the head moved most in the sagittal direction. The t tests did not reveal any systematic increase in head movement speed or amplitude when viewing with one eye (Experiment 2) or between the two experiments. Thus, we see no evidence that subjects made larger head movements to get more information from motion parallax cues when binocular information was removed. 
Discussion
Our most important finding is that motion parallax influences judgments of distance even when people only make unintentional head movements of a few millimeters. Apparently, these small movements of the eyes and head while standing still are enough to generate useful motion parallax, as proposed by Aytekin and Rucci (2012): we found small but consistent influences of our manipulations on judgments of distance, even when binocular cues were present. That the head movements were not specifically made in order to obtain such information follows from our answer to the question of whether the head moves differently when looking with only one eye than when looking with both eyes. The answer to the question of which of the three motion parallax cues are actually used is less simple. 
The role of the three motion parallax cues
We know that depth judgments are the result of combining many cues, with the weight attributed to each cue depending on its precision (e.g., Landy, Maloney, Johnston, & Young, 1995) and reliability (e.g., van Beers, van Mierlo, Smeets, & Brenner, 2011). The most obvious cues for judging depth are binocular cues such as vergence (e.g., Brenner & van Damme, 1998) and disparity (e.g., Rogers & Graham, 1982), which is why we included monocular trials in the second experiment. By removing binocular cues to distance we expected the influence of motion parallax cues to become larger (Landy et al., 1995; Louw et al., 2007). This was indeed the case. This is not surprising, because binocular information indicating the ‘true’ distance is absent when the targets are presented monocularly, so one must rely more on the remaining cues, such as motion parallax, even if they are not reliable. However, even the combined effect of all three motion parallax cues under monocular viewing is quite modest (considering the difference of 20 cm indicated by the motion parallax cues in question), which probably means that image size (e.g., Sousa et al., 2011b) and perhaps accommodation are important distance cues under these circumstances. 
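For reference, the reliability-weighted combination rule that underlies this reasoning (e.g., Landy et al., 1995) can be written compactly. The sketch below is textbook material with assumed example numbers; it is not a model fitted to the present data. 

    # Reliability-weighted cue combination (standard form; not fitted to these data).
    combine_cues <- function(estimates, sds) {      # cue estimates and their standard deviations
      w <- (1 / sds^2) / sum(1 / sds^2)             # weights proportional to each cue's reliability
      sum(w * estimates)
    }

    # Assumed example: a precise binocular estimate (0.45 m, sd 1 cm) dominates an
    # imprecise parallax estimate (0.55 m, sd 5 cm).
    combine_cues(estimates = c(0.45, 0.55), sds = c(0.01, 0.05))   # close to 0.45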
Due to the modest role of motion parallax under these circumstances, the interpretation of the results for the manipulations of different combinations of cues is not completely straightforward. Considering which cues were manipulated in the conditions that showed a significant effect, there is only clear evidence that relative position is used to judge depth. All five conditions in which the manipulation had a significant effect included this cue, and only three conditions that included this cue did not have a significant effect (Figure 4). Despite this, we do not conclude that only this way of using motion parallax is effective, because if that were the case we should see an equal effect for the conditions that include this manipulation (All cues, Relative position only, Retinal cues) and no effect at all in the other two conditions (Egocentric position only, Orientation only). This does not seem to be the case. 
For the Orientation only condition, we did not find any significant effect. This does not necessarily mean that changes in orientation never play a role. In Experiment 1, the magnitude of the nonsignificant effect in the Orientation only condition was similar to the significant effects in some other conditions. Similarly, in line with the presence of an effect of orientation, the effect with both Retinal cues for the monocular targets of Experiment 2 appears to be larger than that with relative position only. On the other hand, with binocular vision the effect with both Retinal cues is similar to (Experiment 1) or even smaller than (Experiment 2) the effect with relative position only. Thus, we cannot yet be certain that changes in an object's orientation with respect to the line of sight when an observer moves do not contribute to judgments of its distance. Our target object's modest extent in depth might just make this cue too imprecise to give rise to a measurable effect in our study. Note that we are here referring to judgments of the distance to an object when making very small head movements. We already know that changes in relative positions within an object when making larger head movements affect judgments of the object's extent in depth, because many of the classical motion parallax studies were done with single corrugated surfaces (e.g., Rogers & Graham, 1979, 1982; Graham & Rogers, 1982; Rogers & Rogers, 1992; van Damme & van de Grind, 1996). 
The support for the use of changes in the target object's egocentric position is stronger. First of all, in all three cases in which only the egocentric position was manipulated there appears to be some effect of the manipulation, although none of them is significant on its own. Moreover, in all cases the effect with all cues present appears to be larger than the effect in the Retinal cues condition, which can only be due to the additional manipulation of the egocentric position cue in the All cues condition. Thus we would tentatively conclude that there is support for the use of the egocentric position cue, although the evidence is less conclusive than for the use of relative position. 
We did not attempt to analyze our results in terms of linear combinations of effects of the individual cues (e.g., Landy et al., 1995; Louw et al., 2007). The main reason for this is that we know that the way in which we manipulated the cues influences the precision and reliability of other cues as well. The most obvious example of this is that removing the background in order to remove the relative position cue certainly also influences the reliability of binocular cues (Sousa et al., 2010). There is ample evidence that distance judgments are more precise or accurate in the presence of reference objects (e.g., Ferris, 1972; Brenner & van Damme, 1999; Glennerster, Rogers, & Bradshaw, 1998; Coello & Magne, 2000). A less obvious example is that changing the target object's shape might influence other cues. Not having the same subjects in all conditions also makes it more difficult to directly compare the results across conditions. 
Head movements
Although very little head movement is theoretically needed to obtain information from motion parallax (Aytekin & Rucci, 2012), larger head movements obviously provide more reliable information. When subjects were explicitly instructed to make larger head movements in previous studies, they were better at solving tasks that require depth judgments (e.g., Gonzalez, Steinbach, Ono, & Wolf, 1989; Steinbach, Ono, & Wolf, 1991). The fact that we did not find more head movement when vision was monocular than when it was binocular is consistent with earlier studies showing that it takes quite a long time for people to learn to make larger head movements to compensate for vision being restricted to one eye. Marotta, Perrot, Nicolle, and Goodale (1995) reported that enucleated patients made larger lateral and vertical head movements as the time after enucleation progressed. In a different study, Marotta, Perrot, Nicolle, Servos, and Goodale (1995) reported that subjects with normal vision do not make larger head movements when one eye is covered. 
Since our subjects' head movements were quite small (see Figures 3B and 5), finding small effects of motion parallax cues (Figure 4) is not surprising. Rogers and Graham (1982) pointed out that in some ways, the use of motion parallax is similar to the use of binocular stereopsis for 3-D perception. Where recovering depth from stereopsis is based on the differences between a scene as observed by two eyes that are about 6.5 cm apart, recovering depth from motion parallax is based on the differences between the scenes when observed at different moments in time by a moving eye. In their experiments subjects were instructed to move laterally by about 15 cm, and motion parallax was about as reliable as binocular information. In our study, the amplitude of the head movements is an order of magnitude smaller than the distance between the two eyes, so considering the analogy with stereopsis it is evident that motion parallax should be much less effective than stereopsis. 
Sources of information in the monocular conditions
Since the overall effect of manipulating the distance indicated by motion parallax remained relatively modest (about 10% of the simulated displacement at most), subjects must have relied to a large extent on other cues or even on a default ‘expected’ distance. The latter and the distance indicated by accommodation (which did not follow the simulated distance) might be especially important in the monocular conditions, in which there was not much information about the ‘true’ distance. Indeed, when subjects had binocular vision the pointing positions were more closely related to the ‘true’ distances than they were when subjects had monocular vision. For example, in the All cues condition of Experiment 2, where all the possible motion parallax cues were manipulated, the mean slope between where subjects pointed and the ‘true’ distance (i.e., the mean of slopes of linear fits to clouds of points such as shown in Figure 3A) was 0.69 in the binocular trials, while it was only 0.26 in the monocular ones. 
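The slopes referred to here are simply those of per-subject linear fits of the indicated distance against the ‘true’ distance; a minimal sketch (with assumed column names, not the authors' code) is: 

    # Per-subject slope relating the indicated distance to the 'true' distance.
    indicated_slope <- function(subject_trials) {
      coef(lm(indicated ~ true_dist, data = subject_trials))["true_dist"]
    }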
The most important monocular cue (apart from motion parallax) is probably retinal image size. Our objects' sizes varied across pairs of targets, but previous work has shown that retinal image size is used to judge distance even when the real size of the object is not known (Lugtigheid & Welchman, 2010; Sousa et al., 2011b, 2012a, 2012b, 2013). This may appear strange, because the same retinal image size can correspond to a large target far away or to a small one nearby, but people apparently make assumptions about the range of credible sizes for the object in question (Collett, Schwarz, & Sobel, 1991; Sousa et al., 2011b; López-Moliner & Keil, 2012). Image size therefore provides a cue that correlates with the ‘true’ distance. As a matter of fact, the slopes relating where subjects pointed to the target's size were negative in all conditions, indicating that the smaller the target, the further away subjects pointed. 
A second cue that is consistent with the ‘true’ distance is the expansion and contraction of the images when the subjects moved forwards or backwards. Accommodation was not consistent with the ‘true’ distance, because it always indicated the same distance. However, any contribution that it made to our subjects' judgments will have counteracted the effect of our manipulation in the same way as any cue that is consistent with the ‘true’ distance does, because it provides evidence for the same distance for both targets of each pair. 
How representative is our study of the role of motion parallax in daily life?
In daily life, people move more when reaching out for objects than our subjects did (as was for instance reported by Louw et al., 2007), because our setup limited the subjects' ability to move forwards and to the sides, and they knew that they had to keep looking into the mirrors. On the other hand, people may sway less when they are in an illuminated environment than when standing in the dark (e.g., Edwards, 1946; Paulus, Straube, & Brandt, 1984; Ashmead & McCarty, 1991; Day, Steiger, Thompson, & Marsden, 1993). In a fully illuminated environment other cues are probably also more reliable than they are when looking at isolated objects in the dark, but the relative shifts that we find to be the most evident source of motion parallax information will also be more reliable. One reason to suspect that we might be underestimating the role of motion parallax is that we only manipulated the motion parallax that arises when subjects make lateral and vertical head movements, not the motion parallax that arises when they move backwards and forwards, which is actually the direction in which our subjects moved most. Altogether, our results suggest that information from motion parallax contributes to judgments of distance, even when people think they are standing still. 
Acknowledgments
This work was supported by grant NWO 464-13-169 from the Dutch Organization for Scientific Research. 
Commercial relationships: none. 
Corresponding author: Cristina de la Malla. 
Email: c.delamalla@vu.nl. 
Address: Research Institute MOVE, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands. 
References
Ashmead D. H., McCarty M. E. (1991). Postural sway of human infants while standing in light and dark. Child Development, 62, 1276–1287.
Aytekin M., Rucci M. (2012). Motion parallax from microscopic head movements during visual fixation. Vision Research, 70, 7–17.
Bradshaw M. F., Parton A. D., Eagle R. A. (1998). The interaction of binocular disparity and motion parallax in determining perceived depth and perceived size. Perception, 27, 1317–1331.
Bradshaw M. F., Parton A. D., Glennerster A. (2000). The task-dependent use of binocular disparity and motion parallax information. Vision Research, 40, 3725–3734.
Braunstein M. L. (1966). Sensitivity of the observer to transformations of the visual field. Journal of Experimental Psychology, 72, 683–689.
Brenner E., van Damme W. J. (1998). Judging distance from ocular convergence. Vision Research, 38, 493–498.
Brenner E., van Damme W. J. (1999). Perceived distance, shape and size. Vision Research, 39, 975–986.
Brenner E., van den Berg A. V., van Damme W. J. (1996). Perceived motion in depth. Vision Research, 36, 699–706.
Coello Y., Magne P. (2000). Determination of target distance in a structured environment: selection of visual information for action. European Journal of Cognitive Psychology, 12, 489–519.
Collett T. S., Schwarz U., Sobel E. C. (1991). The interaction of oculomotor cues and stimulus size in stereoscopic depth constancy. Perception, 20, 733–754.
Day B. L., Steiger M. J., Thompson P. D., Marsden C. D. (1993). Effect of vision and stance width on human body motion when standing: Implications for afferent control of lateral sway. Journal of Physiology, 469, 479–499.
Dees J. W. (1966). Accuracy of absolute visual distance and size estimation in space as a function of stereopsis and motion parallax. Journal of Experimental Psychology, 72, 466–476.
Edwards A. S. (1946). Body sway and vision. Journal of Experimental Psychology, 36, 526–535.
Ferris S. H. (1972). Motion parallax and absolute distance. Journal of Experimental Psychology, 95, 258–263.
Gibson E. J., Gibson J. J., Smith O. W., Flock H. (1959). Motion parallax as a determinant of perceived depth. Journal of Experimental Psychology, 58, 40–51.
Gibson J. J. (1950). Perception of the visual world. Cambridge, MA: Riverside Press.
Gibson J. J. (1966). The senses considered as perceptual systems. London: George Allen and Unwin.
Gillam B. (1995). The perception of spatial layout from static optical information. In Epstein W. Rogers S. (Eds.) Perception of space and motion (pp. 23–67). London: Academic Press.
Glennerster A., Rogers B. J., Bradshaw M. F. (1998). Cues to viewing distance for stereoscopic depth constancy. Perception, 27, 1357–1365.
Gogel W. C. (1961). Convergence as a cue to the perceived distance of objects in a binocular configuration. Journal of Psychology, 52, 303–315.
Gogel W. C. (1977). An indirect measure of perceived distance from oculomotor cues. Perception and Psychophysics, 21, 3–11.
Gogel W. C., Tietz J. D. (1973). Absolute motion parallax and the specific distance tendency. Perception and Psychophysics, 13, 184–292.
Gogel W. C., Tietz J. D. (1979). A comparison of oculomotor and motion parallax cues of egocentric distance. Vision Research, 19, 1161–1170.
Gonzalez E. G., Steinbach M. J., Ono H., Wolf M. E. (1989). Depth perception in children enucleated at an early age. Clinical Vision Science, 4, 173–177.
Graham M., Rogers B. (1982). Simultaneous and successive contrast effects in the perception of depth from motion parallax and stereoscopic information. Perception, 11, 247–262.
Johnston E. B., Cumming B. G., Landy M. S. (1994). Integration of stereopsis and motion shape cues. Vision Research, 34, 2259–2275.
Kuling I. A., Brenner E., Smeets J. B. J. (2016). Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time. Acta Psychologica, 166, 31–36.
Landy M. S., Maloney L. T., Johnston E. B., Young M. (1995). Measurement and modelling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412.
Leibowitz H., Moore D. (1966). Role of changes in accommodation and convergence in the perception of size. Journal of the Optical Society of America, 56, 1120–1123.
López-Moliner J., Keil M. (2012). People favour imperfect catching by assuming a stable world. PLoS ONE, 7 (4), 1–8.
Louw S., Smeets J. B. J., Brenner E. (2007). Judging surface slant for placing objects: A role for motion parallax. Experimental Brain Research, 183, 149–158.
Lugtigheid A., Welchman A. (2010). A surprising influence of retinal size on disparity-defined distance judgments. Journal of Vision, 10 (7): 63, doi:10.1167/10.7.63.
Marotta J. J., Perrot T. S., Nicolle D., Goodale M. A. (1995). The development of adaptive head movements following enucleation. Eye, 9, 333–336.
Marotta J. J., Perrot T. S., Nicolle D., Servos P., Goodale M. A. (1995). Adapting to monocular vision: grasping with one eye. Experimental Brain Research, 104, 107–114.
McIntosh R. D., Lashley G. (2008). Matching boxes: Familiar size influences action programming. Neuropsychologia, 46, 2441–2444.
Paulus W. M., Straube A., Brandt T. (1984). Visual stabilization of posture: Physiological stimulus characteristics and clinical aspects. Brain, 107, 1143–1163.
R Development Core Team. (2014). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Rogers B. (2009). Motion parallax as an independent cue for depth perception: A retrospective. Perception, 38, 907–911.
Rogers B., Graham M. (1979). Motion parallax as an independent cue for depth perception. Perception, 8, 125–134.
Rogers B., Graham M. (1982). Similarities between motion parallax and stereopsis in human depth perception. Vision Research, 22, 261–270.
Rogers S., Rogers B. J. (1992). Visual and nonvisual information disambiguate surfaces specified by motion parallax. Perception and Psychophysics, 52, 446–452.
Sousa R., Brenner E., Smeets J. B. J. (2010). A new binocular cue for absolute distance: Disparity relative to the most distant structure. Vision Research, 50, 1786–1792.
Sousa R., Brenner E., Smeets J. B. J. (2011a). Objects can be localized at positions that are inconsistent with the relative disparity between them. Journal of Vision, 11 (2): 18, 1–6, doi:10.1167/11.2.18.
Sousa R., Brenner E., Smeets J. B. J. (2011b). Judging an unfamiliar object's distance from its retinal image size. Journal of Vision, 11 (9): 10, 1–6, doi:10.1167/11.9.10.
Sousa R., Smeets J. B. J., Brenner E. (2012a). The effect of variability in other objects' sizes on the extent to which people rely on retinal image size as a cue for judging distance. Journal of Vision, 12 (10): 6, 1–8, doi:10.1167/12.10.6.
Sousa R., Smeets J. B. J., Brenner E. (2012b). Does size matter? Perception, 41, 1532–1534.
Sousa R., Smeets J. B. J., Brenner E. (2013). The influence of previously seen objects' sizes in distance judgments. Journal of Vision, 13 (2): 2, 1–8, doi:10.1167/13.2.2.
Steinbach M. J., Ono H., Wolf M. E. (1991). Motion parallax judgements of depth as a function of the direction and type of head movement. Canadian Journal of Psychology, 45, 92–98.
van Beers R. J., van Mierlo C. M., Smeets J. B. J., Brenner E. (2011). Reweighting visual cues by touch. Journal of Vision, 11 (10): 20, 1–16, doi:10.1167/11.10.20.
van Damme W. J. M., van de Grind W. A. (1996). Non-visual information in structure-from-motion. Vision Research, 36, 3119–3127.
Wallach H., Floor L. (1971). The use of size matching to demonstrate the effectiveness of accommodation and convergence as cues for distance. Attention, Perception, and Psychophysics, 10 (6), 423–428.