October 2018
Volume 18, Issue 11
Open Access
Effects of visual stimulus characteristics and individual differences in heading estimation
Author Affiliations
  • Ksander N. de Winkel
    Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
    ksander.dewinkel@tuebingen.mpg.de
  • Max Kurtz
    Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
    Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands
  • Heinrich H. Bülthoff
    Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Journal of Vision October 2018, Vol.18, 9. doi:10.1167/18.11.9

      Ksander N. de Winkel, Max Kurtz, Heinrich H. Bülthoff; Effects of visual stimulus characteristics and individual differences in heading estimation. Journal of Vision 2018;18(11):9. doi: 10.1167/18.11.9.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some, but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle, and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, cosine-bell velocity profile, and a ground plane scene rather than a dot cloud scene. Although the factors did affect error magnitude, differences in the direction of the error were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.

Introduction
The central nervous system is able to estimate heading (i.e., the direction of horizontal linear self-motion) from optic flow information (Gibson, 1950). Optic flow refers to the pattern of motion generated as reflections of objects in the field of view (FOV) move across the retina when we move. For movement along a straight path, this pattern radially expands from a single point along the direction of heading—the focus of expansion (FOE). Heading can be estimated by localizing this point. When the FOE lies outside the FOV, the direction of heading cannot be derived from the FOE, but may be estimated from the direction of vectors represented in the optic flow (Crowell & Banks, 1996; Lappe, Bremmer, & Van den Berg, 1999). 
Although errors in heading estimation are generally in the order of a few degrees (Telford, Howard, & Ohmi, 1995; W. H. Warren & Kurtz, 1992; W. H. Warren, Morris, & Kalish, 1988), systematic patterns in both constant (bias) and variable (noise) errors have been reported (Crane, 2012; Cuturi & MacNeilage, 2013; de Winkel, Katliar, & Bülthoff, 2015, 2017). It has been proposed that these errors have a neurophysiological cause: Visual optic flow cues are processed in the dorsal medial superior temporal (MSTd) area of the brain, and the distribution of preferred directions among neurons in this area has been found to be skewed toward lateral directions (Gu, Fetsch, Adeyemo, DeAngelis, & Angelaki, 2010; Saito, Yukie, Tanaka, Hikosaka, Fukada, & Iwai, 1986). It has also been shown that this could lead to constant errors away from the fore-aft axis (overestimation) and to smaller variable errors for heading angles closer to the cardinal axes (Gu et al., 2010). From an evolutionary perspective, such skew in the distribution of preferred directions could be advantageous because we mostly move along the direction of our gaze (Cuturi & MacNeilage, 2013). 
However, whereas some studies indeed find that heading estimates are biased away from the fore-aft axis (overestimation; Cuturi & MacNeilage, 2013; Telford & Howard, 1996; R. Warren, 1976), other studies report biases toward the fore-aft axis (underestimation; d'Avossa & Kersten, 1996; de Winkel et al., 2015; van den Berg & Brenner, 1994a). This result is not readily explained by the observations on preferred directions in neural populations. Moreover, recent work has suggested that the direction of bias may differ between individuals (de Winkel et al., 2015, 2017). 
An assessment of previous studies shows a considerable variability in the characteristics of the visual stimuli that were used, suggesting that the choice of visual stimulus may affect errors in heading estimation. In the present study, we investigate empirically whether and how various characteristics of the visual stimulus affect errors in heading estimation. In the following, we list general properties of visual stimuli and discuss previous findings on how each of these properties may affect errors in heading estimation. 
Field of view
A salient difference between stimuli used in different studies is the size of the FOV. The equipment used in past studies varied between head-mounted displays (Hummel, Cuturi, MacNeilage, & Flanagin, 2016), projection screens (de Winkel et al., 2015, 2017; Gu, DeAngelis, & Angelaki, 2007), LCD screens (Crane, 2014; Cuturi & MacNeilage, 2013), and plain monitors (d'Avossa & Kersten, 1996; W. H. Warren & Kurtz, 1992), varying in resolution and display size. Due to these differences, the FOV ranged from 40° by 32° (W. H. Warren & Kurtz, 1992) to 160° by 100° (Johnston, White, & Cumming, 1973), horizontal by vertical, respectively, with most current studies employing a FOV of around 100° by 80° (Butler, Campos, & Bülthoff, 2015; Crane, 2014; Cuturi & MacNeilage, 2013). 
The FOV may affect heading estimation by means of the FOE: When the FOE is outside the FOV, heading cannot be estimated by locating the FOE. In this case, the central nervous system may estimate heading by triangulation of vectors based on reference points in the optic flow (Koenderink & van Doorn, 1987). For small FOV, constant errors predominantly indicate bias toward the fore-aft axis (Li, Peli, & Warren, 2002; W. H. Warren & Kurtz, 1992), and variable errors are increased compared to larger FOV (Crowell & Banks, 1993; Gu et al., 2007; MacNeilage, Banks, Berger, & Bülthoff, 2007). 
Stereopsis
A second difference between studies is the availability of binocular disparity cues, and thus the possibility of stereopsis. Some studies presented stimuli without stereoscopic disparity cues—either monocular (Banks, Ehrlich, Backus, & Crowell, 1996; Crowell & Banks, 1993) or binocular, but with the same image for both eyes (de Winkel, Weesie, Werkhoven, & Groen, 2010; Johnston et al., 1973). Other studies presented stimuli with binocular disparity cues, using either active stereo glasses (de Winkel et al., 2015, 2017; Fetsch, Turner, DeAngelis, & Angelaki, 2009) or anaglyph glasses (Crane, 2012; Butler, Smith, Campos, & Bülthoff, 2010). 
The availability of disparity cues in optic flow stimuli provides an additional source of information for the observer, especially in terms of depth order. The depth order reflects the relative distance of objects in the environment and can be used to distinguish between self-motion and eye movement or random movements in the optic flow. The ability to make this distinction may improve localization of the FOE, and thereby reduce error in heading estimates. Interestingly, most studies reporting overestimation of heading used stimuli with disparity (Crane, 2012; Cuturi & MacNeilage, 2013; Telford & Howard, 1996) whereas studies without disparity primarily report underestimation (Johnston et al., 1973; Li et al., 2002; W. H. Warren & Kurtz, 1992). However, the availability of disparity cues is not a perfect predictor for the direction of the bias (de Winkel et al., 2015; R. Warren, 1976). Variable errors are smaller when stereoscopic depth cues are available (van den Berg & Brenner, 1994b). 
Scene layout
Various visual scenes have been used throughout studies, differing in both content and layout. Most studies used either a variation of a random dot cloud or star field stimulus (Butler et al., 2010; Crane, 2012; W. H. Warren & Kurtz, 1992), but differed considerably with respect to the objects that the clouds consisted of: Gaussian blobs (Butler et al., 2010), frontoparallel triangles (Crane, 2012; Cuturi & MacNeilage, 2013; Gu et al., 2010), circles (de Winkel et al., 2010), round particles (de Winkel, 2015, 2017), single pixels (W. H. Warren & Kurtz, 1992), or even human figures (MacNeilage et al., 2007). Likewise, studies also differed with respect to the scene layout: Some included a ground plane (Royden, Banks, & Crowell, 1992; W. H. Warren & Kurtz, 1992; de Winkel et al., 2015, 2017), whereas others did not (Crane, 2012; Cuturi & MacNeilage, 2013; Gu et al., 2007). Early research has shown that basic optic flow patterns are sufficient for perceiving heading and that heading judgments are largely independent of three-dimensional (3-D) layout and density of dots (W. H. Warren et al., 1988). However, similar to the presence or absence of binocular disparity cues, the presence or absence of a ground plane affects the type and amount of information from which the observer can judge heading. Koenderink and van Doorn (1987) pointed out that reference points in the visual scene support distance judgment of self-motion when knowledge about the layout of the points is available. This is, for example, the case when the points lie in a plane. In this case, the availability of more reference points helps to resolve uncertainty in the motion parallax and could affect error in heading estimates. Li and colleagues (2002) found that textured ground planes reduce constant error for small FOVs. Moreover, heading estimation could be affected by ground planes because the horizon adds a prominent cue to the scene. 
For instance, van den Berg and Brenner (1994a) found a bias toward the fore-aft axis when the depth of the horizon was reduced. Most studies reporting a bias away from the fore-aft axis used clouds of dots (Crane, 2012; Cuturi & MacNeilage, 2013; Hummel et al., 2016), while studies reporting a bias toward the fore-aft axis mostly relied on displays with a ground plane (de Winkel et al., 2015; Li et al., 2002; W. H. Warren & Kurtz, 1992). 
Motion profile
As a final factor, we consider the stimulus motion profile. Some studies have used constant velocity profiles (Johnston et al., 1973; Li et al., 2002; W. H. Warren & Kurtz, 1992), whereas others have used variable velocity profiles (Crane, 2012; Cuturi & MacNeilage, 2013; Telford & Howard, 1996). The latter, featuring acceleration and deceleration phases, may be considered more naturalistic, as self-motion in the real world has similar characteristics (Butler et al., 2015). Most studies reporting overestimation used raised cosine-bell velocity profiles (Crane, 2012; Cuturi & MacNeilage, 2013; Hummel et al., 2016), whereas most studies reporting underestimation used constant velocity profiles (Johnston et al., 1973; Li et al., 2002; W. H. Warren & Kurtz, 1992). However, there are again exceptions to this pattern (de Winkel et al., 2015, 2017; R. Warren, 1976). Butler and colleagues (2015) report that heading estimates are more precise for raised cosine-bell velocity profiles than for constant velocity profiles. 
Vection
Taken together, the above factors suggest another possible factor that may affect heading estimation errors. Whereas FOV, stereopsis, scene layout, and motion profile may have direct effects on heading estimation, these factors have also been found to affect vection, which is the visually induced sensation of self-motion. When vection is rated higher, observers feel more present in the displayed scene and the scene feels more compelling to them (Riecke, Schulte-Pelkum, Avraamides, Heyde, & Bülthoff, 2006). With more compelling vection, observers are more likely to interpret a visual stimulus as reflective of self-motion rather than object motion (Palmisano, 1996). Peripheral vision, and therefore the size of the FOV, strongly affects vection (Habak, Casanova, & Faubert, 2002). Although FOVs of around 60° are sufficiently large to induce vection (Andersen & Braunstein, 1985; Pretto, Ogier, Bülthoff, & Bresciani, 2009), displays with larger FOVs or even floor projection improve vection nonetheless (Trutoiu, Streuber, Mohler, Schulte-Pelkum, & Bülthoff, 2008). Similarly, stereoscopic displays have been shown to induce more compelling vection (Palmisano, 1996, 2002; Ziegler & Roy, 1998). More realistic scene layouts may also improve vection: The presence of a ground plane was shown to affect vection (Brandt, Wist, & Dichgans, 1975; Nakamura & Shimojo, 1999), and more naturalistic scenes were also shown to increase vection (Riecke et al., 2006). Finally, motion profiles with variable velocity have been shown to more effectively induce vection than constant velocity profiles (Palmisano, Allison, & Pekin, 2008). 
Since vection may be considered a reflection of the compellingness of a visual stimulus as indicative of self-motion, and the perceived direction of motion depends on the interpretation of the visual stimulus' causality, vection may have a mediating role on the nature of errors in heading estimation. 
Present study
Although the above factors each appear to affect errors in heading estimation predominantly in a particular direction, none of the factors alone forms a perfect predictor. The interpretation of the findings of previous studies is further complicated by the fact that whereas a particular factor may have been systematically varied, other factors were kept constant, with different choices for these constants between studies. We hypothesize that the size of the FOV, stereopsis, scene layout, and motion profile each affect errors in heading estimation, and that vection acts as a mediating variable. In an experiment, we presented participants with optic flow stimuli with heading angles spanning the full circle in 16 equidistant steps, and asked them to report heading and vection, while we systematically varied the factors mentioned above. Using path analyses, we determined whether the above factors affect the nature of errors in visual heading estimation. 
Methods
Ethics statement
The study was conducted in accordance with the Declaration of Helsinki. Participants were employees of the Max Planck Institute in Tübingen, Germany, or were recruited from the institute's participant database. Written informed consent was obtained from all participants prior to participation. The experiment was approved by the ethical committee of the faculty of Behavioural, Management, and Social Sciences of the University of Twente in Enschede, the Netherlands (Request number 17217). 
Participants
Measurements of 20 participants (12 female) were obtained in the study. Participants had a mean age of 25.80 years (SD = 3.53), ranging from 22 to 38 years. All subjects had normal or corrected-to-normal vision. 
Equipment
The experiment was conducted in the virtual environment of the BackproLab at the Max Planck Institute in Tübingen, Germany. The setup consisted of a large back projection screen, spanning 2.16 m horizontally and 1.62 m vertically, and a single SXGA+ projector (Christie Mirage S+3K DLP; Christie, Cypress, CA) with a resolution of 1400 by 1050 pixels. Participants were seated at an eye-height of around 1.33 m and a distance of around 1.10 m in front of the projection screen (Figure 1). Stereoscopic stimuli were displayed using active 3-D glasses (NVIDIA 3D Vision Pro LCD shuttered glasses; NVIDIA, Santa Clara, CA). The resulting setup filled a field of view of about 90° horizontally by 50° vertically. During the experiment, participants' heads were stabilized using a chin rest. The participants' view was masked so that only the screen and the devices for response collection were visible. A fixation cross was not implemented because it would allow participants to adopt a strategy of tracking scene elements relative to this point rather than estimating their heading. Heading estimates were collected using a pointer device, which consisted of a stainless steel rod of about 15 cm, connected to a potentiometer. Participants held the rod at the long end and aligned the short end such that it pointed in the direction of the perceived heading (Figure 1, left panel). The rod could be freely rotated in the horizontal plane and provided a resolution of less than 0.1°. Vection ratings were collected using a numerical keypad attached to the pointer device (Figure 1, right panel). 
Figure 1
 
Left panel: View of the back projection screen, with the head rest in the middle. Right panel: Photograph of the pointer device and numerical keypad used to register responses.
Task and stimuli
We used optic flow stimuli that simulated linear translation in the horizontal plane with different headings. Heading direction and properties of the visual stimulus were varied across trials. For each experimental trial, participants were tasked to indicate the perceived heading using the pointer device, and to report the strength of vection using the numerical keypad. The heading estimates obtained from the pointer device were practically continuous circular data (resolution < 0.1°); vection was measured on a 7-point scale, where a score of 1 = absence of a feeling of self-motion and 7 = a compelling sensation of self-motion. 
Properties of the visual scene were varied by manipulating the two-level factors FOV, disparity (stereopsis), scene (scene layout), and profile (motion profile). The levels of the factor FOV varied between an unrestricted (dummy coded “1”) and a restricted view (“0”). For the unrestricted view, the FOV was 90° (horizontal) × 50° (vertical); for the restricted view, the edges of the simulation were masked digitally, resulting in an FOV of 45° × 50°. 
Stereopsis was manipulated by generating visual stimuli with (“1”) or without disparity information (“0”) in conjunction with active 3-D glasses. Participants wore the glasses throughout the experiment. 
The two types of visual scene were either a random dot cloud (“1”, Figure 2, left panel) or a random dot ground plane (“0,” Figure 2, right panel). Dots in both types of stimuli were limited-lifetime particles with a lifetime of 1 s, randomly distributed in either a rectangular volume or along the ground plane. The rectangular volume had dimensions of 10 × 10 × 10 m, and the ground plane was 10 × 10 m. Dot density in the cloud was 10 dots/m³ and dots had a size of 5 cm. Dot density in the plane was 50 dots/m² and dots had a size of 2.5 cm. Dimensions, size, and density of the dots were based on pilot sessions. They were chosen so that their size, number, and the resulting impression of depth were comparable across conditions. Dots in both scenes were bright gray on a dark gray background to improve the efficacy of the stereoscopic display and prevent ghosting. 
Figure 2
 
Left panel: Monoscopic screenshot of random dot cloud stimulus. Right panel: Monoscopic screenshot of random dot ground plane stimulus.
Translations of the camera through the virtual environment either had a motion profile with constant velocity (“0”) or a variable velocity motion profile (“1”). The constant motion profile had a velocity of 0.075 m/s; the variable velocity profile v(t) was  
\begin{equation}\tag{1}v(t) = {{{v_{{\rm{max}}}}} \over 2}(1 - {\rm{cos}}\,\omega t)\;.\end{equation}
with ω = 2π f, f = 0.5 Hz, and maximum velocity vmax = 0.15 m/s. Both motion profiles lasted 2 s and both had a displacement of 0.15 m.  
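As a numerical check of the profiles above, the following sketch (our own Python/NumPy code, not part of the study) integrates both velocity profiles over the 2-s stimulus duration and recovers the common 0.15-m displacement.

```python
import numpy as np

def cosine_bell_velocity(t, v_max=0.15, f=0.5):
    """Variable velocity profile of Equation 1: v(t) = (v_max / 2)(1 - cos(omega * t))."""
    omega = 2.0 * np.pi * f
    return (v_max / 2.0) * (1.0 - np.cos(omega * t))

# Sample both profiles over the 2-s stimulus duration.
t = np.linspace(0.0, 2.0, 20001)
v_variable = cosine_bell_velocity(t)
v_constant = np.full_like(t, 0.075)  # constant profile: 0.075 m/s

# Integrate velocity (trapezoidal rule) to obtain total displacement.
dt = np.diff(t)
d_variable = np.sum((v_variable[:-1] + v_variable[1:]) / 2.0 * dt)
d_constant = np.sum((v_constant[:-1] + v_constant[1:]) / 2.0 * dt)
# Both displacements equal 0.15 m, as stated in the text.
```

Note that the variable profile peaks at v_max = 0.15 m/s halfway through the trajectory (t = 1 s), exactly twice the constant velocity, which is what makes the two displacements equal.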
In each experimental condition, heading was varied as a covariate. Headings were sampled from the full range of possible headings (±180°) in 16 evenly spaced steps of 22.5°, starting at −168.75° (i.e., [–168.75, –146.25, –123.75, . . .]). The cardinal axes were omitted because errors for these headings cannot be unambiguously classified as under- or overestimations. Positive angles represented clockwise or rightward directions, and negative angles represented counterclockwise or leftward directions. There were three repetitions of each type of stimulus, resulting in 2⁴ × 16 × 3 = 768 experimental trials. 
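For illustration, the heading set can be generated as follows (a Python/NumPy sketch; the variable names are our own):

```python
import numpy as np

# 16 headings in steps of 22.5 deg, starting at -168.75 deg; the cardinal
# axes (0, +/-90, and 180 deg) are deliberately never presented.
headings = np.arange(-168.75, 180.0, 22.5)
# [-168.75, -146.25, -123.75, ..., 146.25, 168.75]
```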
Stimuli were generated using the Unity game engine (Unity Technologies, San Francisco, CA, version Unity 4.2.0f4) and were controlled during the experiment using Simulink (MathWorks, Natick, MA). 
Procedure
Prior to the experiment proper, participants were informed about the experimental goals and procedures. After participants provided written informed consent, they were seated in front of the screen and asked to place their head on a chinrest. Once seated, they were familiarized with the setup. Thirty practice trials were conducted to ensure their understanding of the task and to familiarize them with the experiment. Feedback was given if large errors were noticed by the experimenter. The order of conditions and headings in the practice trials was identical for all participants. During the actual trials, conditions and headings were presented in random order. Participants were tasked to provide their heading estimates and vection ratings at the end of each trial as accurately as possible. After confirming their responses by button press, they could initiate the next trial via an additional button press. 
The experiment was divided into two sessions of 90 min each, with short breaks every 12 min. The sessions were completed on two different days to ensure alertness and minimize strain on the participants. 
Definitions
We define constant errors \(\epsilon\) as the angular difference between response r and stimulus heading θ (in radians):  
\begin{equation}\tag{2}\epsilon = {\rm{Arg}}\left( {{{{e^{ir}}} \over {{e^{i\theta }}}}} \right)\;.\end{equation}
 
Here, \({\rm{Arg}}(\cdot)\) is the argument of the complex number, yielding an angle, and i is the imaginary unit. We interpret values further away from the fore-aft axis than the stimulus heading angle as overestimations. 
To analyze variable errors, we calculated the angular standard deviation ν over the responses rn obtained in the N repetitions of each experimental condition as  
\begin{equation}\tag{3}R = {1 \over N}\sum\limits_{n = 1}^N {{e^{i{r_n}}}} \end{equation}
 
\begin{equation}\tag{4}\nu = \sqrt { - 2\ln \left| R \right|} \;.\end{equation}
 
Responses that appeared to be shifted 180° relative to the stimulus (see Exploratory data analysis section) were treated as outliers and omitted from this calculation, because they would inflate the estimated value of ν unnecessarily. Conditions for which not enough observations remained to estimate the standard deviation were also excluded from further analyses. 
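Both error measures reduce to a few lines of complex arithmetic; a minimal sketch (our own Python/NumPy code, with hypothetical example responses) is:

```python
import numpy as np

def constant_error(r, theta):
    """Signed angular difference of Equation 2: epsilon = Arg(e^{ir} / e^{i*theta})."""
    return np.angle(np.exp(1j * r) / np.exp(1j * theta))

def circular_sd(responses):
    """Angular standard deviation of Equations 3-4: nu = sqrt(-2 ln |R|)."""
    R = np.mean(np.exp(1j * np.asarray(responses)))
    return np.sqrt(-2.0 * np.log(np.abs(R)))

# Hypothetical responses (in radians) for three repetitions of one condition:
reps = np.radians([40.0, 45.0, 50.0])
eps = constant_error(np.radians(50.0), np.radians(45.0))  # +5 deg constant error
nu = circular_sd(reps)                                    # circular SD, in radians
```

Using the complex argument rather than plain subtraction ensures that the difference wraps correctly across the ±180° boundary, e.g., for a response near +179° to a stimulus near −179°.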
Results
The data analysis was divided into two parts: first, an exploratory data analysis; second, an assessment of the hypotheses put forward in the Introduction using path analyses. 
Exploratory data analysis
The exploratory analysis suggested that heading estimates followed a bimodal distribution. Specifically, responses appeared to be either similar to the stimulus heading, or shifted by 180° (π rad). Figure 3 illustrates this observation. To test this formally, we fitted a unimodal and a bimodal model to the data and compared their fits using the corrected Akaike Information Criterion (AICc). In the unimodal model, responses r were considered a Von Mises distributed random variable:  
\begin{equation}\tag{5}r\sim {\cal M}(\mu ,\kappa )\end{equation}
with mean μ and concentration parameter κ as  
\begin{equation}\tag{5a}\mu = {\beta _0} + {\rm{atan}}({\beta _1}\sin \theta ,\cos \theta )\end{equation}
 
\begin{equation}\tag{5b}\kappa = {\gamma _0} - {\gamma _1}\left| {\sin 2\theta } \right|\;.\end{equation}
 
Figure 3
 
Responses (dots) and the fitting result of the bimodal model for an example participant (Participant 7). The different colored lines indicate the means of the shifted (purple) and nonshifted (yellow) mixture components; the shaded areas represent the mean ± 1 SD. For this participant, shifted responses made up 11.7% of the data. Note that effects of experimental manipulations were not evaluated in the exploratory analysis, and that this figure shows all responses regardless of experimental condition.
Here atan is the four-quadrant inverse tangent and θ is the heading angle. The parameters of μ, β0 and β1, reflect the constant error in heading estimates: a constant offset and the strength of the bias, respectively. The parameters of κ, γ0 and γ1, reflect the constant and periodic parts of the variable error. This model was adopted from de Winkel et al. (2015, 2017). The corresponding density function is specified as  
\begin{equation}\tag{6}{\rm{P}}(r|\theta ) = {f_{\cal M}}(\theta |\mu ,\kappa )\;,\end{equation}
where \({f_{\cal M}}\) represents the Von Mises density function. The bimodal model was a mixture of the above model and a version of the same model in which the mean μ was shifted by 180° (π rad), with a parameter Pinv that specifies the weight of the nonshifted component. The density function is  
\begin{equation}\tag{7}{\rm{P}}(r|\theta ) = {{\rm{P}}_{{\rm{inv}}}}{f_{\cal M}}(\theta |\mu ,\kappa ) + (1 - {{\rm{P}}_{{\rm{inv}}}}){f_{\cal M}}(\theta |\mu + \pi ,\kappa ){\rm .}\end{equation}
 
The model parameters β0, β1, γ0, γ1, and Pinv were treated as free parameters, and their values were estimated by minimizing the model's negative log-likelihood (i.e., by the method of maximum likelihood; Fisher, 1925). Parameter β1, indicative of the direction of constant errors, was smaller than 1 (indicating underestimation of heading angle) for five out of 18 participants (Participants 4, 5, 8, 9, and 18), and larger than 1 (indicating overestimation) for the others. All estimated coefficients are available in Supplementary Table S1. 
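As an illustration of how such a mixture model can be fitted, the following sketch (our own Python code using NumPy and SciPy; the synthetic data, starting values, and optimizer settings are our assumptions, not the study's) minimizes the negative log-likelihood of the bimodal model in Equation 7:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, r, theta):
    """Negative log-likelihood of the bimodal model (Equations 5a, 5b, and 7)."""
    b0, b1, g0, g1, p_inv = params
    mu = b0 + np.arctan2(b1 * np.sin(theta), np.cos(theta))
    kappa = g0 - g1 * np.abs(np.sin(2.0 * theta))
    if not (0.0 < p_inv < 1.0) or np.any(kappa <= 0.0):
        return np.inf  # reject invalid parameter values
    # Mixture of a von Mises component at mu and one shifted by pi.
    lik = (p_inv * vonmises.pdf(r, kappa, loc=mu)
           + (1.0 - p_inv) * vonmises.pdf(r, kappa, loc=mu + np.pi))
    return -np.sum(np.log(lik))

# Hypothetical synthetic data: overestimation (b1 = 1.3), 10% shifted responses.
rng = np.random.default_rng(0)
theta = np.radians(np.tile(np.arange(-168.75, 180.0, 22.5), 48))
mu_true = np.arctan2(1.3 * np.sin(theta), np.cos(theta))
shifted = rng.random(theta.size) < 0.10
r = rng.vonmises(mu_true + np.pi * shifted, 8.0)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0, 8.0, 0.1, 0.9],
               args=(r, theta), method="Nelder-Mead",
               options={"maxiter": 5000})
b0_hat, b1_hat, g0_hat, g1_hat, p_inv_hat = fit.x
```

With data generated from an overestimating observer, the recovered b1_hat should exceed 1 and p_inv_hat should be close to the generating nonshifted weight of 0.9.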
Comparison of the AICc scores (Supplementary Table S2) indicated that responses should indeed be treated as coming from a bimodal distribution for 15 out of 18 participants. Within this group, the minimum difference between AICc scores was 89.83, providing decisive evidence in favor of the bimodal model (p = 3.12 × 10–20; Motulsky & Christopoulos, 2004). For the remaining participants (Participants 1, 12, and 14), responses were unimodal. The percentage of shifted trials varied between individuals, ranging from 0% to 24.22% (186 out of 768 trials), with an average of 3.16%, and totaled 485 out of 13,824 trials. 
The model fits also indicated that Participants 11 and 13 were unable to perform the task and/or that their data collection was corrupted: Their circular standard deviations exceeded 30°, and for Participant 13 the estimate of β1 was near 0, implying that responses were unrelated to the stimulus. These two participants were excluded from further analysis. 
Path analyses
The hypothesized relations between characteristics of the visual stimuli, vection, and constant/variable errors in heading estimation, as well as possible relations between these variables and observations made on component membership in the exploratory data analysis, were assessed by performing path analyses. Path analysis is a form of structural equation modeling (SEM; Kline, 2004). This is a family of models and methods used to estimate relationships between measured and latent (i.e., unobserved) variables, where variables can simultaneously function as cause and effect, allowing one to explicitly model mediating effects. Path analysis is a special case of SEM, in which there are no latent factors. 
For each of the dependent variables component membership, constant error, and variable error, the path model had the same structure. The dependent variables were predicted from (a) the dichotomous variables (factors) FOV, disparity, profile, and scene layout; (b) the continuous variable heading; (c) the interactions between heading and the dichotomous variables; and (d) vection rating. Vection rating was itself also predicted by the dichotomous variables, and thus functioned as a mediating variable. The model has 5 degrees of freedom (df) that correspond to the omission of paths that specify interaction effects between heading and the factors on vection. The path model structure is presented in Figure 4. 
Figure 4
 
Path model structure, visualizing hypothesized relations between dependent variables (DEP) and independent variables FOV, disparity (DSP), profile (PRF), and scene (SCN); mediating variable vection rating (VCT); heading angle θ; and the interactions (:) between the independent variables and θ.
The model was fitted separately for each dependent variable. The reason for this is that the datasets differed: For the analysis of component membership, only data from Participants 5 and 7 were included, as these were the only participants for whom there was at least a single shifted response in each experimental condition. Data from all the participants were used to model constant error, but responses from the distribution shifted by 180° (as per the exploratory data analysis) were omitted. For the analysis of variable errors, circular standard deviations were calculated over the responses to the three repetitions of conditions, thus reducing the number of data points by a factor of three. Because the data included in the analysis of the dependent variables differed, the coefficients for the regression of vection may also differ slightly. 
To account for the interpersonal variability observed in the exploratory analysis, the models for constant and variable error were fitted separately for participants who were found to either over- or underestimate heading, as per the exploratory data analysis. 
Because of the sinusoidal nature of the relationship between θ and errors in heading estimation, the value of θ was transformed into sin 2θ before being used in the analysis of component membership and constant error, and into \(\left| {\sin 2\theta } \right|\) for the analysis of variable error, reflecting the behavior of a model used in previous work (de Winkel et al., 2015, 2017). Interaction terms between the transformed θ variables and each of the factors were included to specifically address the hypotheses on the nature of errors in heading estimation. 
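For concreteness, these transformations and interaction terms can be generated from a trial table as follows; the table, factor levels, and column names are invented for the illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical trial table with heading (in radians) and two of the
# dichotomous factors; all names and values here are invented.
trials = pd.DataFrame({
    "theta": np.deg2rad([0.0, 45.0, 90.0, 135.0, 180.0]),
    "FOV":   [0, 1, 0, 1, 0],
    "scene": [1, 0, 1, 0, 1],
})

# Transformed heading: sin(2*theta) for the component-membership and
# constant-error models, |sin(2*theta)| for the variable-error model.
trials["sin2theta"] = np.sin(2.0 * trials["theta"])
trials["abs_sin2theta"] = np.abs(trials["sin2theta"])

# Interaction terms between the transformed heading and each factor.
for factor in ["FOV", "scene"]:
    trials[f"sin2theta:{factor}"] = trials["sin2theta"] * trials[factor]
```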
The transformed values of θ and the interaction terms were initially included as predictors of both the dependent variable and the vection rating. However, these models provided a poor fit and suggested that vection rating was not affected by these variables. Because these relations were also not specifically hypothesized, they were dropped from the versions of the models presented here. 
Path analyses were conducted in R (R Foundation for Statistical Computing, Vienna, Austria, version 3.4.3) using the lavaan package (Rosseel, 2012). An R-markdown document with the path analysis scripts is available as Supplementary Appendix S1. 
As a guide for interpretation of the results and the translation of coefficients presented in Table 1 into the models in Figure 5, consider the following example of how constant errors \(\epsilon \) and variable errors ν are calculated for the group of participants who overestimated heading, for the condition where the scene has a level of 1 (dot cloud) and the other factors have level 0. The example uses only coefficients that were found to be significant in the analysis. Constant errors \(\epsilon \) for the dot cloud scene (SCN = 1) are composed of the intercept, the periodic effect of heading, and the interaction between heading and scene: \(\epsilon \) = –2.018 + (9.847 + 2.355) sin 2θ. The analysis of ν was performed on log-transformed data (νT); therefore, we performed the inverse operation to calculate the standard deviations in degrees. For the present example, the function describing the variable error is ν = exp(2.031 – 0.240 + (0.337 + 0.358) \(\left| {{\rm{sin\ 2}}\theta } \right|\)). The resulting predictions on mean and standard deviation correspond to the blue line and shaded area in the panel with code “0001” of Figure 5 at the end of the section. 
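The arithmetic of this worked example can be checked with a short script; the two helper functions below simply encode the significant coefficients quoted above.

```python
import numpy as np

# Encode the significant coefficients from the worked example for the
# "0001" condition (dot cloud scene, other factors at level 0), Group 1.
def constant_error(theta_rad):
    # epsilon = intercept + (theta effect + theta:scene interaction) * sin(2 theta)
    return -2.018 + (9.847 + 2.355) * np.sin(2.0 * theta_rad)

def variable_error(theta_rad):
    # nu was modelled on a log scale (nu_T); exponentiate to get degrees.
    return np.exp(2.031 - 0.240 + (0.337 + 0.358) * np.abs(np.sin(2.0 * theta_rad)))

theta = np.deg2rad(45.0)        # heading halfway between the cardinal axes
eps = constant_error(theta)     # predicted constant error, in degrees
nu = variable_error(theta)      # predicted circular standard deviation, in degrees
```

At θ = 45°, halfway between the cardinal axes, the predicted bias and variable error reach their maxima of about 10.2° and 12.0°, respectively.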
Table 1
 
Coefficients per path model, split by groups who over- or underestimated (Groups 1 and 2, respectively), as per the exploratory analysis. Notes: The models addressed constant errors \(\epsilon \), and variable errors νT. Model predictors were the factors FOV, disparity (DSP), profile (PRF), and scene (SCN); mediator variable vection (VCT); covariate heading angle θ; and the interactions (:) between the factors and θ. Coefficients that differ significantly from 0 \((p(\left| z \right|) \lt 0.05)\) are bold faced.
Figure 5
 
Experimental data and model fits for each experimental condition, split by those participants who were found to overestimate (blue) or underestimate (orange) heading in the exploratory data analysis. Each panel shows the data of a single experimental condition, which is specified by the four-digit code in the top left of the panel. The code corresponds to the level of the disparity, FOV, profile, and scene factors, in that order. The fitted models take into account only those parameters that were found to differ from zero (see Table 1). Note that whereas the magnitude of the heading-dependent bias varies between conditions, the sign of the bias does not change within the groups. Data for the two groups were offset by ±3° relative to the actual stimulus heading to improve visibility of the data points.
Component membership
An analysis of component membership was performed to assess whether the bimodal nature of the responses that was observed for the majority of participants could be related to experimental manipulations. The model was fitted to the data of Participants 5 and 7, as these were the only participants who had at least a single shifted response in all experimental conditions. Responses that were classified as being shifted by 180° relative to the stimulus were given a score of 0 for component membership; responses that were not shifted were given a score of 1. To model this dichotomous dependent variable, the lavaan package uses a probit link function. The model was fitted to the data of the two participants individually, yielding 10 df (5 × 2) and 1,536 observations. The χ2 test indicated that the null hypothesis that the model fits the theory should not be rejected: χ2(10) = 5.750, p = 0.836. Averaged over participants, the regression for component membership yielded an R2 of 0.135 (individual values: 0.048, 0.223) and a residual variance of 1.138 (values: 1.013, 1.263); the regression for vection yielded an R2 of 0.065 (values: 0.003, 0.123) and a residual variance of 1.070 (values: 0.718, 1.422). Overall model fit indices indicated that the model fitted the data well: Root Mean Square Error of Approximation (RMSEA) = 0.000, Confirmatory Fit Index (CFI) = 1.000, Standardized Root Mean Square Residual (SRMR) = 0.003 (Kline, 2005). 
Component membership was affected by scene layout for one participant (Participant 5) and by vection rating for both participants. For scene layout, the significant case had a coefficient of –1.047, indicating that the dot cloud stimulus was more likely to yield responses from the shifted distribution than the ground-plane stimulus. The average coefficient for vection rating was 0.172 (0.121, 0.222), indicating that nonshifted responses were associated with higher vection ratings. 
Vection rating itself was found to be affected by FOV and scene layout for one of the two participants (Participant 5). An unrestricted FOV increased the vection rating by 0.751, compared to the restricted condition, and the dot cloud scene increased vection ratings by 0.345, compared to the ground-plane condition. 
An overview of all estimated coefficients and the R code to run the analyses are available as supplementary materials in Supplementary Table S3 and Supplementary Appendix S1, respectively. 
Constant error
The second analysis was aimed at assessing whether the nature of constant errors was affected by characteristics of the visual stimulus. Here, coefficients for interactions between factors and heading angle reflect changes in the nature of the relation between heading angle θ and constant errors brought about by the different levels of the factor. The model was fitted separately to the data of two groups, corresponding to participants who were found to either over- (Group 1) or underestimate (Group 2) heading in the exploratory analysis, excluding trials where responses appeared shifted. Combined, this yielded a total of 10 df (5 × 2) and 13,339 observations (Group 1: 9,758, 13 participants; Group 2: 3,581, five participants). The χ2 test indicated that the null hypothesis that the model fits the theorized relations should not be rejected: χ2(10) = 6.562, p = 0.766. Overall model fit indices indicated that the model fitted the data well: RMSEA = 0.000, CFI = 1.000, SRMR = 0.003 (Kline, 2004). 
For Group 1, the regression for constant error yielded an R2 of 0.141 and residual variance of 236.870; for Group 2 these values were 0.038 and 249.270. The regressions for vection yielded R2 values of 0.080, 0.013 and residual variances of 1.560, 2.576, for Groups 1 and 2, respectively. 
For Group 1, the periodic effect of θ had a coefficient of 9.847. Interaction effects with θ were observed \((p(\left| {\rm{z}} \right|) \lt 0.05)\) for FOV (–1.494), disparity (–5.694), and scene (2.355). Hence, the composite coefficient for the effect of θ and its interactions ranged between 3.004 and 12.202. This indicates positive bias, regardless of experimental condition. For Group 2, the coefficient for θ was not significant, but interactions were found between θ and FOV (–1.509), disparity (–4.050), and scene (–2.079). The composite coefficient for θ and its interactions thus ranged from –7.638 to 0.00, indicating that this group underestimated heading in some conditions and did not show apparent bias in others. Notably, the coefficient for the factor scene had the opposite sign in this group. No mediating effects of vection were found. 
The results for the regression of vection showed that all the factors affected the ratings for participants in Group 1 (FOV: 0.106, disparity: –0.100, profile: 0.166, scene: –0.672). For Group 2, FOV (0.232) and scene (–0.267) affected the ratings. All coefficients are presented in Table 1, and effects for all conditions are visualized as the thick lines in Figure 5. A more detailed overview of coefficients and p values is available in Supplementary Table S4. 
Variable error
To test the hypotheses on the relation between the visual stimuli and variable errors in heading estimation, we fitted a path model with circular standard deviations as the dependent variable. Circular standard deviations were calculated over the three repetitions of each condition, excluding trials where responses appeared shifted. The standard deviation data ν were log-transformed before analysis to make their distribution approximately normal. The transformed variable is designated νT. The model was again fitted separately to the data of participants who over- (Group 1) or underestimated (Group 2) heading, as per the exploratory analysis. Combined, there were a total of 10 df (5 × 2) and 4,536 observations (Group 1: 3,312, 13 participants; Group 2: 1,224, five participants). The χ2 test indicated that the null hypothesis that the model fits the theorized relations should not be rejected: χ2(10) = 4.736, p = 0.908. Overall model fit indices indicated that the model fitted the data well: RMSEA = 0.000, CFI = 1.000, SRMR = 0.003 (Kline, 2004). 
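The computation of νT for a single condition can be sketched with SciPy's circular statistics; the three repeated heading reports below are invented values.

```python
import numpy as np
from scipy.stats import circstd

# Circular standard deviation over the three repetitions of one
# condition, followed by the log transform to nu_T. The response
# values (in degrees) are invented for the illustration.
responses_deg = np.array([38.0, 45.0, 52.0])
nu = np.rad2deg(circstd(np.deg2rad(responses_deg)))   # circular SD, in degrees
nu_T = np.log(nu)                                     # transformed variable
```

For tightly clustered responses like these, the circular standard deviation closely approximates the ordinary (linear) standard deviation; the two diverge only for widely dispersed angles.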
For Group 1, the regression for variable error yielded an R2 and a residual variance of 0.039 and 0.721, respectively, and for Group 2, these values were 0.046 and 0.866; the regression for vection yielded R2 and residual variances for Group 1 of 0.129, 0.958 and of 0.022, 1.512 for Group 2. 
Variable errors were found to be affected by scene layout for both groups: The coefficient in Group 1 was –0.240, and –0.541 in Group 2, indicating that variable errors were smaller for the dot cloud scene. Heading angle θ was found to affect the size of the error in Group 1, with a coefficient of 0.337, and an interaction effect between θ and scene was present in both groups (0.358, 0.776, for Groups 1 and 2, respectively). This indicates that for those who overestimate heading angle, a heading dependency of variable errors is visible for all factors, and it is amplified for the dot cloud scene. For the group of participants who underestimated heading angle, a heading dependency of variable errors was only visible for the dot cloud scene. No mediating effect of vection was found. 
The findings of the regression for vection rating were approximately equal to those of the path model for constant error. All the factors affected the ratings for Group 1 (FOV: 0.107, disparity: –0.097, profile: 0.166, scene: –0.668), and for Group 2, FOV (0.228) and scene (–0.263) affected the ratings. All coefficients are presented in Table 1, and effects for all conditions are visualized as the shaded areas in Figure 5. A more detailed overview of coefficients and p values is available in Supplementary Table S5. 
Figure 5 illustrates how the coefficients in Table 1 translate into models, and shows the models' correspondence to the data in separate panels for each experimental condition, and split by the grouping of participants who over- or underestimated heading in the exploratory analysis. 
Analysis of individual differences
Given that the manipulations of the experimental factors explained some of the variability in the magnitude of biases in heading estimation, but could not explain the sign of the bias, we formulated the additional hypothesis that the direction of bias in heading estimates is an idiosyncratic property. To test this hypothesis, we defined a simple Bayesian observer model. According to the model, responses come from a posterior distribution \({\rm{P}}(\theta |x)\), which is proportional to the product of a likelihood function \({\rm{P}}(x|\theta )\) and a prior P(θ):  
\begin{equation}\tag{8}{\rm{P}}(\theta |x) \propto {\rm{P}}(x|\theta ){\rm{\ P}}(\theta )\;.\end{equation}
 
The likelihood function expresses the probability of an internal heading estimate given a stimulus, and was modeled as an unbiased Von Mises probability density function with concentration parameter κx. The prior expresses beliefs about how probable the occurrence of any stimulus is. The general shape of the prior was derived from previous research showing that visual estimation of orientation is biased toward the cardinal axes (Girshick, Landy, & Simoncelli, 2011). Specifically, we used an equal-weights convex combination of four Von Mises functions, with peaks (component means) aligned with north (N), east (E), south (S), and west (W). Here north and south lie on the fore-aft axis, with north equivalent to straight ahead; east and west lie on the interaural axis. The peakedness of each component, as expressed by concentration parameters κi (with i = N, E, S, W), was allowed to vary. Because biases in heading estimation are symmetrical around the origin, we imposed the additional constraint that κE = κW. The general shape of the prior is visualized in Figure 6. 
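A numerical sketch of this observer (not the authors' fitting code) evaluates the posterior of Equation 8 on a grid and reads off its mode, using the arbitrary concentration values from Figure 6 and an assumed likelihood concentration κx = 8.

```python
import numpy as np
from scipy.stats import vonmises

# Prior: an equal-weights mixture of four Von Mises components on the
# cardinal directions (N = 0 rad is straight ahead; E and W lie on the
# interaural axis). Concentrations are the arbitrary values of Figure 6.
def prior(theta, kappa_n=7.5, kappa_s=5.0, kappa_ew=15.0):
    comps = [vonmises.pdf(theta, kappa_n, loc=0.0),
             vonmises.pdf(theta, kappa_ew, loc=np.pi / 2.0),
             vonmises.pdf(theta, kappa_s, loc=np.pi),
             vonmises.pdf(theta, kappa_ew, loc=-np.pi / 2.0)]
    return np.mean(comps, axis=0)

def posterior_mode(x, kappa_x=8.0):
    # Evaluate P(theta|x) proportional to P(x|theta) * P(theta) on a
    # grid and return the heading at which the posterior peaks.
    grid = np.linspace(-np.pi, np.pi, 3601)
    post = vonmises.pdf(x, kappa_x, loc=grid) * prior(grid)
    return grid[np.argmax(post)]

# With these particular values, the narrow east/west peaks have little
# influence 60 degrees away, so a 30-degree stimulus is drawn toward the
# broad north peak, i.e., the heading is underestimated.
estimate = np.rad2deg(posterior_mode(np.deg2rad(30.0)))
```

Which axis dominates the pull depends on the relative concentrations of the peaks; in the fits reported below, these concentrations differ between participants, producing over- or underestimation as an idiosyncratic property.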
Figure 6
 
Visualization of the prior, with arbitrary values for the prior components κS = 5, κN = 7.5, κE,W = 15.
Consistent with the findings of the exploratory analysis, it was found that for those participants who overestimated heading angle, the data were best described using a prior with twin peaks on the interaural axis (κE,W = 11.05) and a minor contribution of the north peak (κN = 0.86); for those who underestimated heading angle, the data were best described using a prior with peaks on the fore-aft axis (κN = 4.67; κS = 3.44). 
An analysis of individual data provided strong evidence that a model with peaks on the interaural axis provided the best explanation for four of the participants (ΔAICc ≥ 59.19; median κE,W = 5.99, average R2 = 0.372); a model with peaks on the fore-aft axis best explained the data for three other participants (ΔAICc ≥ 8.50; medians κN = 4.05, κS = 3.22, average R2 = 0.092). For the remaining 11 participants, the data were best described by a model with four peaks (ΔAICc ≥ 10.47; medians κE,W = 5.90, κN = 2.23, κS = 0.37, average R2 = 0.222). The patterns of bias resulting from the different priors are illustrated in Figure 7. 
Figure 7
 
Visualization of the three general patterns of constant errors obtained by fitting the Bayesian model to the data. The left and middle panels show cases where a prior with peaks on the interaural or fore-aft axis was preferred, respectively (Participants 18, 9); the right panel shows a case where a prior with peaks on both axes was preferred (Participant 17). Dots represent individual data points; the blue line represents the model predicted constant (circular) error; the shaded area represents one circular standard deviation of the response, according to the model. A small amount of uniform noise was added to the x-position of the data points.
Discussion
Previous research has consistently shown that visual heading estimation is subject to constant and variable errors with a periodic nature. However, the magnitude and direction (under- vs. overestimation) of these errors appear to differ between studies. Based on a review of the literature, we identified the FOV, availability of binocular disparity cues, motion profile characteristics, and scene layout as factors that could potentially explain these differences. In addition, we hypothesized that vection has a mediating effect on these errors. In an experiment, we varied the factors systematically and collected heading estimates and vection ratings. The data were then analyzed by fitting path models, which allowed us to determine empirically whether and how these factors affect vection ratings and errors in heading estimation, and whether vection has a mediating effect. Below, we discuss the observations made in the exploratory and confirmatory data analyses and relate them to the literature. 
Vection
Vection refers to a visually induced sensation of self-motion. It has been shown to exhibit low-pass filter characteristics, such that the intensity of the experience builds up gradually over time (Bos, Bles, & Groen, 2008). In general, exposure times exceed those of the stimuli used in the present experiment (Dichgans & Brandt, 1978). However, because the stimuli were reported as compelling and vection-inducing during pilot sessions, a mediating role of vection was nevertheless deemed plausible. 
A mediating effect of vection was modeled by including vection ratings in the path models both as a predictor for the dependent variables, and as a dependent variable; itself being predicted from the factor variables that characterized the visual stimuli. In contrast to the hypotheses, no mediating effects of vection were found for errors in heading estimation. We did find that incidental responses that were shifted by 180° were associated with lower vection ratings in the analysis of component membership. This result is discussed in detail in the following section on component membership. 
The effects of the factor variables on vection ratings were consistent between the different path models. The specific findings were also mostly consistent with previous literature: A larger FOV increased vection ratings (Habak et al., 2002; Pretto et al., 2009), and a ground-plane scene increased vection ratings relative to a dot cloud scene (Brandt et al., 1975; Nakamura & Shimojo, 1999; Riecke et al., 2006); for those participants who overestimated heading angle, a cosine-bell motion profile increased vection ratings compared to a constant velocity motion profile (Palmisano et al., 2008). 
The effect of the availability or absence of binocular disparity cues was also significant. However, in contrast to earlier literature (Palmisano, 1996, 2002; Ziegler & Roy, 1998), the present findings suggest that higher vection ratings are obtained for 2-D than for 3-D stimuli, whereas it is generally assumed that stereoscopic displays enhance vection due to their higher degree of ecological validity. Although the effect was small, we speculate that it may be caused by the random trial-to-trial switching between 2-D and 3-D stimuli, which requires participants to readjust to the presence or absence of disparity cues on every trial. 
Component membership
Visualizations of raw data suggested that responses of a majority of participants were occasionally in the direction opposite to the stimulus heading angle. This observation was corroborated by a comparison of the fit of a unimodal and a bimodal model of the responses, which provided decisive statistical evidence for bimodality of responses for 15 out of 18 participants. Such inversions have also been reported in a number of other recent studies (Crane, 2012; Cuturi & MacNeilage, 2013; Hummel et al., 2016), and are thought to reflect different interpretations of the causality of visually perceived motion: An observer can perceive him- or herself as being stationary in a moving visual scene, or may perceive the actually moving scene as stable and him- or herself as moving through the scene. For any given fixed stimulus motion direction, the two causal interpretations thus translate into opposite response directions (Brandt, Dichgans, & Koenig, 1973; Fischer & Kornmüller, 1930). 
We investigated whether the interpretation of the visual stimulus was associated with the experimental manipulations. This analysis was, however, complicated by the fact that participants differed considerably in their susceptibility to the effect, with inversions occurring on average in only 3.16% of the trials, ranging between 0% and 24.22%. For only two participants (Participant 5: 24.22% of trials, Participant 7: 11.72%), inversions of direction were observed at least once in each experimental condition. Analysis of their data indicated vection affected the interpretation of the visual stimulus. Specifically, inversions were associated with lower vection ratings, which is consistent with the aforementioned different interpretations of stimulus causality. 
It is interesting to note that the dot cloud scene had a negative effect on component membership, thus increasing the probability of inversions, whereas it had a positive effect on vection ratings for one of the two participants. Given that vection itself also had a positive effect on component membership, this suggests that the scene variable may have opposite direct and indirect effects on the inferred causality of visual stimuli. Although this finding may be a chance result, it could also indicate that vection and scene layout affect inferred causality via separate pathways. 
Constant error
To explain differences between studies reporting overestimation of heading angle (Crane, 2012; Cuturi & MacNeilage, 2013; de Winkel et al., 2017; Hummel et al., 2016; Telford et al., 1995; Telford & Howard, 1996; R. Warren, 1976) and other studies that report underestimation of heading angle (d'Avossa & Kersten, 1996; de Winkel et al., 2015; Johnston et al., 1973; Li et al., 2002; Llewellyn, 1971; van den Berg & Brenner, 1994a), we hypothesized that characteristics of the visual stimulus determine the nature of the bias. These hypotheses are reflected in interaction effects between the factor variables and stimulus heading angle included in the path models: When the coefficient for any of these interactions is significant, this implies that the effect of heading angle changes depending on the level of the factor. 
Exploratory data analyses indicated that heading estimates predominantly showed underestimation (i.e., bias toward the fore-aft axis) for five (out of 18) participants. This suggested there could be cluster effects, and we therefore fitted and evaluated the path models for two groups that included participants who either over- or underestimated. 
FOV, availability of disparity information, and scene layout affected the size of the constant error for both groups. The interpretation of the coefficients, however, differs for the two groups. 
If we consider the experimental condition with a small FOV, no disparity information, a constant velocity motion profile, and a ground plane visual scene as the reference condition, we find that participants from the group who tended to overestimate in the exploratory analysis indeed show a considerable positive heading-dependent bias, similar to the findings of a previous study by our group using the same response method (de Winkel et al., 2017). This bias increased for the dot cloud scene, and was reduced when the FOV was enlarged or disparity information was made available. These findings may be explained by the effect of these factors on observers' ability to localize the focus of expansion (FOE): For a larger FOV, the FOE is less likely to lie outside the visual field, allowing observers to estimate heading by localizing this point rather than by triangulation (Koenderink & van Doorn, 1987), and disparity cues offer information about depth order, potentially helping participants to resolve uncertainty in the motion parallax (Koenderink & van Doorn, 1987). The finding of smaller heading-dependent errors for the ground plane scene may have a cause similar to that of disparity, as the layout of the scene was more readily apparent than for the dot cloud scene, thus offering more easily accessible information about depth order. 
In the group of participants who generally underestimated heading, there was no notable bias in the reference condition, but negative additive biases were found for the large FOV, the availability of disparity information, and the dot cloud scene. Consequently, the model fits indicate that whereas the coefficients for FOV and disparity information had comparable sizes in both groups, a larger FOV and disparity information reduced bias for participants who overestimate heading, but actually increased bias for participants who tended to underestimate heading. This finding may reflect an overcompensation. 
Although the factors were found to affect the magnitude of bias in both groups, the sign of the bias was not found to change depending on the experimental condition. This supports the notion that the nature of constant errors is an idiosyncratic property (de Winkel et al., 2015). We investigated this further by fitting a Bayesian observer model to the data of individual participants. Bayesian models formalize how sensory estimates may interact with prior knowledge to form perceptions. In the present implementation of the model, constant errors in heading estimation were attributed to a prior with peaks aligned with the cardinal axes; individual differences in the concentration of the peaks would then account for the direction of constant errors. It should be noted that because the prior is constant and independent of the sensory estimates, it cannot account for the effects that manipulation of the factors had on the errors. Instead, in a Bayesian framework, differences in the magnitude of constant errors would manifest as effects of experimental manipulations on the noise of the sensory estimates. The model fits showed that for the majority of participants who overestimated heading angle, the data were adequately described using a prior with peaks on the interaural axis, possibly with the addition of a small contribution from a peak at straight ahead for a subset of these participants. For the remainder of the participants, constant errors were best described using a prior with peaks on the fore-aft axis. 
As we generally move in the direction of our gaze, the prior probability of heading angles may be expected to peak at straight ahead rather than on the interaural axis. Nevertheless, similar results have been reported previously (Cuturi & MacNeilage, 2013). These authors argue that such findings are consistent with population vector decoding of neural activity in area MSTd. Lateral directions are overrepresented in the distribution of preferred directions for neurons in this population (Fernandez & Goldberg, 1976; Gu et al., 2010), and computational work has shown that this may bias heading estimates toward the lateral axes and increase sensitivity to deviations from straight-ahead motion. As such, the priors that were estimated for each participant in the present experiment may reflect individual differences in the distribution of preferred directions in the MSTd area, rather than knowledge of behavioral statistics per se. 
Taken together, the findings are consistent with those of previous studies reporting either over- (Crane, 2012; Cuturi & MacNeilage, 2013; de Winkel et al., 2017; Hummel et al., 2016; Telford et al., 1995; Telford & Howard, 1996; R. Warren, 1976) or underestimation (d'Avossa & Kersten, 1996; de Winkel et al., 2015; Johnston et al., 1973; Li et al., 2002; Llewellyn, 1971; van den Berg & Brenner, 1994a), and suggest that differing conclusions regarding the nature of the errors resulted both from the specific choice of stimulus characteristics and from averaging over groups of participants in which heading was either predominantly overestimated or underestimated. 
Variable error
In previous work, variable errors in heading estimation have been found to be minimal around the cardinal axes, and to increase for heading angles further away from these axes (e.g., Crane, 2012, 2014; Cuturi & MacNeilage, 2013; de Winkel et al., 2015, 2017). Overall, this result is supported by the findings of the present study. 
Variable errors were affected by scene layout in both groups of participants, with smaller errors for the dot cloud scene than for the ground plane scene. For participants who tended to overestimate heading angle, variable error was smallest around the cardinal axes, and this effect was amplified for the dot cloud scene. Participants who underestimated heading showed a heading dependency of the variable error for dot cloud scene stimuli. The size of the variable errors was comparable to that reported in previous studies (de Winkel et al., 2015, 2017). 
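Because headings are circular quantities, constant and variable errors of this kind are conventionally summarized with circular statistics (circular mean and circular standard deviation of the signed error). The following is a generic sketch of that computation, assuming headings in radians; the function name is ours, not taken from the study.

```python
import numpy as np

def circular_errors(stimulus, response):
    # Signed heading error per trial, wrapped to (-pi, pi].
    err = np.angle(np.exp(1j * (np.asarray(response) - np.asarray(stimulus))))
    z = np.mean(np.exp(1j * err))                  # mean resultant vector
    constant = np.angle(z)                         # circular mean error (constant error)
    variable = np.sqrt(-2.0 * np.log(np.abs(z)))   # circular SD (variable error)
    return constant, variable
```

The wrap step matters near the fore-aft axis: a response of 358° to a 2° stimulus is a −4° error, not a −356° one, and averaging unwrapped errors there would inflate both statistics.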
Limitations
Although the present study was aimed at evaluating the effects of visual stimulus characteristics on heading estimation, the list of varied factors was not exhaustive: Previous studies also differ with respect to the presence (Crane, 2012; d'Avossa & Kersten, 1996; Li et al., 2002; Telford & Howard, 1996; Telford et al., 1995; van den Berg & Brenner, 1994a; R. Warren, 1976; W. H. Warren & Kurtz, 1992) or absence (Telford & Howard, 1996; Telford et al., 1995) of a fixation point; the nature of the objects in the visual scene, varying with respect to density, shape, luminance, and lifetime; the speed of the visual motion, ranging from centimeters (e.g., Hummel et al., 2016) to tens of meters per second (e.g., Johnston et al., 1973); and the response method, where studies employed physical pointer devices (e.g., de Winkel et al., 2015, 2017), virtual dials (e.g., Cuturi & MacNeilage, 2013; Hummel et al., 2016), and even written (Hummel et al., 2016) and verbal (Crane, 2012) estimates. It is plausible that these factors also affect heading estimates. 
It should nevertheless be noted that factors that were not varied cannot explain the observed interpersonal variability in the sign of the errors, as these factors were the same for all participants. This is consistent with the notion that errors in heading estimation have a strong idiosyncratic component. Consequently, even for a fixed experimental condition, different conclusions on the sign of heading estimation errors will be drawn depending on whether a sample of participants predominantly under- or overestimates heading, which can explain the contrasting findings on the sign of constant errors in two subsequent studies that used identical visual stimuli (de Winkel et al., 2015, 2017). 
Conclusion
We conclude that whereas characteristics of the visual stimulus affect the magnitude of both constant and variable errors, there is a strong idiosyncratic component to the nature of errors in heading estimation. Vection was found to have a small effect on variable errors. Future neurophysiological studies may address whether the observed differences in the direction of constant errors are consistent with individual differences in the distribution of preferred directions in cortical area MSTd. 
Acknowledgments
Commercial relationships: none. 
Corresponding author: Ksander N. de Winkel. 
Address: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany. 
References
Andersen, G. J., & Braunstein, M. L. (1985). Induced self-motion in central vision. Journal of Experimental Psychology: Human Perception and Performance, 11 (2), 122.
Banks, M. S., Ehrlich, S. M., Backus, B. T., & Crowell, J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36 (3), 431–443.
Bos, J. E., Bles, W., & Groen, E. L. (2008). A theory on visually induced motion sickness. Displays, 29 (2), 47–57.
Brandt, T., Wist, E. R., & Dichgans, J. (1975). Foreground and background in dynamic spatial orientation. Attention, Perception, & Psychophysics, 17 (5), 497–503.
Brandt, T., Dichgans, J., & Koenig, E. (1973). Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Experimental Brain Research, 16 (5), 476–491.
Butler, J. S., Campos, J. L., & Bülthoff, H. H. (2015). Optimal visual–vestibular integration under conditions of conflicting intersensory motion profiles. Experimental Brain Research, 233 (2), 587–597.
Butler, J. S., Smith, S. T., Campos, J. L., & Bülthoff, H. H. (2010). Bayesian integration of visual and vestibular signals for heading. Journal of Vision, 10 (11): 23, 1–13, https://doi.org/10.1167/10.11.23. [PubMed] [Article]
Crane, B. T. (2012). Direction specific biases in human visual and vestibular heading perception. PLoS One, 7 (12), e51383.
Crane, B. T. (2014). Human visual and vestibular heading perception in the vertical planes. Journal of the Association for Research in Otolaryngology, 15 (1), 87–102.
Crowell, J. A., & Banks, M. S. (1993). Perceiving heading with different retinal regions and types of optic flow. Attention, Perception, & Psychophysics, 53 (3), 325–337.
Crowell, J. A., & Banks, M. S. (1996). Ideal observer for heading judgments. Vision Research, 36 (3), 471–490.
Cuturi, L. F., & MacNeilage, P. R. (2013). Systematic biases in human heading estimation. PLoS One, 8 (2), e56862.
d'Avossa, G., & Kersten, D. (1996). Evidence in human subjects for independent coding of azimuth and elevation for direction of heading from optic flow. Vision Research, 36 (18), 2915–2924.
de Winkel, K. N., Katliar, M., Bülthoff, H. H. (2015). Forced fusion in multisensory heading estimation. PLoS One, 10 (5), e0127104, https://doi.org/10.1371/journal.pone.0127104.
de Winkel, K. N., Katliar, M., & Bülthoff, H. H. (2017). Causal inference in multisensory heading estimation. PLoS One, 12 (1), e0169676.
de Winkel, K. N., Weesie, J., Werkhoven, P. J., & Groen, E. L. (2010). Integration of visual and inertial cues in perceived heading of self-motion. Journal of Vision, 10 (12): 1, 1–10, https://doi.org/10.1167/10.12.1. [PubMed] [Article]
Dichgans, J., & Brandt, T. (1978). Visual-vestibular interaction: Effects on self-motion perception and postural control. In Perception (pp. 755–804). Berlin, Germany: Springer.
Fernandez, C., & Goldberg, J. M. (1976). Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. II. Directional selectivity and force-response relations. Journal of Neurophysiology, 39 (5), 985–995.
Fetsch, C. R., Turner, A. H., DeAngelis, G. C., & Angelaki, D. E. (2009). Dynamic reweighting of visual and vestibular cues during self-motion perception. Journal of Neuroscience, 29 (49), 15601–15612.
Fischer, M. H., & Kornmüller, A. E. (1930). Der schwindel [Dizziness]. In Handbuch der normalen und pathologischen Physiologie [Handbook of Normal and Pathological Physiology] (pp. 442–494). Berlin, Germany: Springer.
Fisher, R. A. (1925). Statistical methods for research workers. Edinburgh, UK: Oliver and Boyd.
Gibson, J. J. (1950). The perception of the visual world. Boston, MA: Houghton Mifflin.
Girshick, A. R., Landy, M. S., & Simoncelli, E. P. (2011). Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics. Nature Neuroscience, 14 (7), 926.
Gu, Y., DeAngelis, G. C., & Angelaki, D. E. (2007). A functional link between area MSTd and heading perception based on vestibular signals. Nature Neuroscience, 10 (8), 1038–1047.
Gu, Y., Fetsch, C. R., Adeyemo, B., DeAngelis, G. C., & Angelaki, D. E. (2010). Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron, 66 (4), 596–609.
Habak, C., Casanova, C., & Faubert, J. (2002). Central and peripheral interactions in the perception of optic flow. Vision Research, 42 (26), 2843–2852.
Hummel, N., Cuturi, L. F., MacNeilage, P. R., & Flanagin, V. L. (2016). The effect of supine body position on human heading perception. Journal of Vision, 16 (3): 19, 1–11, https://doi.org/10.1167/16.3.19. [PubMed] [Article]
Johnston, I. R., White, G. R., & Cumming, R. W. (1973). The role of optical expansion patterns in locomotor control. The American Journal of Psychology, 86 (2), 311–324.
Kline, R. B. (2004). Principles and practice of structural equation modeling. New York, NY: The Guilford Press.
Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biological Cybernetics, 56 (4), 247–254.
Lappe, M., Bremmer, F., & Van den Berg, A. (1999). Perception of self-motion from visual flow. Trends in Cognitive Sciences, 3 (9), 329–336.
Li, L., Peli, E., & Warren, W. H. (2002). Heading perception in patients with advanced retinitis pigmentosa. Optometry & Vision Science, 79 (9), 581–589.
Llewellyn, K. R. (1971). Visual guidance of locomotion. Journal of Experimental Psychology, 91 (2), 245.
MacNeilage, P. R., Banks, M. S., Berger, D. R., & Bülthoff, H. H. (2007). A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Experimental Brain Research, 179 (2), 263–290.
Motulsky, H., & Christopoulos, A. (2004). Fitting models to biological data using linear and nonlinear regression: A practical guide to curve fitting. New York, NY: Oxford University Press.
Nakamura, S., & Shimojo, S. (1999). Critical role of foreground stimuli in perceiving visually induced self-motion (vection). Perception, 28 (7), 893–902.
Palmisano, S. (1996). Perceiving self-motion in depth: The role of stereoscopic motion and changing-size cues. Perception & Psychophysics, 58 (8), 1168–1176.
Palmisano, S. (2002). Consistent stereoscopic information increases the perceived speed of vection in depth. Perception, 31 (4), 463–480.
Palmisano, S., Allison, R. S., & Pekin, F. (2008). Accelerating self-motion displays produce more compelling vection in depth. Perception, 37 (1), 22–33.
Pretto, P., Ogier, M., Bülthoff, H. H., & Bresciani, J.-P. (2009). Influence of the size of the field of view on motion perception. Computers & Graphics, 33 (2), 139–146.
Riecke, B. E., Schulte-Pelkum, J., Avraamides, M. N., Heyde, M. V. D., & Bülthoff, H. H. (2006). Cognitive factors can influence self-motion perception (vection) in virtual reality. ACM Transactions on Applied Perception, 3 (3), 194–216.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48 (2), 1–36.
Royden, C. S., Banks, M. S., & Crowell, J. A. (1992, December 10). The perception of heading during eye movements. Nature, 360 (6404), 583–585.
Saito, H.-A., Yukie, M., Tanaka, K., Hikosaka, K., Fukada, Y., & Iwai, E. (1986). Integration of direction signals of image motion in the superior temporal sulcus of the macaque monkey. Journal of Neuroscience, 6 (1), 145–157.
Telford, L., & Howard, I. P. (1996). Role of optical flow field asymmetry in the perception of heading during linear motion. Attention, Perception, & Psychophysics, 58 (2), 283–288.
Telford, L., Howard, I. P., & Ohmi, M. (1995). Heading judgments during active and passive self-motion. Experimental Brain Research, 104 (3), 502–510.
Trutoiu, L. C., Streuber, S., Mohler, B. J., Schulte-Pelkum, J., & Bülthoff, H. H. (2008). Tricking people into feeling like they are moving when they are not paying attention. In S. Creem-Regehr & K. Myszkowski (Eds.), Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization (p. 190). New York, NY: ACM.
van den Berg, A., & Brenner, E. (1994a). Humans combine the optic flow with static depth cues for robust perception of heading. Vision Research, 34 (16), 2153–2167.
van den Berg, A., & Brenner, E. (1994b, October 20). Why two eyes are better than one for judgements of heading. Nature, 371 (6499), 700–702.
Warren, R. (1976). The perception of egomotion. Journal of Experimental Psychology: Human Perception and Performance, 2 (3), 448.
Warren, W. H., & Kurtz, K. J. (1992). The role of central and peripheral vision in perceiving the direction of self-motion. Perception & Psychophysics, 51 (5), 443–454.
Warren, W. H., Morris, M. W., & Kalish, M. (1988). Perception of translational heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 14 (4), 646.
Ziegler, L. R., & Roy, J.-P. (1998). Large scale stereopsis and optic flow: Depth enhanced by speed and opponent-motion. Vision Research, 38 (9), 1199–1209.
Figure 1
 
Left panel: View of the back projection screen, with the head rest in the middle. Right panel: Photograph of the pointer device and numerical keypad used to register responses.
Figure 2
 
Left panel: Monoscopic screenshot of random dot cloud stimulus. Right panel: Monoscopic screenshot of random dot ground plane stimulus.
Figure 3
 
Responses (dots) and the fitting result of the bimodal model for an example participant (Participant 7). The different colored lines indicate the means of the shifted (purple) and nonshifted (yellow) mixture components; the shaded areas represent the M ±1 SD. For this participant, shifted responses made up 11.7% of the data. Note that effects of experimental manipulations were not evaluated in the exploratory analysis, and that this figure shows all responses regardless of experimental condition.
Figure 4
 
Path model structure, visualizing hypothesized relations between dependent variables (DEP) and independent variables FOV, disparity (DSP), profile (PRF), and scene (SCN); mediating variable vection rating (VCT); heading angle θ; and the interactions (:) between the independent variables and θ.
Figure 5
 
Experimental data and model fits for each experimental condition, split by those participants who were found to overestimate (blue) or underestimate (orange) heading in the exploratory data analysis. Each panel shows the data of a single experimental condition, which is specified by the four-digit code in the top left of the panel. The code corresponds to the level of the disparity, FOV, profile, and scene factors, in that order. The fitted models take into account only those parameters that were found to differ from zero (see Table 1). Note that whereas the magnitude of the heading-dependent bias varies between conditions, the sign of the bias does not change within the groups. Data for the two groups were offset by ±3° relative to the actual stimulus heading to improve visibility of the data points.
Figure 6
 
Visualization of the prior, with arbitrary values for the prior components κS = 5, κN = 7.5, κE,W = 15.
Figure 7
 
Visualization of the three general patterns of constant errors obtained by fitting the Bayesian model to the data. The left and middle panels show cases where a prior with peaks on the interaural or fore-aft axis was preferred, respectively (Participants 18, 9); the right panel shows a case where a prior with peaks on both axes was preferred (Participant 17). Dots represent individual data points; the blue line represents the model predicted constant (circular) error; the shaded area represents one circular standard deviation of the response, according to the model. A small amount of uniform noise was added to the x-position of the data points.
Table 1
 
Coefficients per path model, split by groups who over- or underestimated (Groups 1 and 2, respectively), as per the exploratory analysis. Notes: The models addressed constant errors \(\epsilon \), and variable errors νT. Model predictors were the factors FOV, disparity (DSP), profile (PRF), and scene (SCN); mediator variable vection (VCT); covariate heading angle θ; and the interactions (:) between the factors and θ. Coefficients that differ significantly from 0 \((p(\left| z \right|) \lt 0.05)\) are printed in boldface.