During self-motion, vision provides us with both pictorial (such as linear perspective and relative size) and motion-based (such as motion perspective/motion parallax, changing-size, and dynamic occlusion) information about 3-D layout (e.g., Gibson et al., 1955; DeLucia, 1991; Palmisano, 1996; Kim, Khuu, & Palmisano, 2016). Of all these sources of information, Gibson (1950, 1966; Gibson et al., 1955) argued that monocular motion perspective was the most important source of information for perceived scene layout. For the current purposes, monocular motion perspective will be defined as the perspective change in the locations of objects in the optic array over time (i.e., the gradient of optical velocity presented to a single eye). According to Gibson's theory of direct perception, the properties of this motion perspective directly specify the nature of the observer's self-motion as well as his/her environmental layout. For example, under ideal conditions (e.g., self-motion over a rigid ground plane), monocular motion perspective provides useful information about relative environmental distances (Braunstein & Andersen, 1981).
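To give a minimal sketch of this ground-plane case (an illustrative formulation; the notation is ours rather than drawn from the works cited above): for an observer translating forward at speed $T$ with the eye at height $h$ above a rigid ground plane, a ground point seen at declination angle $\alpha$ below the horizon, lying at ground distance $x = h/\tan\alpha$, has optical angular velocity

\[
\dot{\alpha} \;=\; \frac{T\,h}{x^{2} + h^{2}} \;=\; \frac{T}{h}\,\sin^{2}\alpha .
\]

The resulting velocity gradient across the visual field orders ground locations by their relative distances, but the flow itself fixes only the ratio $T/h$, so self-motion speed and scene scale remain confounded.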
However, this information should become more difficult to interpret when travelling through nonrigid and/or nonplanar environments (e.g., self-motion in the presence of object-motion or relative to a 3-D cloud of randomly positioned objects). Thus, it is possible that stereoscopic optic flow might improve self-motion perception by providing supplementary binocular information about 3-D scene layout (Palmisano, 1996, 2002; Allison et al., 2014).

When we observe the world binocularly, the images of individual objects in the environment often fall on different (i.e., noncorresponding) retinal positions in our left and right eyes, referred to as binocular positional disparities (Howard & Rogers, 2012). Although horizontal binocular disparities are known to generate compelling stereoscopic perceptions of relative distance/depth (e.g., Wheatstone, 1838), convergence and vertical binocular disparities also provide information about absolute egocentric distances (e.g., Tresilian, Mon-Williams, & Kelly, 1999; Rogers & Bradshaw, 1993).
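A rough illustration of why these two binocular signals play different roles (standard small-angle approximations; the symbols here are ours): for an interocular separation $I$ and a viewing distance $D$, the horizontal disparity $\delta$ produced by a depth interval $\Delta d$ (with $\Delta d \ll D$) and the convergence angle $\gamma$ are approximately

\[
\delta \;\approx\; \frac{I\,\Delta d}{D^{2}},
\qquad
\gamma \;\approx\; \frac{I}{D}.
\]

Horizontal disparity alone therefore specifies only relative depth unless $D$ is known, whereas convergence (and, similarly, the pattern of vertical disparities) can supply the absolute distance estimate needed to scale it.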
Research suggests that binocular depth perception is enhanced by the stereoscopic optic flow produced by typical self-motions (e.g., Ziegler & Roy, 1998).
There are also numerous ways this stereoscopic information about 3-D layout might contribute to self-motion perception. For example, as noted above, monocular motion perspective is often ambiguous: The optic flow might represent either a fast self-motion in a large environment or a slow self-motion in a smaller environment. Binocular information about absolute distance could resolve this ambiguity by scaling the monocularly available self-motion/layout information, one result being a more accurate visual perception of the speed of self-motion (see Palmisano, 2002).
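The logic of this scaling account can be sketched as follows (an illustrative formulation under the simple flow relation given earlier, not a specific model from the cited work): the optical velocity $\omega$ of a scene point at distance $Z$ specifies only the ratio of self-motion speed $T$ to distance, so an independent binocular estimate of absolute distance $\hat{Z}$ (e.g., from convergence or vertical disparities) converts the flow into an estimate of absolute speed,

\[
\omega \;\propto\; \frac{T}{Z}
\qquad\Longrightarrow\qquad
\hat{T} \;\propto\; \hat{Z}\,\omega ,
\]

with any error in $\hat{Z}$ translating into a proportional error in the perceived speed of self-motion.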
Stereoscopic information might also increase perceptions of self-motion in depth by making the visual environment appear more 3-D (e.g., by countering the unintended depth compression effects present in many virtual displays; see Grechkin, Nguyen, Plumert, Cremer, & Kearney, 2010; Sahm, Creem-Regehr, Thompson, & Willemsen, 2005; Thompson et al., 2004; Willemsen, Gooch, Thompson, & Creem-Regehr, 2008).