When an observer moves in space, the retinal projection of a stationary object either expands, if the motion is toward the object, or shifts horizontally, if the motion contains a lateral component. This study examined the impact of expansive optic flow and lateral motion parallax on the accuracy of depth perception for observers with normal or artificially reduced acuity, and asked whether any benefit is due to continuous motion or to discrete object image displacement. Stationary participants viewed a virtual room on a computer screen. They used an on-screen slider to estimate the depth of a target object relative to a reference object after watching 2-second videos simulating five conditions: static viewing, and expansive optic flow or lateral motion parallax presented as either continuous motion or discrete image displacement. Ten participants viewed the stimuli with normal acuity in Experiment 1, and 11 viewed them with three levels of artificially reduced acuity in Experiment 2. Linear regression models related participants’ depth estimates to the ground truth. Lateral motion parallax produced more accurate depth estimates than expansive optic flow and static viewing. Depth perception with continuous motion was more accurate than with displacement under mild and moderate, but not severe, acuity reduction. For observers with both normal and artificially reduced acuity, lateral motion parallax was more helpful for object depth estimation than expansive optic flow, and continuous motion parallax was more helpful than object image displacement.

*y* is the dependent variable, participants’ slider setting, and *x* is the ground-truth depth separation. Coefficient *a* was termed the slope and coefficient *b* the intercept of the model. The slope indicated the scale bias in participants’ depth perception, and the intercept represented the offset bias. A slope smaller than 1 meant that the participant had a compressive scale bias in depth perception; the closer the slope was to 0, the more compressive the scale bias. The intercept, or offset bias, represented a perceived offset in the overall depth of the target. If the intercept (coefficient *b*) was greater than 0, the observer estimated the target object to be farther away than its true depth by *b* feet, a positive offset bias. The closer the slope was to 1 and the intercept to 0, the more accurate the depth perception.
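From the definitions above, Equation 1 is presumably the simple linear model (a reconstruction from the surrounding text, using its symbol names):

```latex
% Equation 1 (reconstructed): slider setting y as a linear function of
% ground-truth depth x, with slope a (scale bias) and intercept b (offset bias)
y = a\,x + b
```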

The adjusted R^{2} values reflected how much the participants’ depth estimates varied around the regression line: the larger the R^{2}, the smaller the residual variation, and hence the higher the consistency of the participants’ responses and the accuracy of their depth perception.

The slope, the intercept, and the adjusted R^{2} were taken as the indicators of depth perception accuracy for each viewing condition. We used R v.4.3.0 (R Core Team, 2018) for data analysis. The glm function was used to fit linear regression models, and the lstrends function of the lsmeans package was used to compare the regression slopes of the different viewing conditions (Lenth, 2016). Fisher's r-to-z transform was used to compare the correlation coefficients of the different linear regression models.
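The Fisher r-to-z comparison can be sketched as follows (in Python rather than the R used in the study; the trial counts in the usage line are illustrative placeholders, not the study's sample sizes):

```python
import math

def fisher_z(r):
    # Fisher's r-to-z transform: z = atanh(r) = 0.5 * ln((1 + r) / (1 - r))
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

def compare_correlations(r1, n1, r2, n2):
    # z statistic for two independent Pearson correlations:
    # (z1 - z2) / sqrt(1/(n1 - 3) + 1/(n2 - 3))
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Correlations as reported in the text; n = 200 per model is a placeholder.
z = compare_correlations(0.89, 200, 0.71, 200)
```

A positive z favors the first correlation; it is compared against the standard normal distribution to obtain a p value.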

*a* in Equation 1, in the static viewing condition was 0.39. The slope of the static viewing regression model was significantly less than 1, showing a compressive scale bias in depth estimation. The intercept, or coefficient *b* in Equation 1, was slightly less than 0 in both tests. The adjusted R^{2} was 0.31 and 0.25, showing substantial variability in the participants’ estimates.

*t* = 11.5, *p* < 0.001. Expansive optic flow also yielded a steeper slope than static viewing, *t* = 3.04, *p* = 0.007.

*t* = 6.8, *p* < 0.001. Continuous lateral motion parallax yielded an even steeper slope and a closer-to-0 intercept compared with lateral displacement, *t* = 8.3, *p* < 0.001. The 95% confidence interval of the regression slope yielded by continuous lateral motion included 1.0. A Fisher's r-to-z comparison indicated that the Pearson correlation of the regression model fitted with continuous lateral motion trials, *r* = 0.89, was higher than that of the model fitted with lateral displacement trials, *r* = 0.71, *z* = 4.89, *p* < 0.001, meaning that depth perception was more consistent in continuous motion trials than in displacement trials.

^{2} values of the static viewing condition in the three acuity reduction levels. We considered the static viewing conditions to be the baseline.

*p* < 0.01; moderate condition, *p* < 0.01; and severe condition, *p* = 0.02.

*a* in Equation 1) in the baseline condition all fell within the range of 0.26 to 0.36, showing no significant difference. This result reflected a substantially compressive scale bias. Severe reduction yielded an intercept (coefficient *b* in Equation 1) that was significantly below 0, unlike the other two reduction levels. This meant that the observers estimated the target object to be closer than it was under severe acuity reduction (1.55 logMAR), but not under mild and moderate reduction (0.95 and 1.15 logMAR). These results showed that the baseline depth estimates in the three simulated acuity loss conditions had low accuracy, with a compressive scale bias, a negative offset bias, and substantial variability around the regression line.

*t* = 10.3, *p* < 0.001; moderate, *t* = 10.1, *p* < 0.001; and severe, *t* = 9.8, *p* < 0.001. The slopes ranged from 0.70 to 0.82.

*t* = 7.23, *p* < 0.001; moderate, *t* = 6.7, *p* < 0.001; and severe, *t* = 7.8, *p* < 0.001. Under mild and moderate acuity reduction, the regression slope in the lateral displacement condition was still lower than that in the continuous lateral motion condition: mild, *t* = 3.6, *p* = 0.002; and moderate, *t* = 4.0, *p* < 0.001. However, this difference became nonsignificant under severe acuity reduction.

*a* in Equation 1, representing the scale bias) was roughly the same across all three blur levels and fell within the same range as the regression slope found in the baseline condition with normal-acuity observers. With more severe acuity reduction, the offset bias (coefficient *b* in Equation 1) became more negative, which left room for the effect of expansive optic flow to show. In general, however, the change in acuity level did not have a substantial effect on depth perception accuracy in the static viewing condition. This result was consistent with the findings of Tarampi, Creem-Regehr, and Thompson (2010), in which participants with normal acuity and participants wearing blur goggles showed the same scale bias in a depth-matching task. The reason behind this finding might be that the two pictorial cues in the stimuli, relative size and the angle of declination, involve only low-spatial-frequency image features: the wall–floor boundary and the object contour. These features may be accessible to observers with either normal or reduced acuity. This result is also consistent with the findings of Rand, Tarampi, Creem-Regehr, and Thompson (2011) and Rand, Tarampi, Creem-Regehr, and Thompson (2012), who found that the angle of declination is a robust depth cue for observers with reduced acuity.

*Investigative Ophthalmology & Visual Science*, 54(1), 288–294, https://doi.org/10.1167/iovs.110461.

*Blender - A 3D modelling and rendering package.* Retrieved from http://www.blender.org.

*Cognitive Neurodynamics*, 14(2), 155–168, https://doi.org/10.1007/s11571-019-09563-8.

*Journal of Experimental Psychology: Human Perception and Performance*, 21(3), 679–699.

*Behavioural Brain Research*, 14(1), 29–39, https://doi.org/10.1016/0166-4328(84)90017-2.

*Perception*, 40(1), 39–49, https://doi.org/10.1068/p6868.

*Perception*, 26, 1529–1538.

*Progress in Retinal Research*, 9, 273–336, https://doi.org/10.1016/0278-4327(90)90009-7.

*Nature*, 293, 293–294.

*Journal of Statistical Software*, 69(1), 1–33, https://doi.org/10.18637/jss.v069.i01.

*Investigative Ophthalmology & Visual Science*, 60, 1051–1051.

*Proceedings of the Royal Society of London. Series B, Biological Sciences*, 208, 385–397.

*Eye*, 9(3), 333–336, https://doi.org/10.1038/eye.1995.64.

*Journal of Vision*, 10(10), 5, https://doi.org/10.1167/10.10.5.

*BMJ Open Ophthalmology*, 3(1), e000076, https://doi.org/10.1136/bmjophth-2017-000076.

*Optometry and Vision Science*, 90(10), 1119–1127, www.optvissci.com.

*Journal of Experimental Psychology: Human Perception and Performance*, 34(6), 1353–1371.

*Behavior Research Methods*, 51, 195–203, https://doi.org/10.3758/s13428-018-0119y.

*R: A language and environment for statistical computing*. Vienna, Austria: R Foundation for Statistical Computing. Available from https://www.R-project.org/.

*Perception*, 40(2), 143–154.

*Seeing and Perceiving*, 25(5), 425–447, https://doi.org/10.1163/187847611X620946.

*Attention, Perception, and Psychophysics*, 84(3), 878–898, https://doi.org/10.3758/s13414-021-02402-1.

*Perception*, 8, 125–134.

*Vision Research*, 22, 261–270, https://doi.org/10.1016/0042-6989(82)90126-2.

*Journal of Vision*, 19(14), 1–15, https://doi.org/10.1167/19.14.20.

*Annual Review of Psychology*, 41, 635–658, www.annualreviews.org.

*Spatial Vision*, 7, 35–75.

*Perception*, 35(1), 9–24, https://doi.org/10.1068/p5399.

*Optometry and Vision Science*, 98(4), 310–325, https://doi.org/10.1097/OPX.0000000000001672.

*Attention, Perception, and Psychophysics*, 72(1), 23–27, https://doi.org/10.3758/APP.72.1.23.

*Journal of Experimental Psychology: Human Perception and Performance*, 37(3), 865–876.

*Perception & Psychophysics*, 57(2), 231–245.

*Psychological Bulletin*, 138(6), 1172–1217, https://doi.org/10.1037/a0029333.

*Investigative Ophthalmology & Visual Science*, 61(6), 40, https://doi.org/10.1167/IOVS.61.6.40.

*Perception & Psychophysics*, 41, 53–59.

*Journal of Vision*, 11(9), 1–21, https://doi.org/10.1167/11.9.13.