Open Access
Article | October 2024
Embeddedness of Earth's gravity in visual perception
Author Affiliations
  • Abdul-Rahim Deeb
Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
    [email protected]
  • Fulvio Domini
    Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
    [email protected]
Journal of Vision October 2024, Vol. 24(11):4. https://doi.org/10.1167/jov.24.11.4
Abstract

Falling objects are commonplace in daily life, requiring precise perceptual judgments for interception and avoidance. We argue that human judgments of projectile motion arise from the interplay between sensory information and predictions constrained by Newtonian mechanics. Our study investigates how individuals perceive falling objects under various gravitational conditions, aiming to understand the role of internalized gravity in visual perception. By meticulously controlling the available information, we demonstrated that these phenomena cannot be explained by simple heuristics or representational momentum alone. Instead, we found that the perceptual judgments of humans (n = 11, 13, 14, and 11, respectively, in Experiments 1, 2, 3, and 4) are influenced by a combination of sensory information and gravity predictions, highlighting the role of internalized physical constraints in the perception of projectile motion.

Introduction
Projectile motion, a common phenomenon, involves objects propelled forward while accelerating downward due to Earth's gravity. Humans proficiently intercept such objects, whether catching a baseball or reaching for a rolling item, prompting the question of whether Earth's gravity is internalized in our nervous system. Previous research suggests the brain internalizes physical regularities, potentially including the effects of gravity on falling objects (Battaglia, Hamrick, & Tenenbaum, 2013; Deeb, Cesanek, & Domini, 2021; Firestone & Keil, 2016; Firestone & Scholl, 2014; Freyd, Pantzer, & Cheng, 1988; Hamrick, Battaglia, & Tenenbaum, 2011; Shepard, 1987; Shepard, 1994). For example, McIntyre, Zago, Berthoz, and Lacquaniti (2001) proposed that perception internalizes knowledge about the effects of gravity, based on anticipatory peak biceps electromyography activity, and Monache, Lacquaniti, and Bosco (2019) demonstrated fewer ocular pursuit errors under terrestrial gravity, which hints at an internalized physical model. However, it is unclear which features are internalized and whether the visual system internalizes the laws of projectile motion (Krist, Fieberg, & Wilkening, 1993; Ye et al., 2017) or relies on real-time sensory information (Chapman, 1968; Fink, Foo, & Warren, 2009; Khomut & Warren, 2010; Kistemaker, Faber, & Beek, 2009; McLeod et al., 2006; McLeod, Reed, Gilson, & Glennerster, 2008; Lee, 2007). To investigate, we created a scene that bars online strategies, forcing reliance on predictive mechanisms, if any exist (Baurès, Benguigui, Amorim, & Siegler, 2007; Belousov, Neumann, Rothkopf, & Peters, 2016; Zhao & Warren, 2015).
Participants in a three-dimensional (3D) virtual environment observed a projectile event and reported the perceived final location of the projectile in depth under varied velocities in depth and gravitational accelerations. We hypothesized that, in the absence of online data, the nervous system resorts to a second-order representation of Earth's gravity, predicting an acceleration mirroring Earth's gravitational acceleration (Smith, Battaglia, & Vul, 2013). Consider Figure 1a, where a ball with initial speed vz moves toward the observer and then falls off an edge from height h. Applying the principles of projectile motion, we derive the expected impact location of the projectile as follows:
\begin{eqnarray} {z_p} = {v_z}{t_E} = {v_z}\sqrt{2h/g}\quad \end{eqnarray}
(1)
where g represents the acceleration of gravity on Earth (9.81 m/s2), and zp denotes the predicted distance of the fall location from the edge. The predicted duration of the fall, tE, depends only on the height of the fall (h) and the gravitational acceleration (g) and is given by \( t_E = \sqrt{2h/g} \). If the perception of velocity in depth is unbiased, then zp matches the physical ground truth on Earth. This prediction is then combined with available sensory information as an independent source.
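For concreteness, Equation 1 can be computed directly. The following minimal Python sketch assumes the 7-cm fall height used in our displays; the function name is illustrative:

```python
import math

G_EARTH = 9.81  # gravitational acceleration on Earth (m/s^2)

def predicted_landing(v_z, h, g=G_EARTH):
    """Equation 1: z_p = v_z * sqrt(2h/g).

    v_z: speed in depth at launch (m/s); h: height of the fall (m).
    """
    t_e = math.sqrt(2 * h / g)  # predicted duration of the fall
    return v_z * t_e

# With a 7-cm fall height, a ball leaving the edge at 0.58 m/s is
# predicted to land roughly 69 mm from the edge.
print(predicted_landing(v_z=0.58, h=0.07))  # ~0.069 m
```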
Displays replicated Earth's gravitational acceleration but also introduced conditions involving accelerations of gravity greater than that of Earth, such as on Jupiter, and weaker accelerations, such as on Mercury. A falling projectile with the same initial velocity vz will land at different depth locations depending on the gravitational acceleration applied. For example, it will fall closer to the edge when pulled by the stronger gravitational field of Jupiter than under Earth's gravity. Conversely, it will fall farther under Mercury's gravity. To ensure identical parabolic trajectories ending at six fixed locations, we adjusted the initial speed vz for each gravity condition. Trials with simulated gravity matching Earth's resulted in cue-consistent stimuli, in which predictive and sensory sources converged. Conversely, when non-terrestrial accelerations of gravity contradicted this prediction, we classified them as cue-conflict stimuli. In these cases, sensory information diverged from the prediction based on Earth's gravity, and we predicted biases toward the projected location of the projectile, zp. Biases toward the prediction under Earth's gravity would lead to underestimations in the simulated Mercury condition and overestimations in the Jupiter condition (Figure 1c).
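The direction of these predicted biases follows directly from Equation 1. The sketch below, assuming the 7-cm fall height and one of the six landing depths used in our displays, shows that an Earth-based prediction undershoots the displayed landing point under Mercury's gravity and overshoots it under Jupiter's:

```python
import math

H = 0.07       # height of the fall (m)
G_EARTH = 9.81
z = 0.089      # one of the six displayed landing depths (m)

for planet, g in [("Mercury", 3.7), ("Earth", 9.81), ("Jupiter", 24.79)]:
    v_z = z / math.sqrt(2 * H / g)          # displayed speed that lands at z
    z_p = v_z * math.sqrt(2 * H / G_EARTH)  # Earth-based prediction (Equation 1)
    print(f"{planet:8s} displayed z = 89 mm, predicted z_p = {1000 * z_p:.0f} mm")
    # Mercury ~55 mm (underestimation), Earth 89 mm, Jupiter ~141 mm (overestimation)
```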
Experiment 1
The goal of Experiment 1 was to assess the impact of simulated gravity on depth perception. Participants observed parabolic motion trajectories (Figure 1a) and indicated the perceived landing location of an object among six options (Figure 1b). By adjusting the initial speed of the projectile, we ensured consistent trajectories across gravity conditions. We hypothesized that, if the visual system integrates sensory information with predictive information based on Earth's gravity, zp, then judgments in the Jupiter and Mercury conditions will be biased toward the predictive information (Figure 1c). Conversely, if reliance is solely on sensory cues, no differences in judged landing locations would be expected across gravitational conditions.
Methods
Participants
Eleven participants were recruited for Experiment 1. All participants had normal or corrected-to-normal vision and were paid $12 per hour as compensation. Written informed consent was obtained from all participants in accordance with the tenets of the Declaration of Helsinki and following protocol approved by the Brown University Institutional Review Board. 
Apparatus
With their head position fixed by a chin rest, participants viewed stereoscopic renderings of 3D objects by looking into a half-silvered mirror arranged at a 45° angle to a 19-inch cathode-ray tube (CRT) monitor directly to the left of the mirror. The mirror reflected the image displayed on the monitor so that the rendered objects appeared to be floating in space beyond the mirror (Figure 2). The distance from the eyes to the reflected image of the monitor screen was 400 mm. Stereoscopic presentation of the simulated 3D scene was achieved using a frame-interlacing technique and a pair of 3D Vision 2 wireless glasses (NVIDIA, Santa Clara, CA). 
Figure 1.
 
Stimuli and predictions. (a) Graphical description of landing positions in depth for three different gravitational conditions (3.7 m/s2, red; 9.81 m/s2, green; 24.79 m/s2, blue) and the same displayed trajectory (gray). Despite the variation in gravity, identical parabolic trajectories were displayed, with the velocity in depth adjusted accordingly. The gray trajectory, representing one of the six possible displayed paths shown in (b), was kept the same across gravity conditions by adjusting the initial speed vz with the magnitude of the gravitational acceleration (red, green, and blue arrows). The height of the fall is labeled h. (b) Display of the six locations in depth used in Experiments 1, 2, and 3. Depth was measured from the farthest edge of the floor; therefore, the smallest displayed depth was farthest away from the observer. (c) Predictions of the location in depth based on Earth's gravity and the displayed velocity in depth, vz. The dashed lines depict the trajectories predicted if Earth's gravity is assumed (green). Lower velocities in depth were used in the Mercury condition (red); therefore, under the assumption of Earth's gravity, the final location of the projectile would be closer to the edge of the table. Conversely, faster velocities in depth were used for the Jupiter condition (blue); therefore, under the assumption of Earth's gravity, the predicted location in depth would be farther away from the edge of the tabletop.
Stimuli
Participants viewed a 3D stereoscopic rendering of a projectile motion event off a virtual tabletop. The virtual scene consisted of a flat rectangular horizontal surface (20 cm × 40 cm). The nearest edge of the virtual tabletop was connected to a vertical rectangular surface, perpendicular to the horizontal plane. This vertical rectangle (7 cm × 20 cm) connected to another rectangular horizontal surface (17 cm × 20 cm), parallel to the first, which was displayed at the lower edge of the vertical surface. Altogether, the three surfaces formed a “step-shaped” virtual tabletop and floor. The target projectile was rendered as a sphere with a 1.6-cm diameter (see Figure 2). The tabletop was presented 7 cm below eye level. 
An overhead directional light source provided diffuse shading cues to all 3D shapes in the scene. Additionally, the apparent sizes of the target projectiles decreased linearly with distance from the participant, in accordance with accurate perspective projection. Initially, the projectile appeared to rest on the higher horizontal surface, 2.5 cm away from its farthest edge in depth and horizontally centered (77.5 cm away from the participant). At the onset of each trial, the projectile moved toward the participant in depth at a constant velocity until reaching the edge of the surface and falling off. The virtual object accelerated downward in accordance with one of three accelerations of gravity: Earth, 9.81 m/s2; Jupiter, 24.79 m/s2; or Mercury, 3.7 m/s2. The object maintained its constant velocity in depth, producing a physically consistent parabolic trajectory downward. Each velocity in depth was paired with a gravitational acceleration such that six identical parabolic trajectories were displayed. In other words, slower velocities in depth were paired with the lower gravity condition, and faster velocities were paired with the higher gravity conditions (see Table 1). The final locations of the projectile were measured relative to the farthest edge of the vertical surface of the step-shaped virtual object (i.e., the horizontal distance covered during the fall) and were 34.25 mm, 44.5 mm, 55 mm, 72 mm, 89 mm, or 116.5 mm. Therefore, the final locations of the projectile were 71.57 cm, 70.55 cm, 69.5 cm, 67.8 cm, 66.1 cm, or 63.35 cm away from the participant in depth, resulting in six spatially identical trajectories for each of the three gravity conditions that varied only in duration (see Figure 1b and Table 1). Although the trajectories were spatially identical, they differed temporally: projectiles falling under the gravity of Mercury took longer to fall than those under higher gravities such as those of Earth and Jupiter. The displayed time of fall was 195 ms for Mercury, 120 ms for Earth, and 75 ms for Jupiter. After landing, the projectile remained visible for one frame, or roughly 11 ms, before vanishing. Following a 1.5-second delay, a probe consisting of a small, horizontally centered black circle (0.4-cm diameter) appeared on the lower surface. This delay was introduced to mitigate apparent motion between the projectile ball and the probe. The probe was presented flush with the surface, appearing as a black dot on the same plane as the virtual floor.
Table 1.
 
Projectile velocities at different depths and gravitational accelerations.
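Because every trajectory had to end at one of the same six depths regardless of the simulated gravity, the velocities in Table 1 follow from the fall time of each condition. The sketch below reconstructs them under the 7-cm fall height; small rounding differences from the published values are expected:

```python
import math

H = 0.07  # height of the fall (m)
G = {"Mercury": 3.7, "Earth": 9.81, "Jupiter": 24.79}  # m/s^2
DEPTHS_MM = [34.25, 44.5, 55.0, 72.0, 89.0, 116.5]     # landing depths

for planet, g in G.items():
    t_fall = math.sqrt(2 * H / g)  # ~195, ~120, ~75 ms, respectively
    speeds = [z / 1000 / t_fall for z in DEPTHS_MM]  # v_z = z / t_fall
    print(f"{planet:8s} t_fall = {1000 * t_fall:.0f} ms  "
          + "  ".join(f"{v:.2f}" for v in speeds))
```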
Figure 2.
 
Experimental setup. The stereoscopic image displayed on the monitor was reflected by a mirror, creating a 3D scene extending beyond the surface of the mirror. The stimulus comprised a step-shaped virtual structure consisting of a tabletop and floor connected by a vertical wall. The tabletop was situated farther from the observer in depth and 7 cm below eye level. Additionally, a virtual floor was positioned closer to the observer in depth. A projectile traversed toward the observer and descended to the virtual floor under one of three simulated gravitational conditions.
Procedure
After viewing the projectile motion event, participants used key presses to adjust the location of the probe such that it matched the perceived final location of the launched projectile. Participants could move the probe closer or farther in depth along the z-axis over the full length of the floor surface (17 cm). The initial location of the probe was randomly selected from this range. The factorial combination of three gravitational accelerations and six final positions in depth yielded 18 distinct trial types. The stimuli were blocked by gravity condition, and each trial type was repeated 10 times, resulting in three 60-trial blocks for a total of 180 trials.
Results
As depicted in Figure 3, participants' depth judgments were overall closer to the observer (farther from the edge) than the displayed landing positions. This overestimation was exacerbated in the higher gravity conditions, with the Jupiter condition overestimated more than the Earth or Mercury conditions. A two-way analysis of variance (ANOVA) with the factors of gravity and final position in depth revealed significant main effects of gravity, F(2, 20) = 18.26, p < 0.001, and final position in depth, F(5, 50) = 194.58, p < 0.001, as well as a significant interaction of gravity and final position in depth, F(10, 100) = 5.65, p < 0.001.
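A minimal sketch of this two-way repeated-measures analysis using statsmodels is shown below; the data frame is synthetic and its column names are assumptions, standing in for the actual judgments:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one row per subject x gravity x depth cell,
# with a gravity-dependent offset mimicking the reported pattern.
rng = np.random.default_rng(1)
gravities = {"Mercury": 3.7, "Earth": 9.81, "Jupiter": 24.79}
depths = [34.25, 44.5, 55.0, 72.0, 89.0, 116.5]  # mm

rows = [{"subject": s, "gravity": name, "depth": d,
         "judged_depth": d + 10 * np.log(g) + rng.normal(0, 5)}
        for s in range(1, 12) for name, g in gravities.items() for d in depths]
df = pd.DataFrame(rows)

aov = AnovaRM(df, depvar="judged_depth", subject="subject",
              within=["gravity", "depth"]).fit()
print(aov)  # F and p values for gravity, depth, and their interaction
```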
Figure 3.
 
Results from Experiment 1. The perceived location in depth is represented on the vertical axis in terms of the location of the fall, meaning that 0 mm was the location where the ball launched off the tabletop surface. The displayed position is represented on the horizontal axis. Results are color coded according to the gravity condition. The unity line represents the displayed information. Each data point represents the average response, and error bars are SEM. Perception was inaccurate relative to the displayed information, as shown by the data points not lying along the unity line. However, there is a clear differentiation by gravity condition.
Discussion
When heuristics, such as using the upward motion of a projectile to predict its downward motion, are unavailable (Siegler, Bardy, & Warren, 2010), the results show systematic biases in perceived projectile location based on simulated gravity (Figure 3). If observers relied solely on sensory cues from the displayed trajectories, no differences in depth judgments between gravity conditions would be expected. However, the biases align with the hypothesis that the visual system combines sensory information with predictions consistent with Earth's gravity. In the Mercury condition, falls were underestimated relative to Earth, whereas the Jupiter condition led to overestimation. Moreover, we noted a large overestimation of the judged location in depth even when Earth's gravity was simulated, and the magnitude of this overestimation increased the farther the projectile was from the observer in depth. A possible explanation is that the sensory information present in our display was not perceived correctly, a possibility explored in Experiments 3 and 4. Although we demonstrated a significant effect of our gravity manipulation, in our next experiment we aimed to confirm that the observed differences among the gravitational conditions are due to the use of predictive information about Earth's gravity and not to the different temporal aspects of the otherwise identical parabolic trajectories.
Experiment 2
Experiment 1 results demonstrate that perception of spatially identical trajectories is systematically biased. However, as faster velocities in depth were used to simulate trajectories on Jupiter, relative to Earth and Mercury, this bias may be due to representational momentum, wherein faster projectiles are perceived as falling closer to the observer (Finke, Freyd, & Shyi, 1986; Freyd, 1983). Previous research has demonstrated that gravity can bias the remembered location of an object displayed on an incline; therefore, gravity plays a role in representational momentum (Bertamini, 1993; Hubbard, 1995). However, we argue that the effect witnessed was a result of a physical representation being combined with sensory information and not a modulation of memory that resulted in biased judgments. 
To test this alternative interpretation of the results, we displayed parabolic motion events that could be affected by representational momentum but not by a predictive mechanism that embeds Earth's gravity constant. To eliminate the gravity prediction, we rotated the display from Experiment 1 by 180° about the z-axis (Figure 4a) and showed only the non-Earth conditions used in Experiment 1. Projectiles “fell upward” and bore no relation to the laws of projectile motion as experienced on the surface of the Earth, thus isolating the effect of representational momentum on depth judgments. If the effect witnessed in Experiment 1 were simply the result of representational momentum, we should expect a similar pattern of results in the current experiment and a significant effect of our gravity manipulation.
Figure 4.
 
Experiment 2 task design and results. (a) Projectile motion display and experimental apparatus in Experiment 2. The stereoscopic image on the monitor was reflected so that the 3D scene appeared beyond the mirror. (b) Experiment 2 results. Perceived location in depth is on the y-axis, and the true displayed location is on the x-axis. Each curve represents the average of subjects’ responses for the 3.7 m/s2 (red) and 24.79 m/s2 (blue) upward acceleration conditions. The dashed diagonal line represents veridicality, and error bars represent ±1 SEM.
Methods
Participants
Thirteen new participants were recruited for Experiment 2.
Stimuli
The display in Experiment 2 was identical to that of Experiment 1, except that all aspects of the stimuli were presented with a 180° rotation about the z-axis, meaning that the object was first presented on a lower level and moved upward to one of the six landing positions. The sphere object was identical to the one used in Experiment 1. The whole display was shifted upward by 7.5 cm so that the full trajectory was visible to the observer without the closest, and highest, surface occluding the projectile. The other difference from Experiment 1 is that we replicated only the Mercury and Jupiter conditions, as the Earth condition was not necessary to test the current hypothesis. Therefore, the virtual object accelerated upward at either 24.79 m/s2 (Jupiter) or 3.7 m/s2 (Mercury), maintaining its constant velocity in depth and producing a parabolic trajectory upward. The same velocities in depth were paired with each gravitational acceleration such that the six parabolic trajectories were the same as in Experiment 1.
Results
As in Experiment 1, a two-way ANOVA with the factors of gravity and final position in depth was performed; however, there was no significant main effect of gravity, F(1, 12) = 0.35, p = 0.567. The main effect of final position in depth, F(5, 60) = 55.23, p < 0.001, was highly significant, but the interaction of gravity and final position in depth did not reach significance, F(5, 60) = 1.37, p = 0.247. These results are shown in Figure 4b. 
Discussion
Experiment 2, like Experiment 1, provided various cues for inferring object depth. Despite similar sensory input across both experiments, simulated gravity had no statistically significant effect on participants’ depth judgments. Because the display was unlike any projectile motion event that an observer would typically see on Earth, it is likely that a predictive mechanism utilizing Earth's gravity was not employed. Furthermore, if representational momentum were responsible for our results in Experiment 1, we would expect depth judgments to be overestimated in the Jupiter condition, which involved faster motions in depth relative to the Mercury condition for the same positions in depth. Although depth overestimation persisted, judgments did not differ based on the initial velocity in depth of the projectile. Therefore, even though previous studies have suggested that representational momentum can be affected by a gravity bias, neither gravity nor representational momentum played a part in observers’ judgments in this case.
As in Experiment 1, relative depth judgments were largely overestimated, especially when the projectile was farther away from the observer in depth. This prompted Experiments 3 and 4 to investigate whether this bias stemmed from an overestimation of perceived location in depth and/or projectile speed. Interestingly, Experiment 2 showed greater depth overestimation than Experiment 1, hinting at a potential perception difference between upward and downward motion influenced by Earth's gravity (Moscatelli, La Scaleia, Zago, & Lacquaniti, 2019). 
Experiment 3
The findings of Experiment 1 indicated an influence of gravity on depth perception, yet accuracy was compromised even in the cue-consistent condition. To address this, our next experiment explored depth perception devoid of projectile motion cues, limiting the display to the landing positions from Experiment 1, with the aim of discerning whether the observed overestimations stemmed from inaccurate depth perception.
Methods
In Experiment 3, we presented the projectile object in the same final landing positions as in Experiment 1 but without any motion. This was done to investigate the participants’ ability to perceive depth without the influence of motion perception. By presenting the object statically, we aimed to isolate the role of depth perception in the participants’ ability to perceive the location of a projectile, without being influenced by the motion of the object or any predictive information. 
Participants
Fourteen new participants were recruited for Experiment 3.
Stimuli
Participants viewed the same 3D virtual step-shaped tabletop as in our previous experiments. The sphere object was also identical. Rather than a projectile motion event, subjects were simply shown the sphere in each of the six final locations displayed in Experiment 1: 56.57 cm, 55.55 cm, 54.52 cm, 52.81 cm, 51.09 cm, or 48.36 cm away from the participant in depth (see Figure 1b). The object was displayed on the floor of the step-shaped tabletop for either 40 ms or 80 ms. 
Procedure
After brief exposure to the object's location in depth, participants used key presses to adjust the location of the probe such that it matched the perceived location of the object. Participants could move the probe closer or farther in depth along the z-axis along the full length of the floor surface (17 cm). Again, the initial location of the probe was randomly selected from this range. The factorial combination of six positions in depth and two display durations yielded 12 distinct trial types. Each condition was repeated 10 times, for a total of 120 trials per subject. 
Results
When we removed the influence of projectile motion, participants’ depth judgments were mostly accurate. In fact, when we compared the bias (the difference between the judged and displayed relative depth) of these results to the bias in Experiment 1, we found a significant difference, t(8.31) = 5.35, p < 0.001, Cohen's d = 3.08. A two-way ANOVA with the factors of position in depth and display duration revealed a significant main effect of position in depth, F(5, 65) = 851.77, p < 0.001. The main effect of display duration did not reach significance, F(1, 13) = 0.47, p = 0.50, nor did the interaction of position in depth and display duration, F(5, 65) = 0.15, p = 0.98. For the standard deviations of the judgments, the effect of display duration was not significant, F(1, 13) = 0.37, p = 0.55, but the main effect of position in depth was, F(5, 65) = 7.16, p < 0.001.
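The fractional degrees of freedom above indicate an unequal-variance (Welch) comparison; a sketch with synthetic placeholder biases, not the actual data, is given below:

```python
import numpy as np
from scipy import stats

# Synthetic placeholder biases (judged minus displayed depth, in mm).
rng = np.random.default_rng(0)
bias_exp1 = rng.normal(20.0, 8.0, size=11)  # Experiment 1, n = 11
bias_exp3 = rng.normal(0.5, 2.0, size=14)   # Experiment 3, n = 14

# Welch's t-test does not assume equal variances across the two groups.
t, p = stats.ttest_ind(bias_exp1, bias_exp3, equal_var=False)

# Cohen's d computed with the pooled standard deviation.
pooled_sd = np.sqrt((bias_exp1.var(ddof=1) + bias_exp3.var(ddof=1)) / 2)
d = (bias_exp1.mean() - bias_exp3.mean()) / pooled_sd
print(t, p, d)
```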
Discussion
Experiment 3 aimed to isolate depth perception from the influence of parabolic motion. The task was similar to that of Experiment 1, except that the display involved no parabolic motion and the object was shown on the virtual floor for a slightly longer time. Surprisingly, without motion cues, the subjects’ judgments were accurate (see Figure 5). Contrary to expectations from optimal observer models, which suggest increased precision with multiple signals, our results showed greater precision in the single-cue condition of Experiment 3 than in the multiple-cue conditions of Experiments 1 and 2 (Landy, Maloney, Johnston, & Young, 1995). Furthermore, the relative increase in accuracy may be partly attributed to the extended observation time in Experiment 3, which allowed for better depth encoding. As there was virtually no bias in the participants’ judgments of the object's location in depth, the large overestimations of relative depth seen in Experiment 1 may be due to an overestimation in the predictive cue, a possibility explored in our next experiment.
Figure 5.
 
Experiment 3 task results. Perceived location in depth is on the y-axis, and the true displayed location is on the x-axis. Each curve represents the average of subjects’ responses for the 40-ms condition (red) and the 80-ms condition (blue). The dashed diagonal line represents veridicality, and error bars represent ±1 SEM.
Experiment 4
In Experiments 1 and 2, participants overestimated the impact location of the projectile in depth across all gravity conditions, despite accurate performance in Experiment 3, where they briefly observed the projectile at the same falling locations. This suggests that the systematic biases stem from motion in depth. We hypothesize that these biases arise from combining an unbiased estimate of the final location of the projectile with the location predicted from the perceived speed of the projectile, \(\hat{v}_z\). If the perceived speed of the projectile is overestimated (\(\hat{v}_z > v_z\)), then the predicted final location, \(z_p = \hat{v}_z\, t\), will be overestimated as well.
This experiment had two objectives: first, to determine whether participants generally overestimate the perceived speed of the projectile, and, second, to assess whether this overestimation varies with simulated speed. This second aim complements the aims of Experiment 2. If speed overestimation increases linearly with displayed speed, this might suggest that the effect found in Experiment 1 was due to representational momentum rather than anticipated gravity. To test these hypotheses, participants were tasked with comparing the range of velocities in depth used in Experiment 1 with the speed of a horizontally moving object. 
Methods
Participants
Eleven new participants were recruited for Experiment 4.
Stimuli
The stimuli for Experiment 4 consisted of two displays: a velocity in depth sequence (Figure 6b) and a horizontal velocity sequence (Figure 6a). The virtual tabletop and floor were the same as those used in Experiment 1. The velocity in depth sequence was similar to the display used in Experiment 1, except that, after the red sphere moved at a constant velocity in depth, it disappeared just before reaching the closest edge of the surface, appearing 77.5 cm away from the participant and disappearing when it was 60 cm away (moving a total of 17.5 cm). We selected five velocities in depth within the range of 0.17 to 1.58 m/s used in Experiment 1: 0.17, 0.58, 0.88, 1.23, and 1.58 m/s. 
Figure 6.
 
Experiment 4 task design and results. (a) The probe sequence displayed horizontal motion along the x-axis. (b) The standard stimulus displayed motion along the depth (z) dimension. (c) Results of Experiment 4. Perceived velocity in x is shown as a function of displayed velocity in z (black solid curve). The dashed diagonal line represents veridicality, and error bars represent ±1 SEM. The red dashed line represents a quadratic model fit of the subjects’ data.
A separately presented sequence involving a ball moving horizontally across the tabletop was used to probe the displayed velocity in depth. The probe ball appeared to rest on the tabletop; it was presented 2 cm away from the closest edge in depth (62 cm away from the participant) and moved from right to left before disappearing just before reaching the leftmost edge of the tabletop. The initial horizontal position of the probe ball was chosen randomly between 3.4 and 8.5 cm away from the center of the screen, always initially appearing on the right side of the display. 
Procedure
On each trial, subjects were shown the velocity in depth sequence and the horizontal probe. The two stimuli were never visible simultaneously; rather, the two sequences were displayed consecutively, in random order, to avoid any order effect on velocity judgments. Subjects were tasked with judging the speed of the horizontal probe relative to velocity in depth. The movement speed of the probe ball was determined according to an adaptive psychometric procedure to establish the velocity that appeared identical to the velocity in depth display. Participants indicated whether the horizontal probe display or the target display was faster by pressing one of two keys. This response was used to adjust the speed of the probe according to a one-up, one-down staircase procedure that terminated after 12 inversions. 
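A minimal sketch of such a one-up, one-down staircase terminating after 12 inversions follows; the step size and function names are assumptions rather than the exact values used:

```python
def staircase(respond, start_speed, step=0.05, max_inversions=12):
    """One-up, one-down staircase over the probe speed.

    respond(speed) -> True if the probe looked faster than the standard.
    Returns the speeds at each inversion; their mean is a common estimate
    of the point of subjective equality (PSE).
    """
    speed, last_dir, inversions = start_speed, None, []
    while len(inversions) < max_inversions:
        direction = -1 if respond(speed) else +1  # probe faster -> slow it down
        if last_dir is not None and direction != last_dir:
            inversions.append(speed)
        last_dir = direction
        speed = max(0.0, speed + direction * step)
    return inversions

# Simulated observer who perceives a 0.88 m/s standard as 1.2x faster:
pse = staircase(lambda s: s > 0.88 * 1.2, start_speed=0.5)
print(sum(pse) / len(pse))  # converges near the simulated PSE of ~1.06 m/s
```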
Results
Participants were tasked with matching the speed of the horizontal probe to that of the display in depth. Participants consistently needed to judge the horizontal probe as slower than the velocity in depth for the staircase procedure to make the two displays appear identical in speed. Although judgments of velocity in depth were overestimated relative to the horizontal probe, the rate of overestimation was nonlinear (see Figure 6c). For each participant, we fit a quadratic model based on the displayed velocity in depth, \(y = C + a v_z + b v_z^2\), to find the relative rate of each participant's overestimation (red dashed line in Figure 6c).
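The per-participant quadratic fit can be expressed as an ordinary polynomial regression; in this sketch the matched speeds are hypothetical placeholders rather than observed data:

```python
import numpy as np

v_z = np.array([0.17, 0.58, 0.88, 1.23, 1.58])      # displayed depth speeds (m/s)
matched = np.array([0.26, 0.78, 1.10, 1.43, 1.72])  # hypothetical matched speeds

# Quadratic model y = C + a*v_z + b*v_z^2; np.polyfit returns [b, a, C].
b, a, C = np.polyfit(v_z, matched, deg=2)

# Ratio of perceived to displayed speed; with these placeholder values it
# decreases with displayed speed, the nonlinearity described in the text.
print(C, a, b, matched / v_z)
```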
Discussion
Figure 6c shows that the perceived velocity in depth (black line) exceeded the horizontal velocity (dashed gray line), in contrast to previous findings (Welchman, Lam, & Bülthoff, 2008) suggesting that horizontal motion appears faster than motion in depth at equal speeds. Our study, however, differs in two key aspects: it included a looming cue, and it presented the object with contextual information at a depth within Panum's fusional area, likely increasing the depth information available from binocular disparity.
Additionally, the varying biases observed in Experiment 1 are unlikely to be due to differences in speed overestimation across gravity conditions, even though motion speed in depth had to be adjusted across conditions. If the speeds of faster moving objects were overestimated more than those of slower ones, this could result in larger predicted final locations in the Jupiter condition than in the Mercury condition.
\begin{eqnarray} {z_p} = \frac{\hat{v}_z}{v_z}\,{v_z}\,{t_{Fall}} = \frac{\hat{v}_z}{v_z}\,z\quad \end{eqnarray}
(2)
 
Equation 2, derived from projectile motion principles, underscores that the depth prediction depends on the perceived speed relative to the simulated speed, \(\hat{v}_z/v_z\), and the time of fall. The ratio of perceived to displayed speed, \(\hat{v}_z/v_z\), decreases with simulated speed, contradicting the pattern expected if the biases stemmed solely from a linear speed overestimation (Figure 7a).
\begin{eqnarray} {z_p} = \frac{\hat{v}_z}{v_z}\frac{t_E}{t_{Fall}}\,{v_z}\,{t_{Fall}} = \frac{\hat{v}_z}{v_z}\frac{t_E}{t_{Fall}}\,z\quad \end{eqnarray}
(3)
 
Figure 7.
 
Model predictions, zp, for Experiment 1 results. Figures are plotted identically to Figure 3. (a) Predicted location in depth of the projectile based on perceived overestimation of velocity in depth: \(\frac{\hat{v}_z}{v_z}z\). As \(\hat{v}_z/v_z\) decreases with simulated velocity in depth, we note that the most overestimated condition is that of Mercury, which contradicts our Experiment 1 results. (b) Predicted locations based on an internalization of Earth's gravity and biased velocity in depth. Here, we find a striking resemblance to our Experiment 1 results.
Equation 3 makes explicit the dependence of the prediction on gravity, introducing the ratio of the fall time predicted under Earth's gravity to the displayed fall time, \(t_E/t_{Fall}\). Figure 7b illustrates the predicted locations based on Earth's gravity, reflecting the variation in fall times across gravity conditions.
We propose that this predictive cue to depth is combined with available sensory information (see Figure 8). We are agnostic about the specific nature of the cue integration process and propose that the simplest way of combining sensory and predictive information is through a weighted sum, where the weights sum to 1. We postulate that the visual system performs a rough estimate of the velocity in depth to determine the final position of the projectile. However, this estimate varies depending on the trajectory. When the trajectory confines the position of the object close to the edge, the velocity in depth is estimated based on the strong signal provided by the relative disparity between the projectile and the edge. When the projectile falls far from the edge, the velocity estimate relies solely on changes in ocular vergence and looming, as the binocular information is outside of the fusion range. Consequently, the weight of predictive information decreases as the falling location of the ball gets closer to the observer. The weight of sensory information (ws) thus varies as a function of depth, z, according to the following equation:  
\begin{eqnarray} {w_s} = w_{s(\mathrm{min})} + \frac{w_{s(\mathrm{max})} - w_{s(\mathrm{min})}}{z_{\mathrm{max}} - z_{\mathrm{min}}}\left( z - z_{\mathrm{min}} \right)\quad \end{eqnarray}
(4)
where zmax and zmin represent the maximum and minimum possible depths in our display (116.50 and 34.25 mm, respectively). The minimum weight of the sensory information, ws(min), and the maximum weight of the sensory information, ws(max), are the only free parameters in our model. When the displayed depth, z, is identical to the minimum possible depth, zmin, the weight of the sensory information is at its lowest, and it is highest when the displayed depth equals zmax. Importantly, the boundary conditions set by zmax and zmin ensure that ws is uniquely determined by these two parameters.
Figure 8.
 
Experiment 1 model fit. Figures are plotted with the same axis values as in Figure 3. Model predictions are represented by the curved lines, with the relevant empirical data from Experiment 1 replotted from Figure 3. Error bands indicate ±1 SEM.
As the weight of the sensory information and predictive information sum to 1, the weight of the predictive information is dependent on the weight of the sensory information:  
\begin{eqnarray} {w_p} = 1 - {w_s}\quad \end{eqnarray}
(5)
and the final percept is a linear weighted average of these two sources of information:  
\begin{eqnarray} \hat{z} = {z_p}{w_p} + {z_s}{w_s}\quad \end{eqnarray}
(6)
 
The model was fit for each subject across all conditions of gravity and velocity in depth, resulting in consistent parameter values for each individual. Importantly, whereas the model for \(\hat{z}\) depends on ws, both ws(min) and ws(max) are free parameters that allow the model to capture nonlinear changes in weighting across different depths. This depth-dependent weighting is crucial, as the relative reliability of sensory and predictive cues naturally varies with depth. A simpler model with a single parameter, which we tested, performed poorly in fitting the data because it could not account for these nonlinear effects (Akaike information criterion [AIC] = 126.39), whereas our model reproduced the key results of the present experiments (AIC = 114.03). Overall, as the predictive cue zp involves an overestimation of the velocity in depth and therefore an overestimation of the final position in depth, we found that judgments in depth, \(\hat{z}\), are largely overestimated, despite the sensory information, zs, being unbiased. When the simulated gravity was inconsistent with Earth's gravity, we found an overestimation in depth relative to the Earth condition when the simulated gravity was greater than Earth's and an underestimation when it was less than Earth's. Our cue-combination model with two free parameters (mean ws(min) = 0.65; mean ws(max) = 0.91) produced a robust fit of the data, with a root mean square error (RMSE) of 4.85 mm and a standard error of the mean (SEM) of 0.32 mm.
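A sketch of the full two-parameter model (Equations 3 through 6) and its fit is given below. The speed-gain function standing in for a participant's Experiment 4 quadratic fit is an assumption, and the judgments are synthetic, generated from the model itself to demonstrate parameter recovery:

```python
import numpy as np
from scipy.optimize import least_squares

H, G_EARTH = 0.07, 9.81                      # fall height (m), Earth gravity
Z = np.array([34.25, 44.5, 55.0, 72.0, 89.0, 116.5]) / 1000  # depths (m)
Z_MIN, Z_MAX = Z.min(), Z.max()
T_EARTH = np.sqrt(2 * H / G_EARTH)           # predicted fall time on Earth

def speed_gain(v_z):
    # Assumed stand-in for the Experiment 4 fit: overestimation of the
    # speed in depth that shrinks as the displayed speed increases.
    return 1.6 - 0.35 * v_z

def model(params, z, g):
    ws_min, ws_max = params
    t_fall = np.sqrt(2 * H / g)              # displayed fall time
    v_z = z / t_fall                         # displayed speed in depth
    z_p = speed_gain(v_z) * (T_EARTH / t_fall) * z                    # Equation 3
    w_s = ws_min + (ws_max - ws_min) * (z - Z_MIN) / (Z_MAX - Z_MIN)  # Equation 4
    return (1 - w_s) * z_p + w_s * z         # Equations 5 and 6, unbiased z_s

def residuals(params, data):
    return np.concatenate([model(params, Z, g) - judged for g, judged in data])

# Synthetic "judgments" generated from the model, for demonstration only.
data = [(g, model([0.65, 0.91], Z, g)) for g in (3.7, 9.81, 24.79)]

fit = least_squares(residuals, x0=[0.8, 0.8],
                    bounds=([0.0, 0.0], [1.0, 1.0]), args=(data,))
print(fit.x)  # recovers ws(min) ~ 0.65 and ws(max) ~ 0.91
```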
General discussion
We proposed that the visual system can use Earth's gravity as a predictive cue (Battaglia et al., 2013; Deeb et al., 2021; Hamrick et al., 2011) for estimating the location of launched objects. Previous studies have suggested heuristic strategies to explain observer accuracy in projectile events without invoking internalized gravity. To test our hypothesis, we removed the ability to exploit these online strategies and assessed whether the visual system could accurately predict the final location of a projectile based solely on its speed, fall height, and Earth's gravity. 
Experiment 1 revealed inaccurate perception without additional cues such as upward motion during launch or prolonged visual exposure; however, we found a significant effect of our gravity manipulation, suggesting a representation of Earth's gravity. In Experiment 2, we rotated the stimuli by 180° about the z-axis but observed no effect of the gravity manipulation or of representational momentum, thus ruling out representational momentum as a contributing factor. We hypothesized that overestimations in depth stem from perceiving the projectile's speed as faster than displayed, leading to overestimated final locations. Experiment 4 results supported this, and Experiment 3 ruled out misperception in static depth estimates. Together, these findings suggest that biases in velocity in depth drove the overestimations in Experiment 1 and that predictive cues were responsible for the significant effect of gravity on depth judgments. Predicted depth locations based on Earth's gravity may explain why non-terrestrial gravity conditions led to overestimations (Jupiter) or underestimations (Mercury) compared to simulated Earth gravity.
A potential alternative to our theory is the post-landing explanation, which suggests that biases in depth perception could arise from the observer's continued mental trajectory of the object after it lands. Examples from studies on the flash lag effect demonstrate large discrepancies in spatial estimates, such that a moving object is perceived much farther in the direction of motion than a flashed object seen in the same location (Hubbard, 2014; MacKay, 1958; Nijhawan, 1994). Although this aligns with our findings, where motion cues significantly influence depth perception, it is important to note that the actual displayed time of fall for the projectiles was exceedingly brief, as was the time the ball was visible after it landed. Given these extremely short durations, tracking the trajectory of the projectile in real time is exceptionally difficult for observers. 
It is also essential to consider the parabolic nature of the projectile's motion. Any forward movement in depth caused by an effect similar to the flash lag effect would also require downward movement. Given the presence of the floor, this would imply that the object either moved through the floor or rolled forward after falling, scenarios for which we have no evidence. Therefore, we prefer an explanation based on the integration of sensory and predictive information, as described above. This model posits that the visual system combines sensory input and predictive information, resulting in the observed biases in depth perception.
Understanding projectile motion judgments helps to unveil how the visual system models physics. Earth's gravity constant, given its ubiquitous presence and relevance in intercepting or avoiding launched projectiles, likely plays a crucial role in any intuitive physics system. Although our findings support the use of internalized gravity by the visual system, they do not discount heuristic explanations. Instead, we propose that, in the absence of real-time information for scene perception, the visual system relies on an internalized model of Earth's gravity that, although distinct from classical mechanics, is optimized for practical interaction with our environment. 
Acknowledgments
We thank the reviewers and the editor, Michael Landy, PhD, for their insightful comments and suggestions, which significantly improved the quality of this manuscript. We are also grateful to Ailin Deng and Jason Fischer, PhD, for their useful advice. We also acknowledge the financial support provided by the National Science Foundation (Grant No. 2120610) and the National Institutes of Health (Grant No. 1R21EY033182-01A1). 
Commercial relationships: none. 
Corresponding author: Abdul-Rahim Deeb. 
Address: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA.
References
Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, USA, 110(45), 18327–18332, https://doi.org/10.1073/pnas.1306572110.
Baurès, R., Benguigui, N., Amorim, M. A., & Siegler, I. A. (2007). Intercepting free falling objects: Better use Occam's razor than internalize Newton's law. Vision Research, 47(23), 2982–2991, https://doi.org/10.1016/j.visres.2007.07.024. [PubMed]
Belousov, B., Neumann, G., Rothkopf, C. A., & Peters, J. R. (2016). Catching heuristics are optimal control policies. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., & Garnett, R. (Eds.), Proceedings of the 30th International Conference on Neural Information Processing Systems (pp. 1434–1442). Red Hook, NY: Curran Associates.
Bertamini, M. (1993). Memory for position and dynamic representations. Memory & Cognition, 21(4), 449–457, https://doi.org/10.3758/bf03197176. [PubMed]
Chapman, S. (1968). Catching a baseball. American Journal of Physics, 36, 868–870, https://doi.org/10.1119/1.1974297.
Deeb, A. R., Cesanek, E., & Domini, F. (2021). Newtonian predictions are integrated with sensory information in 3D motion perception. Psychological Science, 32(2), 280–291, https://doi.org/10.1177/0956797620966785. [PubMed]
Fink, P. W., Foo, P. S., & Warren, W. H. (2009). Catching fly balls in virtual reality: A critical test of the outfielder problem. Journal of Vision, 9(13):14, 1–8, https://doi.org/10.1167/9.13.14. [PubMed]
Finke, R. A., Freyd, J. J., & Shyi, G. C. W. (1986). Implied velocity and acceleration induce transformations of visual memory. Journal of Experimental Psychology: General, 115(2), 175–188, https://doi.org/10.1037/0096-3445.115.2.175. [PubMed]
Firestone, C., & Keil, F. C. (2016). Seeing the tipping point: Balance perception and visual shape. Journal of Experimental Psychology: General, 145(7), 872–881, https://doi.org/10.1037/xge0000151. [PubMed]
Firestone, C., & Scholl, B. J. (2014). “Please tap the shape, anywhere you like”: Shape skeletons in human vision revealed by an exceedingly simple measure. Psychological Science, 25(2), 377–386, https://doi.org/10.1177/0956797613507584. [PubMed]
Freyd, J. J. (1983). The mental representation of movement when static stimuli are viewed. Perception & Psychophysics, 33, 575–581, https://doi.org/10.3758/BF03202940. [PubMed]
Freyd, J. J., Pantzer, T. M., & Cheng, J. L. (1988). Representing statics as forces in equilibrium. Journal of Experimental Psychology: General, 117(4), 395–407, https://doi.org/10.1037/0096-3445.117.4.395. [PubMed]
Hamrick, J., Battaglia, P., & Tenenbaum, J. B. (2011). Internal physics models guide probabilistic judgments about object dynamics. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1545–1550). Wheat Ridge, CO: Cognitive Science Society.
Hubbard, T. L. (1995). Cognitive representation of motion: Evidence for friction and gravity analogues. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(1), 241–254, https://doi.org/10.1037/0278-7393.21.1.241. [PubMed]
Hubbard, T. L. (2014). The flash-lag effect and related mislocalizations: Findings, properties, and theories. Psychological Bulletin, 140(1), 308–338, https://doi.org/10.1037/a0032899. [PubMed]
Khomut, B., & Warren, W. (2010). Catching fly balls in VR: A test of the OAC, LOT and trajectory prediction strategies. Journal of Vision, 7, 146, https://doi.org/10.1167/7.9.146.
Kistemaker, D. A., Faber, H., & Beek, P. J. (2009). Catching fly balls: A simulation study of the Chapman strategy. Human Movement Science, 28(2), 236–249, https://doi.org/10.1016/j.humov.2008.11.001. [PubMed]
Krist, H., Fieberg, E. L., & Wilkening, F. (1993). Intuitive physics in action and judgment: The development of knowledge about projectile motion. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(4), 952–966, https://doi.org/10.1037/0278-7393.19.4.952.
Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35(3), 389–412, https://doi.org/10.1016/0042-6989(94)00176-M. [PubMed]
Lee, D. N. (2007). Visuo-motor coordination in space-time. In: Pepping, G.-J., & Grealy, M. A. (Eds.), Closing the gap: The scientific writings of David N. Lee (pp. 259–277). Lawrence Erlbaum Associates Publishers.
MacKay, D. M. (1958). Perceptual stability of a stroboscopically lit visual field containing self-luminous objects. Nature, 181(4607), 507–508, https://doi.org/10.1038/181507a0. [PubMed]
McIntyre, J., Zago, M., Berthoz, A., & Lacquaniti, F. (2001). Does the brain model Newton's laws? Nature Neuroscience, 4(7), 693–694, https://doi.org/10.1038/89477. [PubMed]
McLeod, P., Reed, N., & Dienes, Z. (2006). The generalized optic acceleration cancellation theory of catching. Journal of Experimental Psychology: Human Perception and Performance, 32(1), 139–148, https://doi.org/10.1037/0096-1523.32.1.139.
McLeod, P., Reed, N., Gilson, S., & Glennerster, A. (2008). How soccer players head the ball: A test of optic acceleration cancellation theory with virtual reality. Vision Research, 48(13), 1479–1487, https://doi.org/10.1016/j.visres.2008.03.016. [PubMed]
Monache, S. D., Lacquaniti, F., & Bosco, G. (2019). Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion. Journal of Vision, 19(4):13, 1–21, https://doi.org/10.1167/19.4.13.
Moscatelli, A., La Scaleia, B., Zago, M., & Lacquaniti, F. (2019). Motion direction, luminance contrast, and speed perception: An unexpected meeting. Journal of Vision, 19(6):16, 1–13, https://doi.org/10.1167/19.6.16.
Nijhawan, R. (1994). Motion extrapolation in catching. Nature, 370(6487), 256–257, https://doi.org/10.1038/370256b0. [PubMed]
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317–1323, https://doi.org/10.1126/science.3629243. [PubMed]
Shepard, R. N. (1994). Perceptual-cognitive universals as reflections of the world. Psychonomic Bulletin & Review, 1(1), 2–28, https://doi.org/10.3758/BF03200759.
Siegler, I. A., Bardy, B. G., & Warren, W. H. (2010). Passive vs. active control of rhythmic ball bouncing: The role of visual information. Journal of Experimental Psychology: Human Perception and Performance, 36(3), 729–750, https://doi.org/10.1037/a0016462. [PubMed]
Smith, K., Battaglia, P. W., & Vul, E. (2013). Consistent physics underlying ballistic motion prediction. In Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 3426–3431). Wheat Ridge, CO: Cognitive Science Society.
Welchman, A. E., Lam, J. M., & Bülthoff, H. H. (2008). Bayesian motion estimation accounts for a surprising bias in 3D vision. Proceedings of the National Academy of Sciences, USA, 105(33), 12087–12092, https://doi.org/10.1073/pnas.0804378105.
Ye, T., Qi, S., Kubricht, J., Zhu, Y., Lu, H., & Zhu, S. C. (2017). The Martian: Examining human physical judgments across virtual gravity fields. IEEE Transactions on Visualization and Computer Graphics, 23(4), 1399–1408, https://doi.org/10.1109/TVCG.2017.2657235. [PubMed]
Zhao, H., & Warren, W. H. (2015). On-line and model-based approaches to the visual control of action. Vision Research, 110(part B), 190–202, https://doi.org/10.1016/j.visres.2014.10.008. [PubMed]
Figure 1.
 
Stimuli and predictions. (a) Graphical description of landing positions in depth for three different gravitational conditions (3.7 m/s2, red; 9.81 m/s2, green; 24.79 m/s2, blue) and the same displayed trajectory (gray). Despite the variation in gravity, identical parabolic trajectories were displayed, with the velocity in depth adjusted accordingly. The gray trajectory representing one of the six possible displayed paths, shown in (b), was kept the same across gravity conditions by adjusting the initial speed vz vzwith the magnitude of the gravitational acceleration (red, green, and blue arrows). The height of the fall is labeled h. (b) Display of the six locations in depth used in Experiments 1, 2, and 3. Depth was measured from the farthest edge of the floor; therefore, the smallest displayed depth was farthest away from the observer. (c) Predictions of the location in depth based on Earth's gravity and the displayed velocity in depth, vz. The dashed lines depict the trajectories predicted if Earth's gravity is assumed (green). Lower velocities in depth were used in the Mercury condition (red); therefore, under the assumption of Earth's gravity, the final location of the projectile would be closer to the edge of the table. Conversely, faster velocities in depth were used for the Jupiter condition (blue); therefore, under the assumption of Earth's gravity, the predicted location in depth would be farther away from the edge of the tabletop.
Figure 1.
 
Stimuli and predictions. (a) Graphical description of landing positions in depth for three different gravitational conditions (3.7 m/s2, red; 9.81 m/s2, green; 24.79 m/s2, blue) and the same displayed trajectory (gray). Despite the variation in gravity, identical parabolic trajectories were displayed, with the velocity in depth adjusted accordingly. The gray trajectory representing one of the six possible displayed paths, shown in (b), was kept the same across gravity conditions by adjusting the initial speed vz vzwith the magnitude of the gravitational acceleration (red, green, and blue arrows). The height of the fall is labeled h. (b) Display of the six locations in depth used in Experiments 1, 2, and 3. Depth was measured from the farthest edge of the floor; therefore, the smallest displayed depth was farthest away from the observer. (c) Predictions of the location in depth based on Earth's gravity and the displayed velocity in depth, vz. The dashed lines depict the trajectories predicted if Earth's gravity is assumed (green). Lower velocities in depth were used in the Mercury condition (red); therefore, under the assumption of Earth's gravity, the final location of the projectile would be closer to the edge of the table. Conversely, faster velocities in depth were used for the Jupiter condition (blue); therefore, under the assumption of Earth's gravity, the predicted location in depth would be farther away from the edge of the tabletop.
Figure 2.
 
Experimental setup. The stereoscopic image displayed on the monitor was reflected onto a mirror, creating a 3D scene extending beyond the surface of the mirror. The stimulus was a step-shaped virtual structure consisting of a tabletop and a floor connected by a vertical wall. The tabletop was situated farther from the observer in depth and 7 cm below eye level; the virtual floor was positioned closer to the observer in depth. A projectile traveled toward the observer and descended to the virtual floor under one of three simulated gravitational conditions.
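For concreteness, the scene geometry can be captured in a small configuration object. Only the 7-cm offset of the tabletop below eye level is stated above; every other value in this sketch is a hypothetical placeholder.

    from dataclasses import dataclass

    @dataclass
    class SceneGeometry:
        # Step-shaped virtual structure: tabletop and floor joined by a wall.
        tabletop_below_eye_m: float = 0.07  # stated in the caption
        tabletop_depth_m: float = 0.60      # hypothetical placeholder
        floor_depth_m: float = 0.30         # hypothetical placeholder
        step_height_m: float = 0.20         # hypothetical placeholder (fall height h)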
Figure 3.
 
Results from Experiment 1. The perceived location in depth, plotted on the vertical axis, is expressed relative to the location of the fall: 0 mm corresponds to the point where the ball launched off the tabletop surface. The displayed position is plotted on the horizontal axis. Results are color coded according to the gravity condition. The unity line represents the displayed information. Each data point represents the average response, and error bars represent ±1 SEM. Perceived locations deviated systematically from the displayed ones, as shown by the data points not lying along the unity line; nevertheless, there is a clear differentiation by gravity condition.
Figure 4.
 
Experiment 2 task design and results. (a) Projectile motion display and experimental apparatus in Experiment 2. The stereoscopic image on the monitor was reflected so that the 3D scene appeared beyond the mirror. (b) Experiment 2 results. Perceived location in depth is on the y-axis, and the true displayed location is on the x-axis. Each curve represents the average of subjects' responses for the 3.7 m/s² (red) and 24.79 m/s² (blue) upward acceleration conditions. The dashed diagonal line represents veridicality, and error bars represent ±1 SEM.
Figure 5.
 
Experiment 3 results. Perceived location in depth is on the y-axis, and the true displayed location is on the x-axis. Each curve represents the average of subjects' responses for the 40-ms condition (red) and the 80-ms condition (blue). The dashed diagonal line represents veridicality, and error bars represent ±1 SEM.
Figure 6.
 
Experiment 4 task design and results. (a) The probe sequence displayed horizontal motion along the x-axis. (b) The standard stimulus displayed motion along the depth (z) dimension. (c) Results of Experiment 4. Perceived velocity in x is shown as a function of displayed velocity in z (black solid curve). The dashed diagonal line represents veridicality, and error bars represent ±1 SEM. The red dashed line represents a quadratic model fit of the subjects’ data.
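The quadratic fit shown as the red dashed line in panel (c) amounts to an ordinary least-squares polynomial fit of matched velocity against displayed velocity. The sketch below illustrates the procedure with fabricated placeholder data; it is not the authors' fitting code or their measurements.

    import numpy as np

    # Placeholder data (not the measured values): displayed velocities in depth
    # and the frontoparallel probe velocities that observers matched to them.
    v_z = np.array([0.1, 0.2, 0.3, 0.4, 0.5])               # m/s, displayed in z
    v_x_matched = np.array([0.14, 0.25, 0.33, 0.38, 0.41])  # m/s, matched in x

    # Second-degree polynomial fit, analogous to the red dashed curve.
    coeffs = np.polyfit(v_z, v_x_matched, deg=2)
    fitted = np.polyval(coeffs, v_z)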
Figure 7.
 
Model predictions, zp, for Experiment 1 results. Figures are plotted identically to Figure 3. (a) Predicted location in depth of the projectile based on perceived overestimation of velocity in depth: \(\frac{\hat{v}_z}{v_z}z\). Because \(\frac{\hat{v}_z}{v_z}\) decreases with simulated velocity in depth, the most overestimated condition is that of Mercury, which contradicts our Experiment 1 results. (b) Predicted locations based on an internalization of Earth's gravity and a biased velocity in depth. Here, we find a striking resemblance to our Experiment 1 results.
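The two predictions can be written compactly. Panel (a) rescales the displayed landing depth by the velocity bias alone, whereas panel (b) combines the biased velocity with the fall time implied by Earth's gravity, t = sqrt(2h/9.81). The following is a minimal sketch under those assumptions, with v_hat_z denoting the perceived (biased) velocity in depth:

    import math

    G_EARTH = 9.81  # m/s^2

    def zp_velocity_bias(v_hat_z, v_z, z):
        # Panel (a): displayed landing depth z rescaled by the velocity bias.
        return (v_hat_z / v_z) * z

    def zp_earth_prior(v_hat_z, h):
        # Panel (b): biased velocity in depth combined with the fall time
        # from height h under an internalized Earth gravity.
        return v_hat_z * math.sqrt(2 * h / G_EARTH)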
Figure 8.
 
Experiment 1 model fit. Figures are plotted with the same axis values as in Figure 3. Model predictions are represented by the curved lines, with the relevant empirical data from Experiment 1 replotted from Figure 3. Error bands indicate ±1 SEM.
Table 1.
 
Projectile velocities at different depths and gravitational accelerations.