Open Access
Article | June 2020
Object-based warping in three-dimensional environments
Joshua E. Zosky, Timothy J. Vickery, Kerri A. Walter, Michael D. Dodd
Journal of Vision, June 2020, Vol. 20(6), 16. https://doi.org/10.1167/jov.20.6.16
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Object-based warping is a powerful visual illusion wherein the space between features within figural regions is regularly overestimated relative to the same space within ground regions. Originally, the effect was examined only in displays of two-dimensional (2D) stimuli. The present study sought to examine whether object-based warping persists in more naturalistic viewing conditions, where additional contextual cues are present. Stimuli were presented either as three-dimensional (3D) printed objects (Experiment 1) or as 3D objects in virtual reality (Experiments 2–4). The testing metric was the actual distance between features (dots) compared with the distances estimated by participants. Responses for the 3D-printed stimuli were measured with replica dots on a slide-ruler device. The virtual reality experiments collected responses with either a computer mouse or a motion-tracked controller and included manipulations of object type, spatial separation, viewing distance of stimuli, and head motion. A standard warping effect in 3D was observed in all experiments, although the effect was not present in one condition that elicits warping in 2D (Occluded Rectangle). The final experiment resolves this discrepancy by reducing the multicomponent object (Occluded Rectangle) to a single-component figure, while demonstrating the influence of depth cues on the warping effect under occlusion. Collectively, these experiments reveal that object-based warping is a powerful effect, even in naturalistic settings.

Introduction
Vision scientists frequently use two-dimensional (2D) stimuli to examine the complexity of visual processing in humans. These types of stimuli offer the comfort of highly controlled experimentation at the expense of ecological validity. The use of such stimuli has led to numerous discoveries of perception-altering visual illusions. Illusions often provide insight into the mechanisms underlying vision and perception based upon their fundamental properties such as motion (Anstis, 2001; Suchow & Alvarez, 2011), size (Delboeuf, 1865; Künnapas, 1955; Massaro & Anderson, 1971), spatial representation (Bruno, 2001; Shiffrar & Pavel, 1991), and viewing angle (Kitaoka & Ashida, 2003; Otten et al., 2016; Pinna & Brelstaff, 2000). Although some illusions hint at a grossly distorted perceptual experience, real-world viewing conditions may mitigate these effects. The focus of the present study is a little-studied illusion with broad potential ramifications for real-world perception: object-based warping. 
The initial report of this effect (Vickery & Chun, 2010b) demonstrated that an object seems to warp the visual perception of distance. To investigate this empirically, participants were asked to reproduce the distance between two stationary red dots on a computer monitor by adjusting a second set of red dots with a mouse. On some trials, the stationary dots were superimposed on, or surrounded by, various geometric shapes. When the task was performed without bounding objects, participants overestimated the actual distance between dots by 3.8%. When the reference dots were superimposed on a black rectangle (arguably the strongest object condition), the overestimation rate was 17.1%. Similar results were observed when the dots appeared on two separated rectangles, as well as on a rectangle bisected by a white rectangle. 
Vickery and Chun (2010b) raised two possible explanations for what they termed the “object-based warping” effect: attention and visual representation. Judging the dots’ spacing requires attention to the surrounding area. Examinations of object-based attention have shown that basic 2D shapes (Duncan, 1981; Egly et al., 1994; Moore et al., 1998) and objects embedded in real-world scenes (Malcolm & Shomstein, 2015) can influence attention. This finding suggests that attending to the space between dots requires attending to the object encapsulating that space. Furthermore, attention has been linked to spatial distortions (Gobell & Carrasco, 2005; Liverence & Scholl, 2011; Suzuki & Cavanagh, 1997), meaning that the warping of perceived space might be the result of attentional distribution on bounding objects. Similar to attention, studies of the visual cortex suggest that figure regions are represented preferentially relative to ground regions (Kovács & Julesz, 1994; Marcus & Van Essen, 2002). This preference for figures leads to an exaggerated cortical representation of objects that might result in perceptual warping. Although these explanations were raised separately, the authors note that attention and visual cortex activity are potentially joint factors that lead to object-based warping. Both proposals make the critical assumption that the phenomenon is ubiquitous and should occur in more natural settings and scenes. 
The present study sought to determine whether warping occurs in natural environments. Experiments were designed to examine how natural observation and response influence object-based warping. The initial experiment tests for object-based warping with physical objects. Given that real objects are difficult to systematically manipulate in a controlled manner, follow-up experiments used virtual reality (VR), affording an opportunity to manipulate object placement in a virtual three-dimensional (3D) space while maintaining a high degree of control over the environment. 
There are a number of reasons to believe that this warping phenomenon might only manifest with highly artificial stimuli. First, it has been suggested that the adaptation of 2D experiments to 3D or stereoscopic contexts can benefit task performance involving position or distance judgments (McIntire et al., 2014). These findings suggest that replicating object-based warping in naturalistic scenes is not a given; however, warping is attributed not to absolute measurement accuracy but to relative judgments of distances within versus outside a figure. Researchers studying object measurements at varying distances observed that mean object sizes were overestimated when objects were placed at a remote distance relative to the observer (Tiurina & Utochkin, 2019), which suggests that spatial warping might increase with greater distance from the viewer. Previous studies have also found that differences in response modality can alter perception and change task outcomes (Creem-Regehr & Kunz, 2010; McLeod, 1977; Stelzel et al., 2006), a finding that is relevant here because the naturalistic response devices afforded by VR may influence perception and performance on the task. Finally, although some research is beginning to address the influence of head movements on perception (Gramann et al., 2014), little is known about the role head motion plays in perception. Knowing that these variables could impact perception in natural scenes, our experiments address the role of viewing distance, response modality, and head movements as they relate to object-based warping. 
The immediate goal of the present study was to explore whether the warping illusion extends to physical 3D objects (Experiment 1) and virtual displays (Experiments 2–4), with the secondary goal of determining how the illusion would be impacted by various manipulations of display and response type. The conditions investigated within the experiments were Object Type, Viewing Distance, Dot Distance, Viewing Orientation, and Response Modality. 
Experiment 1
In Experiment 1, we sought evidence that object-based warping might occur in real-world stimuli by 3D printing shapes with fixed spacing between reference points and asking participants to judge those spaces with a 3D-printed device. 
Methods
Participants
Seventeen subjects from the University of Delaware community completed this experiment for payment. Experiment 1 methods were approved by University of Delaware's Institutional Review Board. 
Materials
Two reference stimulus sets (a one-object set and a two-object set) and one adjustment device were created using Tinkercad (Autodesk, 2019; https://www.tinkercad.com/). The stimuli were then 3D printed using an Ultimaker 3 3D printer (Figure 1). 
Figure 1. Actual 3D-printed stimulus sets. (Top left) Two-object reference set. (Top right) One-object reference set. (Bottom) Adjustment set.
The one-object stimulus set consisted of (1) a base (100 mm × 100 mm) printed in black filament with two depressions on the surface, and (2) a single “object” (20 mm × 60 mm) printed in silver-gray filament, the underside of which slotted into the depressions on the black surface. Two red dots were printed embedded in the surface of the object (using the dual-extrusion printhead, so that the dots were fully incorporated into the object). The outer-edge to outer-edge spacing of the red dots was 41.75 mm (40 mm center-to-center). The two-object stimulus set consisted of (1) an identical black base and (2) two square silver-gray objects (each 20 mm × 20 mm) that slotted into the base such that the outer-edge to outer-edge distance between the squares was 60 mm. Each square had one red dot printed at the center of its surface, with the same 40 mm center-to-center dot distance (41.75 mm outer edge to outer edge) as in the single-object case. All distances were verified using electronic calipers; variance in printing caused errors of less than 0.2 mm in all cases. 
The adjustment stimuli consisted of a black rail upon which rested two smaller rectangular silver-gray objects. The two adjustment objects each had a red dot of the same size and shape as the ones on the reference stimulus sets embedded on their surfaces. The dot stimuli could be moved back and forth along the rail, which was used for stabilization. At their closest possible spacing, the outer-edge to outer-edge distance of the dots was 10 mm, and at their farthest spacing without falling off the rail their distance was 168 mm. 
Procedure
The subject was seated at a table. A cardboard barrier was used to hide stimuli not in use. At the start of each trial, the experimenter set the adjustment objects to be either maximally proximal or maximally distant, according to the current trial condition. The adjustment stimuli were placed directly in front of the subject. Then, the reference object for the current trial condition was placed at approximately arm's reach from the subject. The subject adjusted the adjustment stimuli until the red dots appeared to be as far apart from one another as those on the reference stimuli. This procedure was untimed and completed at the subject's own pace. Once the subject indicated they were finished, the experimenter withdrew the adjustment device behind the occluder and measured the distance (in millimeters) between the red dots using electronic calipers, recording the estimated outer-edge to outer-edge distance. The reference item was removed before beginning the next trial's procedure. 
There were four conditions, each presented once in a single trial. The conditions represented the crossing of two factors: object (single or double) and initial adjustment spacing (far or near). Trials alternated between presentation of the single and double object, constraining presentation sequences to 16 possible orderings of the four conditions. Each ordering was presented to one subject. Owing to accidental oversampling by one subject, one ordering was presented twice (excluding the oversampled results had negligible effects on estimates and statistics, so we report results from all 17 subjects here). 
Results
Approach and corrections
We entered the data into a 2 (starting position of adjustment dots: near vs. far) × 2 (object: one or two) repeated-measures analysis of variance (ANOVA). Neither the main effect of initial spacing nor the interaction of spacing with object neared significance (both F < 1); therefore, we averaged over the initial spacing conditions and report the paired t test between object conditions for the sake of simplicity. 
Analyses
Spacing was significantly overestimated under both conditions, compared with the actual spacing of 41.75 mm (both p < .001, one-sample t tests). However, subjects reported the two dots as farther apart when they appeared in the one-object condition (M = 53.0 mm, SD = 4.3 mm) than in the two-object condition (M = 49.0 mm, SD = 5.20 mm). The difference of 4.0 mm was significant, t(16) = 4.99, p < .001, Cohen's d = 1.21. Only one subject had a numerical difference in the opposite direction from the group average. 
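For readers who want to reproduce this style of analysis, the following is a minimal Python sketch of the reported tests using SciPy; the per-subject values shown are illustrative placeholders, not the actual data.

```python
import numpy as np
from scipy import stats

ACTUAL_MM = 41.75  # true outer-edge to outer-edge spacing

# Hypothetical per-subject mean estimates (mm), already averaged over the
# near/far starting positions; replace with real data.
one_object = np.array([53.1, 55.2, 48.9, 52.7, 50.3])
two_object = np.array([49.5, 50.8, 46.2, 48.9, 47.1])

# Overestimation relative to the true spacing (one-sample t tests).
t1, p1 = stats.ttest_1samp(one_object, ACTUAL_MM)
t2, p2 = stats.ttest_1samp(two_object, ACTUAL_MM)

# One-object versus two-object comparison (paired t test).
t_paired, p_paired = stats.ttest_rel(one_object, two_object)

# Cohen's d for paired samples: mean difference / SD of the differences.
diff = one_object - two_object
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"one-object vs. actual: t = {t1:.2f}, p = {p1:.4f}")
print(f"two-object vs. actual: t = {t2:.2f}, p = {p2:.4f}")
print(f"paired test: t = {t_paired:.2f}, p = {p_paired:.4f}, d = {cohens_d:.2f}")
```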
Discussion
The results of Experiment 1 suggest that object-based warping is robust in real-world 3D objects, with greater overestimation of dot distance in the single-object case (with both red dots on the same object) than in the two-object case (with the dots on different objects). Owing to the time, labor, and expense involved in 3D printing objects, we sought a more flexible approach to examining object-based warping in more realistic contexts. Therefore, we took a VR approach with artificial stimuli in the following experiments. 
Experiment 2
The goal of the second experiment was to determine whether the warping illusion also occurs in VR. The three object manipulations that elicited the strongest warping effects in the original Vickery and Chun study (Rectangle, Separated Rectangle, Occluder) were adapted, along with the no-object (Control) condition. This decision was intended to maximize the chances of observing a warping effect under the new viewing conditions. Given the uncertainty regarding how viewing distance might alter perception as it relates to warping, two viewing distances were incorporated (Near, Far), with the closer condition approximating the original study's display and viewing distance. The distance between dots (Larger, Smaller) was manipulated to determine how minor variations in dot location influence warping, given that objects in the real world are viewed at variable distances. This manipulation was based on findings from Vickery and Chun (2010a, 2010b), which suggest that warping is greatest when dots are equidistant from the object's center and borders; dot locations that deviate from this arrangement reduced measures of warping. Finally, viewing orientation (Static, Dynamic) was manipulated between blocks of trials to compare head movements with visual updating against head movements without it. The motivation for this final variable stems from concern over VR equipment's default state of linking a user's head orientation to their virtual visual perspective. Adding immersive properties to VR experiences could also lead to perceptual differences from previous studies, which maintained static stimulus presentation. Including multiple factors could aid explanation of any subtle differences between previous findings and the current experiment. 
We predicted that the warping illusion would replicate under all conditions. Specifically, we expected overestimation of dot distance for the Rectangle, Occluder, and Separated Rectangle objects, whereas the mean measurement for the no-object control would not differ significantly from the actual dot distance. Although there was uncertainty regarding how viewing distance, dot distance, and viewing orientation would modulate the effect, warping was still expected under these conditions. With regard to viewing distance, it was expected that farther viewing distances would lead to a decrease in warping, as there would be less visual input to induce a perceptual error. The hypothesis for dot spacing was that the smaller spacing, which held dots at locations equidistant from the center and borders of the Rectangle object, would lead to greater warping compared with the larger spacing, as suggested in a follow-up study by Vickery and Chun (2010a). Finally, greater warping was expected with static visual updating than with dynamic updating of viewing orientation. This expectation was based on the additional visual depth cues available with dynamic updates: Vickery and Chun attribute warping to the perception of dots and object as a single entity, and these added cues could change the percept from a single entity to independent objects. With all of these hypotheses outlined, it should be noted that it was unclear how the multiple attributes of VR would interact with the presence and magnitude of the warping effect. 
Methods
Participants
Thirty-nine undergraduate students from the University of Nebraska–Lincoln participated in the study and received course credit for their participation. All participants had normal or corrected-to-normal vision and were naïve to the purpose of the experiment, which took place in a single 60-minute session. Experiment 2 methods were approved by University of Nebraska-Lincoln's Institutional Review Board. 
Materials
All stimuli are presented in Figure 2. Stimuli were designed to match the parameters of Vickery and Chun (2010b) as closely as possible while optimizing for VR presentation (see Figure 3 for an example). An Oculus Rift CV1 headset was used for stimulus presentation in a virtual environment. The experiment and stimuli were designed in Vizard 5 (WorldViz, Inc., Santa Barbara, CA), a Python-based development environment for 3D and VR applications. 
Figure 2. Stimuli used in the VR experiments. (A) All stimuli used in Experiments 2 and 3 (No Object, Rectangle, Separated Rectangle, Occluder) and the additional stimulus from Experiment 4 (Merged Occluder). (B) A close-up comparison of the Occluder and Merged Occluder in the Near-Small and Far-Small formats. The difference in the visibility of the bottom face of the occluding white rectangle for the Occluder object is evident at the near viewing distance, but considerably less so at the far viewing distance.
Figure 3. Example of a single trial in the VR experiments with the following characteristics: Rectangle Object, Near Viewing Distance, Small Dot Distance, Static Viewing Orientation.
Vizard uses metric measurements within 3D renderings for accurate portrayals of distance in virtual space. The software also uses a 3D coordinate space based upon anatomical direction. All participants' viewpoints started at the (X, Y, Z) origin coordinate of (0, 0, 0). A single point light was fixed at the origin and cast light in all directions. Shadows were enabled for the objects to add visual depth cues. The environment consisted of a uniform grey color in all directions; within the VR environment, the only items present were the Dots, the Objects, and the grey background. 
The experiment consisted of four manipulations: Object, Viewing Distance, Dot Distance, and Viewing Orientation. The Objects were a no-object control (Control), a black rectangle (Rectangle), a black rectangle missing its central region (Separated Rectangle), and a horizontal white bar placed in front of a black rectangle (Occluder). The Objects can be seen in Figure 2. Measurement in VR space uses the meter as the base unit, and 1 m in VR is designed to be identical to 1 m outside of VR. The Viewing Distance manipulation involved viewing the reference and manipulation stimuli at either a 4 m (Near) or 9 m (Far) distance, with the objects at either 4.2 m or 9.2 m. Each shape was 0.1 m deep along the Z axis. All shapes were centered 1.95 m from the origin coordinate, with the reference stimuli and objects located at coordinates (−1.50 m, 1.25 m) and the manipulation stimuli located at (1.5 m, −1.25 m). All black rectangle objects were 1 m tall (Y axis) and 0.5 m wide (X axis). The Separated Rectangle's measurements and location were identical to the Rectangle, except that the top and bottom halves were each 0.4 m tall, leaving a 0.2 m gap between them. The white occluder was 0.2 m tall and 1.0 m wide and, unlike the other objects, was located at a Z distance of 4 m or 9 m. The reference and manipulation stimuli, from here on referred to as “Dots,” were red 3D orbs 0.2 m in diameter. In the Dot Distance manipulation, the distance between Dots was 50 cm (Smaller) or 60 cm (Larger). The Viewing Orientation manipulation presented displays either as constantly facing forward on the Z axis (Static) or as updating relative to head orientation (Dynamic) based on the Oculus Rift motion sensors. 
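As a rough illustration of how a single display of this kind can be assembled, the sketch below builds the Rectangle condition from core Vizard primitives (vizshape.addBox, vizshape.addSphere). The exact placements are approximations of the layout described above, and the lighting and shadow setup is omitted; this is an illustrative sketch, not the authors' experiment code.

```python
import viz
import vizshape

viz.go()  # start the Vizard renderer

# Condition parameters (meters), taken from the description above.
VIEW_DIST = 4.0      # Near condition; use 9.0 for Far
DOT_SPACING = 0.5    # Smaller condition; use 0.6 for Larger
DOT_RADIUS = 0.1     # Dots are 0.2 m in diameter
REF_X = -1.5         # reference display on the left; exact x/y placement
REF_Y = 1.95         # here is an approximation of the layout in the text

# Black Rectangle object (0.5 m wide, 1 m tall, 0.1 m deep),
# placed 0.2 m farther from the viewer than the dots.
rect = vizshape.addBox(size=[0.5, 1.0, 0.1])
rect.color(viz.BLACK)
rect.setPosition(REF_X, REF_Y, VIEW_DIST + 0.2)

# Reference Dots: two red orbs separated vertically by DOT_SPACING.
for offset in (DOT_SPACING / 2, -DOT_SPACING / 2):
    dot = vizshape.addSphere(radius=DOT_RADIUS)
    dot.color(viz.RED)
    dot.setPosition(REF_X, REF_Y + offset, VIEW_DIST)
```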
Responses were made with a typical computer mouse: the measurement was adjusted using the mouse's scroll wheel, and judgments were submitted using the left button. Pulling the wheel toward the observer lowered the response ball, and pushing it away raised the ball. Movements were made in 5-mm intervals, such that a continuous pull or push of the mouse wheel typically amounted to about 5 cm. When participants had decided on a matched distance, they pressed the left mouse button to move on to the next trial. 
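The response logic can be summarized in a few lines of plain Python; only the 5-mm step size is taken from the description above, and the function names are illustrative.

```python
STEP_M = 0.005  # each wheel tick moves the response dot by 5 mm

def on_mouse_wheel(direction, dot_y):
    """Return the new height of the response dot after one wheel tick.

    direction is +1 when the wheel is pushed away from the observer
    (raise the dot) and -1 when it is pulled toward the observer
    (lower the dot).
    """
    return dot_y + direction * STEP_M

def on_left_click(reported_spacing_m, trial_log):
    """Record the spacing currently set by the participant and end the trial."""
    trial_log.append(reported_spacing_m)
```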
Procedure
Participants were seated in front of a gaming computer with a VR headset. After participants received instruction on how to put on the headset, the researcher told each participant their task: “Move the dot below another vertically aligned dot in the lower right side of your view until the distance between dots matches the distance between a static set of dots in the upper left corner of your view.” Participants were instructed to accomplish the task by using the tracking wheel on a computer mouse, and they continued this task for the entirety of the experiment. Each trial began with the response dot located at a random position within ±1 m of the correct distance. 
There were 16 unique trials (4 Object × 2 Viewing Distance × 2 Dot Distance) sequentially randomized within a run, and each block consisted of four such runs (64 trials). Participants were given breaks after each block to minimize potential discomfort. There were six blocks per participant, for a total of 384 trials. After three blocks, the Viewing Orientation condition changed from Static to Dynamic or vice versa. The orientation tracking condition switched after three blocks (as opposed to alternating every block) to decrease potential participant discomfort. 
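A short sketch of how a trial sequence with this structure could be generated in plain Python (condition labels and function names are illustrative, not the authors' code):

```python
import itertools
import random

OBJECTS = ["Control", "Rectangle", "Separated Rectangle", "Occluder"]
VIEW_DISTANCES = ["Near", "Far"]
DOT_DISTANCES = ["Smaller", "Larger"]

def build_block(runs_per_block=4):
    """One block = four runs; each run is the 16 unique trials in random order."""
    unique_trials = list(itertools.product(OBJECTS, VIEW_DISTANCES, DOT_DISTANCES))
    block = []
    for _ in range(runs_per_block):
        run = unique_trials[:]   # copy the 16 unique trials
        random.shuffle(run)      # sequential randomization within a run
        block.extend(run)
    return block                 # 64 trials

def build_session(start_orientation="Static"):
    """Six blocks of 64 trials; Viewing Orientation switches after block 3."""
    other = "Dynamic" if start_orientation == "Static" else "Static"
    session = []
    for block_idx in range(6):
        orientation = start_orientation if block_idx < 3 else other
        session.append((orientation, build_block()))
    return session               # 6 x 64 = 384 trials
```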
Results
To analyze the warping effect, Vickery and Chun's (2010b) method of calculating the percent difference between reported and actual distance between dots was used. This measurement error is referred to as “warping” and “distortion” from this point forward, even when discussing the Control Object condition (which is a pure measurement independent of any other influencing stimuli). Positive distortion values indicate an overestimation of the distance, and negative values indicate the converse. The mean distortion was then calculated across all combinations of trial parameters for each subject. 
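The distortion measure is straightforward to compute; below is a minimal sketch using pandas with hypothetical column names.

```python
import pandas as pd

def add_distortion(trials: pd.DataFrame) -> pd.DataFrame:
    """Percent difference between reported and actual dot spacing.

    Positive values indicate overestimation and negative values
    indicate underestimation.
    """
    trials = trials.copy()
    trials["distortion_pct"] = 100 * (
        (trials["reported_m"] - trials["actual_m"]) / trials["actual_m"]
    )
    return trials

def subject_condition_means(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean distortion per subject for every combination of trial parameters."""
    factors = ["subject", "object", "view_distance", "dot_distance", "orientation"]
    return trials.groupby(factors, as_index=False)["distortion_pct"].mean()
```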
Approach and corrections
A 4 (Object: Rectangle, Separated Rectangle, Occluder, Control) × 2 (Viewing Distance: Near, Far) × 2 (Dot Distance: Small, Large) × 2 (Viewing Orientation: Dynamic, Static) repeated-measures ANOVA was performed on the data to examine whether warping measurements differed across Objects at the different levels of Viewing Distance, Dot Distance, and Viewing Orientation. Table 1 shows the results of the ANOVA for each combination of factors. The assumption of sphericity was violated for the effect of Object, as assessed by Mauchly's test of sphericity, p < .001. To address this, a Greenhouse-Geisser correction was applied to the main effect and interaction effects of Object (see Table 1 for all epsilon values and corrections). All follow-up tests were performed with paired-samples t tests using Bonferroni correction. 
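For reference, the sketch below shows one way to compute the Box/Greenhouse-Geisser epsilon and Bonferroni-corrected paired comparisons with NumPy and SciPy. It illustrates the general procedure described here, not the authors' exact analysis scripts.

```python
import numpy as np
from scipy import stats

def greenhouse_geisser_epsilon(scores):
    """Box/Greenhouse-Geisser epsilon for a subjects x conditions array.

    epsilon = tr(S*)^2 / ((k - 1) * sum(S*^2)), where S* is the
    double-centered covariance matrix of the k condition scores.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    S = np.cov(scores, rowvar=False)
    S_star = S - S.mean(axis=1, keepdims=True) - S.mean(axis=0, keepdims=True) + S.mean()
    return float(np.trace(S_star) ** 2 / ((k - 1) * np.sum(S_star ** 2)))

def bonferroni_paired_tests(scores, labels):
    """All pairwise paired t tests with Bonferroni-adjusted p values."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    n_comparisons = k * (k - 1) // 2
    results = []
    for i in range(k):
        for j in range(i + 1, k):
            t, p = stats.ttest_rel(scores[:, i], scores[:, j])
            results.append((labels[i], labels[j], t, min(p * n_comparisons, 1.0)))
    return results
```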
Table 1. Experiment 2 ANOVA results. Notes: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom (the mean square error [MSE] and p values in the table incorporate this correction); SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
Analyses
Table 2 highlights the means and standard deviations for each factor in the experiment. There was a significant main effect of Object, p < .001, indicating mean differences in warping between the Control, Rectangle, Separated Rectangle, and Occluder conditions. Follow-up tests revealed that the warping measurement for the Rectangle was larger than that for the Control, mean difference = 8.773, p < .001, the Separated Rectangle, mean difference = 3.65, p < .001, and the Occluder, mean difference = 9.119, p < .001. Similarly, warping was greater for the Separated Rectangle than for the Control, mean difference = 5.122, p < .001, and the Occluder, mean difference = 5.468, p < .001. Surprisingly, there was no significant mean difference in warping between the Occluder and the Control, mean difference = −0.346, p = 1.00. 
Table 2. Results from Experiment 2. Means (M) and standard deviations (SD) for warping as a function of a 4 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) design along with total number of participants included in analysis (N).
Turning to the VR-specific manipulations, a significant main effect was observed for Dot Distance, p < .001, with larger mean warping measurements for the smaller dot spacing relative to the larger spacing. An interaction of Object by Dot Distance was also observed, p < .001. Follow-up tests revealed that, for each object, warping with the large dot spacing was consistently less than warping with the small dot spacing, all ps < .001 (see Figure 4). 
Figure 4. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 2 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
No main effect was observed for Viewing Distance, p = .190. However, an interaction was observed for Object by Viewing Distance, p = .025, which revealed that warping for the Separated Rectangle was greater when viewed at the far distance, mean difference = 1.303, p = .045 (see Figure 5). This finding is further complicated by a significant three-way interaction of Object, Dot Distance, and Viewing Distance, p = .029: warping was consistent across viewing distances at the large dot spacing, but at the small dot spacing warping was greater at the Far than the Near Viewing Distance for the Separated Rectangle, mean difference = 1.83, p = .015, and the Occluder, mean difference = 2.12, p = .011. Finally, there was no main effect of Viewing Orientation, p = .839. 
Figure 5. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 2 for each level of viewing distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Discussion
The current findings suggest that the warping effect is present in virtual environments. However, there was an unexpected finding in that the no-object (Control) and Occluder object showed similar mean distortion across all factors. Properties of VR also seem to influence warping beyond the effects of Object. One of the most surprising observations was the similarity between the Near and Far Viewing Distances across Objects. It was anticipated that greater distance from the stimuli would increase overestimation, based on the findings of Tiurina and Utochkin (2019). Instead, the effect of Object was generally consistent at both distances, except for the Separated Rectangle. For the Dot Distance manipulation, it was expected that warping would increase for the small spacing compared with the large spacing. This was observed consistently for comparisons between spacings within the same object, suggesting a general warping difference that depends on the spatial relations between items. Finally, the lack of any significant differences between the Viewing Orientation manipulations was surprising. Updates to visual orientation were expected to provide additional depth cues, but it is difficult to determine whether the addition of head movements in visual updates alters task demands. 
The findings of Experiment 2 present a complex interplay between VR and perception. Although object-based warping manifested in a VR environment, VR also elicited unexpected moderations of the effect. Experiment 3 was designed to replicate and further examine these findings, while including a new response modality designed for VR. 
Experiment 3
Experiment 2 was intended to examine whether object-based warping could be observed in virtual environments. The present experiment was designed to replicate the findings of Experiment 2 and to determine whether the previous unexpected findings would persist with a new sample. Beyond a replication, this experiment was intended to explore how response modality might influence perception of the illusion by including a motion-tracked controller for participant responses. The expectation in Experiment 3 was that the prior findings would replicate, although it was unclear how response modality might influence perceptual biases. 
Methods
Participants
Twenty-five undergraduate students from the University of Nebraska–Lincoln participated in the study and received course credit for their participation. Eight participants were removed before analysis owing to equipment malfunctions and data loss. All participants had normal or corrected-to-normal vision and were naïve to the purpose of the experiment, which took place in a single 60-minute session. Experiment 3 methods were approved by University of Nebraska-Lincoln's Institutional Review Board. 
Materials
The manipulations in Experiment 3 were identical to those of Experiment 2 except for Response Modality. For this experiment, participants made responses with an Oculus Touch controller. The right-hand controller was used for all participants to avoid differences in response strategies, given that the response ball was located on the right side of the VR environment. 
Procedure
Participants received instructions identical to those of Experiment 2, except for how to respond with the Oculus Touch controller. Participants were told that their hand location in space controlled the position of the response Dot, but only along the Y axis. After placing the Dot at the estimated distance matching the static Dots, they pressed the trigger button with their index finger to proceed to the next trial. As before, each trial began with the response ball located at a random position within ±1 m of the correct distance. Participants were also asked to sit approximately 0.5 m from the desk to avoid hitting the desk with the controller. 
Results
Approach and corrections
A 4 (Object: Rectangle, Separated Rectangle, Occluder, Control) × 2 (Viewing Distance: Near, Far) × 2 (Dot Distance: Small, Large) × 2 (Viewing Orientation: Dynamic, Static) repeated-measures ANOVA was performed on the data to examine whether there were statistically significant differences in warping across the levels of Object, Viewing Distance, Dot Distance, and Viewing Orientation. Table 3 shows the results of the ANOVA for each combination of factors. The assumption of sphericity was violated for the effect of Object, as assessed by Mauchly's test of sphericity, p = .048. To address this, a Greenhouse-Geisser correction was applied to the main and interaction effects of Object (see Table 3 for all epsilon values and corrections). All follow-up tests were performed with paired-samples t tests using Bonferroni correction. 
Table 3. Experiment 3 ANOVA results. Note: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom (the mean square error [MSE] and p values in the table incorporate this correction); SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
Analyses
Table 4 highlights the means and standard deviations for each factor in the experiment. There was again a significant main effect of Object, p < .001. Follow-up tests revealed that warping for the Rectangle was greater than for the Control, mean difference = 6.72, p < .002, and the Occluder, mean difference = 5.96, p = .003, but not significantly different from the Separated Rectangle, mean difference = 2.08, p = .218. Similarly, warping was greater for the Separated Rectangle than for the Control, mean difference = 4.65, p < .001, and the Occluder, mean difference = 3.88, p < .009. There was still no significant difference in warping between the Occluder and the Control, mean difference = 0.76, p > .99. There was also a significant main effect of Dot Distance, p < .001, indicating greater warping for smaller dot distances than larger dot distances (see Figure 6). No main effect was observed in the Viewing Distance, p = .625, or Viewing Orientation, p < .001, conditions. 
Table 4. Results from Experiment 3. Means (M) and standard deviations (SD) for warping as a function of a 4 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) design along with total number of participants included in analysis (N).
Figure 6. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 3 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Discussion
Experiment 3 replicated the findings of Experiment 2, with the exception that none of the previously observed interactions emerged. One potential explanation for the lack of interactions is greater variance in warping attributable to response modality. This difference is presumed to result from using the Touch controller, but it is difficult to verify with response modality manipulated as a between-subjects factor. As a result, Experiment 4 was designed to include both response modalities as a within-subjects manipulation for direct comparison. Furthermore, the continued lack of warping in the Occluder condition led to considerations of the stimulus design. In contrast with Vickery and Chun's 2D stimuli, the stationary dots in the current experiment are situated in a depth plane nearer the observer than the bounding object. Given that the occluding object also sits in this nearer depth plane, one possibility is that the occluder and the orbs group with one another based on 3D proximity. If grouping with the bounding object is critical to produce the warping effect, this might explain why no warping effect was observed in Experiments 2 and 3. To address this question, a new object was added in Experiment 4 to examine whether object-based warping depends on perceiving a single object. If warping does depend on object singularity, then the new object should exhibit the effect. 
Experiment 4
The final experiment of this study sought to clarify a number of issues arising from the previous two VR experiments. First, why does the Occluder object inhibit the warping effect? Originally, there was no reason to suspect that warping would occur for only select Object types. Having observed this pattern in two experiments, it seems that certain characteristics of the stimuli alter the percept in a way that eliminates warping. To test the hypothesis that the effect relies on the orbs grouping with the bounding object rather than with a competing object in the same depth plane, a second occlusion object was created. This Object, called the Merged Occluder, was identical to the original Occluder, except that the white rectangle is on the same Z plane as the black rectangle. With this new Object, it was hypothesized that there would again be a warping effect, as the presentation may be more akin to Vickery and Chun's 2D displays. 
In addition to updating the Objects, Experiment 4 also sought to directly determine the influence of Response Modality as a within-subjects factor. This condition allows a direct comparison between Mouse responses and Touch Controller responses to determine how Response Modality changes the perception of warping. Based on findings in Experiments 2 and 3, greater variance was expected in Touch Controller responses compared with Mouse responses. 
Methods
Participants
Thirty-one undergraduate students from the University of Nebraska–Lincoln participated in the study and received course credit for their participation. Two participants were removed before analysis owing to equipment malfunctions and data loss. All participants had normal or corrected-to-normal vision and were naïve to the purpose of the experiment, which took place in a single 60-minute session. Experiment 4 methods were approved by University of Nebraska-Lincoln's Institutional Review Board. 
Materials
Experiment 4 used the same stimuli as Experiments 2 and 3, with the addition of a fifth Object condition, the Merged Occluder (Figure 2A, bottom row). This Object was constructed from the same shapes as the Occluder, but the white and black rectangles were placed at the same Z coordinate. The component rectangles were drawn such that the white rectangle, while on the same plane as the black rectangle, was visible across the black rectangle's midline. Another addition to this experiment was the within-participant comparison of Response Modalities (Mouse and Touch controller). Response Modality was switched between the first and second halves of the experiment, with order counterbalanced across participants. 
Procedure
Participants received task instructions identical to those of Experiments 2 and 3. For responses, they were first instructed in how to use their initial Response Modality (Mouse or Touch controller); after the second block, they were instructed in how to use the other device. Participants were also asked to sit approximately 1.5 ft from the desk when using the Touch controller. 
The task in Experiment 4 was identical to Experiments 2 and 3. There were 20 unique trials (5 Object × 2 Viewing Distance × 2 Dot Distance) sequentially randomized within a run, and each block consisted of four such runs (80 trials). Participants were offered breaks after each block to minimize task fatigue (there were no prior reports of discomfort, but there were multiple reports of fatigue during the task). Trials were increased to accommodate a four-block design wherein each block was assigned one of the Viewing Orientation conditions (Static/Dynamic), with orders counterbalanced across participants. Response Modality was switched after the second block. There was a total of 320 trials over a maximum time of 60 minutes. 
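The resulting block structure and counterbalancing can be summarized with a small sketch (plain Python; labels and orderings are illustrative):

```python
def block_plan(orientation_order, modality_order):
    """Experiment 4 block structure: four 80-trial blocks, one Viewing
    Orientation per block, with Response Modality switching after block 2."""
    plan = []
    for block_idx in range(4):
        plan.append({
            "block": block_idx + 1,
            "orientation": orientation_order[block_idx],
            "modality": modality_order[0] if block_idx < 2 else modality_order[1],
            # 4 runs x (5 Objects x 2 Viewing Distances x 2 Dot Distances)
            "n_trials": 80,
        })
    return plan

# Orders counterbalanced across participants; one possible assignment:
example = block_plan(["Static", "Dynamic", "Static", "Dynamic"], ["Mouse", "Touch"])
```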
Results
Approach and corrections
A 5 (Object: Rectangle, Separated Rectangle, Occluder, Merged Occluder, Control) × 2 (Response Modality: Mouse, Touch) × 2 (Viewing Distance: Near, Far) × 2 (Dot Distance: Small, Large) × 2 (Viewing Orientation: Dynamic, Static) repeated-measures ANOVA was performed on the data to examine whether there were statistically significant differences in warping across the levels of Object, Response Modality, Viewing Distance, Dot Distance, and Viewing Orientation. The assumption of sphericity was violated for the effect of Object, as assessed by Mauchly's test of sphericity, p < .001. To address this, a Greenhouse-Geisser correction was applied to the main and interaction effects of Object (see Table 5 for all epsilon values and corrections). All follow-up tests were performed with paired-samples t tests using Bonferroni correction. 
Table 5. Experiment 4 ANOVA results. Note: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom (the mean square error [MSE] and p values in the table incorporate this correction); SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
Analyses
Table 6 shows the means and standard deviations for each factor in the experiment. There was a significant main effect of Object, p < .001. Importantly, follow-up tests revealed that warping occurred for the Merged Occluder, with measurements similar to the Rectangle, mean difference = −1.96, p = .895, and greater than the Separated Rectangle, mean difference = 1.65, p = .016, the Occluder, mean difference = 6.57, p < .001, and the Control, mean difference = 7.92, p < .001. The Rectangle again exhibited the most warping, with greater measurements than the Separated Rectangle, mean difference = 3.61, p = .002, the Occluder, mean difference = 8.53, p < .001, and the Control, mean difference = 9.88, p < .001. Warping was again greater for the Separated Rectangle relative to the Control, mean difference = 6.27, p < .001, and the Occluder, mean difference = 4.92, p < .001. There was still no observed difference in warping between the Occluder and the Control, mean difference = 1.35, p > .99. 
Table 6. Results from Experiment 4. Means (M) and standard deviations (SD) for warping as a function of a 5 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) × 2 (response modality) design along with total number of participants included in analysis (N).
Furthermore, there was a significant main effect of Dot Distance, p < .001, with greater warping in the small spacing compared with the large spacing (Figure 7). Although there was no main effect of Viewing Distance, p = .317, there was an interaction of Viewing Distance by Dot Distance, p = .017. Follow-up tests revealed that warping was greater for small dot spacings when observed at farther distances than at closer distances (see Figure 8). Surprisingly, there was also a main effect of Viewing Orientation, p = .009, with the Dynamic condition exhibiting greater warping (see Figure 9). Although no main effect was observed for Response Modality, p = .885, there was an interaction of Viewing Orientation by Response Modality, p = .011 (see Figure 10). Follow-up tests revealed that, when using the Touch controller, warping was greater in the Dynamic than the Static Viewing Orientation, mean difference = 18.39, p = .010. This was not the case for the mouse, which showed no difference in warping between the Dynamic and Static Viewing Orientations, mean difference = 0.15, p = .872. Finally, a three-way interaction was observed for Dot Distance, Viewing Orientation, and Response Modality, p = .027: although warping was greater on trials with small dot spacings than large dot spacings in general, the difference between small and large dot spacings was larger on Touch controller trials with dynamically oriented displays, mean difference = 9.47, p < .001, than with statically oriented displays, mean difference = 5.93, p < .001. 
Figure 7. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 8. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of viewing distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 9. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of viewing orientation. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 10. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of response modality. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Discussion
In the first two VR experiments, warping was not observed in the Occluder condition. In contrast, warping was observed with the Merged Occluder in the present experiment, implying that warping is altered by grouping owing to proximity in depth. This finding is striking given the resemblance of the two objects and reveals the role that depth cues play in object perception. It is also noteworthy that Viewing Distance seemed to have little impact on the effect, suggesting that object-based warping occurs independently of object-distance perception. Although differences were observed between response modalities, these occurred only in combination with other factors, namely, Viewing Orientation and Dot Distance. The primary source of these differences seems to be that using the Touch controller with a dynamic viewing orientation led to greater error. These findings provide insight into the results of the first two VR experiments and suggest that the warping effect is persistent, even with the introduction of more naturalistic viewing conditions. 
Conclusions
The present study addressed whether prior 2D object-based warping effects occur in 3D presentations while exploring critical considerations for research in 3D environments and VR. Specifically, we asked whether the warping effect is observable under naturalistic conditions. After repeated tests, it is apparent that object-based warping occurs with 3D objects in both natural and VR environments. The spatial distortion associated with an occluded object was the only tested example that apparently differed between 2D and 3D displays. The inclusion of a Merged Occluder provided evidence that the effect seen in Vickery and Chun's work depends on the measurement occurring within a single plane of depth. The addition of depth cues to the occluder, suggesting different depth planes, resulted in no signs of warping. Our interpretation of the results is that grouping effects, owing to proximity in depth, modulate object-based warping. 
The basis for this assertion comes from the small, yet informative, differences between the visual cues of the occluding objects. Specifically, three features distinguish the white rectangle's location relative to the black rectangle (see Figure 11). First, visible overlap of the white rectangle's bottom face on the black rectangle suggests proximal occlusion; when there is no overlapping bottom face, the white and black rectangles seem to be aligned in depth. Second, the inside corners where the white and black rectangles meet cue the viewer to perceive either overlap or intersection based on how they connect: perpendicularly (as an occlusion) or diagonally (as joined surfaces). Finally, the center alignment of the white rectangle anchors it either to the black rectangle or to the dots. These combined cues seem to either separate or bind the white and black rectangles. 
Figure 11. Close-up view of items in the Occluder (top) and Merged Occluder (bottom) conditions. The appearance of the white rectangle's bottom face, the perpendicular meeting points at the inside corners, and the offset of each rectangle's center suggest that the rectangles exist on different depth planes.
Although the differences between the original Occluder and the Merged Occluder are apparent in the near condition, those differences become minimal in the far condition (see Figure 2B for a comparison). The difference in warping seems to relate to item grouping: the dots group according to either the retinotopic image or the perceptual experience. This finding is consistent with the view of Palmer (2002) and others that grouping occurs based on the perceived structure of the scene rather than on retinotopic arrangements. For example, when grouping 2D arrays of luminous beads, participants grouped beads based on distance in depth as opposed to proximity in retinotopic space (Rock & Brosgole, 1964). Similarly, Nakayama et al. (1989) found evidence that object recognition under occlusion occurs early in the visual cortex. When observing the occluder, participants may group the dots with the white rectangle only when it is clearly separate from the black rectangle. Although a minor distinction, subtle depth cues seem to play a key role in the warping effect. 
Although the present findings extend our understanding of object-based warping (Vickery & Chun, 2010b), they cannot speak directly to numerous other factors of perception. The present study prioritized stimuli that previously elicited the strongest warping effects (rectangle, separated rectangle, occluder) to determine whether the effect changes when scaled up to a more naturalistic viewing experience. Because only a subset of the original study's stimuli was used, other factors that could impact object-based warping (e.g., illusory contoured shapes) were absent from the present study. It will be important going forward to examine object-based warping in 3D with other stimuli to determine the limitations of this effect, particularly in light of the unanticipated findings. 
Object-based warping bears similarities to a number of size-based illusions. Illusory distortions and errors relating to size judgments have been shown both within basic shapes (Künnapas, 1955) and between shapes of different sizes (the Ebbinghaus illusion; Massaro & Anderson, 1971) and contexts (the Delboeuf illusion; Delboeuf, 1865). In the Künnapas study, the length of a line is perceived to decrease as its surrounding context shape (a square) increases. Similarly, the Ebbinghaus and Delboeuf illusions involve a decrease in the perceived size of a target shape when it is surrounded by larger context shapes. In other words, these size illusions show decreases in perceived target size when the surrounding context is larger. Although parallels exist between these illusions and object-based warping, the results are not consistent. The Discussion and Conclusion section of the original warping study reported an additional experiment manipulating rectangle size (small vs. large) and found no difference in the magnitude of the warping effect (Vickery & Chun, 2010b, pp. 1762–1763). This finding further differentiates warping from similar size illusions, suggesting that object-based warping arises from a different aspect of perceptual judgment and is not due to relative size judgments. Future studies should examine the similarities and differences between warping and other size illusions to dissociate the origins of these potentially related but distinct errors of perception. 
In addition to object-based warping, the current study examined aspects of naturalistic presentation that may be relevant to future work. Although interest in VR research is increasing, few systematic investigations have examined how VR technology alters perception and performance. Our results suggest that naturalistic studies (like Experiment 1) translate to artificial environments in VR, which will be an important consideration going forward. 
The manipulations in VR were used to gain insight into object-based warping. Dot Distance was intrinsically related to multiple properties of VR (egocentric distance measurement, shading, spatial location), yet warping was consistently reduced for larger dot spacings. These findings could be explained by studies of spatial compression within virtual environments (Messing & Durgin, 2005; Sinai et al., 1999; Thompson et al., 2004; Viguier et al., 2001; Willemsen & Gooch, 2002; Witmer & Kline, 1998; Witmer & Sadowski, 1998), which report that perceived distances are systematically underestimated in virtual environments, a compression that would be more consequential for larger dot separations. Although a small manipulation, dot spacing highlights the subtle changes that can affect the warping effect in VR. 
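As a concrete illustration of how such warping scores can be computed (the percent by which a reproduced distance exceeds the actual dot separation, as plotted in the figures), a minimal Python sketch follows. The array names, example values, and grouping by dot spacing are hypothetical and are not drawn from the study's analysis code.

```python
import numpy as np

def percent_overestimate(actual_cm, reproduced_cm):
    """Warping score: signed percent by which the reproduced
    distance exceeds the actual dot separation."""
    actual_cm = np.asarray(actual_cm, dtype=float)
    reproduced_cm = np.asarray(reproduced_cm, dtype=float)
    return (reproduced_cm - actual_cm) / actual_cm * 100.0

# Hypothetical trials: actual separations and one participant's settings.
actual = np.array([10.0, 10.0, 20.0, 20.0])      # small vs. large dot distance
reproduced = np.array([11.7, 11.2, 21.4, 20.9])  # adjusted dot separations

scores = percent_overestimate(actual, reproduced)
for spacing in np.unique(actual):
    mean_score = scores[actual == spacing].mean()
    print(f"dot distance {spacing:.0f} cm: mean overestimate {mean_score:+.1f}%")
```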
A unique aspect of this study was the use of head orientation to update the display. When we initially set out to use virtual environments for perceptual research, it was unclear whether head movements should be permitted when comparing results with a typical 2D study. Experiments 1, 2, and 3 suggest that head movements make no difference, whereas Experiment 4 found somewhat larger measurement errors when head movements were allowed. Although the effect of head movements is small, it deserves further investigation to determine when head movements should be permitted in translational research. 
Finally, response modality was manipulated in the present study as it relates to object-based warping. Although the primary tools of computer-based research are keyboards and mice, VR technology conveniently integrates more naturalistic response modalities. The present findings suggest that the Touch controllers increased response variance compared with a standard mouse: when the Touch controller and mouse were compared directly in Experiment 4, response means were similar, but the Touch controller produced noticeably greater response variance. Interestingly, when the influences of Viewing Orientation and Response Modality were examined together, the warping effect was exaggerated for Touch responses in the Dynamic viewing environment and diminished in the Static viewing environment. This pattern was not observed in the mouse-based trials, suggesting a unique interaction of vision and action. To that end, the present findings also add to the literature on how responses change with the interaction of vision and proprioception (Adam et al., 2012; Goodhew et al., 2015; Gozli et al., 2012; McLeod, 1977; Pashler, 1990; Stelzel et al., 2006). Recent work in this area has focused on the different neural systems involved in visually guided responses. Specifically, the literature suggests that responses made with the hands near the visual stimuli rely on the magnocellular visual pathway and are faster but less accurate, whereas responses made with the hands far from the stimuli rely on the parvocellular pathway and are spatially more accurate at the expense of slower response times (Adam et al., 2012; Goodhew et al., 2015; Gozli et al., 2012). The findings from Experiment 4 are consistent with this notion: the combination of hand location and head orientation information led to increased spatial measurement error relative to the static viewing environment. These differences suggest that different visual pathways may be relied on depending on the level of immersion participants experience in virtual environments. This possibility should be tested further to determine the extent to which action can affect research in VR. 
The current study examined several considerations in translating a robust 2D illusion to both physical and virtual 3D presentations, while laying the groundwork for future studies of perception and action in VR. The findings suggest that object-based warping is a powerful phenomenon that can be observed in naturalistic settings; the effect persists across a wide variety of visual properties. These results also carry broad implications for experimental design choices in VR. Future studies should be cognizant of stimulus presentation parameters in VR development and should plan for repeated within-study replications. Together, these findings provide a stepping-stone for translating powerful laboratory-based studies of perception to natural environments. 
Acknowledgments
Supported by NSF OIA 1632849 to TJV, MDD, and colleagues. 
Commercial relationships: none. 
Corresponding author: Joshua E. Zosky. 
Address: Department of Psychology, University of Nebraska, Lincoln, NE, USA. 
Footnotes
1   The current measurements are provided in metric distance without degrees of visual angle owing to difficulty in adapting the calculations to a virtual environment.
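For context, the conversion set aside in this footnote is straightforward in a conventional fixed-distance 2D setup; it is the continuously varying viewing distance in VR that complicates it. The sketch below shows the standard formula with hypothetical sizes and distances; it is not part of the study's procedures.

```python
import math

def visual_angle_deg(size_m, viewing_distance_m):
    """Visual angle subtended by an object of a given physical size
    at a given viewing distance (standard fixed-distance formula)."""
    return math.degrees(2 * math.atan(size_m / (2 * viewing_distance_m)))

# Hypothetical example: a 10 cm dot separation viewed from 0.5 m vs. 2 m.
for d in (0.5, 2.0):
    print(f"{d} m viewing distance: {visual_angle_deg(0.10, d):.2f} deg")
```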
References
Adam, J. J., Bovend'Eerdt, T. J. H., van Dooren, F. E. P., Fischer, M. H., & Pratt, J. (2012). The closer the better: Hand proximity dynamically affects letter recognition accuracy. Attention, Perception, & Psychophysics, 74(7), 1533–1538. https://doi.org/10/f39fct
Anstis, S. (2001). Footsteps and inchworms: Illusions show that contrast affects apparent speed. Perception, 30(7), 785–794. https://doi.org/10/ff6vnq
Bruno, N. (2001). Breathing illusions and boundary formation in space-time. In Advances in psychology (Vol. 130, pp. 531–556). New York: Elsevier.
Creem-Regehr, S. H., & Kunz, B. R. (2010). Perception and action. WIREs Cognitive Science, 1(6), 800–810. https://doi.org/10/fqhq39
Delboeuf, F. J. (1865). Note sur certaines illusions d'optique: Essai d'une théorie psychophysique de la maniere dont l'oeil apprécie les distances et les angles. Bulletins de l'Académie Royale Des Sciences, Lettres et Beaux-Arts de Belgique, 19, 195–216.
Duncan, J. (1981). Directing attention in the visual field. Perception & Psychophysics, 30(1), 90–93. https://doi.org/10/dkwjrt
Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. https://doi.org/10/btcstp
Gobell, J., & Carrasco, M. (2005). Attention alters the appearance of spatial frequency and gap size. Psychological Science, 16(8), 644–651. https://doi.org/10/c366gm
Goodhew, S. C., Edwards, M., Ferber, S., & Pratt, J. (2015). Altered visual perception near the hands: A critical review of attentional and neurophysiological models. Neuroscience & Biobehavioral Reviews, 55, 223–233. https://doi.org/10/f7j6kx
Gozli, D. G., West, G. L., & Pratt, J. (2012). Hand position alters vision by biasing processing through different visual pathways. Cognition, 124(2), 244–250. https://doi.org/10/f33dgx
Gramann, K., Ferris, D. P., Gwin, J., & Makeig, S. (2014). Imaging natural cognition in action. International Journal of Psychophysiology, 91(1), 22–29. https://doi.org/10/f5tsdz
Kitaoka, A., & Ashida, H. (2003). Phenomenal characteristics of the peripheral drift illusion. Vision, 15(4), 261–262. https://doi.org/10/ggscw4
Kovács, I., & Julesz, B. (1994). Perceptual sensitivity maps within globally defined visual shapes. Nature, 370(6491), 644–646. https://doi.org/10/fgx83d
Künnapas, T. M. (1955). Influence of frame size on apparent length of a line. Journal of Experimental Psychology, 50(3), 168–170. https://doi.org/10/c84d2c
Liverence, B. M., & Scholl, B. J. (2011). Selective attention warps spatial representation: Parallel but opposing effects on attended versus inhibited objects. Psychological Science, 22(12), 1600–1608. https://doi.org/10/dncvwv
Malcolm, G. L., & Shomstein, S. (2015). Object-based attention in real-world scenes. Journal of Experimental Psychology: General, 144(2), 257–263. https://doi.org/10/f678hd
Marcus, D. S., & Van Essen, D. C. (2002). Scene segmentation and attention in primate cortical areas V1 and V2. Journal of Neurophysiology, 88(5), 2648–2658. https://doi.org/10/fv972m
Massaro, D. W., & Anderson, N. H. (1971). Judgmental model of the Ebbinghaus illusion. Journal of Experimental Psychology, 89(1), 147–151. https://doi.org/10/fq2wjz
McIntire, J. P., Havig, P. R., & Geiselman, E. E. (2014). Stereoscopic 3D displays and human performance: A comprehensive review. Displays, 35(1), 18–26. https://doi.org/10/f5vzch
McLeod, P. (1977). A dual task response modality effect: Support for multiprocessor models of attention. Quarterly Journal of Experimental Psychology, 29(4), 651–667. https://doi.org/10/dgjc5n
Messing, R., & Durgin, F. H. (2005). Distance perception and the visual horizon in head-mounted displays. ACM Transactions on Applied Perception, 2(3), 234–250. https://doi.org/10/cfvmr5
Moore, C. M., Yantis, S., & Vaughan, B. (1998). Object-based visual selection: Evidence from perceptual completion. Psychological Science, 9(2), 104–110. https://doi.org/10/dw4s9b
Nakayama, K., Shimojo, S., & Silverman, G. H. (1989). Stereoscopic depth: Its relation to image segmentation, grouping, and the recognition of occluded objects. Perception, 18(1), 55–68. https://doi.org/10/fv3mqb
Otten, M., Pinto, Y., Paffen, C. L. E., Seth, A. K., & Kanai, R. (2016). The Uniformity Illusion: Central stimuli can determine peripheral perception. Psychological Science, 28(1), 56–68. https://doi.org/10/f9vhs5
Palmer, S. E. (2002). Perceptual grouping: It's later than you think. Current Directions in Psychological Science, 11(3), 101–106. https://doi.org/10/dg5t8v
Pashler, H. (1990). Do response modality effects support multiprocessor models of divided attention? Journal of Experimental Psychology: Human Perception and Performance, 16(4), 826–842. https://doi.org/10/cq9txx
Pinna, B., & Brelstaff, G. J. (2000). A new visual illusion of relative motion. Vision Research, 40(16), 2091–2096. https://doi.org/10/ckvcnx
Rock, I., & Brosgole, L. (1964). Grouping based on phenomenal proximity. Journal of Experimental Psychology, 67(6), 531–538. https://doi.org/10/csqbzc
Shiffrar, M., & Pavel, M. (1991). Percepts of rigid motion within and across apertures. Journal of Experimental Psychology: Human Perception and Performance, 17(3), 749–761. https://doi.org/10/czr8qq
Sinai, M., Krebs, W., Darken, R., Rowland, J., & McCarley, J. (1999). Egocentric distance perception in a virtual environment using a perceptual matching task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 43(22), 1256–1260. https://doi.org/10/fz36w9
Stelzel, C., Schumacher, E. H., Schubert, T., & D'Esposito, M. (2006). The neural effect of stimulus-response modality compatibility on dual-task performance: An fMRI study. Psychological Research, 70(6), 514–525. https://doi.org/10/fvhp6t
Suchow, J. W., & Alvarez, G. A. (2011). Motion silences awareness of visual change. Current Biology, 21(2), 140–143. https://doi.org/10/dvtppm
Suzuki, S., & Cavanagh, P. (1997). Focused attention distorts visual space: An attentional repulsion effect. Journal of Experimental Psychology: Human Perception and Performance, 23(2), 443–463. https://doi.org/10/fhdd8q
Thompson, W. B., Willemsen, P., Gooch, A. A., Creem-Regehr, S. H., Loomis, J. M., & Beall, A. C. (2004). Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence: Teleoperators and Virtual Environments, 13(5), 560–571. https://doi.org/10/dvrg6x
Tiurina, N. A., & Utochkin, I. S. (2019). Ensemble perception in depth: Correct size-distance rescaling of multiple objects before averaging. Journal of Experimental Psychology: General, 148(4), 728–738. https://doi.org/10/ggkhh6
Vickery, T. J., & Chun, M. M. (2010a). Warped spatial perception within and near objects. Journal of Vision, 10(7), 1186. https://doi.org/10.1167/10.7.1186
Vickery, T. J., & Chun, M. M. (2010b). Object-based warping: An illusory distortion of space within objects. Psychological Science, 21(12), 1759–1764. https://doi.org/10/drn6t4
Viguier, A., Clément, G., & Trotter, Y. (2001). Distance perception within near visual space. Perception, 30(1), 115–124. https://doi.org/10/dt7pvc
Willemsen, P., & Gooch, A. A. (2002). Perceived egocentric distances in real, image-based, and traditional virtual environments. Proceedings IEEE Virtual Reality 2002, 275–276. https://doi.org/10.1109/VR.2002.996536
Witmer, B. G., & Kline, P. B. (1998). Judging perceived and traversed distance in virtual environments. Presence: Teleoperators and Virtual Environments, 7(2), 144–167. https://doi.org/10/b594cj
Witmer, B. G., & Sadowski, W. J. (1998). Nonvisually guided locomotion to a previously viewed target in real and virtual environments. Human Factors, 40(3), 478–488. https://doi.org/10/dzhhv8
Figure 1. Actual 3D-printed stimulus sets. (Top left) Two-object reference set. (Top right) One-object reference set. (Bottom) Adjustment set.
Figure 2. Stimuli used in the VR experiments. The diagram in (A) shows all stimuli used in Experiments 2 and 3 (No Object, Rectangle, Separated Rectangle, Occluder) and the additional stimulus from Experiment 4 (Merged Occluder). (B) A close-up comparison of the Occluder and Merged Occluder in the Near-Small and Far-Small formats. The bottom face of the occluding white rectangle in the Occluder object is evident at the near viewing distance, but considerably less so at the far viewing distance.
Figure 3. Example of a single trial in the VR experiments with the following characteristics: Rectangle Object, Near Viewing Distance, Small Dot Distance, Static Viewing Orientation.
Figure 4. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 2 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
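The box-plot conventions described in these captions (quartile boxes, median line, whiskers, and diamond markers for points beyond 1.5× the interquartile range) correspond to standard settings in common plotting libraries. The sketch below is a hypothetical matplotlib illustration with simulated scores, not the code used to generate the published figures.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical per-participant warping scores (% overestimate) for two dot distances.
small = rng.normal(loc=15, scale=5, size=24)
large = rng.normal(loc=8, scale=5, size=24)

fig, ax = plt.subplots()
ax.boxplot(
    [small, large],
    whis=1.5,                                   # whiskers at 1.5x the interquartile range
    flierprops=dict(marker="D", markersize=4),  # outliers drawn as diamonds
)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Small", "Large"])
ax.set_xlabel("Dot distance")
ax.set_ylabel("Measurement overestimate (%)")
plt.show()
```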
Figure 5. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 2 for each level of viewing distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 6. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 3 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 7. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of dot distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 8. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of viewing distance. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 9. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of viewing orientation. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 10. Box-plot comparison of object-based warping (percent measurement overestimate) in Experiment 4 for each level of response modality. Boxes represent quartile ranges, with the bisecting line representing the median response across participants. Whiskers represent the extent of individual participant responses, with diamonds highlighting outliers (based on 1.5× the interquartile range).
Figure 11. Close-up view of items in the Occluder (top) and Merged Occluder (bottom) conditions. The visible bottom face of the white rectangle, the perpendicular meeting points at the inside corners, and the offset between the two rectangles' centers suggest that the rectangles lie on different depth planes.
Table 1. Experiment 2 ANOVA results. Notes: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom; the mean square error (MSE) and p values in the table incorporate this correction; SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
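As an illustration of the correction described in these table notes, the Greenhouse-Geisser epsilon can be estimated from the covariance of the repeated measures and used to scale both degrees of freedom before the p value is computed. The sketch below is a minimal one-way repeated-measures example in Python with simulated data; the study's analyses involve additional within-subject factors and are not reproduced here.

```python
import numpy as np
from scipy.stats import f as f_dist

def gg_rm_anova(data):
    """One-way repeated-measures ANOVA with Greenhouse-Geisser correction.
    `data` is an (n subjects x k conditions) array of scores."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)

    # Greenhouse-Geisser epsilon from the sample covariance of the conditions.
    S = np.cov(data, rowvar=False)
    mean_diag, mean_all = np.trace(S) / k, S.mean()
    num = (k * (mean_diag - mean_all)) ** 2
    den = (k - 1) * ((S ** 2).sum()
                     - 2 * k * (S.mean(axis=1) ** 2).sum()
                     + (k * mean_all) ** 2)
    eps = min(1.0, num / den)  # epsilon cannot exceed 1

    # p value with both degrees of freedom scaled by epsilon.
    p = f_dist.sf(F, eps * df_cond, eps * df_err)
    return F, eps, p

# Hypothetical warping scores: 8 participants x 3 conditions.
rng = np.random.default_rng(1)
scores = rng.normal(loc=[5, 15, 12], scale=4, size=(8, 3))
print(gg_rm_anova(scores))
```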
Table 2. Results from Experiment 2. Means (M) and standard deviations (SD) for warping as a function of a 4 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) design, along with the total number of participants included in the analysis (N).
Table 3. Experiment 3 ANOVA results. Notes: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom; the mean square error (MSE) and p values in the table incorporate this correction; SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
Table 4. Results from Experiment 3. Means (M) and standard deviations (SD) for warping as a function of a 4 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) design, along with the total number of participants included in the analysis (N).
Table 5. Experiment 4 ANOVA results. Notes: dfNum, degrees of freedom numerator; dfDen, degrees of freedom denominator; Epsilon, Greenhouse-Geisser multiplier for degrees of freedom; the mean square error (MSE) and p values in the table incorporate this correction; SSNum, sum of squares numerator; SSDen, sum of squares denominator; ηp², partial eta-squared.
Table 6. Results from Experiment 4. Means (M) and standard deviations (SD) for warping as a function of a 4 (object) × 2 (viewing distance) × 2 (dot distance) × 2 (visual orientation) × 2 (response modality) design, along with the total number of participants included in the analysis (N).