Article | July 2013
Humans perceive object motion in world coordinates during obstacle avoidance
Journal of Vision July 2013, Vol.13, 25. doi:10.1167/13.8.25
Brett R. Fajen, Melissa S. Parade, Jonathan S. Matthis; Humans perceive object motion in world coordinates during obstacle avoidance. Journal of Vision 2013;13(8):25. doi: 10.1167/13.8.25.

Abstract
A fundamental question about locomotion in the presence of moving objects is whether movements are guided based upon perceived object motion in an observer-centered or world-centered reference frame. The former captures object motion relative to the moving observer and depends on both observer and object motion. The latter captures object motion relative to the stationary environment and is independent of observer motion. Subjects walked through a virtual environment (VE) viewed through a head-mounted display and indicated whether they would pass in front of or behind a moving obstacle that was on course to cross their future path. Subjects' movement through the VE was manipulated such that object motion in observer coordinates was affected while object motion in world coordinates was the same. We found that when moving observers choose routes around moving obstacles, they rely on object motion perceived in world coordinates. This entails a process, which has been called flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a), that recovers the component of optic flow due to object motion independent of self-motion. We found that when self-motion is real and actively generated, the process by which object motion is recovered relies on both visual and nonvisual information to factor out the influence of self-motion. The remaining component contains information about object motion in world coordinates that is needed to guide locomotion.

Introduction
Many locomotor tasks involve interactions with moving objects. People weave through crowds in shopping malls, athletes dodge their opponents on the playing field, and animals chase prey in the wild. Such tasks comprise a family of actions that require humans and other animals to coordinate their movements with the movements of other objects. Attempts to understand how locomotion is guided in the presence of moving objects often begin with an analysis of information in optic flow. 
Figure 1A depicts the optic flow field for a moving observer with an object moving from right to left across the observer's future path. The local optical motion of the moving object (depicted by the yellow vector) reflects the motion of the object relative to the moving observer—that is, object motion in an observer-centered reference frame. As such, the same local optical motion results from different combinations of observer and object motion with the same relative motion. 
Figure 1
 
Optic flow field and decomposition into self-motion and object-motion components. (A) Optic flow field generated by an observer moving over a ground surface and an object (yellow dot) moving from right to left. (B) Component of optic flow due to self-motion independent of object motion. (C) Component of optic flow due to object motion independent of self-motion. The optic flow field in (A) is the vector sum of the self-motion (B) and object-motion (C) components. From Fajen, B. R., & Matthis, J. S. (2013). Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One, 8(2): e55446. doi:10.1371/journal.pone.0055446, used under a Creative Commons Attribution License.
The optic flow field depicted in Figure 1A can be parsed into two components: a self-motion component (Figure 1B), which reflects the motion of the observer independent of the motion of other objects, and an object-motion component (Figure 1C), which reflects the motion of objects independent of the motion of the observer. That is, the optic flow field is the vector sum of the self-motion component and the object-motion component. Whereas the local optical motion of the moving object in Figure 1A reflects object motion in a reference frame that moves with the observer (i.e., observer coordinates), the motion of the moving object in Figure 1C reflects object motion in a stationary reference frame (i.e., world coordinates). 
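The decomposition in Figure 1 can be made concrete with a small numerical sketch. The vectors below are hypothetical image-plane velocities chosen for illustration, not values from the study; the point is only that the flow at the object's location is the vector sum of the two components, so recovering object motion in world coordinates amounts to subtracting an estimate of the self-motion component.

```python
import numpy as np

# Illustrative 2-D image-plane velocities (deg/s); values are hypothetical.
self_motion_component = np.array([1.5, -0.8])    # flow due to observer motion (Figure 1B)
object_motion_component = np.array([-3.0, 0.0])  # flow due to object motion (Figure 1C)

# What the retina actually receives at the object's location (Figure 1A):
# the vector sum of the two components.
observed_flow = self_motion_component + object_motion_component

# Flow parsing: estimate and subtract the self-motion component to recover
# object motion independent of self-motion, i.e., in world coordinates.
recovered_object_motion = observed_flow - self_motion_component

print(recovered_object_motion)  # equals the true object-motion component
```

The subtraction is trivial once the self-motion component is known; the substantive question addressed in this study is which sources of information (visual, nonvisual, or both) the visual system uses to estimate that component.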
A question of major theoretical significance for models of visually guided interception and obstacle avoidance is whether observers rely upon object motion perceived in an observer-centered reference frame or a world-centered reference frame. The former allows for the possibility that the local optical motion of the moving object in the optic flow field is sufficient to guide locomotion. For example, the leftward drift of the moving object in Figure 1A specifies that the object will pass in front of the observer if current speed and direction are maintained. If the object is a target to be intercepted, the observer should increase speed and/or turn to the left. Conversely, if the object is an obstacle to be avoided and is not laterally drifting in the optic flow field, it is on a collision course and evasive action is called for. This strategy for using local optical motion, along with minor variations of it, has been proposed as an account of collision detection, obstacle avoidance, and interception in both humans (Chardenon, Montagne, Laurent, & Bootsma, 2005; Cutting, Vishton, & Braren, 1995; Fajen & Warren, 2004, 2007; Lenoir, Musch, Thiery, & Savelsbergh, 2002; Ni & Andersen, 2008; Rushton & Allison, 2013; Rushton, Harris, Lloyd, & Wann, 1998) and nonhuman animals (Collett & Land, 1978; Lanchester & Mark, 1975; Olberg, Worthington, & Venator, 2000). Because the object's lateral motion in the optic flow field reflects the relative motion between the object and the observer, such models imply that interception and obstacle avoidance are guided by object motion perceived in observer coordinates. 
Alternatively, guiding locomotion in the presence of moving objects may require recovering the object-motion component of optic flow—that is, the component that reflects the motion of objects in world coordinates independent of the motion of the observer (Fajen & Matthis, 2011). Because the optic flow field is influenced by both the motion of the observer and the motion of other objects, recovering the object-motion component requires factoring out the influence of self-motion (Wallach, 1987). This process is known as flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a). 
Studies of flow parsing demonstrate that humans are capable of recovering the object-motion component and perceiving object motion in world coordinates (Matsumiya & Ando, 2009). However, it remains unclear whether this process actually plays any role in visually guided interception and obstacle avoidance. Models based on the lateral motion of the object in the optic flow field suggest that flow parsing is superfluous because the lateral optical motion is sufficient to guide locomotion. Yet there are several important aspects of interception and avoidance of moving objects that these models cannot capture (Fajen, 2013; Fajen & Matthis, 2013): (a) they treat objects and the observer as points without physical extent (but see Rushton, Wen, & Allison, 2002 for an attempt to address this problem), (b) they ignore the fact that there are limits to how fast one can move, and (c) they offer no account of how speed and direction of locomotion are coordinated during interception and obstacle avoidance. An alternative model presented by Fajen and Matthis (Fajen, 2013; Fajen & Matthis, 2013) provides a basis for addressing these limitations, but it is based on object motion in world coordinates and therefore requires the visual system to recover the object-motion component of optic flow. 
The primary aim of the present study was to test two competing hypotheses about the perception of object motion during obstacle avoidance: humans rely on information in optic flow, which reflects object motion in observer coordinates (Hypothesis 1), versus humans must recover object motion in world coordinates (Hypothesis 2). We considered two versions of Hypothesis 2, one that relies entirely on visual self-motion information to recover object motion in world coordinates (Hypothesis 2A) and one that relies on both visual and nonvisual self-motion information (Hypothesis 2B). 
Choosing routes around moving obstacles
Subjects performed a route decision task in an ambulatory virtual environment that was viewed through a head-mounted display (Figure 2A). They walked straight ahead from a home position while a virtual obstacle moved from right to left for 1.4 s, at which time it disappeared (Figure 2B). Subjects quickly judged whether they could have avoided the obstacle by passing in front of it before it reached their locomotor path. They were instructed to base their judgments on whether they could have passed in front if they had been allowed to walk as quickly as possible but not run.1 
Figure 2
 
Screenshot and task. (A) Screenshot of virtual environment viewed through HMD. (B) Plan view of observer moving straight ahead and object moving from right to left toward an unmarked location (×) 3, 4, or 5 m from the home location. (C) Lateral shift manipulation applied in Session A-Shift and Session B-Shift trials. Observer's position in the virtual environment was shifted to the left by 20% of his or her forward displacement.
The obstacle moved along one of 15 unique trajectories, which were defined by two factors: the location along an imaginary line extending forward from the subjects' initial position toward which the obstacle moved (indicated by an “×” in Figure 2B), and the amount of time it would have taken for the obstacle to reach that point had it not disappeared (i.e., time-to-crossing [TTC]). The top-left quadrant of Figure 3 shows the specific values of location and TTC. The trajectories were chosen to yield a range of responses varying from easily passable even at slow walking speeds (i.e., when the obstacle moved toward a nearby location and TTC was long) to definitely not passable even at a fast walking speed (i.e., when the obstacle moved toward a distant location and TTC was short). Each trajectory was repeated eight times for a total of 120 trials. 
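As a sketch of how the two design factors define a trajectory, the snippet below computes a constant-velocity obstacle path from a crossing location and a TTC. The start position, crossing depth, TTC, and coordinate convention (x in meters to the right of the midline, z in meters of depth) are illustrative assumptions within the ranges reported, not the study's actual parameter values.

```python
import math

# Hypothetical obstacle trajectory, per the design in Figure 2B.
start = (1.75, 5.75)   # obstacle start: (m right of midline, m in depth)
crossing = (0.0, 4.0)  # unmarked crossing point on the subject's forward line
ttc = 3.0              # seconds the obstacle would take to reach the crossing

# Constant-velocity trajectory: velocity = displacement / TTC.
vx = (crossing[0] - start[0]) / ttc   # negative = leftward
vz = (crossing[1] - start[1]) / ttc
speed = math.hypot(vx, vz)

# Position at which the obstacle disappears, 1.4 s after motion onset.
t_disappear = 1.4
pos_at_disappear = (start[0] + vx * t_disappear,
                    start[1] + vz * t_disappear)
```

Holding the crossing location fixed while shortening the TTC makes the obstacle faster, which is how the design varied difficulty from easily passable to definitely not passable.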
Figure 3
 
Schematic diagram of design of experiment. The four main quadrants represent trials with 0% lateral shift (red) and trials with 20% lateral shift (blue) in Sessions A and B. Session A comprised 120 trials with 0% lateral shift (solid red) and 24 randomly interspersed catch trials with 20% lateral shift (checkered blue). Session B comprised 120 trials with 20% lateral shift (solid blue) and 24 randomly interspersed catch trials with 0% lateral shift (checkered red).
In addition, there were also 24 randomly interspersed catch trials in which subjects' position in the virtual environment was laterally shifted to the left on each frame by 20% of their forward displacement, which corresponds to a ∼11° shift in the locomotor path (see Figure 2C). The lateral shift manipulation was similar to that used by Warren, Kay, Zosh, Duchon, and Sahuc (2001). The initial conditions in catch trials matched a subset of the initial conditions in normal trials (see bottom left quadrant of Figure 3). The 120 normal trials without a lateral shift and 24 randomly interspersed catch trials with a lateral shift comprised Session A (left column of Figure 3). 
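A small sketch shows where the ∼11° figure comes from: shifting the viewpoint left by a fixed fraction of forward displacement on every frame tilts the virtual locomotor path by arctan of that fraction. The frame step size and sign convention (leftward = negative x) are illustrative assumptions.

```python
import math

SHIFT_GAIN = 0.20  # lateral shift: 20% of forward displacement per frame

def shifted_position(x, z, dz):
    """One frame of the manipulation: a forward step dz also displaces the
    virtual viewpoint leftward by SHIFT_GAIN * dz (leftward = negative x)."""
    return x - SHIFT_GAIN * dz, z + dz

# Walk 3 m straight ahead in 1 cm steps.
x, z = 0.0, 0.0
for _ in range(300):
    x, z = shifted_position(x, z, 0.01)

# Angle of the virtual path relative to the actual walking direction.
path_angle = math.degrees(math.atan2(-x, z))
print(round(path_angle, 1))  # ≈ 11.3, matching the ~11° shift reported
```

Because the shift is proportional to forward displacement rather than time, the angular offset of the virtual path is the same regardless of walking speed.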
Subjects also completed a second session (Session B) within 4 days of Session A. The order of sessions was counterbalanced across subjects. Session B was identical to Session A with two exceptions. First, there were 120 trials with a 20% lateral shift and 24 randomly interspersed catch trials with a 0% lateral shift (i.e., the reverse of Session A). Second, the set of initial conditions used in catch trials in the two sessions differed (compare initial conditions for trials indicated by checkered blocks in Figure 3). As we will explain in the Results section below, this design allowed us to determine whether route judgments were based on object motion in observer coordinates or world coordinates. 
Methods
Subjects
Eleven subjects (six men, five women, mean age: 19.0 years) participated in the experiment. Subjects were compensated for participation with extra credit. 
Equipment
The experiment was conducted in a 6.5 m × 9 m ambulatory virtual environment laboratory. Subjects wore an nVis nVisor SX111 stereoscopic HMD (nVis, Inc., Reston, VA) with a resolution of 1280 × 1024 pixels per eye and a diagonal field of view of 111°. Head position and orientation were tracked using an Intersense IS-900 motion tracking system (Intersense, Billerica, MA). Data from the tracking system were used to update the position and orientation of the simulated viewpoint. The virtual environment was created using Vizard Virtual Reality Toolkit (WorldViz LLC, Santa Barbara, CA) running on an Alienware Area-51 PC (Dell, Inc., Round Rock, TX). 
Virtual environment and procedure
The virtual environment consisted of a green, grass-textured ground surface, a black sky, and an array of randomly distributed bamboo-textured posts (Figure 2A). Subjects began each trial by walking to a designated home location, which was a rectangular box in the virtual environment that changed color from translucent red to translucent yellow when the subject's head was inside the box. Subjects also turned to face an alignment marker, which appeared as a thin vertical line in the distance. Once they were properly positioned and aligned, they pressed a button on a handheld remote mouse, which triggered the appearance of a stationary, yellow obstacle (a cylinder 2.0 m tall × 0.1 m in diameter). The initial position of the obstacle varied randomly between 5.5 and 6.0 m in depth and between 1.5 and 2.0 m to the right of the midline. After a 0.5 s delay, the home box and alignment marker disappeared, the obstacle began moving leftward, and an auditory “go signal” was presented to cue subjects to begin walking. Subjects were instructed to walk straight ahead in the direction that they were facing. 
The trajectory of the obstacle was determined by the location along the imaginary line extending forward from the subjects' initial position toward which the obstacle moved and the amount of time it would have taken to arrive at that point (TTC). The 15 conditions (3 locations × 5 TTCs) used in normal trials and the six conditions (3 locations × 2 TTCs) used in catch trials in each session are shown in Figure 3. The obstacle disappeared 1.4 s after it began moving, which was between 0.9 and 1.7 s before it reached the midline depending on the value of TTC in that trial. Subjects pressed one of two buttons on the handheld mouse to indicate whether they would have avoided the obstacle by passing in front of it before it reached their locomotor path or passing behind it after it crossed their locomotor path. Judgments had to be entered within a response window that began 1.0 s after the trial began and lasted 1.6 s, or else the trial was aborted and repeated later in the session. 
Odd-numbered and even-numbered trials were performed while walking in opposite directions in the lab. Therefore, after subjects entered the response, the start box for the next trial appeared in front of them. Subjects walked into the box and turned 180° to face the alignment marker in preparation for the next trial. The distance between the start boxes for odd-numbered and even-numbered trials (and hence the approximate distance that subjects walked between trials) was 3 m. The virtual environment remained visible as subjects walked to the start box, with the same value of lateral shift (0% or 20%) that was applied before the response was entered. 
Before starting the experiment, subjects completed a short warm-up session designed to familiarize themselves with moving through the virtual environment and performing the judgment task. In Session B, the warm-up session was completed with the lateral shift to ensure that subjects were properly adapted to the conditions encountered in normal trials in that session. The Institutional Review Board at Rensselaer Polytechnic Institute approved the experimental protocol. 
Data analyses
The dependent measure was the percentage of trials in which subjects judged that they would pass in front of the object, averaged across various subsets of conditions, as explained below. We refer to this as “% passable.” 
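A minimal sketch of the dependent measure: "% passable" is simply the percentage of trials within a condition subset on which the subject judged they could pass in front. The trial records and field names below are hypothetical, for illustration only.

```python
# Hypothetical trial records; fields and values are illustrative.
trials = [
    {"session": "A", "shift": 0.0, "judged_front": True},
    {"session": "A", "shift": 0.0, "judged_front": False},
    {"session": "A", "shift": 0.0, "judged_front": True},
    {"session": "B", "shift": 0.2, "judged_front": False},
]

def percent_passable(trials, session, shift):
    """% of trials in the given session/shift subset judged passable."""
    subset = [t for t in trials
              if t["session"] == session and t["shift"] == shift]
    if not subset:
        return float("nan")
    return 100.0 * sum(t["judged_front"] for t in subset) / len(subset)

print(percent_passable(trials, "A", 0.0))  # 2 of 3 judged passable, ≈ 66.7
```

In the study this value was computed per subject over matched condition subsets (e.g., Session A-No Shift vs. Session B-No Shift) before the group comparisons reported in the Results.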
The logic of the analyses assumed that there were no systematic differences in walking behavior in the real world across sets of trials from different conditions. To confirm this assumption, we measured subjects' head position in real world coordinates 1 s after the trial began. For the purposes of this analysis, we did not consider head position after 1 s because this was the moment at which subjects could begin to enter their responses. Because we were concerned with walking trajectories up until responses were entered, 1 s was the last possible moment at which we could be certain that subjects had not yet entered a response. For each analysis considered (see also, Figure 4), we calculated the difference in mean head position at 1 s between the two sets of trials. The mean differences along the x-axis were 1.4 cm (Session A-No Shift/Session B-No Shift), 2.3 cm (Session A-Shift/Session B-Shift), 2.7 cm (Session B-Shift/Session B-No Shift), and 4.1 cm (Session A-No Shift/Session B-Shift) and along the z-axis were 1.2 cm (Session A-No Shift/Session B-No Shift), 0.7 cm (Session A-Shift /Session B-Shift), 1.0 cm (Session B-Shift/Session B-No Shift), and 0.2 cm (Session A-No Shift/Session B-Shift). These differences indicate that subjects followed nearly identical walking trajectories in each set of trials. 
Figure 4
 
Summary of results. (A) and (C) show the subset of conditions used for analyses shown in (B) and (D), respectively. Error bars represent ±1 SE and asterisks denote statistically significant differences.
The fact that walking trajectories were so similar in all four conditions may seem inconsistent with previous studies on locomotor adaptation. For example, subjects in Bruggeman, Zosh, & Warren (2007) walked to a visible goal while the focus of expansion was offset from the actual direction of locomotion, similar to the lateral shift manipulation used in the present study. Within a few trials, they adapted to the offset such that they followed a different walking trajectory to the goal. Given that the lateral shift was present in ∼83% of trials in Session B, one might expect walking trajectories in that session to differ from those in Session A. As we already explained, this was not the case. The reason for this was that in the present study, subjects did not walk to a visible target but rather walked straight ahead in the direction that they were already facing, which was the same in all four conditions as determined by the alignment marker and start box. Therefore, the similarity of walking trajectories across conditions was not inconsistent with previous studies or with adaptation to the lateral shift. 
Results
Hypothesis 1: Object motion is perceived in observer coordinates
According to Hypothesis 1, observers base their judgments on object motion perceived in observer coordinates using information that is directly available in the optic flow field. Therefore, Hypothesis 1 predicts that judgments should be similar in conditions in which the visual information that is available to subjects as they move is the same. We tested this prediction by comparing No Shift trials in Session A with No Shift trials in Session B. For this analysis, we focused on the subset of Session A-No Shift trials with initial conditions that matched those in Session B-No Shift trials (see dark red blocks in upper left quadrant of Figure 4A). Therefore, differences in judgments could not be attributed to differences in initial conditions. We also confirmed that there were no systematic differences in walking behavior across conditions (see data analyses section of Methods). As such, we can assume that for a pair of trials from different sets (i.e., one Session A-No Shift trial and one Session B-No Shift trial) with identical initial conditions (i.e., the same location and initial TTC), the visual information that was available to subjects as they moved was effectively the same. 
If judgments were based on information in optic flow that reflects object motion in observer coordinates, judgments in Session A-No Shift trials should be similar to judgments in Session B-No Shift trials. The findings were inconsistent with this prediction (see solid and checkered red bars in Figure 4B). Subjects were significantly more likely to perceive that they could pass in front in Session A-No Shift trials compared with Session B-No Shift trials, t(10) = 4.04, p < 0.01. 
Hypothesis 1 makes a similar prediction about Session A-Shift trials and the subset of Session B-Shift trials with the same initial conditions (solid and checkered blue blocks in Figure 4A). The same lateral shift manipulation that was applied in Session A-Shift trials was also applied in Session B-Shift trials, and walking behavior was nearly identical in these two conditions. Therefore, the visual information that was available to subjects in these two conditions was effectively the same. Contrary to the predictions of Hypothesis 1, subjects were significantly more likely to perceive that they could pass in front in Session A-Shift trials compared with Session B-Shift trials, t(10) = 3.74, p < 0.01 (see solid and checkered blue bars in Figure 4B). Taken together, the first set of analyses demonstrates that judgments differed under conditions in which object motion in observer coordinates was the same, which is inconsistent with the predictions of Hypothesis 1. 
Next, we tested Hypothesis 2, which predicts that observers base their judgments on perceived object motion in world coordinates, a process that involves flow parsing. We considered two versions of this hypothesis that differ in terms of the contributions of visual and nonvisual self-motion information to flow parsing. 
Hypothesis 2A: Object motion is perceived in world coordinates and is recovered using visual self-motion information
The first version of Hypothesis 2, which we labeled Hypothesis 2A, states that the self-motion component of optic flow (i.e., the component that must be factored out) is based entirely on visual self-motion information with no contribution of nonvisual information. Several previous studies have investigated the influence of visual self-motion information by presenting stationary observers with stimuli simulating combined self-motion and object motion (Matsumiya & Ando, 2009; Royden & Connors, 2010; Royden & Moore, 2012; Royden, Wolfe, & Klempen, 2001; Rushton, Bradshaw, & Warren, 2007; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009b). Perceived object motion was influenced by global optic flow simulating self-motion, indicating that visual information can be used to factor out the self-motion component of optic flow. Here we asked whether humans rely entirely on visual self-motion information even when self-motion is real and actively generated, as it was in the present study; that is, when nonvisual self-motion information is also available. 
Like Hypothesis 1, Hypothesis 2A predicts that judgments should be similar in normal trials in one session and catch trials in the opposite session (i.e., the two comparisons in Figure 4A), because both the local optical motion of the object and the global optic flow specifying self-motion were the same in these two conditions. As already noted, subjects were significantly more likely to perceive that they could pass in front in Session A-No Shift trials compared with Session B-No Shift trials, and in Session A-Shift trials compared with Session B-Shift trials (Figure 4B). Thus, the analyses of normal trials in one session and catch trials in the opposite session do not support the predictions of Hypothesis 2A either. 
Hypothesis 2B: Object motion is perceived in world coordinates and is recovered using visual and nonvisual self-motion information
These analyses are, however, consistent with the second version of this hypothesis (Hypothesis 2B), which states that observers rely on perceived object motion in world coordinates and use both visual and nonvisual self-motion information for flow parsing. We illustrate this point for Session A-No Shift trials and Session B-No Shift trials (i.e., conditions indicated by red bars in Figure 4A). Because there was no lateral shift in either set of trials and because walking behavior was nearly identical in both trial types, the optic flow field in pairs of trials with matching initial conditions was effectively the same. However, Session B-No Shift trials were randomly interspersed within a larger set of Session B-Shift trials with a 20% leftward shift. Consequently, as subjects walked through the virtual environment in Session B-Shift trials, nonvisual self-motion information generated by walking in one direction was accompanied by global optic flow corresponding to walking ∼11° to the left. Because the lateral shift was applied in the majority of trials in Session B, subjects should adapt to this change in the relation between nonvisual self-motion information and global optic flow. When adaptation occurred, the perceived direction of self-motion based on nonvisual information shifted leftward toward the visually specified direction of locomotion (similar to the effect reported in Bruggeman et al., 2007). This is illustrated in Figure 5, which shows how the perceived direction of self-motion based on nonvisual information shifted as the observer adapted to the leftward shifted flow field. 
Figure 5
 
Adaptation of perceived direction of self-motion based on nonvisual information in Session B. Over repeated Session B-Shift trials with the optic flow field shifted leftward, perceived direction based on nonvisual information (NV) shifted leftward toward the optically specified direction of self-motion (V). The effects of adaptation carried over to Session B-No Shift trials, in which the lateral shift was not applied.
Although the lateral shift manipulation was not applied in Session B-No Shift trials, the large majority (83%) of trials in Session B were Session B-Shift trials with the lateral shift. As such, the effects of adaptation on perceived direction of self-motion based on nonvisual information should carry over to Session B-No Shift trials as well. Even though the optically specified direction of self-motion was straight ahead in Session B-No Shift trials, perceived direction of self-motion based on nonvisual information was shifted to the left (see right side of Figure 5). 
Thus, in Session A-No Shift trials, both visual and nonvisual information about the direction of self-motion were consistent with the actual direction of self-motion. In contrast, in Session B-No Shift trials, visual information was consistent with the actual direction of self-motion but the perceived direction based on nonvisual information was perturbed to the left. If nonvisual information contributes to flow parsing, then the component of optic flow that is attributed to self-motion and factored out should differ in Session A-No Shift trials compared with Session B-No Shift trials. 
The logic of this prediction is illustrated in Figure 6A through D, which depicts the flow parsing process for Session A-No Shift trials and Session B-No Shift trials, assuming that subjects relied at least partly on nonvisual self-motion information. Figure 6A and C depict the optic flow fields on Session A-No Shift and Session B-No Shift trials, respectively. The flow fields are effectively the same because there was no lateral shift and because walking behavior was so similar. Note that the perceived direction of self-motion based on visual and nonvisual information (indicated by the white bars labeled V and NV, respectively) are aligned in Figure 6A but not in Figure 6C due to adaptation to the lateral shift in Session B-Shift trials. 
Figure 6
 
Flow parsing with and without lateral shift. Optic flow fields with moving object for the no lateral shift (A, C) and 20% leftward shift (E) conditions. (B, D, F) show the parsing of the local optical motion of the moving object (solid line) into self-motion (dashed lines) and object-motion (dotted lines) components. V and NV indicate the perceived direction of self-motion based on visual and nonvisual information, respectively.
In Figure 6B and D, which depict the flow parsing process, the local optical motion of the object (solid yellow lines) is also the same. However, if subjects adapted to the lateral shift in Session B and if they relied partly on nonvisual self-motion information for flow parsing, then the component attributed to self-motion (dashed lines) would have more lateral motion. The remaining component, which was attributed to object motion (dotted line), would point farther to the left. Therefore, if nonvisual information contributed to flow parsing, then subjects should perceive that the object was moving leftward at a faster rate in Session B-No Shift trials and should be less likely to perceive that the object was passable, when compared with Session A-No Shift trials. As already explained and as illustrated by the red bars in Figure 4B, the results were consistent with this prediction, providing support for Hypothesis 2B. 
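The Hypothesis 2B prediction can be sketched numerically. The vectors below are hypothetical image-plane velocities, not fitted values: the retinal flow at the object's location is identical in the two conditions, but an adaptation-induced bias in the estimated self-motion component changes what is factored out, so the recovered object-motion component differs.

```python
import numpy as np

# Identical local optical motion of the object in both No Shift conditions
# (matching flow fields; values are illustrative).
observed_object_flow = np.array([-2.0, 0.0])

# Estimated self-motion component of flow at the object's location.
# Session A: veridical. Session B: adaptation shifts the perceived heading
# leftward, adding a rightward lateral component to the flow attributed
# to self-motion (hypothetical magnitudes).
self_est_session_A = np.array([1.0, -0.5])
self_est_session_B = np.array([1.4, -0.5])

# Flow parsing: subtract the estimated self-motion component.
recovered_A = observed_object_flow - self_est_session_A
recovered_B = observed_object_flow - self_est_session_B

# Session B yields more leftward recovered object motion, so the object
# should appear faster and be judged less passable, as observed.
print(recovered_A, recovered_B)
```

The same retinal input thus produces different perceived object motion once nonvisual self-motion information enters the parsing stage, which is the signature result supporting Hypothesis 2B.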
As the previous analysis shows, Hypothesis 2B explains why judgments can differ even under conditions in which the available visual information is the same. The next analysis tested a strong prediction of Hypothesis 2B—that under certain circumstances, judgments can be similar in trials with and without the lateral shift despite the available visual information being quite different. Understanding this prediction requires a brief discussion of the lateral shift manipulation and how it affected object motion in observer coordinates and world coordinates. Recall that when the lateral shift manipulation was applied, subjects' movements through the virtual environment were shifted to the left. Therefore, the motion of the object in observer coordinates differed in these two conditions. This difference is illustrated in Figure 6A and E, which depict the unparsed optic flow fields with 0% and 20% lateral shift, respectively. Nonetheless, because the lateral shift manipulation affected the movement of the observer and not the movement of the object, object motion in world coordinates was the same. (Note that object motion is the same in Figure 2B and C, which depict observer and object motion in world coordinates without and with the lateral shift manipulation, respectively.) Therefore, if route decisions are based on object motion perceived in world coordinates, then it should be possible for judgments in trials with and without the lateral shift to be similar, as long as the available self-motion information is sufficient for accurate flow parsing. This is illustrated in Figure 6B and F, which show the flow parsing process with 0% and 20% lateral shift, respectively. The local optical motion of the object differed in these two conditions (compare solid yellow lines). However, if subjects accurately perceived their self-motion in the virtual environment, then the component of optic flow attributed to self-motion should also differ (compare dashed lines). 
In particular, the estimated self-motion component should have more lateral motion when the lateral shift was applied. Therefore, although the local optical motion in Figure 6B (with no lateral shift) differs from that in Figure 6F (with the lateral shift), the difference should be canceled out by the difference in the estimated self-motion component. The resultant vector, which reflects the object-motion component, should be the same (compare dotted lines in Figure 6B and F). 
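The cancellation described above is just vector subtraction: the object-motion component is the object's local optical motion minus the component of flow attributed to self-motion, so a lateral offset applied to both terms leaves the difference unchanged. The sketch below is an illustrative toy calculation, not the authors' model; the function name and velocity values are assumptions (chosen to be exactly representable in floating point).

```python
# Toy sketch of the flow-parsing arithmetic: recover the object-motion
# component by subtracting the flow attributed to self-motion from the
# object's local optical motion. All values are illustrative 2-D
# optical velocities, not data from the study.

def parse_object_motion(local_motion, self_motion_component):
    """Object-motion component = local optical motion - self-motion component."""
    lx, ly = local_motion
    sx, sy = self_motion_component
    return (lx - sx, ly - sy)

# Without the lateral shift.
local_no_shift = (-2.0, 0.5)
self_no_shift = (1.0, 0.5)

# With the leftward shift, both the object's local optical motion and the
# estimated self-motion component gain the same lateral offset, so their
# difference is unchanged.
shift = (-0.75, 0.0)
local_shift = (local_no_shift[0] + shift[0], local_no_shift[1] + shift[1])
self_shift = (self_no_shift[0] + shift[0], self_no_shift[1] + shift[1])

obj_a = parse_object_motion(local_no_shift, self_no_shift)
obj_b = parse_object_motion(local_shift, self_shift)
print(obj_a == obj_b)  # True: same object-motion component in world coordinates
```

The mismatch case (Session B-No Shift) corresponds to offsetting only the self-motion estimate, in which case the recovered object-motion component shifts leftward, matching the prediction tested above.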
Next, let us consider the conditions in which this prediction (i.e., similar judgments with and without the lateral shift) should hold. According to Hypothesis 2B, observers rely on both visual and nonvisual self-motion information. Therefore, subjects should accurately recover the object-motion component only when perceived self-motion based on visual and nonvisual information are consistent with each other. Such was the case in Session A-No Shift trials because both visual and nonvisual information were aligned with the actual direction of self-motion (see white bars in Figure 6B). Likewise, perceptions of self-motion based on visual and nonvisual information were consistent in Session B-Shift trials (i.e., both were perturbed to the left, assuming that subjects adapted to the lateral shift; see white bars in Figure 6F). In contrast, in Session B-No Shift trials, visual self-motion information was aligned with the actual direction of self-motion, whereas perceived self-motion based on nonvisual information was perturbed to the left (see white bars in Figure 6D). Therefore, visual and nonvisual estimates of self-motion were consistent in Session A-No Shift trials, consistent in Session B-Shift trials, but in conflict in Session B-No Shift trials. If Hypothesis 2B is correct, then judgments on Session A-No Shift and Session B-Shift trials should be similar to each other but should differ from judgments in Session B-No Shift trials. To test this prediction, we used the subset of initial conditions that were common to all three sets of trials (see Figure 4C, D). As predicted, judgments on Session A-No Shift trials were significantly different from judgments on Session B-No Shift trials, t(10) = 4.04, p < 0.01, but not significantly different from judgments in Session B-Shift trials, t(10) = 1.96, ns.2 
Discussion
The comparisons in Figure 4C, D provide the crucial test for whether object motion is perceived in observer coordinates or world coordinates. The triad of conditions (Session A-No Shift, Session B-Shift, and Session B-No Shift) includes one pairing in which object motion was the same in observer coordinates (i.e., Session A-No Shift and Session B-No Shift) and another pairing in which object motion was the same in world coordinates but different in observer coordinates (i.e., Session A-No Shift and Session B-Shift). The fact that the pairing that yielded similar judgments was Session A-No Shift and Session B-Shift provides compelling evidence that when people choose routes around moving obstacles, they rely on information that reflects object motion in world coordinates rather than in observer coordinates. 
The results also provide strong evidence that the process of recovering object motion in world coordinates relies on both visual and nonvisual self-motion information. In previous studies of flow parsing (Matsumiya & Ando, 2009; Royden & Connors, 2010; Royden & Moore, 2012; Royden et al., 2001; Rushton et al., 2007; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009a, 2009b), reliable nonvisual self-motion information was not available because self-motion was simulated and viewed on a computer monitor or projection screen by a stationary observer. When self-motion is real and actively generated, as it was in the present study, nonvisual information, which is known to contribute to the perception of self-motion (Campos, Byrne, & Sun, 2010; Harris, Jenkin, & Zikovitz, 2000; Mittelstaedt & Mittelstaedt, 2001), is also available. Our findings show that nonvisual information also plays a role in flow parsing. This highlights the multisensory nature of the flow parsing problem (Calabro, Soto-Faraco, & Vaina, 2011; Dyde & Harris, 2008; MacNeilage, Zhang, DeAngelis, & Angelaki, 2012) and adds to the growing body of literature demonstrating that what people perceive can be affected by their own self-generated movement (Wexler & van Boxtel, 2005). 
On the possibility that adaptation affected perceived egocentric direction
Our interpretation of the effects is based on the assumption that the lateral shift manipulation in Session B shifted the perceived direction of self-motion based on nonvisual information and affected the component of optic flow that was attributed to self-motion. However, one might wonder whether adaptation to the lateral shift manipulation affected the perceived egocentric direction of the obstacle and if this could also explain the results. There are two reasons why we think the findings cannot be attributed to a change in the perceived egocentric direction of the object. First, although perceived straight ahead can be realigned using prisms (e.g., Herlihey & Rushton, 2012; Redding & Wallace, 1997), laterally displacing optic flow in a virtual environment (as in the present study) does not affect perceived straight ahead (Bruggeman & Warren, 2010; Bruggeman et al., 2007). Second, even if the lateral shift manipulation had affected perceived straight ahead, it would have shifted it in the same direction as the shift in optic flow (i.e., to the left). The obstacle, which always appeared on the right, would have been perceived as lying farther to the right than it actually was. This would have biased subjects toward perceiving that they could pass in front of the obstacle. However, the actual effect is in the opposite direction. As shown in Figure 4B, subjects were less likely to perceive that they could pass in front when they were adapted to the lateral shift (i.e., on Session B-No Shift trials) compared to when they were not (i.e., on Session A-No Shift trials). Therefore, the findings cannot be attributed to an effect of the lateral shift manipulation on the perceived egocentric direction of the obstacle. 
Guiding locomotion in the presence of moving objects
The findings of this study help to bridge a gap in the literature between studies of the perception of object motion during self-motion on the one hand and studies of visually guided locomotion on the other hand. The former (Matsumiya & Ando, 2009; Royden & Connors, 2010; Royden & Moore, 2012; Rushton & Warren, 2005; Warren & Rushton, 2007, 2008, 2009a, 2009b) establish that moving observers can perceive object motion in world coordinates and provide details about the mechanisms that underlie this process. The latter (Chardenon et al., 2005; Cutting et al., 1995; Fajen & Warren, 2004, 2007; Lenoir et al., 2002; Ni & Andersen, 2008) generally assume that collision detection, obstacle avoidance, and interception are based on object motion perceived in observer coordinates. Thus, one might conclude that the perception of object motion during self-motion and the visual guidance of locomotion in the presence of moving objects rely on different reference frames. Our findings suggest that this is not the case—both rely on object motion perceived in a world-centered reference frame and both require flow parsing to recover object motion independent of self-motion. 
Why do observers need to perceive object motion in world coordinates to guide locomotion in the presence of moving objects? After all, the lateral optical motion of a moving object, which reflects object motion in observer coordinates, specifies whether one will collide with a moving object if current locomotor speed and direction are maintained. The answer is that there is more to interception and obstacle avoidance than knowing whether one's current locomotor velocity will result in a collision. Observers also need to know how fast they would need to move to intercept, pass in front of, or pass behind a moving object. More specifically, observers need to know how fast to move independent of how fast they are currently moving, and they need to know how fast to move in relation to how fast they are capable of moving. 
To illustrate this point, consider a pedestrian moving at a comfortable walking speed and a moving obstacle (e.g., bicycle) moving from right to left across the pedestrian's future path. Even if the pedestrian would pass behind the bicycle if her current speed is maintained, she may choose to speed up to pass in front of the obstacle (e.g., if she is in a hurry). However, if the speed needed to pass in front is faster than the speed that the pedestrian is capable of moving (or willing to move), then she should not attempt to pass in front. The decision to pass in front or pass behind requires the ability to perceive the minimum speed needed to pass in front in relation to the maximum speed that the observer is capable of moving (or willing to move). Although information about the sufficiency of one's current speed is available in the optic flow field (i.e., combined self-motion plus object-motion components), information about how fast one needs to move is not. Instead, such information is found in the object-motion component of optic flow—that is, the component that reflects object motion in world coordinates (Fajen & Matthis, 2013). Therefore, the ability to perceive object motion in world coordinates, which was demonstrated in this study, plays an essential role in guiding locomotion in the presence of moving objects. 
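The pedestrian-and-bicycle scenario above can be expressed as a toy calculation. Assuming a simple geometry with constant speeds and straight paths (the function name, distances, and speeds below are hypothetical, not taken from the study), the minimum speed required to pass in front is the pedestrian's distance to the crossing point divided by the time remaining until the obstacle arrives there:

```python
# Hypothetical sketch: minimum speed needed to pass in front of a crossing
# obstacle, compared against the fastest speed the pedestrian can (or is
# willing to) produce. Geometry and numbers are assumed for illustration.

def min_speed_to_pass_in_front(obstacle_dist_to_path, obstacle_speed,
                               pedestrian_dist_to_crossing):
    # Time until the obstacle reaches the pedestrian's future path.
    time_to_crossing = obstacle_dist_to_path / obstacle_speed
    # The pedestrian must reach the crossing point within that time.
    return pedestrian_dist_to_crossing / time_to_crossing

required = min_speed_to_pass_in_front(
    obstacle_dist_to_path=4.0,        # m
    obstacle_speed=2.0,               # m/s
    pedestrian_dist_to_crossing=5.0,  # m
)
max_speed = 2.2  # m/s, roughly a brisk walk

print(required)               # 2.5 (m/s)
print(required <= max_speed)  # False: passing in front is not afforded
```

The comparison of `required` against `max_speed` is the action-scaled judgment described in this section: it depends on how fast the observer is capable of moving, not merely on whether the current velocity would suffice.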
Acknowledgments
This research was supported by a grant from the National Institutes of Health (1R01EY019317). The authors thank Kevin Todisco for creating the virtual environments used in this experiment. 
Commercial relationships: none. 
Corresponding author: Brett Fajen. 
Email: fajenb@rpi.edu. 
Address: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY. 
References
Bruggeman H. Warren W. H. (2010). The direction of walking—but not throwing or kicking—is adapted by optic flow. Psychological Science, 21 (7), 1006–1013. [CrossRef] [PubMed]
Bruggeman H. Zosh W. Warren W. H. (2007). Optic flow drives human visuo-locomotor adaptation. Current Biology, 17 (23), 2035–2040. [CrossRef] [PubMed]
Calabro F. J. Soto-Faraco S. Vaina L. M. (2011). Acoustic facilitation of object movement detection during self-motion. Proceedings of the Royal Society B-Biological Sciences, 278 (1719), 2840–2847. [CrossRef]
Campos J. L. Byrne P. Sun H. J. (2010). The brain weights body-based cues higher than vision when estimating walked distances. European Journal of Neuroscience, 31 (10), 1889–1898. [CrossRef] [PubMed]
Chardenon A. Montagne G. Laurent M. Bootsma R. J. (2005). A robust solution for dealing with environmental changes in intercepting moving balls. Journal of Motor Behavior, 37 (1), 52–64. [CrossRef] [PubMed]
Collett T. S. Land M. F. (1978). How hoverflies compute interception courses. Journal of Comparative Physiology, 125, 191–204. [CrossRef]
Cutting J. E. Vishton P. M. Braren P. A. (1995). How we avoid collisions with stationary and moving objects. Psychological Review, 102 (4), 627–651. [CrossRef]
Dyde R. T. Harris L. R. (2008). The influence of retinal and extra-retinal motion cues on perceived object motion during self-motion. Journal of Vision, 8 (14): 5, 1–10, http://www.journalofvision.org/content/8/14/5, doi:10.1167/8.14.5. [PubMed] [Article] [CrossRef] [PubMed]
Fajen B. R. (2013). Guiding locomotion in complex and dynamic environments. Frontiers in Behavioral Neuroscience, manuscript accepted for publication.
Fajen B. R. Diaz G. Cramer C. (2011). Reconsidering the role of movement in perceiving action-scaled affordances. Human Movement Science, 30 (3), 504–533. [CrossRef] [PubMed]
Fajen B. R. Matthis J. M. (2013). Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One, 8 (2), e55446. [CrossRef] [PubMed]
Fajen B. R. Matthis J. S. (2011). Direct perception of action-scaled affordances: The shrinking gap problem. Journal of Experimental Psychology: Human Perception and Performance, 37 (5), 1442–1457. [CrossRef] [PubMed]
Fajen B. R. Warren W. H. (2007). Behavioral dynamics of intercepting a moving target. Experimental Brain Research, 180 (2), 303–319. [CrossRef] [PubMed]
Fajen B. R. Warren W. H. (2004). Visual guidance of intercepting a moving target on foot. Perception, 33 (6), 689–715. [CrossRef] [PubMed]
Harris L. R. Jenkin M. Zikovitz D. C. (2000). Visual and non-visual cues in the perception of linear self motion. Experimental Brain Research, 135 (1), 12–21. [CrossRef] [PubMed]
Herlihey T. A. Rushton S. K. (2012). The role of discrepant retinal motion during walking in the realignment of egocentric space. Journal of Vision, 12 (3): 4, 1–11, http://www.journalofvision.org/content/12/3/4, doi:10.1167/12.3.4. [PubMed] [Article] [CrossRef] [PubMed]
Lanchester B. S. Mark R. F. (1975). Pursuit and prediction in tracking of moving food by a teleost fish (Acanthaluteres-Spilomelanurus). Journal of Experimental Biology, 63 (3), 627–645. [PubMed]
Lenoir M. Musch E. Thiery E. Savelsbergh G. J. (2002). Rate of change of angular bearing as the relevant property in a horizontal interception task during locomotion. Journal of Motor Behavior, 34 (4), 385–404. [CrossRef] [PubMed]
MacNeilage P. R. Zhang Z. DeAngelis G. C. Angelaki D. E. (2012). Vestibular facilitation of optic flow parsing. PLoS One, 7 (7), e40264. [CrossRef] [PubMed]
Matsumiya K. Ando H. (2009). World-centered perception of 3D object motion during visually guided self-motion. Journal of Vision, 9 (1): 15, 11–13, http://www.journalofvision.org/content/9/1/15, doi:10.1167/9.1.15. [PubMed] [Article] [CrossRef] [PubMed]
Mittelstaedt M. L. Mittelstaedt H. (2001). Idiothetic navigation in humans: estimation of path length. Experimental Brain Research, 139 (3), 318–332. [CrossRef] [PubMed]
Ni R. Andersen G. J. (2008). Detection of collision events on curved trajectories: Optical information from invariant rate-of-bearing change. Attention Perception & Psychophysics, 70 (7), 1314–1324. [CrossRef]
Olberg R. M. Worthington A. H. Venator K. R. (2000). Prey pursuit and interception in dragonflies. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 186 (2), 155–162. [CrossRef]
Redding G. M. Wallace B. (1997). Adaptive spatial alignment. Mahwah, NJ: Erlbaum.
Royden C. S. Connors E. M. (2010). The detection of moving objects by moving observers. Vision Research, 50 (11), 1014–1024. [CrossRef] [PubMed]
Royden C. S. Moore K. D. (2012). Use of speed cues in the detection of moving objects by moving observers. Vision Research, 59, 17–24. [CrossRef] [PubMed]
Royden C. S. Wolfe J. M. Klempen N. (2001). Visual search asymmetries in motion and optic flow fields. Perception & Psychophysics, 63 (3), 436–444. [CrossRef] [PubMed]
Rushton S. K. Allison R. S. (2013). Biologically-inspired heuristics for human-like walking trajectories toward targets and around obstacles. Displays, 34 (2), 105–113. [CrossRef]
Rushton S. K. Bradshaw M. F. Warren P. A. (2007). The pop out of scene-relative object movement against retinal motion due to self-movement. Cognition, 105 (1), 237–245. [CrossRef] [PubMed]
Rushton S. K. Harris J. M. Lloyd M. Wann J. P. (1998). Guidance of locomotion on foot uses perceived target location rather than optic flow. Current Biology, 8, 1191–1194. [CrossRef] [PubMed]
Rushton S. K. Warren P. A. (2005). Moving observers, relative retinal motion and the detection of object movement. Current Biology, 15 (14), R542–R543. [CrossRef] [PubMed]
Rushton S. K. Wen J. Allison R. S. (2002). Egocentric direction and the visual guidance of robot locomotion background, theory and implementation. In Lee S.-W. Bülthoff H. H. Poggio T. A. Wallraven C. (Eds.), Biologically motivated computer vision: Lecture notes in computer science ( Vol. 2525. pp. 576–591). Berlin, Heidelberg: Springer.
Wallach H. (1987). Perceiving a stable environment when one moves. Annual Review of Psychology, 38, 1–27. [CrossRef] [PubMed]
Warren P. A. Rushton S. K. (2007). Perception of object trajectory: Parsing retinal motion into self and object movement components. Journal of Vision, 7 (11): 2, 1–11, http://www.journalofvision.org/content/7/11/2, doi:10.1167/7.11.2. [PubMed] [Article] [CrossRef] [PubMed]
Warren P. A. Rushton S. K. (2008). Evidence for flow-parsing in radial flow displays. Vision Research, 48 (5), 655–663. [CrossRef] [PubMed]
Warren P. A. Rushton S. K. (2009a). Optic flow processing for the assessment of object movement during ego movement. Current Biology, 19 (18), 1555–1560. [CrossRef] [PubMed]
Warren P. A. Rushton S. K. (2009b). Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues. Vision Research, 49 (11), 1406–1419. [CrossRef] [PubMed]
Warren W. H. Kay B. A. Zosh W. D. Duchon A. P. Sahuc S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216. [CrossRef] [PubMed]
Wexler M. van Boxtel J. J. (2005). Depth perception by the active observer. Trends in Cognitive Sciences, 9 (9), 431–438. [CrossRef] [PubMed]
Footnotes
1  In this particular study, the accuracy of judgments was less important than the effects of manipulations of self-motion information. Therefore, the experiment was not designed to measure the accuracy with which subjects judged whether they were capable of passing in front of the moving obstacle. However, the accuracy of judgments was measured in an earlier experiment in which subjects judged whether they could pass in front on some trials and actually attempted to pass in front on other trials (Fajen, Diaz, & Cramer, 2011). In that experiment, there was a close match between judgments and actions with no evidence of a systematic bias to overestimate or underestimate one's ability to pass in front.
2  Although the difference between Session A-No Shift trials and Session B-Shift trials was not statistically significant, there did appear to be a trend toward a higher percentage of passable judgments in Session B-Shift trials. This can be attributed to incomplete adaptation to the lateral shift in Session B. If subjects only partially adapted to the lateral shift, perceived self-motion based on nonvisual information would be shifted to the left, but by less than the optically specified shift of ∼11°. The component of optic flow that is attributed to self-motion would have more lateral motion in Session B than in Session A (as depicted by the dashed line in Figure 6F and B). However, the difference in the component attributed to self-motion would be less than if subjects had completely adapted. Therefore, the difference in the local optical motion between Session A-No Shift trials and Session B-Shift trials would not have been completely canceled out by the difference in the component attributed to self-motion, resulting in a small difference in the perceived motion of the object. Thus, the possible trend toward a higher percentage of passable judgments on Session B-Shift trials could be explained by incomplete adaptation to the lateral shift manipulation.
Figure 1
 
Optic flow field and decomposition into self-motion and object-motion components. (A) Optic flow field generated by an observer moving over a ground surface and an object (yellow dot) moving from right to left. (B) Component of optic flow due to self-motion independent of object motion. (C) Component of optic flow due to object motion independent of self-motion. The optic flow field in (A) is the vector sum of the self-motion (B) and object-motion (C) components. From Fajen, B. R., & Matthis, J. S. (2013). Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One, 8(2): e55446. doi:10.1371/journal.pone.0055446, used under a Creative Commons Attribution License.
Figure 2
 
Screenshot and task. (A) Screenshot of virtual environment viewed through HMD. (B) Plan view of observer moving straight ahead and object moving from right to left toward an unmarked location (×) 3, 4, or 5 m from the home location. (C) Lateral shift manipulation applied in Session A-Shift and Session B-Shift trials. Observer's position in the virtual environment was shifted to the left by 20% of his or her forward displacement.
Figure 3
 
Schematic diagram of design of experiment. The four main quadrants represent trials with 0% lateral shift (red) and trials with 20% lateral shift (blue) in Sessions A and B. Session A comprised 120 trials with 0% lateral shift (solid red) and 24 randomly interspersed catch trials with 20% lateral shift (checkered blue). Session B comprised 120 trials with 20% lateral shift (solid blue) and 24 randomly interspersed catch trials with 0% lateral shift (checkered red).
Figure 4
 
Summary of results. (A) and (C) show the subset of conditions used for analyses shown in (B) and (D), respectively. Error bars represent ±1 SE and asterisks denote statistically significant differences.
Figure 5
 
Adaptation of perceived direction of self-motion based on nonvisual information in Session B. Over repeated Session B-Shift trials with the optic flow field shifted leftward, perceived direction based on nonvisual information (NV) shifted leftward toward the optically specified direction of self-motion (V). The effects of adaptation carried over to Session B-No Shift trials, in which the lateral shift was not applied.