Research Article  |   April 2006
High-speed navigators: Using more than what meets the eye
Francesca C. Fortenbaugh, John C. Hicks, Lei Hao, Kathleen A. Turano
Journal of Vision April 2006, Vol.6, 3. doi:https://doi.org/10.1167/6.5.3
Abstract

This study employed a novel method to dissociate the use of external visual information and internal spatial representations in human navigation. Using a goal-directed walking task and gaze-contingent displays, 14 participants with normal vision navigated within an immersive virtual forest during which each participant's field of view (FOV) was restricted to 10, 20, or 40 deg in diameter. Participants were classified into two groups, good and poor navigators, based on a cluster analysis of their individual mean latencies, walk times, and path efficiencies in the 10 deg condition. Changes in performance measures across the three FOVs were calculated for the two groups. Significant interactions were found, with the overall performance of the poor navigators decreasing at a faster rate than the performance of the good navigators. Perceptual spans were also calculated for the two groups, and it was determined that the good navigators were able to complete the same task as effectively as the poor navigators with a smaller FOV. Collectively, these results support recent theories stating that good navigators rely on internal spatial representations to a greater extent than poor navigators do.

Introduction
Humans complete a series of goal-directed walking tasks every day. Although most goals are reached with ease, individual differences still affect the ways in which these tasks are completed. For example, if a group of individuals is asked to walk to a target while avoiding stationary obstacles, some people will walk faster than others. One person may end up hitting one of the obstacles, whereas another may not hit any. In addition, if the distance to the target is sufficiently large, it is likely that no two individuals will take the same path. 
Because goal-directed walking tasks and other subtypes of navigation are so central to our survival, it is important to understand what causes good and poor performers to differ in their navigational abilities. Recently, a group of researchers (Hartley, Maguire, Spiers, & Burgess, 2003) suggested that the development and use of internal spatial representations, called cognitive maps, may help to explain why some individuals are better navigators than others. Using fMRI, the researchers showed that a network of structures in the human brain known to be involved in spatial memory (Aguirre, Zarahn, & D'Esposito, 1998; Burgess, Becker, King, & O'Keefe, 2001; Ekstrom et al., 2003; Epstein, Graham, & Downing, 2003; Epstein & Kanwisher, 1998; Maguire et al., 2003) was activated to a greater extent when good performers, who were classified as such by their mean path efficiencies, simulated navigation through unfamiliar routes (Hartley et al., 2003; Maguire et al., 1998). Although this finding is taken as evidence linking cognitive maps to navigational ability (Hartley et al., 2003), it is not possible to rule out alternative variables, such as differences in visual search or gaze behaviors, from fMRI data alone: participants in that study did not actually locomote while neural activity was recorded, and eye movements were neither observed nor controlled. 
However, there is some behavioral evidence to support Hartley et al.'s (2003) proposal from studies conducted with visually impaired individuals. In one such study, Hill and Rieser (1993) videotaped the strategies used by good and poor performers, who were either totally blind or only able to perceive light, in determining the locations of objects within a room relative to a starting position. The researchers found that the best performers systematically executed perimeter patterns, gridline patterns, or both, which allowed them to determine the layout of the room and to find the objects within it more quickly than the poor performers. From these analyses and the fact that the participants had no external visual information available, it appears that the good performers were able to develop an internal spatial representation that then allowed them to make more accurate estimates of the distance and angular offset of objects from the starting position. Although the extent to which individuals without sight are able to consistently create accurate representations of their environments is debatable, the fact that persons without vision are able to perform within the range of sighted individuals on some large-scale tasks indicates that they are able to create representations of their environments on one level or another (Ungar, 2000). 
For some, the idea of cognitive maps aiding navigation is intuitive if one does not restrict the definition of a spatial representation to the same Euclidean metric with which its external counterpart is perceived. Geographers have been discussing this concept for close to a hundred years (Gulliver, 1908; Trowbridge, 1913), and researchers within this field (Blaut, Stea, Spencer, & Blades, 2003) have taken the universal ability to draw top–down perspectives of well-traveled areas as evidence of an innate capability to create viewpoint-independent spatial representations. Yet, the ability to create cognitive maps of the world around us cannot be taken to imply that individuals with normal vision always utilize these representations. Research has repeatedly shown the important role vision can play in effective navigation and the strong dependence humans place upon external visual information when it is available (Cutting, Vishton, Fluckiger, Baumberger, & Gerndt, 1997; Fajen & Warren, 2004; Gibson, 1994; Harris & Bonas, 2002; Lee, 1998; Lee, Craig, & Grealy, 1999; Priest, Cutting, Torrey, & Regan, 1985; Rushton, Harris, Lloyd, & Wann, 1998; Wang & Cutting, 1999; Warren, 1998; Warren, Kay, Zosh, Duchon, & Sahuc, 2001). In addition, with the advent of reliable eye-tracking technology, researchers have been able to show that there exists a tight coupling between eye movements and action when one is required to walk in a specified manner (Hollands & Marple-Horvat, 2001; Hollands, Patla, & Vickers, 2002; Turano, Geruschat, & Baker, 2003) or to complete a variety of other activities that require some level of motor control (Carpenter & Williams, 1995; Hayhoe, Shrivastava, Mruczek, & Pelz, 2003; Land, 1992; Land & Hayhoe, 2001; Land, Mennie, & Rusted, 1999; Land & Lee, 1994; McPeek, Skavenski, & Nakayama, 2000; Peterson, Kramer, & Irwin, 2004). Arguments for an innate use of cognitive maps in navigation are further complicated by research illustrating that, without any sign of a global representation being formed, humans can also use internal sensory information to guide navigation in previously traveled areas (Loomis, Klatzky, Golledge, & Philbeck, 1999; Mittelstaedt & Mittelstaedt, 2001; Philbeck & Loomis, 1997; Rieser, Ashmead, Taylor, & Youngquist, 1990). 
Given that evidence exists for the role of both external visual information and internal spatial representations in navigation, it seems plausible that both factors play a significant role. Indeed, Oliva, Wolfe, and Arsenio (2004) have found that people can rely on internal or external information, depending on the type of task being completed. This finding is supported by the work of Ballard, Hayhoe, and Pelz (1995), who showed that during a block-building task, participants chose to look back at the original design until such checking behavior became too time-consuming, at which point they began to rely more on their memories of the structures. 
However, if both good and poor navigators use external visual information to guide their movements and cognitive maps are by nature unobservable, how can one test for the use of these representations by good navigators? As Nadel and Hardt (2004) note, if a task allows participants to choose between two or more strategies, one cannot be sure from the behaviors observed that a specific strategy is being utilized. Assuming that a group of individuals shows superior performance on a goal-directed walking task, the question becomes how one can differentiate those who are utilizing an internal representation of the environment from those who are simply better at utilizing external visual information. This study sought to create such a task through the development of a virtual forest in which the size of the field of view (FOV) of the participants was restricted to 10, 20, or 40 deg in diameter using the gaze-contingent concept of Geisler and Perry (2002). Over three blocks of trials, the participants were required to find and walk to a target tree as quickly and efficiently as possible while avoiding obstacle trees along the way. The rationale for this approach is as follows: By utilizing a virtual environment, it was ensured that all external visual information, from lighting to texture, would be homogeneous across the participants. At the same time, however, restricting the participants' FOV systematically manipulated the amount of external visual information available. In doing so, this paradigm decreases the probability of participants successfully completing the task when relying solely on external visual information. Thus, it was hypothesized that if the better navigators do rely on cognitive maps to a greater extent than the poor navigators, a significant interaction should be found between FOV and navigational group, with the performance of the good navigators deteriorating at a slower rate than that of the poor navigators with decreasing FOV. This hypothesis leads to two concrete predictions. First, as FOV decreases from 40 to 10 deg, the rate at which performance deteriorates should be faster for the poor navigators than for the good navigators. In particular, different rates of change should be seen for path efficiency, as this measure was used to define performance groups in Hartley et al.'s (2003) study. Second, if the good navigators can rely on an internal spatial representation of the environment, they should be able to use these representations to compensate for a decrease in external visual information. Thus, the good navigators should be able to perform as well as the poor navigators with a smaller FOV and thus less visual information available within a single glance. It is important to point out that these predictions do not suggest that the good navigators should necessarily outperform the poor navigators when FOV is the largest. This is because the confines of the laboratory space limit the distance that participants must travel and, as a result, the difficulty of the task. It is therefore possible that with a large enough FOV, participants using only external visual information may be able to complete the task as efficiently as those using a combination of external and internal information. 
Methods
Participants
Fifteen healthy volunteers, seven men and eight women, participated in this study. The mean age of the participants was 34 years (range, 24–51 years). No participant had any ocular disease or musculoskeletal disorder. One male participant was dropped from the study because of technical difficulties with the eye-tracking software. Thus, 14 participants completed the study and were included in the analyses. All participants were compensated for their time, and this research followed the tenets of the Declaration of Helsinki. 
Visual function was tested binocularly, with participants wearing their usual corrective lenses, to ensure that each participant had normal vision. Visual acuity was tested using an ETDRS eye chart, and peak contrast sensitivity was tested using a Pelli–Robson letter chart. Mean visual acuity was −0.10 logMAR, and mean peak contrast sensitivity was 1.82 logCS. The pupillary distance of each participant was measured and used to adjust the position of the displays in the headset to obtain a stereo view of the environment. 
Stimuli
An immersive virtual forest was created using 3D Studio Max software (Discreet, Montreal, Canada). The model was exported to a graphics engine developed in-house with C++ and Microsoft's DirectX. The graphics program used the output from a HiBall head tracker (3rd Tech, Chapel Hill, NC) attached to the top of the head-mounted display (HMD), together with the imported scene, to determine the participant's current point of view in the environment. Perspective views of the environment were displayed in the HMD using a GeForce FX graphics board (nVIDIA, Santa Clara, CA). The forest consisted of 29 trees: 1 target, 3 obstacles, and 25 distracters. The starting position was fixed across all trials, and participants were oriented in the same direction before each trial. Forty-five configurations were created, in which the target tree was located in one of five positions on a horizontal line 10 m from the starting position. The target was offset from the center by 0, ±0.33, or ±0.67 m (corresponding to offsets of 0, ±2, or ±4 deg, respectively), with the center being the point at which the line intersected the participants' initial heading. Between the target and starting position, 3 obstacle trees were placed on horizontal lines located 2, 4.67, and 7.33 m from the starting position, with exactly 1 tree on each line per configuration. Positions of the obstacle trees were either ±0.33 or ±0.67 m from the center of each line. All of the remaining 25 trees were located outside the confines of the actual laboratory space such that the participants were able to see them but unable to walk into them. The configurations were divided into three blocks of 15 trials, with the position of the target tree counterbalanced within each block. 
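For concreteness, the sketch below (not the authors' code; all names are illustrative) generates one trial configuration under the constraints just described and confirms that the stated lateral offsets correspond to roughly ±2 and ±4 deg at the 10 m target distance.

```python
import math
import random

TARGET_DISTANCE = 10.0                                # target line, 10 m from start
TARGET_OFFSETS = [-0.67, -0.33, 0.0, 0.33, 0.67]      # five lateral target positions, m
OBSTACLE_DISTANCES = [2.0, 4.67, 7.33]                # one obstacle per line, m
OBSTACLE_OFFSETS = [-0.67, -0.33, 0.33, 0.67]         # possible obstacle positions, m

def make_configuration(rng=random):
    """Return (target_xy, [obstacle_xy, ...]) for one trial; the start is the
    origin and the initial heading runs along +y."""
    target = (rng.choice(TARGET_OFFSETS), TARGET_DISTANCE)
    obstacles = [(rng.choice(OBSTACLE_OFFSETS), d) for d in OBSTACLE_DISTANCES]
    return target, obstacles

# Offset-to-angle check: +-0.33 and +-0.67 m at 10 m subtend ~+-2 and ~+-4 deg.
for offset in (0.33, 0.67):
    print(f"{offset} m -> {math.degrees(math.atan2(offset, TARGET_DISTANCE)):.1f} deg")
```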
Each configuration was unique in terms of the locations of the target tree and obstacle trees; as a result, one half or more of the target tree was visible from the starting position in only 36% of the trials. Yet, a few global consistencies remained across all configurations, and these can be seen in Figure 1. First, the target tree was always located on a plane 10 m from the starting position. Second, the offset of the target tree was always within 4 deg of the initial heading of the participants at the starting position. Third, the configuration space remained constant in size and shape. Although participants were never able to explicitly see the edges of the configuration space, an experimenter prevented participants from leaving the area if they walked too close to the edge. Finally, two of the distracter trees remained fixed across all configurations along the walls of the configuration space to prevent participants from walking into actual walls of the laboratory, which protruded near the edge of the configuration space. Again, participants were prevented from trying to walk around the outside of these fixed distracter trees, as this would have involved leaving the configuration space. Thus, there existed several clues as to the global structure of the environment and the general location of the target tree across all configurations. 
Figure 1
 
Sample view of the virtual forest used in this experiment. A top–down view, not seen by participants, to illustrate a typical configuration. Starting position is shown as a red dot; the three obstacle trees are marked by Xs, and the five possible target locations are shown as blue dots. The remaining distracter trees are shown as gray circles and are located outside the confines of the allowable walking space, shown as the dotted rectangle.
FOV was restricted to 40, 20, or 10 deg (see Figure 2) using the gaze-contingent display concept of Geisler and Perry (2002) together with the programmable functionality of the nVIDIA GeForce FX5900 GPU (software by coauthor J.H.). A mask of a certain visual field size was created as a monochrome bitmap, where the intensity of each pixel indicated the degree to which the view was blurred at that point. The center position of the mask was tethered to the participant's center of gaze, which was determined from an online analysis of the participant's eye images. The mask was partitioned into eight gray level bins, and the 2D perspective view of the scene was down-sampled iteratively to produce a corresponding set of eight increasingly blurred images, which were multiplied with the mask levels and combined to produce the final image. 
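The following numpy sketch illustrates the blending scheme in software (the actual implementation ran on the GPU; the function names and Gaussian blur schedule here are assumptions, not the authors' code): the mask is quantized into eight levels, eight increasingly blurred copies of the scene are produced, and each pixel takes its value from the copy indicated by its mask level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_mask(shape, gaze_xy, fov_radius_px):
    """0 inside the unrestricted FOV, ramping to 1 (fully blurred) outside."""
    ys, xs = np.indices(shape)
    r = np.hypot(ys - gaze_xy[1], xs - gaze_xy[0])
    return np.clip((r - fov_radius_px) / fov_radius_px, 0.0, 1.0)

def gaze_contingent_frame(scene, gaze_xy, fov_radius_px, n_levels=8):
    """Blend n_levels progressively blurred copies of `scene` by mask level."""
    mask = radial_mask(scene.shape, gaze_xy, fov_radius_px)
    levels = np.minimum((mask * n_levels).astype(int), n_levels - 1)
    # Increasingly blurred copies (the GPU version down-sampled iteratively).
    pyramid = [scene] + [gaussian_filter(scene, sigma=2.0 * k)
                         for k in range(1, n_levels)]
    out = np.empty_like(scene)
    for k in range(n_levels):
        out[levels == k] = pyramid[k][levels == k]
    return out

# Example: one 600 x 800 frame with gaze at (400, 300) and an 80-pixel-radius FOV.
frame = gaze_contingent_frame(np.random.rand(600, 800), (400, 300), 80.0)
```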
Figure 2
 
Sample views of the virtual forest used in this experiment. First-person views of the forest and target tree displayed with a 10, 20, and 40 deg (diameter) FOV.
Apparatus
Head and eye tracking
A HiBall-3000 Optical Tracker (3rd Tech) was used to monitor head position and orientation. Infrared LEDs were housed on the ceiling tiles of the testing room, and their signals were detected by optical sensors mounted in a holder that was attached to the top of the headset. Head position and orientation were sampled every 7 ms. Tracker resolution is reported to be 0.2 mm, with an angular precision of less than 0.03 deg. The output of the head tracker was filtered using an exponential smoothing function with an 80-ms time constant. Point of view was calculated from the head position and orientation data collected. A Daubechies wavelet transform of the sixth order (Db6; Ismail & Asfour, 1999) was applied to the data from the head tracker to filter out the oscillations associated with gait and to determine the walking path. 
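A minimal sketch of these two filtering steps is given below, assuming numpy and the PyWavelets package; the decomposition depth used to isolate the gait band is not reported in the paper, so `drop_levels` is a placeholder.

```python
import numpy as np
import pywt

DT = 0.007    # tracker sampling interval, 7 ms
TAU = 0.080   # smoothing time constant, 80 ms

def exponential_smooth(x, dt=DT, tau=TAU):
    """First-order exponential smoothing with time constant `tau`."""
    alpha = 1.0 - np.exp(-dt / tau)
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def walking_path(position, wavelet="db6", drop_levels=3):
    """Remove gait oscillations from a 1D position trace by zeroing the
    finest `drop_levels` detail bands of a Db6 decomposition."""
    coeffs = pywt.wavedec(position, wavelet)
    for k in range(len(coeffs) - drop_levels, len(coeffs)):
        coeffs[k] = np.zeros_like(coeffs[k])
    return pywt.waverec(coeffs, wavelet)[: len(position)]
```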
Eye tracking was performed using software developed in-house by coauthor L.H. on the output of cameras housed within the headset in front of each eye. To minimize processing time, pupil tracking was performed with the identification of the center of mass of a threshold value within a specified region of interest. A 5-point calibration was performed prior to testing each block of trials, and a drift-correction calibration was performed before each trial. 
Head-mounted display
The display device was an HMD system (a modified Low-Vision Enhancement System developed by Robert Massof at the Wilmer Eye Institute). The headset contained two color microdisplays (SVGA, 800 × 600 3D OLED Microdisplay, eMagin Corp.). The FOV of each was 48 deg (H) × 38 deg (V), with a spatial resolution of approximately 0.06 deg/pixel. The displays had a refresh rate of 60 Hz. Spatially offset images were sent to each display, producing a stereo view. 
Design and procedure
Before beginning every trial, both practice and test, participants were required to orient themselves at the starting position. To do this, participants viewed four concentric white rings in a black room. When participants were standing on the starting position, the four rings collapsed on top of one another and appeared to be one ring. Proper orientation on the starting position was achieved when a red dot tethered to the participant's heading was placed on the center of a bull's-eye located in the center of the farthest ring. 
At the start of the experiment, all participants were shown, with a full FOV (i.e., 48 × 38 deg), a sample of the virtual environment in which the experimenter pointed out the target tree, which had a different type of bark from the obstacle and distracter trees. The experimenter verified that each participant was able to discriminate the target tree from the other trees. The participants were then given five practice trials in which they were told to walk to the target tree without hitting any of the obstacle trees. Participants were instructed to take their time during the practice trials and get adjusted to moving in the virtual world. After the practice trials, every participant was told, “You are now going to complete three blocks of trials in which you must find and walk to the target tree as quickly and efficiently as possible without hitting any of the obstacle trees. You are also going to have part of your visual field restricted during the trials.” All participants were informed that they could take a break at any point if they needed to. If participants came too close to any of the walls (<12 in.) in the laboratory along the edge of the configuration space, the experimenter guiding them would put out her arm and block the participant from walking into the wall. All participants had been informed before beginning the experiment that if they walked into the arm of the experimenter, they were too close to a wall and would need to change direction. Participants were not told in which direction to turn. If a participant hit an obstacle tree, a loud tone was sounded. A participant was considered to have hit an obstacle tree if the participant's midline came within 0.625 m of the center of the tree, as all trees in the forest had a radius of 0.33 m and the width of the participants was set at 0.295 m in the virtual environment. FOV was blocked such that only one FOV was presented during each set of 15 trials, and the order in which the blocks were presented was counterbalanced across the participants. 
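As a concrete illustration of the hit criterion (a sketch under the stated geometry, not the authors' code), a hit can be scored whenever the midline trajectory comes within 0.625 m of an obstacle center:

```python
import numpy as np

HIT_RADIUS = 0.625  # m: 0.33 m tree radius + 0.295 m virtual body width

def obstacles_hit(path_xy, obstacle_xy, hit_radius=HIT_RADIUS):
    """Indices of obstacles whose padded boundary the midline path crossed.
    `path_xy`: (N, 2) midline positions; `obstacle_xy`: (M, 2) tree centers.
    Hits are all-or-none, as in the experiment."""
    path = np.asarray(path_xy)[:, None, :]         # (N, 1, 2)
    trees = np.asarray(obstacle_xy)[None, :, :]    # (1, M, 2)
    dists = np.linalg.norm(path - trees, axis=-1)  # (N, M) distances
    return np.flatnonzero((dists < hit_radius).any(axis=0))
```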
Analysis
Navigational ability is a multidimensional concept, and to obtain a comprehensive assessment, travel time, number of obstacles hit, and path efficiency were measured. Furthermore, travel time in goal-directed navigation has two distinct phases: a latency phase (the time from display onset to the time of the participant's first step) and a walking phase. Therefore, separate analyses were run on the two phases. It should be noted that analyses were performed on the log of the individual mean latencies and walk times to normalize the distributions. However, for clarity, the graphs for these measures will be shown in their original units. 
A K-means cluster analysis was performed on the individual mean scores for latency, walk time, and path efficiency in the 10 deg condition using the statistical package JMP (SAS Institute, Cary, NC). The analysis classified the participants into two groups, with seven participants in each group. Those with the shortest times and highest efficiencies, whom we will refer to as the good navigators, did not differ from the poor navigators on the basis of gender, height, gait size, age, or block order. 
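The classification step can be sketched as follows, substituting scikit-learn's K-means for JMP's routine (the data are placeholders; the log transform matches the one applied in the ANOVAs, and standardization is an assumption, as JMP's defaults are not reported):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder 10 deg condition means for the 14 participants.
latency = rng.uniform(1.0, 10.0, 14)       # s
walk_time = rng.uniform(10.0, 65.0, 14)    # s
efficiency = rng.uniform(0.5, 1.0, 14)     # optimal / walked distance

features = np.column_stack([np.log(latency), np.log(walk_time), efficiency])
z = StandardScaler().fit_transform(features)    # put the measures on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
# The cluster with the higher mean efficiency is the "good navigator" group.
good = int(np.argmax([efficiency[labels == k].mean() for k in (0, 1)]))
good_navigators = np.flatnonzero(labels == good)
```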
To test the hypothesis that the rate at which performance deteriorates with FOV is faster for the poor navigators than for the good navigators, a mixed-design ANOVA was performed on each navigation measure, with FOV as the within-subject factor and group as the between-subject factor. 
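With the data in long format, this analysis can be run, for example, with the pingouin package (the paper does not name its ANOVA software; the data below are placeholders):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "id": np.repeat(np.arange(14), 3),
    "group": np.repeat(["good"] * 7 + ["poor"] * 7, 3),
    "fov": np.tile([10, 20, 40], 14),
})
# Placeholder outcome standing in for each participant's mean log walk time.
df["log_walk_time"] = rng.normal(3.0, 0.2, 42) - 0.02 * df["fov"]

aov = pg.mixed_anova(data=df, dv="log_walk_time", within="fov",
                     subject="id", between="group")
print(aov)  # rows for the group main effect, the FOV main effect, and FOV x Group
```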
Results
Performance data
As expected, significant FOV × Group interactions were found for latency, F(2,11) = 5.03, p = .03, and walk time, F(2,11) = 8.63, p < .01, with both mean latency and walk time increasing at a faster rate for the poor navigators than the good navigators as FOV decreased (Figures 3 and 4). There was also a significant main effect of group in the latency phase, F(1,12) = 29.40, p < .01, and the walk phase, F(1,12) = 11.53, p < .01, with poor navigators having longer mean times in both cases. Thus, despite having more time to identify and locate objects in the scene and plan a path to the goal during the latency phase, the poor navigators were still not able to reach the goal in a shorter amount of time than the good navigators across all three FOVs. For both the good and poor navigators, group means in the latency and walk phases across the three FOVs tested were well fit by power functions, with r > .98 for all four analyses. A comparison of the slopes of the two groups shows that the rate at which mean latency and walk time increased with decreasing FOV was two and three times higher, respectively, for the poor navigators compared with the good navigators. 
Figure 3
 
Performance times for good and poor navigators. Mean latency in seconds (the time from initial presentation of forest until participants began walking) as a function of FOV, using a log–log scale. Individual mean latencies are shown as small red circles for good navigators and blue diamonds for poor navigators. Group means are shown as large red circles and blue diamonds. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Figure 4
 
Performance times for good and poor navigators. Mean walk time in seconds as a function of FOV, using a log–log scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
To further investigate differences in walking strategies, the amount of time the participants spent standing still during the walking phase was extracted from the walking data. To allow for head/body movements that occur without a person actually taking a step, standing still was operationally defined as the participant's position, as measured by the HiBall tracker, changing at less than 0.1 m/s. As can be seen in Figure 5, mean standing time increased with decreasing FOV, F(2,11) = 19.11, p < .01. There was also a main effect of group, with good navigators showing significantly shorter standing times than poor navigators on average, F(1,12) = 13.11, p < .01. Finally, there was a significant FOV × Group interaction, with the mean standing times of the poor navigators increasing at a much faster rate than those of the good navigators as FOV decreased, F(2,11) = 6.09, p = .02. The means for the good and poor navigators were well fit by logarithmic functions (r > .97 for both), and inspection of Figure 5 shows that on a log-linear scale, the rate at which standing time increased for the poor navigators was approximately 10 times higher than that of the good navigators. 
Figure 5
 
Performance times for good and poor navigators. Mean standing time in seconds as a function of FOV, using a log-linear scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
The effect of the time spent standing still on the walking times of the two groups is further illustrated by a comparison of the mean walking speeds of the two groups. Mean walking speed was calculated as the length of the path taken to the goal divided by the time spent walking (where time spent walking equaled the total walk time minus the calculated standing time). As can be seen in Figure 6, although there was a significant decrease in walking speed with decreasing FOV, F(2,11) = 32.95, p < .01, there was no significant difference in walking speed between the good and poor navigators, F(1,12) = 0.75, p = .40. There was also no interaction between FOV and group, F(2,11) = 2.00, p = .18. Mean values for both groups were well fit by power functions (r = .99 for both), and although a slightly steeper slope is observed for the poor navigators, this difference does not amount to a large difference over the FOVs tested. 
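Both measures can be computed from the filtered tracker trace as sketched below (illustrative names; the paper does not publish its extraction code):

```python
import numpy as np

DT = 0.007          # tracker sampling interval, 7 ms
STILL_SPEED = 0.1   # m/s threshold for "standing still"

def standing_time_and_speed(path_xy, dt=DT):
    """Return (standing_time_s, walking_speed_m_per_s) for one trial."""
    steps = np.diff(np.asarray(path_xy), axis=0)   # per-sample displacements
    step_len = np.linalg.norm(steps, axis=1)
    speeds = step_len / dt                         # instantaneous speed, m/s
    standing_time = dt * np.count_nonzero(speeds < STILL_SPEED)
    walk_time = dt * len(speeds)                   # total duration of the phase
    walking_speed = step_len.sum() / (walk_time - standing_time)
    return standing_time, walking_speed
```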
Figure 6
 
Performance times for good and poor navigators. Mean walking speed in meters per second as a function of FOV, using a log–log scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Movies 1 and 2 illustrate the different performances of the good and poor navigators. In the movies, both good and poor navigators are completing similar trials with a 10 deg FOV in which the target is not visible from the starting position. The trials were completed during the second block for the good navigator and during the third block for the poor navigator. 
 
Movie 1
 
Good navigator. The red dot is the participant. The dark blue dot represents the participant's gaze position, and the blue dotted line tethers the gaze position to the participant. The lighter region represents what is visible within the participant's 10 deg FOV. The target tree is shaded light blue.
 
Movie 2
 
Poor navigator. Presented in same format as Movie 1.
One can walk fast but bump into obstacles along the route. Therefore, to test whether the good navigators walked faster than the poor navigators because they did not put as much effort into avoiding obstacles along the way, the proportion of obstacles hit per block was determined. The proportion of obstacles hit was calculated from 45 possible hits across 15 trials per block. Mean proportions for the good navigators in the 10, 20, and 40 deg conditions were 0.37, 0.11, and 0.02, respectively. Similarly, the mean proportions for the poor navigators were 0.47, 0.13, and 0.04, respectively. To compare across the two groups, the proportions were converted into ranks (averaged) and parametric statistical tests were performed on the ranks. No effect of group was found, as the good and poor navigators hit approximately the same proportion of obstacles across all three FOVs, F(1,12) = 0.56, p = .47. The proportion of obstacles hit did increase significantly with decreasing FOV, F(2,11) = 62.18, p < .01, and no interaction was found between FOV and group, F(2,11) = 0.49, p = .62. Thus, both good and poor navigators were effective at avoiding the obstacle trees with the two largest FOVs; hence, the longer walk times of the slow walkers cannot be explained by a more cautious approach. 
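The rank-transform step amounts to pooling all of the hit proportions, replacing them with midranks, and running the parametric ANOVA on those ranks; a sketch (with placeholder data) using scipy, whose `rankdata` averages ties as described:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
# Placeholder: proportion of obstacles hit, 14 participants x 3 FOV blocks.
proportions = rng.uniform(0.0, 0.5, size=(14, 3))

ranks = rankdata(proportions.ravel()).reshape(proportions.shape)
# `ranks` then replaces the raw proportions in the mixed-design ANOVA
# sketched earlier (FOV within subjects, group between subjects).
```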
Path efficiency was defined as the ratio of the optimal distance to the distance walked, where the optimal distance was the length of the shortest path to the goal that stays outside the boundaries of the three obstacle trees (see Figure 7a); a perfectly direct walk thus yields an efficiency of 1. It should be noted that given the large range of individual mean walk times in the 10 deg condition, from 11.6 to 64.0 s, and the fact that participants sometimes stopped along their paths, the shorter mean walk times of the good navigators do not by themselves imply that they took the shortest possible routes. However, Figure 7b shows that the good navigators did in fact walk more direct paths to the target than the poor navigators, F(1,12) = 19.05, p < .01, and that path efficiency decreased overall with FOV, F(2,11) = 13.86, p < .01. More important, analyses show a significant interaction between FOV and group, F(2,11) = 6.70, p = .01. Path efficiencies as a function of FOV were well fit by linear functions (r = .99 for both groups). A comparison of the slopes of the two lines shows that the rate at which path efficiency decreased with FOV was five times higher for the poor navigators than for the good navigators. 
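The efficiency measure itself reduces to two path lengths (computing the optimal route around the padded obstacle boundaries is a small geometry problem in its own right and is taken as given here):

```python
import numpy as np

def path_length(path_xy):
    """Total length of a piecewise-linear path given as an (N, 2) array."""
    return np.linalg.norm(np.diff(np.asarray(path_xy), axis=0), axis=1).sum()

def path_efficiency(path_xy, optimal_length):
    """Optimal length over walked length; a perfectly direct walk scores 1.0."""
    return optimal_length / path_length(path_xy)
```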
Figure 7
 
Path efficiencies of good and poor navigators. (a) Top–down view of an exemplary configuration. The gray circles are the obstacles, and the black circles represent a padded region of 0.625 m around the obstacle center to account for the radius of the tree and the average distance between a person's midline and shoulder. Movement across this imaginary boundary was counted as an obstacle hit. The red line signifies the path taken by a participant. The yellow line signifies the optimal path, that is, the shortest path from start to goal without crossing the obstacles' boundaries. (b) Mean path efficiency as a function of FOV. Individual mean path efficiencies are shown as small red circles and blue diamonds for good and poor navigators, respectively. Group means are shown as large red circles and blue diamonds. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
The number of times participants were blocked from walking outside the experimental space was also calculated. During the blocks of trials corresponding to the 40, 20, and 10 deg conditions, the good navigators were stopped by the experimenter on average 0.43, 0.57, and 0.14 times, respectively, whereas the poor navigators were stopped 1.29, 1.57, and 0.57 times. Van der Waerden tests were performed to compare the means of the two groups at each FOV. The results show that poor navigators had to be redirected significantly more times than the good navigators in the 10 deg condition, χ² = 6.51, p = .01, and in the 20 deg condition, χ² = 5.45, p = .02. Good and poor navigators did not differ in the 40 deg condition, χ² = 0.17, p = .68. Although the means for both groups are low, the significantly larger means of the poor navigators in the 10 and 20 deg conditions suggest that several poor navigators were not aware of the spatial confines of the configuration space even after one or two blocks of trials. 
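A two-sample Van der Waerden (normal-scores) test can be sketched as follows: pooled ranks are mapped to standard-normal quantiles, and the resulting statistic is referred to a chi-square distribution with k − 1 = 1 df (a generic implementation, not the authors' statistics package):

```python
import numpy as np
from scipy.stats import norm, rankdata, chi2

def van_der_waerden(*samples):
    """Normal-scores test across k samples; returns (chi-square stat, p)."""
    x = np.concatenate(samples)
    n = len(x)
    scores = norm.ppf(rankdata(x) / (n + 1))   # normal scores of pooled ranks
    s2 = np.sum(scores ** 2) / (n - 1)         # variance of the scores
    stat, start = 0.0, 0
    for s in samples:
        g = scores[start:start + len(s)]
        stat += len(s) * g.mean() ** 2
        start += len(s)
    stat /= s2
    return stat, chi2.sf(stat, df=len(samples) - 1)

# Example: placeholder redirection counts for good vs. poor navigators.
stat, p = van_der_waerden([0, 1, 0, 0, 0, 0, 0], [2, 1, 0, 3, 1, 1, 1])
```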
Perceptual span analysis
As stated earlier, if superior navigators do augment visual information with an internal spatial representation of the forest, and if performance differences are not simply a result of one group processing the available visual information more effectively, there should be a difference between the groups in the smallest visual field size at which performance breaks down (the perceptual span). In particular, if good navigators rely more on internal representations, it was predicted that the good navigators in this experiment should be able to complete the same task as efficiently as the poor navigators with a smaller FOV. This concept of a "perceptual span" has been demonstrated in performance tasks such as reading (Bertera & Rayner, 2000). A similar concept, visual span, has also been investigated in reading and in tasks such as visual search, reading music, and chess playing (Gilman & Underwood, 2003; Legge, Ahn, Klitz, & Luebker, 1997; Pomplun, Reingold, & Shen, 2001; Rayner, 1998; Reingold, Charness, Pomplun, & Stampe, 2001; Saida & Ikeda, 1979). To test this hypothesis, perceptual span was estimated as the FOV corresponding to a 20% increase over baseline on each group's mean time-to-goal versus FOV power function, where baseline was defined as the mean time-to-goal at 40 deg and time-to-goal was the sum of mean latency and walk time (see Figure 8). The perceptual span of the good navigators was found to be 22.5 deg, whereas the calculated span for the poor navigators was 31.9 deg. The calculated span for the good navigators was about 30% smaller than that of the poor navigators, which again illustrates that the good navigators were able to maintain a higher level of performance with a smaller FOV. The spans correspond to trial times of 16.5 s for the good navigators and 21.0 s for the poor navigators. If one extrapolates along the two linear fits of the path efficiencies of the good and poor navigators at their perceptual spans, one finds efficiencies of 0.93 and 0.88, respectively. Therefore, when examining the measure of navigational performance that Hartley et al. (2003) used, that is, path efficiency, one sees that the good navigators in this study were able to perform as well as the poor navigators on the same task with an FOV approximately 70% the size of the one required by the poor navigators. In terms of area, the span of the good navigators corresponds to a loss of about half the amount of information available to the poor navigators in a single glance. 
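The span estimate follows directly from the power-function fit: regress log time-to-goal on log FOV, then invert the fitted function at 120% of the 40 deg baseline (the group means below are placeholders):

```python
import numpy as np

fovs = np.array([10.0, 20.0, 40.0])
time_to_goal = np.array([35.0, 20.0, 14.0])   # placeholder group means, s

b, log_a = np.polyfit(np.log(fovs), np.log(time_to_goal), 1)  # time = a * FOV^b
a = np.exp(log_a)

baseline = a * 40.0 ** b             # fitted time-to-goal at the 40 deg FOV
criterion = 1.2 * baseline           # a 20% increase over baseline
span = (criterion / a) ** (1.0 / b)  # FOV at which the fit reaches the criterion
print(f"perceptual span ~ {span:.1f} deg")
```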
Figure 8
 
Perceptual span analysis. Mean time-to-goal (latency + walk time) in seconds as a function of FOV using a log–log scale. Good navigator means are shown as red circles and poor navigator means as blue squares. Black and gray arrows link the FOV to the time corresponding to a 20% increase in baseline performance for the good and poor navigators, respectively. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Sampling rates
Although the performance and perceptual span data support the two predictions set forth, previous studies have not controlled for differences in sampling rates. Therefore, to assess whether sampling rate was a contributing factor in the observed performance differences, fixation rates were analyzed to determine whether the good navigators made more fixations than the poor navigators. Because the total number of fixations depends on trial duration, fixation rates were used as a normalized measure. Mean fixation rates for the good navigators in the 10, 20, and 40 deg conditions during latency were 3.20, 3.43, and 3.37 fixations/s, respectively. For the poor navigators, fixation rates were 2.92, 3.12, and 3.32 fixations/s, respectively. During the walking phase, fixation rates increased for the good navigators, with the means rising to 4.06, 4.43, and 4.71 fixations/s in the 10, 20, and 40 deg conditions, respectively. The poor navigators showed the same trend, with mean fixation rates of 3.68, 4.18, and 4.71 fixations/s, respectively. Overall, fixation rates decreased with decreasing FOV in the latency phase, F(2,11) = 7.79, p < .01, and the walk phase, F(2,11) = 27.83, p < .01. Fixation rates did not differ between the good and poor navigators during the latency phase, F(1,12) = 0.93, p = .35, or the walk phase, F(1,12) = 0.26, p = .62. Furthermore, no interaction was found in the latency phase, F(2,11) = 0.78, p = .48, or the walk phase, F(2,11) = 1.50, p = .27. 
Discussion
Based on the performance data collected, it is apparent that although the performances of all participants were negatively affected by a reduced FOV, the good navigators were able to find their way to the target tree more quickly than the poor navigators across all three FOVs. More important, however, the good navigators were able to complete the task as quickly and efficiently as the poor navigators with a significantly smaller FOV. In other words, the perceptual span analysis suggests that the performance of the two groups changed in different ways across the three FOVs tested, and it is the difference in these trends, not the actual values obtained, that indicates the good navigators were more effective at responding to the loss of available external visual information than the poor navigators. The question is what factors can explain these observed differences in performance trends. From the analyses performed, it is known that variations in height, gender, and gait size were not responsible for the different performances of the good and poor navigators. Furthermore, as a virtual environment was used, changes in the environment across participants could not have occurred, ruling out the role of extraneous external variables. 
On the other hand, the use of a virtual environment could have influenced behavior if some participants found it more difficult to adjust to the virtual environment or were less comfortable than others moving within it. However, level of comfort and other such problems were controlled to some extent by focusing the analyses on behavioral trends across the three FOVs tested (i.e., interactions) rather than on the separate performances of the two groups at each FOV (i.e., main effects of group). Furthermore, the analyses of walking speed showed no interaction between FOV and group and no main effect of group, indicating that the good and poor navigators walked at approximately the same speed when moving through the environment. In studies investigating the ability of individuals with actual visual field deficits to navigate safely through an environment, percentage of preferred walking speed (PPWS) has been used as a measure of how comfortable these individuals are when walking (Clark-Carter, Heyes, & Howarth, 1986; Soong, Lovie-Kitchin, & Brown, 2004). In the present experiment, preferred walking speed was not measured; thus, it is not possible to determine the PPWS for each individual. However, one can apply a similar logic when comparing the speeds of the two groups, using each group's mean walking speed in the 40 deg condition as a baseline. Because the groups' walking speeds did not differ significantly at any of the three FOVs tested, the ratios created by dividing the mean speeds in the 10 and 20 deg conditions by the baseline speed in the 40 deg condition would be equivalent for the two groups. This in turn suggests that if the poor navigators were uncomfortable, this discomfort did not impair their ability to walk in the environment any more than it impaired the good navigators'. Furthermore, the significant interaction found for standing time, with the poor navigators spending increasingly more time standing still than the good navigators as FOV decreased, could reflect different strategies or the poor navigators attempting to reorient themselves after getting lost in the forest. Although it is possible that increases in standing time could result from participants feeling uncomfortable, one would then expect correspondingly large decreases in walking speed for these participants as well. Given that there was no effect of group or interaction for walking speed, it seems unlikely that factors related to walking in an immersive virtual environment can fully explain the changes in performance observed across the three FOVs tested. 
There were, however, implicit cues within the environment that could have aided the participants in completing the task if they noticed them. First, because of restrictions imposed by the actual size of the laboratory, the target and obstacle trees in the virtual forest were arranged to fit within an open rectangular end of the laboratory, roughly 6 m in width and 10 m in length, with an open field of trees depicted outside this region. Participants were given ample time to notice this limitation as they were brought into this area of the laboratory and fitted with the headset before beginning the experiment. As no attempt was made to disorient the participants before initially walking them to the starting position, participants could have noted that the target and obstacle trees would necessarily always be located within this region in every configuration. (Note: There were distracter trees located outside the actual laboratory space.) Also, as an experimenter stopped all participants and told them to redirect if they walked too close to a wall, walked outside the allotted space, or tried to walk around the two stationary distracter trees along the periphery, information about the size of the laboratory was indirectly provided to those participants who were not able to pick up on these restrictions themselves. Examination of the number of times participants had to be redirected by an experimenter showed that during each of the three blocks, five or six of the seven participants in the poor navigator group had to be redirected at least once. In contrast, only two of the seven good navigators needed to be redirected in the 20 and 40 deg conditions, with the proportion rising to about one half of the participants in the 10 deg condition. Although the fact that most of the good navigators did not need to be redirected when walking with larger FOVs cannot speak to their knowledge of the target tree's location, the high number of incidents for the poor navigators, especially in the 40 and 20 deg conditions, does suggest that most of them were not able to pick up on the dimensions of the environment. The second implicit cue within the environment was the location of the target. The target tree was always in the same general location within the environment: 10 m away from the starting position and offset from the initial heading direction of the participants by at most 4 deg. Therefore, the position of the target tree was always relatively straight ahead, and this information could have been used to develop a strategy for finding the target when FOV was reduced. It is interesting to note that all of the participants who stated that they noticed and used this consistency to help them find the target were in the good navigator group. 
The fact that the good and poor navigators hit approximately the same number of obstacles is important. Given that very few obstacles were hit in the 20 and 40 deg conditions, a floor effect may have been present. However, the fact that more than one third of the obstacles were hit by both good and poor navigators in the 10 deg condition suggests that a floor effect cannot be the only factor responsible for the participants' performance. The design of the experiment itself may also have influenced the number of hits that were recorded. Because of programming limitations, all participants were assumed to have the same shoulder width in the virtual environment. Thus, some participants with smaller shoulder widths might have been able to safely pass an obstacle tree even though the computer registered the pass as a hit. Also, hits were considered all-or-none occurrences; hence, participants walking straight into obstacle trees were not distinguished from participants brushing against the edge of a tree. It is therefore possible that, if the types of hits were broken into separate categories, differences between the good and poor navigators would emerge. Given this limitation, it would be beneficial for future studies to take into account the different shoulder widths of participants, to categorize hits into at least two different subtypes, or both. 
However, the size of the participants may not be the only factor involved. Another way to explain this phenomenon is to consider what type of information is important for maneuvering around an obstacle. As noted earlier, people can use both external and internal information to guide their movements (Oliva et al., 2004; Wang & Spelke, 2000); thus, an increased use of internal information by good navigators does not necessarily imply that external information is never used. Studies on action tasks (Hollands & Marple-Horvat, 2001; Patla & Vickers, 1997) examining eye and gaze behaviors while stepping over an obstacle or onto a target have found that vision is used in an online manner during such movements to ensure that one's foot is placed properly. As no two configurations were the same in this study, the task required that participants integrate current external information about the locations of the obstacle trees into whatever internal representation they developed of the global characteristics of the forest to maintain a high level of performance as FOV was restricted. Thus, a plausible explanation for the lack of a difference between the good and poor navigators could be that although internal information is more useful than local external information when searching for the target and planning a rough path within the global framework of the forest, the opposite may be true when maneuvering around a nearby obstacle. These findings are not necessarily inconsistent with theories that suggest the use of internal spatial representations in navigation. In particular, Hartley et al.'s (2003) proposal does not suggest that good navigators rely solely on internal information, whereas poor navigators use only external visual information. What is proposed is that although both groups use external visual information, good navigators are able to integrate this available visual information with some sort of internal global representation of the environment, however detailed or distorted it may be, thereby allowing them to maintain a sense of where they are and where they need to go. Thus, superior performance on a navigational task does not arise from internal representations alone, but rather from the more immediate coupling of internal and external information about one's environment. 
Wang and Spelke (2000) speak to this coupling in their discussion of human spatial representations. According to the researchers, when navigating through novel environments, humans, like other animals, rely on path integration to keep track of where they have been and where they are going. The early spatial representations that develop are coarse and encompass only the global, geometric structures of the environment. Over the course of repeated exposures, however, humans are able to transform egocentric, viewpoint-dependent representations into geocentric, viewpoint-independent representations. Vishton and Cutting (1995) also suggest that although humans rely on external visual information, they may unconsciously develop internal representations of their environments, which they term "mental maps." However, even if the good navigators in this study were able to develop an internal representation of the forest world over the course of the three blocks, they still would not have been able to predict where each of the three obstacle trees would be located in the following trial, as the locations changed on every trial. Thus, although an internal spatial representation may have aided walk times and path efficiencies by giving participants a sense of the general layout of the laboratory space and the location of the target tree, any internal representation developed could not have helped participants maneuver around the obstacles. 
With regard to the eye-movement data collected, the fixation rates of the good and poor navigators were very similar. However, this does not mean that all participants were looking at the same things in the forest. It is possible that, given a general internal representation of the environmental layout and target location, the good navigators used this information to aid their visual search before moving forward. Future analysis of the participants' gaze patterns may provide insight into whether good navigators execute general search patterns in an attempt to locate the target or whether they focus their attention on a specific area of the scene, in a manner consistent with someone who knows where they need to go but is trying to determine the best way to get there. It would also allow for estimates of when the target trees were detected during the course of each trial. 
From a cognitive standpoint, similar eye-movement patterns also do not ensure that all participants perceived the environment in the same way. It may be that as FOV decreased, some of the participants found it harder to integrate incoming visual information across head and eye movements or to detect heading from the optic flow patterns that were still available. With regard to integrating information across head or eye movements, the ability to do so successfully requires an individual to hold incoming visual information in memory, albeit at an unconscious level, and bind all of the information together to create a cohesive percept. How this occurs is not fully understood, but researchers in this area (Aivar, Hayhoe, Chizk, & Mruczek, 2005; Irwin, 1991) have suggested that a relatively long-lived, although perhaps sparsely detailed, representation of the spatial relationships between objects in a scene is retained across fixations and used to integrate information over time. In particular, a recent study by Brockmole and Irwin (2005) explored this process, which they term memory–percept integration, and found evidence to suggest that the perception of a stable world is developed by the integration of incoming external visual information with information held in visual short-term memory (VSTM) that is built up from multiple prior fixations. Furthermore, the information held in VSTM is not retinotopic in nature but rather stores information about the objects themselves or the layout of the objects relative to each other. In other words, although visual information is received in a retinotopic reference frame, it is stored in VSTM in an egocentric reference frame. This suggests that a transformation in the representation of visual information occurs, which allows an observer to uncouple incoming visual information from the positions in which it was received by the eye, thus allowing individuals to perceive the world as external and separate from themselves. This process of holding first-person incoming visual information in memory and then combining it to create a single percept of the external world is similar to the process that several researchers (Richardson, Montello, & Hegarty, 1999; Siegel & White, 1975; Shelton & McNamara, 2004; Wang & Spelke, 2002) have suggested is used to create a spatial representation of an environment. According to these models, individuals take successive first-person snapshots of an environment that they then integrate via an established frame of reference to create a viewpoint-independent representation of the environment (i.e., information goes from being egocentric in nature to allocentric). Given the similarity of these two processes and their shared reliance on VSTM to bind remembered information with information that is currently being received, it may be that integrating incoming visual information across fixations to create a coherent percept of the outside world and developing viewpoint-independent internal spatial representations fall along a continuum, with transsaccadic integration occurring on the timescale of a few milliseconds and the development of spatial representations building up over a longer period. 
If this is the case, it follows that those who are better at integrating visual information across head and eye movements would also be better at developing internal spatial representations of the environment, as both processes involve similar operations carried out by the same mechanism (i.e., VSTM) but on different timescales. 
As for the ability to detect heading, it seems unlikely that a difference in this area could have been a major factor in the performance differences observed, for two reasons. First, research has shown that the area of the human visual field most sensitive to detecting changes in optic flow arrays at walking speeds lies in the central 10 deg (Turano, Yu, Hao, & Hicks, 2005), which was never occluded in this experiment. Second, for people to utilize heading information in a navigation task, they must first know where they are trying to head. Without knowledge of the location of the target, it is unlikely that differences in the ability to detect changes in optic flow patterns would be of much help. Therefore, although both of these differences in perceptual ability may have existed, it does not seem likely that they were the driving forces behind the differing performances of the good and poor navigators. 
Another important question to address is the nature of the internal spatial representations that the good navigators may have developed. The design used in this study differs from those traditionally employed in spatial learning tasks in that the current environment was not wholly stable. Thus, it would not have been possible for the participants to build a cognitive map in the strong sense of the term, with the target tree, obstacle trees, and distracter trees all represented in specific locations. However, given the implicit cues provided by the structure of the laboratory, the static starting position and orientation, and the location of the target tree (which remained at a constant distance and varied in offset by 4 deg or less), a representation of the environment that included these factors could have been constructed. In particular, the fact that the path efficiency of the good navigators dropped only from 94% in the 40 deg condition to 92% in the 10 deg condition suggests that the good navigators knew in which direction to walk to find the target tree. Had the environment been stable, one could argue that this knowledge represents the use of path integration, and indeed, the participants may have used some proprioceptive information to guide them to the target tree. However, because the locations of the obstacle trees were not stable, path integration alone would not be effective because the same path could not be taken on every trial. It seems more likely, then, that the good navigators developed a coarse representation of the environment, including a small area in which the target tree would be located and, perhaps, the boundaries of the configuration space. This type of representation differs from route knowledge in that it is more global in nature and allows the participants to walk to the target along more than one route. Although it is impossible to determine whether participants were able to define an accurate metric for this representation, the performance data indicate that the good navigators knew where they needed to go to find the target tree. This type of representation is similar to the early survey representations that Wang and Spelke (2000) argue for, with only global structures represented in a cohesive manner.
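As an illustration of the path-efficiency measure discussed above, the following Python sketch assumes, consistent with Figure 7, that efficiency is the length of the optimal obstacle-free path divided by the length of the path actually walked, expressed as a percentage (the function names are ours, not the authors'):

import numpy as np

def path_length(xy):
    # Total length of a trajectory given as an (N, 2) array of positions.
    return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

def path_efficiency(walked_xy, optimal_xy):
    # Ratio of optimal to walked path length, as a percentage; 100% means
    # the participant took the shortest route that avoided the obstacles.
    return 100.0 * path_length(optimal_xy) / path_length(walked_xy)

Under this reading, a perfectly direct, unobstructed walk scores 100%, and every detour or wander lowers the score, which is why the small 94% to 92% drop is informative about the good navigators' knowledge of the target direction.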
Finally, a significant amount of effort has gone into developing tests that evaluate cognitive strategies and assess spatial ability and intelligence, such as the Vandenberg Mental Rotations Test (Ekstrom, French, Harman, & Dermen, 1976) or the Guilford–Zimmerman Spatial Orientation Test (Guilford & Zimmerman, 1981). Results of these tests have been found to be predictive of both real-world and virtual navigational performance. This study did not employ any of these tests, but future studies may wish to include measures of spatial ability and intelligence to see whether they help distinguish the cognitive strategies that performers typically choose to employ. In particular, although the results of this study suggest that good navigators may have utilized internal spatial representations to a greater extent than poor navigators, they cannot distinguish whether the difference arises in developing these representations or in utilizing them. Furthermore, if the difference lies in utilization, it would be important to understand whether this reflects differences in the participants' abilities to hold representations in short-term memory or simply differences in the strategies they prefer to rely on. It would also be beneficial for future studies to include an fMRI component, similar to the paradigm of Hartley et al. (2003), as this would allow direct comparison between the behavioral and neurological data obtained and may further illuminate the strategies used by good and poor navigators.
Conclusions
The results of this study are consistent with Hartley et al.'s (2003) proposal that good navigators build and rely on internal spatial representations to a greater extent than poor navigators do. As with any study examining cognitive processing, it was not possible to measure directly the existence or nature of any internal spatial representations the participants used. However, by measuring behavioral changes with decreasing FOV, it was possible to identify individuals who were more efficient at completing the current task and to infer, indirectly, an increased reliance on internal spatial representations by these individuals. Given the nature of the environment used in this study and the novelty of the paradigm employed, future research should determine whether the present findings generalize to other environments and tasks. Furthermore, larger sample sizes may help in identifying groups of good and poor navigators, as well as individuals who fall somewhere in between. This, in conjunction with other measures of spatial ability, may help to shed light on the nature and form of internal spatial representations, as well as the ways in which they are utilized in navigation.
Acknowledgments
This research was supported by National Institutes of Health Grant EY-07839 (K.A.T.). 
Commercial relationships: none. 
Corresponding author: Kathleen A. Turano. 
Email: kturano1@jhmi.edu. 
Address: 550 N. Broadway, 6th floor, Baltimore, MD 21205, USA. 
References
Aguirre, G. K., Zarahn, E., & D'Esposito, M. (1998). Neural components of topographical representation. Proceedings of the National Academy of Sciences of the United States of America, 95, 839–846.
Aivar, M. P., Hayhoe, M. M., Chizk, C. L., & Mruczek, R. E. (2005). Spatial memory and saccadic targeting in a natural task. Journal of Vision, 5(3), 177–193, http://journalofvision.org/5/3/3, doi:10.1167/5.3.3.
Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7, 66–80.
Bertera, J. H., & Rayner, K. (2000). Eye movements and the span of the effective stimulus in visual search. Perception & Psychophysics, 62, 576–585.
Blaut, J. M., Stea, D., Spencer, C., & Blades, M. (2003). Mapping as a cultural and cognitive universal. Annals of the Association of American Geographers, 93, 165–185.
Brockmole, J. R., & Irwin, D. E. (2005). Eye movements and the integration of visual memory and visual perception. Perception & Psychophysics, 67, 495–512.
Burgess, N., Becker, S., King, J. A., & O'Keefe, J. (2001). Memory for events and their spatial context: Models and experiments. Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences, 356, 1493–1503.
Carpenter, R. H., & Williams, M. L. (1995). Neural computation of log likelihood in control of saccadic eye movements. Nature, 377, 59–62.
Clark-Carter, D. D., Heyes, A. D., & Howarth, C. I. (1986). The efficiency and walking speed of visually impaired people. Ergonomics, 29, 779–789.
Cutting, J. E., Vishton, P. M., Fluckiger, M., Baumberger, B., & Gerndt, J. D. (1997). Heading and path information from retinal flow in naturalistic environments. Perception & Psychophysics, 59, 426–441.
Ekstrom, A. D., Kahana, M. J., Caplan, J. B., Fields, T. A., Isham, E. A., Newman, E. L., & Fried, I. (2003). Cellular networks underlying human spatial navigation. Nature, 425, 184–188.
Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1976). Kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing Service.
Epstein, R., Graham, K. S., & Downing, P. E. (2003). Viewpoint-specific scene representations in human parahippocampal cortex. Neuron, 37, 865–876.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Fajen, B. R., & Warren, W. H. (2004). Visual guidance of intercepting a moving target on foot. Perception, 33, 689–715.
Geisler, W. S., & Perry, J. S. (2002). Real-time simulation of arbitrary visual fields. In Proceedings of the 2002 Symposium on Eye Tracking Research & Applications (ETRA). New York: ACM.
Gibson, J. J. (1994). The visual perception of objective motion and subjective movement (Reprinted from 1954). Psychological Review, 101, 318–323.
Gilman, E., & Underwood, G. (2003). Restricting the field of view to investigate the perceptual spans of pianists. Visual Cognition, 10, 201–232.
Guilford, J. P., & Zimmerman, W. S. (1981). The Guilford–Zimmerman Aptitude Survey manual of instruction and interpretation. Palo Alto, CA: Consulting Psychologists Press.
Gulliver, F. P. (1908). Orientation of maps. Bulletin of the American Geographical Society, 40, 538–542.
Harris, J. M., & Bonas, W. (2002). Optic flow and scene structure do not always contribute to the control of human walking. Vision Research, 42, 1619–1626.
Hartley, T., Maguire, E. A., Spiers, H. J., & Burgess, N. (2003). The well-worn route and the path less traveled: Distinct neural bases of route following and wayfinding in humans. Neuron, 37, 877–888.
Hayhoe, M. M., Shrivastava, A., Mruczek, R., & Pelz, J. B. (2003). Visual memory and motor planning in a natural task. Journal of Vision, 3(1), 49–63, http://journalofvision.org/3/1/6/, doi:10.1167/3.1.6.
Hill, E. W., & Rieser, J. J. (1993). How persons with visual impairments explore novel spaces: Strategies of good and poor performers. Journal of Visual Impairment & Blindness, 87, 295–301.
Hollands, M. A., & Marple-Horvat, D. E. (2001). Coordination of eye and leg movements during visually guided stepping. Journal of Motor Behavior, 33, 205–216.
Hollands, M. A., Patla, A. E., & Vickers, J. N. (2002). "Look where you're going!": Gaze behavior associated with maintaining and changing the direction of locomotion. Experimental Brain Research, 143, 221–230.
Irwin, D. E. (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23, 420–456.
Ismail, A. R., & Asfour, S. S. (1999). Discrete wavelet transform: A tool in smoothing kinematic data. Journal of Biomechanics, 32, 317–321.
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311–1328.
Land, M. F. (1992). Predictable eye–head coordination during driving. Nature, 359, 318–320.
Land, M. F., & Hayhoe, M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41, 3559–3565.
Land, M. F., & Lee, D. N. (1994). Where we look when we steer. Nature, 369, 742–744.
Lee, D. N. (1998). Guiding movement by coupling taus. Ecological Psychology, 10, 221–250.
Lee, D. N., Craig, C. M., & Grealy, M. A. (1999). Sensory and intrinsic coordination of movement. Proceedings of the Royal Society of London: Series B, Biological Sciences, 266, 2029–2035.
Legge, G. E., Ahn, S. J., Klitz, T. S., & Luebker, A. (1997). Psychophysics of reading: XVI. The visual span in normal and low vision. Vision Research, 37, 1999–2010.
Loomis, J. M., Klatzky, R. L., Golledge, R. G., & Philbeck, J. W. (1999). Human navigation by path integration. In R. G. Golledge (Ed.), Wayfinding behavior: Cognitive mapping and other spatial processes (pp. 125–151). Baltimore, MD: Johns Hopkins University Press.
Maguire, E. A., Burgess, N., Donnett, J. G., Frackowiak, R. S., Frith, C. D., & O'Keefe, J. (1998). Knowing where and getting there: A human navigation network. Science, 280, 921–924.
Maguire, E. A., Spiers, H. J., Good, C. D., Hartley, T., Frackowiak, R. S., & Burgess, N. (2003). Navigation expertise and the human hippocampus: A structural brain imaging analysis. Hippocampus, 13, 250–259.
McPeek, R. M., Skavenski, A. A., & Nakayama, K. (2000). Concurrent processing of saccades in visual search. Vision Research, 40, 2499–2516.
Mittelstaedt, M. L., & Mittelstaedt, H. (2001). Idiothetic navigation in humans: Estimation of path length. Experimental Brain Research, 139, 318–332.
Nadel, L., & Hardt, O. (2004). The spatial brain. Neuropsychology, 18, 473–476.
Oliva, A., Wolfe, J. M., & Arsenio, H. C. (2004). Panoramic search: The interaction of memory and vision in search through a familiar scene. Journal of Experimental Psychology: Human Perception and Performance, 30, 1132–1146.
Patla, A. E., & Vickers, J. N. (1997). Where and when do we look as we approach and step over an obstacle in the travel path? Neuroreport, 8, 3661–3665.
Peterson, M. S., Kramer, A. F., & Irwin, D. E. (2004). Covert shifts of attention precede involuntary eye movements. Perception & Psychophysics, 66, 398–405.
Philbeck, J. W., & Loomis, J. M. (1997). Comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions. Journal of Experimental Psychology: Human Perception and Performance, 23, 72–85.
Pomplun, M., Reingold, E. M., & Shen, J. (2001). Investigating the visual span in comparative search: The effects of task difficulty and divided attention. Cognition, 81, B57–B67.
Priest, H. N., Cutting, J. E., Torrey, C. C., & Regan, D. (1985). Visual flow and direction of locomotion. Science, 227, 1063–1065.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.
Reingold, E. M., Charness, N., Pomplun, M., & Stampe, D. M. (2001). Visual span in expert chess players: Evidence from eye movements. Psychological Science, 12, 48–55.
Richardson, A. E., Montello, D. R., & Hegarty, M. (1999). Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Memory & Cognition, 27, 741–750.
Rieser, J. J., Ashmead, D. H., Taylor, C. R., & Youngquist, G. A. (1990). Visual perception and the guidance of locomotion without vision to previously seen targets. Perception, 19, 675–689.
Rushton, S. K., Harris, J. M., Lloyd, M. R., & Wann, J. P. (1998). Guidance of locomotion on foot uses perceived target location rather than optic flow. Current Biology, 8, 1191–1194.
Saida, S., & Ikeda, M. (1979). Useful visual field size for pattern perception. Perception & Psychophysics, 25, 119–125.
Siegel, A. W., & White, S. H. (1975). The development of spatial representations of large-scale environments. In H. W. Reese (Ed.), Advances in child development and behavior (Vol. 10, pp. 9–55). New York: Academic Press.
Shelton, A. L., & McNamara, T. P. (2004). Orientation and perspective dependence in route and survey learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 158–170.
Soong, G. P., Lovie-Kitchin, J. E., & Brown, B. (2004). Measurements of preferred walking speed in subjects with central and peripheral vision loss. Ophthalmic & Physiological Optics, 24, 291–295.
Trowbridge, C. C. (1913). On fundamental methods of orientation and "imaginary maps." Science, 38, 888–897.
Turano, K. A., Geruschat, D. R., & Baker, F. H. (2003). Oculomotor strategies for the direction of gaze tested with a real-world activity. Vision Research, 43, 333–346.
Turano, K. A., Yu, D., Hao, L., & Hicks, J. C. (2005). Optic-flow and egocentric-direction strategies in walking: Central vs peripheral visual field. Vision Research, 45, 3117–3132.
Ungar, S. (2000). Cognitive mapping without visual experience. In R. Kitchin & S. Freundschuh (Eds.), Cognitive mapping: Past, present and future. London: Routledge.
Vishton, P. M., & Cutting, J. E. (1995). Wayfinding, displacements, and mental maps: Velocity fields are not typically used to determine one's aimpoint. Journal of Experimental Psychology: Human Perception and Performance, 21, 978–995.
Wang, R. F., & Cutting, J. E. (1999). Where we go with a little good information. Psychological Science, 10, 71–75.
Wang, R. F., & Spelke, E. S. (2000). Updating egocentric representations in human navigation. Cognition, 77, 215–250.
Wang, R., & Spelke, E. (2002). Human spatial representation: Insights from animals. Trends in Cognitive Sciences, 6, 376–382.
Warren, W. H. (1998). Perception of heading is a brain in the neck. Nature Neuroscience, 1, 647–649.
Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216.
Figure 1
 
Sample view of the virtual forest used in this experiment. A top–down view, not seen by participants, illustrating a typical configuration. Starting position is shown as a red dot; the three obstacle trees are marked by Xs, and the five possible target locations are shown as blue dots. The remaining distracter trees are shown as gray circles and are located outside the confines of the allowable walking space, shown as the dotted rectangle.
Figure 2
 
Sample views of the virtual forest used in this experiment. First-person views of the forest and target tree displayed with a 10, 20, and 40 deg (diameter) FOV.
Figure 3
 
Performance times for good and poor navigators. Mean latency in seconds (the time from initial presentation of the forest until the participant began walking) as a function of FOV, using a log–log scale. Individual mean latencies are shown as small red circles for good navigators and blue diamonds for poor navigators. Group means are shown as large red circles and blue diamonds. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Figure 4
 
Performance times for good and poor navigators. Mean walk time in seconds as a function of FOV, using a log–log scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Figure 5
 
Performance times for good and poor navigators. Mean standing time in seconds as a function of FOV, using a log-linear scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Figure 6
 
Walking speeds of good and poor navigators. Mean walking speed in meters per second as a function of FOV, using a log–log scale. Same designation of symbols as in Figure 3 is used. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
Figure 7
 
Path efficiencies of good and poor navigators. (a) Top–down view of an example configuration. The gray circles are the obstacles, and the black circles represent a padded region of 0.625 m around each obstacle center to account for the radius of the tree and the average distance between a person's midline and shoulder. Movement across this imaginary boundary was counted as an obstacle hit. The red line signifies the path taken by a participant. The yellow line signifies the optimal path, that is, the shortest path from start to goal without crossing the obstacles' boundaries. (b) Mean path efficiency as a function of FOV. Individual mean path efficiencies are shown as small red circles and blue diamonds for good and poor navigators, respectively. Group means are shown as large red circles and blue diamonds. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
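The obstacle-hit rule in this caption lends itself to a direct implementation. The following Python sketch (illustrative, not the authors' code; names are ours) flags any obstacle whose 0.625-m padded boundary the sampled path crosses:

import numpy as np

def obstacle_hits(walked_xy, obstacle_centers, pad=0.625):
    # Distances from every sampled path point (N, 2) to every obstacle
    # center (M, 2); an obstacle counts as "hit" if any sampled point
    # falls inside its padded boundary.
    d = np.linalg.norm(walked_xy[:, None, :] - obstacle_centers[None, :, :], axis=2)
    return int(np.sum(np.any(d < pad, axis=0)))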
Figure 8
 
Perceptual span analysis. Mean time-to-goal (latency + walk time) in seconds as a function of FOV, using a log–log scale. Good navigator means are shown as red circles and poor navigator means as blue squares. Black and gray arrows link the FOV to the time corresponding to a 20% increase over baseline performance for the good and poor navigators, respectively. Error bars represent ±1 SEM and FOV is in degrees of visual angle.
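The span criterion described here can be made concrete with a short Python sketch. This is our reading of the caption, assuming the baseline is performance at the largest FOV and that the data are well described by a straight line in log–log coordinates; the study itself may have interpolated differently:

import numpy as np

def perceptual_span(fovs_deg, times_s, criterion=1.2):
    # Fit log(time) = a * log(FOV) + b, then solve for the FOV at which
    # time-to-goal reaches `criterion` times the baseline (largest-FOV)
    # time, i.e., the 20% rule illustrated in Figure 8.
    fovs_deg = np.asarray(fovs_deg, dtype=float)
    times_s = np.asarray(times_s, dtype=float)
    a, b = np.polyfit(np.log(fovs_deg), np.log(times_s), 1)
    target = np.log(times_s[np.argmax(fovs_deg)] * criterion)
    return float(np.exp((target - b) / a))

On this definition, a smaller perceptual span means the navigator tolerates a narrower FOV before performance degrades by the criterion amount, which is the sense in which the good navigators completed the task as effectively with less visual information.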