Article  |   November 2013
Beyond the tangent point: Gaze targets in naturalistic driving
Author Affiliations
  • Otto Lappi
    Cognitive Science, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    Traffic Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    otto.lappi@helsinki.fi
  • Esko Lehtonen
    Cognitive Science, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    Traffic Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    esko.lehtonen@helsinki.fi
  • Jami Pekkanen
    Cognitive Science, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    Traffic Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    jami.pekkanen@helsinki.fi
  • Teemu Itkonen
    Cognitive Science, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
    teemu.h.itkonen@gmail.com
Journal of Vision November 2013, Vol.13, 11. doi:https://doi.org/10.1167/13.13.11
Abstract
Moving in natural environments is guided by looking where you are going. When entering a bend, car drivers direct their gaze toward the inside of the curve, in the region of the curve apex. This behavior has been analyzed in terms of both “tangent point models,” which posit that drivers are looking at the tangent point (TP), and “future path models,” which posit that drivers are visually targeting a point on the desired trajectory or future path (FP). The issue remains unresolved, partly because of the challenge of representing the changing visual projection of the trajectory in the driver's field of view. This paper reports a study of naturalistic driving in which the FP in the field of view is explicitly modeled, and the TP and reference points on the FP are simultaneously analyzed as potential gaze targets. We argue that the traditional area-of-interest methods commonly interpreted as supporting the TP hypothesis are problematic when the interest is in contrasting multiple gaze targets. This prompts a critical reassessment of the empirical case for the ubiquity of looking at the TP and for the generality of the TP hypothesis as an account of where people look when they steer. As a basis for representing driver gaze behavior, the FP is an equally valid point of departure. There are no overwhelming theoretical or empirical reasons for favoring the TP models over the FP models.

Introduction
As an agent moves through a cluttered environment at high speeds—such as running through a forest or negotiating a series of bends on a narrow road—the trajectory of motion is constrained on both sides by edges of available path and solid obstacles. Speed selection must be governed by limitations in the amount of free space ahead. Rapid and highly efficient visuomotor and decision-making mechanisms ensure that, most of the time, the challenging task of planning and executing a continuous trajectory through potentially hazardous environments is carried out with remarkable confidence and little apparent conscious planning. Eye movements provide a useful physiological means to study these mechanisms because, on the one hand, they can be measured reliably and accurately in naturalistic conditions and, on the other hand, overt visual behavior is closely coupled to the task in most naturalistic behaviors (Land, 2006; Tatler et al., 2011). Exactly how visual space is represented, however, and how the representation of self-motion is coordinated with attentional and motor systems remain to be elucidated (Tatler & Land, 2011). 
Research on car drivers' visual behavior during curve driving has shown that when they are approaching and turning into a bend, most drivers spontaneously direct their gaze toward the inside of the bend (Figure 1). This behavior is very robust and has been found in many on-road studies (Land & Lee, 1994; Underwood et al., 1999; Land & Tatler, 2001; Chattington et al., 2007; Kandil, Rotter, & Lappe, 2009, 2010; Lehtonen, Lappi, & Summala, 2012; Lehtonen, Lappi, Kotkanen, & Summala, 2013; Lappi & Lehtonen, 2013) and in simulators (Marple-Horvat et al., 2005, Authié & Mestre, 2011). 
Figure 1
 
Variability of gaze position in the road scene illustrated by images from a forward-looking scene camera. Images are taken 1 s apart as a driver is entering a blind, uphill, right-hand bend. The sequence illustrates the way drivers scan the road scene with small saccades directed toward the inside of the bend (cf. Underwood et al., 1999; Green, 2002; Kandil et al., 2009, 2010; Land & Furneaux, 1997). The green cross indicates the estimated gaze direction; the red circle (approximately a 3° radius) represents an AOI around the TP. Visual elements of the road scene are labeled by hand. Scale bar is about 3°. Top: Gaze is within 3° of the TP, and the driver is “TP oriented.” Middle: The fixation lands on the “road ahead.” Bottom: The driver is looking “further up the road.” Which, if any, of the images depict visual orientation toward a steering point is at present unresolved. The results that form the empirical basis for the TP hypothesis are expressed as the percentage of gaze falling into the TP AOI (the percentage of observations in which gaze has been found to be “TP oriented”).
The specific gaze target and the functional role of looking into the bend, however, remain contentious. There are several models that account for this behavioral pattern at a qualitative level but for different reasons. 
Land and Lee (1994) were the first to identify the tangent point (TP) on the road edge as a possibly significant gaze target in curve driving. The TP is the visual point on the inside lane edge or road shoulder where the (apparent) orientation of the curve is reversed. As well as introducing the concept of the TP, Land and Lee put forward a model that could explain this behavior in terms of a steering strategy (see also Wann & Land, 2000). They proposed that drivers use the visual direction of the TP relative to the locomotor axis—which can be recovered from the visual direction of gaze if fixation is maintained on the TP—to judge the curvature of the bend and thereby to determine the appropriate amount of steering input. 
This interpretation remains the most common account of where we look when we steer, that is, of the commonly observed pattern of car drivers looking through a bend. Indeed, in the visual science literature, visual orientation toward the inside of a bend (the apex of a curve) has become known as “TP orientation” (Land & Lee, 1994; Underwood et al., 1999; Land & Tatler, 2001; Kandil et al., 2009, 2010). Yet it remains to be established empirically whether it is the TP the drivers are looking at or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the TP. 
We will use the term steering point for any point that the driver uses to select the appropriate speed and steering action. The question at issue, then, is whether the TP is a steering point or the steering point, as the TP hypothesis is sometimes interpreted. 
The alternative future path (FP) models posit steering points on the forward-planned future trajectory and different control rules with respect to those points (Boer, 1996; Wann & Land, 2000; Wann & Swapp, 2000; Wann & Wilkie, 2004). However, the FP as a gaze target has so far remained relatively unexplored in field studies (it has been discussed in theoretical papers and investigated in simulators), and the TP and the FP have rarely been explicitly compared (an exception is the study by Kandil et al., 2010; we will return to the comparison between the present study and this study in the Discussion). One possible reason is the technical challenge of representing the FP in the gaze angle coordinate system in real-world data: This requires an estimate of the angular positions of the points in the visual field corresponding to the FP. 
By contrast, replicating the robust “TP orientation” result is relatively straightforward: An area of interest (AOI) centered on the TP is identified at each time point, and the relative frequency of gaze position observations falling within the AOI is computed. This is the traditional AOI method for studying “TP orientation” used in most of the studies that the TP orientation literature is based on.1 
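For concreteness, the traditional AOI gaze-catch measure reduces to a per-sample distance test against a time-varying reference point. The sketch below is our own illustration in Python (the language of our analysis scripts), not the analysis code of any of the cited studies; the function name and the synthetic data are hypothetical.

```python
import numpy as np

def aoi_catch_percentage(gaze, reference, radius_deg=3.0):
    """Percentage of gaze samples falling within `radius_deg` of a
    time-varying reference point (e.g., the tangent point).

    gaze, reference : (N, 2) arrays of (horizontal, vertical) angles in
    degrees, sampled at the same time points / route locations.
    """
    gaze = np.asarray(gaze, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Angular separation approximated as Euclidean distance in the
    # (eccentricity, pitch) plane -- adequate at small eccentricities.
    dist = np.linalg.norm(gaze - reference, axis=1)
    valid = np.isfinite(dist)  # drop samples lost to tracking
    return 100.0 * np.mean(dist[valid] <= radius_deg)

# Synthetic demo: gaze scattered ~2 degrees around a drifting TP position.
rng = np.random.default_rng(0)
tp = np.column_stack([np.linspace(-8, -4, 200), np.full(200, -2.0)])
gaze = tp + rng.normal(0, 2.0, size=tp.shape)
print(f"TP 3-degree AOI catch: {aoi_catch_percentage(gaze, tp):.1f}%")
```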
But there is a fundamental difficulty in using AOI methods for investigating visual guidance in car driving if the primary interest is in contrasting the TP and FP models. This is the geometrical contiguity of the TP and the FP and the resulting AOI overlap. The TP and FP steering point models assume different gaze targets and very different mechanisms, yet all models predict that drivers look into the bend and therefore orient gaze in approximately the same general direction. This makes it potentially very difficult to resolve the differences between the models empirically. Even if the driver is using a FP steering point, “TP orientation” will still result from the spatial contiguity of the future path and the TP in the visual scene and vice versa. In typical curves, all proposed steering points often fall within a few degrees of the TP, and in these conditions, gaze would be predicted to be “TP oriented” whatever steering point or steering points the driver is using. This problem has been raised in theoretical discussions (e.g., Wilkie, Wann, & Allison, 2008), but addressing it empirically (quantifying it) requires a model of the FP. Moreover, if gaze position observations are classified as directed at the TP whenever they are within a threshold distance of the TP, whether or not they are also within that distance of a reference point on the FP, all geometrically ambiguous cases are thereby rendered TP orientation “by default.” (This will occur if no representation of the FP is available and/or only TP AOI catch percentages are computed.) In the absence of “controlling” for the presence of FP reference points in or near the TP AOI, the common method of parameterizing gaze behavior would, in itself, a priori favor the TP. 
The gaze catch percentage within an AOI tells us what proportion of gaze falls within that AOI, but to draw conclusions in favor of one hypothesis (e.g., TP) against a rival hypothesis (e.g., FP), it is also required that gaze should not simultaneously be in one of the competing hypotheses' AOIs. Even more problematic is that a method that counts gaze position within the threshold of the TP as “TP fixation” and counts fixations in the FP region as “on-road” fixations only if gaze falls outside the TP AOI clearly favors the TP over the FP a priori. Of course, the situation is symmetrical: regardless of whether the subjects are actually looking at the TP or the FP or both, using only one AOI catch percentage would seriously bias the presentation of the results in favor of the chosen reference point. In practice, however, it is the TP, and not points on the FP, that has been used as the reference point in on-road studies. This means we must conclude that traditional AOI catch results are ambiguous and leave the matter of TP versus FP unresolved. 
Therefore, it is doubtful whether any study that has found “TP orientation” but has failed to also present data on gaze in relation to FP AOIs should be considered evidence for TP over FP. This is due to inherent methodological problems in the AOI methods used to establish the main body of TP results, problems which have been raised in discussion at a qualitative level but have not been addressed quantitatively (e.g., Wilkie, Kountouriotis, Merat, & Wann, 2010). 
Addressing these methodological challenges and providing empirical data on gaze in relation to FP AOIs in real driving is the point of departure for this study. The main argument of the paper can be outlined as follows: 
  1.  
    Measuring gaze position relative to a single AOI reference point is not only limited in what it can tell us about the driver's gaze strategies, but it is also potentially biased unless other reference points are also defined and compared. Therefore, we develop novel methods for representing real-world gaze data that rely on identifying geometrically several reference points on the FP in addition to the TP. This method allows us to define the FP in terms of angular coordinates in the driver's visual field at a given point in time. Points on the FP may then be defined as reference points for AOIs in the same way as the TP. This is a significant methodological advantage because it removes the need to judge visually whether a gaze fixation falls “on the road” or “at the TP.”
  2.  
    When multiple AOIs are identified, this potentially creates a problem because now AOI overlap may occur. Our geometric representation allows us, for the first time, to quantify the extent of this problem, heretofore raised only as a qualitative critique.
  3.  
    When AOI overlap occurs, some means of resolving it is needed. Using AOI thresholding, most gaze position observations no longer unambiguously belong to a TP AOI or a FP reference point AOI. (Reducing AOI size does not solve the problem if the required AOI size would be <1°. This is too small for noisy field data, and human gaze in locomotion may not even be that accurately controlled.) Here, we instead parameterize gaze position data by clustering them to the nearest reference point, thus assigning each gaze observation to a unique reference point, which allows direct comparisons to be made.
  4.  
    Even when the AOI overlap problem is avoided, the problem of contiguity remains: A reference point placed near the FP would catch some of the data even if the driver was, in reality, looking at the TP, and, likewise, the TP would catch some of the gaze even if the driver was, in reality, looking at a nearby point on the FP. Here, the burden of proof is on the proponent of the FP reference points (although mainly because of the historical precedence of the TP results). We address this by examining the dependence of the distribution of gaze in the vicinity of the TP on a geometric “curve height” variable. This way, we can take advantage of the fact that while the TP and FP are usually near each other, there is variation in the geometry. (If the driver is looking only at the TP, changes in where the FP is relative to the TP should not affect gaze behavior, whereas if the driver is looking at the FP, we may observe a dependency between gaze and FP location.)
The results are discussed in terms of methodological developments and theoretical implications for visual steering models. 
Methods
Reference points and the FP in the visual scene
The TP can be directly identified from an image of the road scene as a geometrical visual feature within the visual field, requiring only a mapping from the image coordinates to gaze angles. This is generally not the case for the FP, which, within the driver's visual field, is a one-dimensional curve with complex geometrical relationships to visually identifiable reference points and the road edges. The gaze targets to be investigated in this study are the TP (Land & Lee, 1994); a centerline reference point (CL) (road center at the TP level), which can be considered to be approximately equivalent to the “centerline TP” of Chattington et al. (2007)2; and three points on the FP, where the FP is represented by a Bezier curve fitted to geometrically identified reference points as explained below. The points on the FP are identified in geometric terms in the same way as the TP. These are (a) the occlusion point (OP) (Lehtonen et al., 2012), (b) a FP reference point adjacent to the TP (Boer, 1996; Wann & Land, 2000), and (c) another FP reference point beyond the TP. The last two we interpret to be potential steering points in the far zone (Salvucci & Gray, 2004; see Figure 2, middle, and Supplementary Figure S1). 
Figure 2
 
Schematic illustration of three frameworks for representing gaze position in the visual scene. Top: 3° TP AOI in the vehicle frame of reference (cf. Figure 1). Middle: Scene geometry decomposed into geometric reference points and curves representing lane boundaries and FP, as used in this study. The dotted blue line indicates FP, the visual projection of the trajectory the vehicle will follow. The origin coordinate is at FPRP1, but in this metric representation, any point (including points on the FP) can be selected as the origin. Bottom: Sequencing of vehicle trajectory in world coordinates into distinct phases (approach-entry-exit) by waypoints on the trajectory that are associated with specific dynamic events (turn-point, max. yaw rate point, exit point).
The reference points identified on the FP were defined as follows: 
  1.  
    The OP is defined as the furthermost part of the FP to which a continuous, unobstructed trajectory is visible, i.e., the point on the road where the FP of the vehicle disappears from view.3
  2.  
    Following Boer (1996), we define a FP reference point to lie next to the TP (the same vertical visual elevation as the TP) but some distance into the road (in the middle of the lane, operationalized here as one quarter of the distance from the TP to the outside road edge). We will refer to this reference point as Future Path Reference Point 1 (FPRP1).
  3.  
    From FPRP1 (lane center adjacent to the TP) the FP curves behind the TP and toward the OP. To investigate visual orientation on the FP in the region between FPRP1 and the OP, we define another reference point on the FP in this region. This Future Path Reference Point 2 (FPRP2) lies “beyond the TP” in the same horizontal visual direction as the TP but on the FP (and, therefore, always at a higher visual declination because of the projection geometry).
We call the part of the bend visible between FPRP1 (TP level) and the OP (the physical limit of sight distance) the far zone of the bend and the road between the car and the level of the TP the near zone. FPRP2 is thus a reference point on the FP in the far zone (bounded by FPRP1 and the OP). Here, we follow two-level steering models, which assume that steering is influenced by visual information from different regions of extrapersonal space: a near zone that is monitored for stabilizing feedback control and a far zone, which provides anticipatory preview information (cf. Donges, 1978; Land & Horwood, 1995; Salvucci & Gray, 2004). Both the TP and the FP models are concerned with the latter. 
With relatively experienced drivers, we expected the path and road edge immediately in front of the vehicle to be very rarely fixated4 and gaze to be generally concentrated further ahead, at or beyond the TP level, and so no near zone target points were defined. Note that we use the term future path in a technical sense. It is distinct from the physical vehicle trajectory in the world, corresponding instead to the projection into the visual scene of points that fall (will fall) on that trajectory. In contrast to a phenomenological description of “the scene” (Figure 1 and Figure 2, top), road edge, road center, FP, near zone, far zone, FPRP1, FPRP2, TP, and OP are all definable in terms of angular coordinates synchronized to eye-position angular coordinate measurements, which can be tracked over time and used to define AOIs. 
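To illustrate how such reference points can be computed once the scene is expressed in angular coordinates, the following sketch constructs FPRP1 from the TP and the outside road edge according to the operational definition above (one quarter of the way from the TP toward the outside edge, at the TP's elevation). This is a minimal sketch under our reading of that definition; the example coordinates are hypothetical.

```python
import numpy as np

def fprp1(tp, outer_edge_at_tp_level):
    """FPRP1: a future-path reference point adjacent to the TP (cf. Boer, 1996).

    Operationalized as in the text: same vertical elevation as the TP,
    displaced one quarter of the distance from the TP toward the outside
    road edge. Inputs are (horizontal, vertical) angles in degrees.
    """
    tp = np.asarray(tp, dtype=float)
    outer = np.asarray(outer_edge_at_tp_level, dtype=float)
    point = tp + 0.25 * (outer - tp)
    point[1] = tp[1]  # keep the TP's elevation by construction
    return point

# Hypothetical right-hand bend: TP at (6, -3) deg, outside edge at (-2, -3) deg.
print(fprp1((6.0, -3.0), (-2.0, -3.0)))  # -> [ 4. -3.]
```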
Sequencing the bends into discrete phases
In this paper, we are mainly concerned with the process of entering a bend, when the driver has to initiate and maintain an appropriate amount of steering. 
We use an operational definition of cornering phases (approach–entry-exit), which decomposes the physical geometry of the turn or the vehicle's physical trajectory through it into discrete segments in terms of the driver's control actions at different points of the vehicle's trajectory (Figure 2, bottom; see also Supplementary Figure S1). This decomposition can be used to sequence the geometric trajectory of the vehicle into discrete sequential elements of “the driving line” as realized by the driver's physical actions. 
The sequence begins with an approach phase in which the driver adjusts entry speed (reduces throttle and/or applies the brakes). The entry phase begins at a turn point, when the driver induces yaw motion in the vehicle by turning the steering wheel. Steering angle and vehicle yaw rate typically increase progressively throughout the entry phase. The entry phase ends when the absolute yaw rate reaches a local maximum at the point of maximum yaw rate.5 In very long corners, the entry phase may be followed by a steady cornering phase in which the steering wheel angle and yaw rate are held relatively constant with minor corrections. (This phase is not present in short turns, such as the ones investigated in this study.) The exit phase of a corner begins when the yaw rate begins to fall from a local maximum. This is when the driver first begins to unwind the steering (assuming no skid). The driver can be considered to have reached the exit point, and thus to have completed the entire cornering sequence, when the vehicle is no longer in yaw, assuming the turn leads to a straight. If, on the other hand, the turn leads immediately to another turn, the zero crossing of the yaw rate/steering wheel angle can be taken to mark the end of the exit phase of the previous turn and the entry phase of the next, and the exit point of the entire sequence is then the exit point of the final turn in the sequence. 
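A minimal sketch of how such a sequencing can be read off a yaw-rate trace is given below; the onset threshold and the synthetic signal are illustrative assumptions, not values used in the study.

```python
import numpy as np

def entry_phase_indices(yaw_rate, onset_threshold=2.0):
    """Locate the entry phase of a single turn from a vehicle yaw-rate trace.

    Simplified operationalization of the sequencing described in the text:
      turn point    -- |yaw rate| first exceeds `onset_threshold` (deg/s)
      max yaw rate  -- maximum of |yaw rate| after the turn point
    Returns (turn_point_index, max_yaw_rate_index).
    """
    yaw = np.abs(np.asarray(yaw_rate, dtype=float))
    above = np.flatnonzero(yaw > onset_threshold)
    if above.size == 0:
        raise ValueError("no turn detected in this segment")
    turn_point = int(above[0])
    max_yaw = turn_point + int(np.argmax(yaw[turn_point:]))
    return turn_point, max_yaw

# Synthetic example: yaw rate ramps up and back down over 10 s at 60 Hz.
t = np.linspace(0, 10, 600)
yaw_rate = 15 * np.sin(np.pi * t / 10) ** 2
print(entry_phase_indices(yaw_rate))
```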
Subjects
Ten subjects participated in the experiment (six male, four female, age range 24–42 years, M 30 years, SD 5 years). Participants were recruited through university email lists and some through personal contacts among students and university staff. Conditions for inclusion in the experiment were having a valid driver's license; normal, uncorrected vision (qualified to drive a car without correction); and sufficient driving experience (>20,000 km). All participants were naïve to the purpose of the study (TP orientation) and were given two cinema tickets as compensation for participation. All participants gave written informed consent, and the study was approved by the local ethics committee. Data for three subjects were lost due to power-supply failure, so results from seven participants are analyzed (six males, one female, M = 30 years, SD = 6 years). 
Route and procedure
Each participant was briefed on the procedure, after which he or she filled in an informed-consent form. After adjusting the driving position, an eye-tracker profile was created and calibrated. At this time, a questionnaire regarding the driver's background was filled in by the participants. The test road (Figure 3) was a 5.13-km-long stretch of a low-standard, two-lane rural road (5.5 m pavement width, painted centerline and edge lines) with very low traffic density. 
Figure 3
 
The route used in the study (Velskolantie, Espoo: N 60.273951, E 24.654733). Turns were identified from vehicle yaw rate data and assigned GPS coordinates and a running index (1–52 northbound and 1–52 southbound). The analyzed turns are highlighted in red from turn point to exit point, although only the entry phase was analyzed here because the TP disappears, and the TP of the next turn often appears, during the exit phase. The road was run in both south-north and north-south directions for a total of four runs in each direction.
All drives were carried out in daylight, but weather conditions varied (sometimes overcast or rainy). Participants drove the car to the test route, which was located 30 km from the campus, thus giving them time to familiarize themselves with the car. In addition to the participant who drove the car, a member of the research team acted as driving instructor, sitting in the front seat, giving route directions, and ensuring safety. The participants drove the test route four times at their own pace. The drivers were instructed to (a) drive as they normally would and (b) observe traffic laws and safety. In particular, they were explicitly instructed not to cut into the lane of oncoming traffic in left-hand turns, even if this was what they would do in normal driving. This was both a safety consideration (many of the bends were blind) and a way to reduce between-subject differences in driving lines. 
Equipment and calibration
The instrumented car was a model year 2007 Toyota Corolla 1.6 compact sedan with a manual transmission. The passenger side was equipped with brake pedals and extra mirrors for the driving instructor as well as a computer display that allowed him to monitor vehicle speed and the operation of the eye-tracker and data-logging systems. The car was equipped with a two-camera eye tracker (Smart Eye Pro version 5.5, Göteborg, Sweden) operating at 60 Hz, a forward-looking VGA scene camera, and a GPS receiver (BR-355 GPS, GlobalSat Inc., USA, without differential correction). Vehicle speed and the vehicle control signals (steering, throttle, and brakes) as well as vehicle yaw rate were recorded directly from the CAN bus. All signals were synchronized and time-stamped online and stored on a computer located in the rear luggage compartment. The calibration procedure and calibration information are given in the Supplementary Methods section. 
Data preparation
The data were segmented based on the GPS coordinates of the test route. To render different trials (drives) comparable, the data were then given a location-based representation. One trial, with no traffic or other “incidents,” was chosen as a reference. The vehicle trajectory in an allocentric x,y plane (GPS coordinate system) was computed by linearly interpolating the 1-Hz GPS signal. This interpolated trajectory was then used as the template for a route-location value with which the other signals could be associated, effectively assigning each observation a one-dimensional coordinate equivalent to travel distance along the vehicle trajectory. All participants' trials were then mapped onto this frame of reference by first matching the observed GPS values to their best-matching points on the reference trajectory and then projecting the intermediate observations onto the interpolated trajectory. All data preparation, visualization, and analysis were done using custom-made Python scripts, except for some statistical analyses, which were done with R. 
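The idea of the location-based representation can be illustrated with a short sketch: each observation gets a one-dimensional route location by matching it to an interpolated reference trajectory. The version below uses simple nearest-vertex matching and synthetic coordinates; it only approximates the matching-and-projection procedure described above.

```python
import numpy as np

def route_location(reference_xy, query_xy):
    """Assign each query point a route location (travel distance along a
    reference trajectory) by matching it to the nearest vertex of the
    interpolated reference path.

    reference_xy : (N, 2) planar coordinates of the reference trajectory.
    query_xy     : (M, 2) planar coordinates to be located on the route.
    """
    ref = np.asarray(reference_xy, dtype=float)
    q = np.asarray(query_xy, dtype=float)
    # Cumulative travel distance along the reference trajectory.
    seg = np.linalg.norm(np.diff(ref, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    # Nearest reference vertex for each query point.
    d = np.linalg.norm(q[:, None, :] - ref[None, :, :], axis=2)
    return s[np.argmin(d, axis=1)]

# Example: a straight 100 m reference path sampled every metre.
ref = np.column_stack([np.arange(101.0), np.zeros(101)])
obs = np.array([[10.4, 1.2], [55.1, -0.8]])
print(route_location(ref, obs))  # approximately [10., 55.]
```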
Turn-entry locations were identified from vehicle yaw rate. Data collected in the entry phases of 21 bends from the test route were selected for detailed analysis. The turns were chosen on the basis of visual scene geometry, taking into account the limitations of our representation of the FP (explained below). This meant that the turns needed to be simple, unconnected curves (rather than connected S-bends, in which the next curve already becomes visible during entry to the previous curve), because the Bézier curve representation (see below) cannot, at present, represent very complex road geometry very well. In S-bends, there are also multiple TPs visible at once, making it ambiguous where the TP AOI should be placed. Bends with pronounced dips and crests were excluded because the curve algorithm cannot, at its present state of development, handle such complexities. (The fit of the Bézier curves to road geometry can be judged from Supplementary Movies SM1 and SM2; the route locations of the entry points of the analyzed curves are given in the Appendix.) 
TPs, OPs, and road edges were manually identified from still video frames (from the SmartEye scene camera), yielding image coordinates of the features of interest. The road geometry derived from the scene images (in pixel coordinates) and the corresponding gaze-position measurements from the eye tracker were transformed into angular coordinates in the vehicle reference frame (zero angle is straight ahead). To account for lens distortion in the scene camera, the video frames were rectified with barrel distortion parameters estimated from a planar chessboard pattern, using OpenCV 2.4.5 (Bradski, 2000). After the rectification, the pixels (x,y) were mapped to gaze angles (eccentricity, pitch) using pinhole-model camera parameters. The video frames (Figure 4) show undistorted scene images (horizontal and vertical axes are angles). 
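A minimal sketch of this rectification and angle-mapping step, using OpenCV's pinhole camera model, is shown below. The camera matrix and distortion coefficients are placeholders, not the calibration values of the scene camera used in the study.

```python
import numpy as np
import cv2

def pixels_to_gaze_angles(points_px, camera_matrix, dist_coeffs):
    """Map scene-camera pixel coordinates to (horizontal, vertical) angles in
    degrees via undistortion and the pinhole model."""
    pts = np.asarray(points_px, dtype=np.float32).reshape(-1, 1, 2)
    # undistortPoints returns normalized image coordinates (x/z, y/z).
    norm = cv2.undistortPoints(pts, camera_matrix, dist_coeffs).reshape(-1, 2)
    horiz = np.degrees(np.arctan(norm[:, 0]))
    vert = -np.degrees(np.arctan(norm[:, 1]))  # image y grows downward
    return np.column_stack([horiz, vert])

# Placeholder intrinsics for a VGA camera: focal length ~500 px, centre (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])  # illustrative barrel distortion
print(pixels_to_gaze_angles([[320, 240], [420, 240]], K, dist))
```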
Figure 4
 
Example frames of Bézier curves fitted to reference points identified manually in the video frame. (The images are single frames from the video of Subject 1, run 1, northbound; see Supplementary Movies SM1 and SM2.) Horizontal and vertical scales are gaze-position angular coordinates in the vehicle frame of reference (zero is straight ahead). The TP is red. The FP estimate from the spline curve representation is the thin dotted line. The OP is black, and FPRP1 is green. The colored circles represent gaze position measurements at the same route location on successive runs. (The order is B-G-R-Y.)
These measures were then associated with the appropriate location coordinate, based on the time-stamp of the video frame, so that behavior in different runs through the same location could be compared. 
Bézier curves were fitted to the reference points on an image-by-image basis and resampled to distance. Points on the curves were then used to represent approximately the visible road and the FP. The representation of the inside road edge is a curve constructed from two quadratic Bézier curves that pass from the near point (NP, the nearest point on the road edge visible from the camera) to the TP and from the TP to the OP of the road edge (OPedge). The Bézier curves' control points and the OPedge lie on a line parallel to the line from the NP to the OP. The two control points' displacements from the TP are equal but opposite in direction and determined by the length of the span that would extend from the NP–TP span beyond the TP to a horizontal line at the level of the OPedge. Similarly, the outside road edge, centerline, and FP were represented by splines through their respective reference points. 
The purpose of this representation was to develop a method for identifying gaze falling on or very near the FP, which the last of the Bézier curves represents. A FP reference point beyond the TP, FPRP2, was defined in terms of the Bézier curve representing the FP as the point on the curve having the same horizontal coordinate as the TP. Note that the OP and FPRP1 also fall on the FP Bézier curve, by definition. 
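As an illustration of how the FP curve yields FPRP2, the sketch below evaluates a quadratic Bézier segment and picks the sampled point whose horizontal coordinate is closest to that of the TP. The control-point geometry is simplified relative to the construction described above, and all coordinates are hypothetical.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=200):
    """Sample B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2 at n parameter values."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def fprp2_from_fp_curve(fp_curve, tp):
    """FPRP2: the sampled point on the FP curve whose horizontal coordinate
    is closest to that of the TP."""
    fp_curve = np.asarray(fp_curve, dtype=float)
    idx = int(np.argmin(np.abs(fp_curve[:, 0] - tp[0])))
    return fp_curve[idx]

# Hypothetical right-hand bend: FP sampled from FPRP1 toward the OP.
fprp1_pt = np.array([4.0, -3.0])
op = np.array([7.0, 0.5])
control = np.array([7.0, -2.0])  # hypothetical control point
fp = quadratic_bezier(fprp1_pt, control, op)
tp = np.array([6.0, -3.0])
print(fprp2_from_fp_curve(fp, tp))
```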
Results
Counting gaze position hits to preselected reference point AOIs
We first set out to replicate TP orientation based on AOI catch. We computed the percentage of gaze within 3° of the TP during curve entry (Figure 5, left). As seen in the figure, in the right-hand turns, using an AOI size typical of on-road studies (3° radius), we did observe a consistent pattern of “TP orientation” during curve entry. In the left-hand turns, it was clear that the “centerline TP” (CL, Chattington et al., 2007) is the correct reference point for “TP orientation,” not the TP of the road edge. (The road often did not have a painted centerline from which to identify a TP geometrically, so we used the road center at the level of the TP as the reference point and therefore use the term CL instead.) 
Figure 5
 
“TP orientation” (left) and “FP orientation” (right), quantified according to the traditional “AOI gaze catch” method. Left: Bar plot showing gaze catch in 3° radius AOIs around lane edge reference points when entering a bend. Left-hand and right-hand bends analyzed separately. Data aggregated across all subjects. Right: Using a representation of the FP reference points, we can perform a similar analysis for reference points on the FP. The plot shows gaze catch in 3° radius AOIs around three points used in this study: FPRP1, FPRP2, and OP. FPRP1 and FPRP2 are reference points on the FP determined in relation to the TP (see Methods for definitions of the reference points).
Had TP orientation been our sole concern, we could have been content to assert that, once again, “TP orientation” is exhibited in on-road curve driving. We might have added the general observation that, some of the time, gaze is also directed “further up the road” (cf. Underwood et al., 1999; Kandil et al., 2009, 2010). However, our FP representation allows us to compute gaze catch percentages for AOIs placed at different points on the FP (Figure 5, right). Changing the analysis from TP AOI to FP AOIs, we thus find substantial “FP orientation” as well! 
There are, however, serious difficulties in interpreting these kinds of bare AOI results. “FP orientation” can sometimes be merely a spurious result, arising from the proximity of the TP (AOI overlap), but equally well, “TP orientation” may be a spurious effect of the proximity of the FP. That AOI overlap is indeed present can be immediately deduced from the fact that the catch percentages for both left-hand and right-hand turns sum to more than 100%, which is only possible if many observations fall into multiple AOIs. This means that AOI overlap is present, and thus some proportion of gaze position measurements are simultaneously categorized as TP-oriented and, at the same time, FP-oriented
AOI overlap
Table 1 indicates AOI overlap in terms of the percentage of gaze observations assigned to each AOI that are also assigned to at least one other AOI. These percentages indicate the magnitude of the overlap problem for each of the AOIs used: how large a proportion of the observations in the AOI overlap with equally valid alternative reference points' AOIs. We see that, already for a modest 3° AOI, as used in many studies, the lane edge AOIs TP and CL overlap with future path AOIs (see Supplementary Table T1 for average distances between these reference points). Orientation toward the TP or CL is hardly ever “pure,” in the sense that gaze falling within threshold of the TP usually also falls within threshold of at least one of the reference points on the FP. That is, for 94% of gaze observations in the 3° radius CL AOI and 85% of gaze observations in the 3° radius TP AOI, it would be equally valid to say they are “FP reference point–oriented” with a 3° AOI threshold. 
Table 1
 
AOI overlap for 3° radius AOIs centered on different reference points. Note: The percentages indicate the relative frequency of cases in which a gaze position observation in the reference point's AOI also falls into the AOI of at least one of the other reference points.
Left Overlap Right Overlap
CL 94% TP 85%
TP 17% CL 63%
FPRP1 56% FPRP1 75%
FPRP2 79% FPRP2 63%
OP 19% OP 53%
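For clarity, the overlap measure reported in Table 1 can be expressed as a short computation: of the gaze samples inside one reference point's AOI, the share that also falls inside at least one other reference point's AOI. The sketch below is our own illustration with synthetic data, not the analysis code behind the table.

```python
import numpy as np

def aoi_overlap_share(gaze, ref_points, target, radius_deg=3.0):
    """Share of samples in the `target` AOI that also fall in another AOI.

    gaze       : (N, 2) gaze angles in degrees.
    ref_points : dict mapping name -> (N, 2) time-varying reference positions.
    """
    gaze = np.asarray(gaze, dtype=float)
    in_aoi = {name: np.linalg.norm(gaze - np.asarray(p), axis=1) <= radius_deg
              for name, p in ref_points.items()}
    hits = in_aoi[target]
    in_other = np.zeros(len(gaze), dtype=bool)
    for name, mask in in_aoi.items():
        if name != target:
            in_other |= mask
    return 100.0 * np.mean(in_other[hits]) if hits.any() else float("nan")

# Synthetic demo: two reference points 2 degrees apart, gaze around the first.
rng = np.random.default_rng(1)
n = 1000
refs = {"TP": np.tile([6.0, -3.0], (n, 1)), "FPRP1": np.tile([4.0, -3.0], (n, 1))}
gaze = refs["TP"] + rng.normal(0, 1.5, size=(n, 2))
print(f"TP AOI hits also in FPRP1 AOI: {aoi_overlap_share(gaze, refs, 'TP'):.0f}%")
```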
Clustering gaze to reference points
If both TP and FP AOIs are used, it is essential that the problem of overlap be addressed. How should one assign a gaze observation to a reference point when it is very close to many, without favoring one or the other? Using a very small AOI is not the answer, because the reference points will typically be only a few degrees apart (see Supplementary Tables S1 and S3), necessitating AOIs that are too small for the noise in position measurement and that may be smaller than the positional accuracy of the visual system in placing gaze in the road scene. 
Instead of assigning a gaze observation as “oriented” toward a reference point if it falls within less than a predefined angular threshold from one or more reference points, we chose to assign a gaze observation to a reference point based on which reference point it is closest to. That is, each observed gaze position value was compared to the coinciding reference point position (TP, reference points on the FP spline) and categorized as “oriented” to its nearest reference point. The advantage of this clustering method is that one does not need to determine an a priori AOI threshold size. Also, there is no AOI overlap, and the catch percentages for all points sum to 100%, which makes comparing them more straightforward. 
In order to prevent spurious hits from glances into the scenery or at the speedometer from being assigned to reference points, we first used a 6° radius threshold to exclude data when gaze position was >6° from the TP (CL in left-hand bends), FPRP2, and OP (Figure 6, top). Note that the size and shape of the “window” from which gaze observations are clustered changes dynamically depending on the scene geometry. A 6° window parameter was selected on the basis that it should give good coverage of the road in the apex region. The remaining gaze position data were then clustered into TP, FPRP2, and OP by assigning each observation to the reference point closest to gaze. Figure 6 (bottom) gives these clustering catch percentages. Table 2 gives individual subjects' data in tabular form. 
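A minimal sketch of this windowed, nearest-reference-point clustering is given below (our own illustration; the reference point positions and the synthetic data are placeholders).

```python
import numpy as np

def cluster_to_nearest(gaze, ref_points, window_deg=6.0):
    """Assign each gaze sample to its nearest reference point, discarding
    samples farther than `window_deg` from all of them.

    gaze       : (N, 2) gaze angles in degrees.
    ref_points : dict mapping name -> (N, 2) time-varying reference positions.
    Returns an array of labels ('outside' for discarded samples).
    """
    gaze = np.asarray(gaze, dtype=float)
    names = list(ref_points)
    # Distance of every sample to every reference point: shape (N, K).
    dists = np.stack([np.linalg.norm(gaze - np.asarray(ref_points[n]), axis=1)
                      for n in names], axis=1)
    labels = np.array(names, dtype=object)[np.argmin(dists, axis=1)]
    labels[np.min(dists, axis=1) > window_deg] = "outside"
    return labels

# Synthetic demo: 500 samples scattered around three fixed reference points.
rng = np.random.default_rng(2)
n = 500
refs = {"TP": np.tile([6.0, -3.0], (n, 1)),
        "FPRP2": np.tile([6.0, -1.0], (n, 1)),
        "OP": np.tile([7.0, 0.5], (n, 1))}
gaze = refs["FPRP2"] + rng.normal(0, 2.0, size=(n, 2))
labels = cluster_to_nearest(gaze, refs)
inside = labels != "outside"
print({name: round(100 * np.mean(labels[inside] == name), 1) for name in refs})
```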
Figure 6
 
Top: Between-subjects mean gaze catch shares of reference points when clustering by nearest point (CL for left-hand bends, TP of right-hand bends, FPRP2 or OP; clustering observations falling into 6° window). Bottom: Diagram explaining the 6° window used to select data for clustering. Gaze observations not falling within 6° of any of the three reference points CL/TP, FPRP2, or OP were classified as “outside” of the window. Gaze observations falling within the window were clustered into reference points by assigning each observation to its nearest reference point. Note that the shape and size of the “window” will change as the reference points' relative locations in the road scene vary according to curve geometry.
Table 2
 
Gaze catch percentage of reference points when clustering by nearest point. Notes: CL for left-hand bends, TP of right-hand bends, FPRP2 or OP; clustering observations falling into 6° window. Individual subjects' averages by curve direction.
Left TP FPRP2 OP Right TP FPRP2 OP
S1 4% 47% 34% S1 20% 50% 26%
S2 14% 70% 2% S2 8% 68% 18%
S3 20% 60% 10% S3 18% 62% 10%
S4 45% 12% 17% S4 43% 16% 28%
S5 4% 34% 33% S5 3% 31% 38%
S6 34% 33% 9% S6 49% 36% 8%
S7 13% 25% 40% S7 59% 14% 5%
Table 3 gives the median distance of gaze position from the best-catch reference point to which each gaze observation was assigned, showing that the “hits” come from within a few degrees of the reference points. (This applies to all three reference points, not only the TP.) 
Table 3
 
Median gaze distance of all cluster-assigned gaze-position observations from their respective reference points (degrees). Individual subjects' averages by curve direction.
Left CL FPRP2 OP Right TP FPRP2 OP
S1 2.6° 2.5° 2.9° S1 1.7° 1.6° 1.6°
S2 2.0° 2.3° 5.0° S2 2.5° 2.1° 2.5°
S3 2.2° 2.3° 3.8° S3 2.9° 2.3° 2.5°
S4 2.8° 3.5° 3.8° S4 2.1° 2.0° 2.1°
S5 1.4° 3.2° 3.4° S5 3.5° 2.6° 2.3°
S6 2.4° 3.3° 4.3° S6 2.0° 1.7° 2.4°
S7 2.8° 3.1° 2.7° S7 3.1° 1.9° 2.6°
Dependence of gaze within the TP region on geometric projection of the road
If the driver is looking at the FP, then changes in the position of the FP relative to the TP should be reflected in changes in gaze behavior. This is something that TP orientation would not predict, and so we can use this prediction to check that the clustering result is “real,” rather than the FP reference points merely receiving hits because of random variation around the TP. (Here, the burden of proof is perceived to be on the proponent of the FP reference points because of the historical precedence of the TP models.) 
We addressed this by examining whether the distribution of gaze within a 6° AOI around the TP depends on a scene parameter: “curve height.” This is the vertical angular subtense of the far zone, i.e., the vertical distance between the TP/FPRP1 level and the OP (Figure 7, top). This measure has the advantage that, unlike AOI catch methods, it relies on the variation in the angular distance between reference points. The observed average gaze position above the TP could be affected by measurement bias, but such bias should not affect the co-variation of gaze position and curve height. (When gaze position is regressed against h, measurement bias can affect only the intercept, not the slope, of the regressor.) In addition, variation of the elevation of the tangent point in the vehicle frame of reference is controlled for, because the variables are represented in a coordinate system in which the tangent point is at the origin. 
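The dependence can be quantified, for example, with a rank correlation and a robust slope estimate, as in the sketch below. The Theil-Sen estimator and the synthetic data are our own illustrative choices; we do not assume they match the exact robust-regression method used for Figure 7.

```python
import numpy as np
from scipy import stats

def gaze_vs_curve_height(gaze_vert_rel_tp, curve_height_h):
    """Relate vertical displacement of gaze from the TP to curve height h.

    Returns the Spearman rank correlation and a robust (Theil-Sen) slope;
    a constant measurement bias in gaze position shifts only the intercept.
    """
    y = np.asarray(gaze_vert_rel_tp, dtype=float)
    x = np.asarray(curve_height_h, dtype=float)
    rho, p = stats.spearmanr(x, y)
    slope, intercept, lo, hi = stats.theilslopes(y, x)
    return {"spearman_rho": rho, "p": p, "slope": slope, "intercept": intercept}

# Synthetic example: gaze rises ~0.5 deg per degree of curve height, plus noise.
rng = np.random.default_rng(3)
h = rng.uniform(1.0, 6.0, 300)
dy = 0.5 * h + rng.normal(0, 1.0, 300)
print(gaze_vs_curve_height(dy, h))
```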
Figure 7
 
Dependence between gaze–TP vertical displacement and curve height. Top: Parameter h is the vertical angular subtense of the visible road measured from the TP level. Circles indicate 6° AOIs around the CL (left) and TP (right). The schematic example frames illustrate relatively extreme values of h (on the left, h is about 1°; on the right, about 6°). Bottom: Robust regression fits of each individual participant's gaze observations within 6° AOIs centered on the TP (CL in left-hand turns) as a function of h. A positive dependence indicates that when the view into the curve “opens up” (or in steeper uphill curves), gaze inside the TP AOI also rises relative to the elevation of the TP.
Figure 7 (bottom; see also Table 4) shows that, in almost all cases, the regressor has a positive slope, indicating that the mass of gaze distribution is higher in conditions in which h has a larger value. When analyzing statistical reliability by curve direction, we see that for the right-hand direction the effect is statistically significant at the <0.05 level (two-tailed binomial test p = 0.016); for left-hand turns, the trend does not reach significance (p = 0.125) because of the divergent behavior of participant three.6 
Table 4
 
Individual subjects' Spearman correlations between vertical displacement of gaze position from the TP (in left-hand turns from the centerline TP) and curve height.
Left Right
S1 0.06 0.48
S2 0.15 0.36
S3 −0.15 0.26
S4 0.21 0.32
S5 0.31 0.15
S6 0.10 0.30
S7 0.03 0.13
To visualize the time-behavior of the gaze distributions, a movie representation was deemed most suitable (see Supplementary Methods for more discussion). Each frame of Movie 1 can be considered a heat map of gaze distribution data sampled at a specific route location (see Figure 8; note that whereas Movie 1 and Figure 8 show the data arranged in temporal sequence, Figure 7 shows the data arranged by values of h). The phenomena of interest in Movie 1 are (a) how the gaze concentrates around the apex region in the scene (there is very little “search,” the gaze focusing systematically on the task-relevant region) and (b) how the gaze observations do not form a single, symmetrical distribution, as one would expect to find with random (Gaussian) noise around the TP (or the centerline TP in left-hand curves). The gaze distribution elongates and contracts depending on the angular subtense of the road as the view of the far zone opens up and contracts (i.e., as the angular distances between reference points increase and decrease). 
Figure 8
 
Heat map visualizations of the distribution of all participants' raw data overlaid on the wire frame representation: sample frames from Movie 1. Gaze is seen to concentrate in the far zone (the region between FPRP1, green, and OP, black).
Discussion
Relationship to previous research
The FP as a potential alternative gaze target has remained relatively unexplored in field studies focused on the TP. This is in spite of several theoretical papers raising it as an alternative or a complementary steering point location (Boer, 1996; Wann & Land, 2000; Wann & Swapp, 2000; Wann & Wilkie, 2004). One possible reason is the poor suitability of traditional AOI methods for simultaneously identifying and comparing the different models, together with the relative simplicity of replicating the TP result. Another reason is the methodological challenge of developing a parametric representation of the FP as a geometric entity in the visual scene coordinate system, which is clearly required to define AOIs relative to FP reference points (and which, to our knowledge, is done for the first time in the present paper). 
Underwood et al. (1999) describe drivers “checking the road ahead,” and Kandil et al. (2010) report gazes to AOIs on “the road” and “the end of the visible road” (in addition to looking at the TP). In addition to being somewhat qualitative, this type of phenomenological characterization of eye movements potentially hides a more principled methodological problem created by the contiguity of the TP and the FP: AOI overlap. If gaze is assigned into an AOI by a threshold distance criterion, the ambiguity arising from gaze position being simultaneously within threshold of the TP and the FP needs to be resolved somehow. In the absence of a FP representation, interpreting gaze within threshold distance from the TP as “TP fixation” by default, and observing a fixation to “the road” only when it does not fall within the TP AOI, biases the analysis in favor of the TP. Of course, the extent of this problem depends on the projection geometry, specifically, how large the angular distances between the TP and FP reference points are relative to the AOI size. Unfortunately, previous reports on TP orientation do not report these parameters or the resulting AOI overlap (as, again, this requires a representation of the FP in angular coordinates). 
The methodological conclusion we argue for is that any study purporting to show TP orientation, rather than merely orientation in the general region of the curve apex (TP or FP), needs to explicitly model the FP. This is the first active measure that needs to be taken to avoid the problems simple AOI hit-count methods encounter with the small relative angular displacement of the TP and the FP. These problems are severe enough to cast doubt on the ubiquity of “TP orientation,” because previous on-road studies reporting this behavior have used fixed-threshold AOI counting methods and have not systematically quantified, and thus controlled for, “FP orientation.” Addressing these problems will move forward the state of the art of research into visual behavior in driving and, ultimately, lead to a more detailed understanding of driver gaze behavior, not just “orientation” toward a single AOI (or even several). The kinds of analysis tools developed here may be useful because they enable quantitative investigation of FP-oriented gaze in real driving. 
To our knowledge, only two previous studies have explicitly addressed the FP versus TP debate with real on-road data:7 Kandil et al. (2009) and Lappi, Pekkanen, and Itkonen (2013). 
Kandil et al. (2009) compared six experienced drivers' visual behavior and steering while driving on on-ramps and off-ramps in a cloverleaf motorway junction under different gaze instructions. In the first phase of the experiment, the participants were instructed to drive naturally, and in the second phase, they were instructed to either look at the TP or use “gaze sampling.” Specifically, the instruction in the TP condition was to maintain permanent fixation on the TP and, in the gaze-sampling condition, to “successively look for and keep fixating for several seconds at points on the future path of the car.” That study reports smoother driving in the TP condition compared to the gaze-sampling condition, which the authors interpret as evidence for the TP hypothesis (i.e., that in normal conditions, the TP is the steering point) and against gaze polling. 
There are, however, concerns. First of all, the “gaze-sampling” strategy was highly artificial and does not reflect the normal optokinetic pursuit pattern in driving (Lappi & Lehtonen, 2013; Lappi et al., 2013), in which the slow phases of pursuit (“fixations to the road”) last only a few hundred milliseconds. Thus, less “smooth” driving should not be interpreted to mean that pursuit movements “polling” the FP do not occur in normal driving, as the result might simply have been due to the difficulty of the abnormal gaze task. (While it may be said that the TP condition was equally artificial, that instruction would not necessitate the unnaturally long pursuit sweeps from very high eccentricity toward the locomotor axis that are needed in order to fixate a stationary point for several seconds during vehicle rotation.) Moreover, angular distances between the TP and points “on the road” were not reported, and it is not clear from the methods presented whether or how the AOI overlap problem was addressed.8 Thus, to what extent FP orientation does or does not occur in normal driving cannot be inferred from the results, except that glances to “the road” were present. 
The Lappi et al. (2013) study is predicated on the assumption that optokinetic pursuit eye movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. The optokinetic pattern in car drivers' gaze when looking at the curve apex region was recently demonstrated in real driving by Lappi and Lehtonen (2013) (see Authié & Mestre, 2011, for related simulator results). In that study, it was found that (a) the optokinetic pursuit has a horizontal slow phase component in the direction opposite to the bend; (b) the magnitude of the horizontal component of the pursuit movements is approximately half the vehicle yaw rate; and (c) gaze position is typically above the TP and displays a large horizontal variation in relation to it, being spread into the far zone along the FP. Taken together, these findings are most consistent with the assumption that the drivers are targeting fixed target points on the road in the far zone and tracking them with pursuit eye movements, as predicted by gaze-polling models (Kim & Turvey, 1999; Wann & Swapp, 2000). In contrast, few fixations to the TP, and no overall patterns of optokinesis consistent with regional optic flow at the TP location, were found. 
Taken together, that result and the results in the present paper show that, using multiple, complementary methods for characterizing gaze behavior (gaze position, dependence of the gaze distribution on curve visual geometry, optokinetic pursuit movement characteristics), converging evidence of orientation toward the FP far zone emerges. This picture is therefore somewhat different from the traditional AOI-based “TP orientation,” and we find no compelling argument for assuming by default that drivers' gaze is so often focused in the region of the curve apex because of the presence of the TP in that part of the visual scene. 
The empirical case of TP versus FP thus remains open, notwithstanding the number of studies that have shown TP orientation. Even high percentages of gaze position within a TP AOI cannot be taken as empirical evidence against FP models if AOI overlap is not controlled for and reported. Methodologically, investigating TP and FP targets simultaneously, and thereby addressing the problem of FP/TP contiguity and AOI overlap, gives an example of the type of analysis required. Empirically, it produces a somewhat different picture of driver gaze behavior than would be gleaned from a review of the TP literature.
When considered in isolation, an AOI at the TP catches a substantial share of gaze, but this holds equally well for the FP. In other words, working from the FP assumption, we could explain the pattern of results equally well as, or better than, working from the TP hypothesis. Furthermore, gaze position covaries with the FP even when variation in the position of the TP is controlled.
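To make the overlap problem concrete, a minimal Python sketch is given below. It computes, for gaze samples expressed in angular coordinates, the gaze catch of circular AOIs centered on the TP and on an FP reference point, together with the share of samples counted by both AOIs. The array names, the simulated data, and the 3° radius are illustrative assumptions; this is not the analysis code used in the study.

    import numpy as np

    def aoi_catch_and_overlap(gaze, tp, fp, radius_deg=3.0):
        """Gaze catch of circular AOIs around two reference points, plus their overlap.

        gaze, tp, fp: (N, 2) arrays of horizontal/vertical angular coordinates
        (degrees, vehicle frame), one row per video frame. Returns percentages.
        """
        gaze, tp, fp = (np.asarray(a, dtype=float) for a in (gaze, tp, fp))
        in_tp = np.linalg.norm(gaze - tp, axis=1) <= radius_deg
        in_fp = np.linalg.norm(gaze - fp, axis=1) <= radius_deg
        return {
            "tp_catch_%": 100.0 * in_tp.mean(),
            "fp_catch_%": 100.0 * in_fp.mean(),
            "overlap_%": 100.0 * (in_tp & in_fp).mean(),  # samples claimed by both AOIs
        }

    # Made-up example: gaze clustered on an FP reference point lying 2 deg above the TP.
    rng = np.random.default_rng(0)
    tp = np.zeros((1000, 2))
    fp = tp + np.array([0.0, 2.0])
    gaze = fp + rng.normal(scale=1.0, size=(1000, 2))
    print(aoi_catch_and_overlap(gaze, tp, fp))

With the two reference points only 2° apart, a large share of samples is counted by both AOIs, which is exactly the ambiguity that single-AOI gaze catch figures leave unreported.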
Implications for steering models
The FP, or points on it, is an equally valid point of departure for analysis of the visual control of steering as the TP, both from a theoretical perspective and based on available data. The TP is not the sole, or perhaps even the primary, target of steering-related fixations on the road, most of which may be yet to be discovered or defined.9
We do not wish to argue that our methods or results will “resolve” the TP versus FP debate in favor of one model (FP) over the other (TP). That is, none of our analyses are intended to determine whether the TP or the FP is the “real” target that drivers always look at. In fact, we consider this to be something of a nonissue as there is nothing inherently incompatible in the TP and FP models. 
Indeed, it seems to us that the flexibility of human visual behavior—as exhibited in scanning the visual scene for useful visual information to enable fast and accurate locomotor behavior—is perhaps underestimated by models that posit one “best” point in the visual scene (the TP or a steering point on the FP). Such a simple picture also fails to take into account the presence of multiple potential gaze targets for functional fixations in that direction. 
Rather than searching for the steering point, it would perhaps be more realistic to assume multiple reference points, which the driver scans in some order of task priority and/or personal preference in order to update the visuomotor and memory representations used to determine appropriate speed and steering. If there are indeed multiple targets, then the interesting questions become what the targets are, when they are targeted, and why. A more integrative approach to interpreting scan patterns during driving might be to assume the models represent different control mechanisms operating in parallel. The only real conflict to resolve would then be the one the driver faces: when to look at the TP and when to look at the FP.
Supplementary Materials
Acknowledgments
We thank Ms. Mona Moisala, Ms. Siiri Helokunnas, Mr. Mikko Heimola, and Mr. Eero Lumme for contributing to the collection of data and Mr. Juha Vepsäläinen for contributing video annotation software. 
Commercial relationships: none. 
Corresponding author: Otto Lappi. 
Email: otto.lappi@helsinki.fi. 
Address: Cognitive Science, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland. 
Traffic Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland. 
References
Authié C. N. Mestre D. R. (2011). Optokinetic nystagmus is elicited by curvilinear optic flow during high speed curve driving. Vision Research, 51, 1791–1800. [CrossRef] [PubMed]
Boer E. R. (1996). Tangent point oriented curve negotiation. In Proceedings of the 1996 IEEE Intelligent Vehicles Symposium (pp. 7–12), doi:10.1109/IVS.1996.566341.
Bradski G. (2000). The OpenCV Library. Dr. Dobb's Journal of Software Tools. November 01, 2000. http://www.drdobbs.com/open-source/the-opencv-library/184404319.
Chattington M. Wilson M. (2007). Eye–steering coordination in natural driving. Experimental Brain Research, 180, 1–14. [CrossRef] [PubMed]
Donges E. (1978). A two-level model of driver steering behavior. Human Factors, 20, 691–707.
Green P. (2002). Where do drivers look while driving (and for how long)? In Dewar R. E. Olson P. L. (Eds.), Human factors in traffic safety (pp. 77–110). Tucson, AZ: Lawyers & Judges Publishing Company.
Kandil F. Rotter A. Lappe M. (2009). Driving is smoother and more stable when using the tangent point. Journal of Vision, 9 (1): 11, 1–11, http://www.journalofvision.org/content/9/1/11, doi:10.1167/9.1.11. [PubMed] [Article] [CrossRef] [PubMed]
Kandil F. Rotter A. Lappe M. (2010). Car drivers attend to different gaze targets when negotiating closed vs. open bends. Journal of Vision, 10 (4): 24, 1–11, http://www.journalofvision.org/content/10/4/24, doi:10.1167/10.4.24. [PubMed] [Article] [CrossRef] [PubMed]
Kim N.-G. Turvey M. T. (1999). Eye movements and a rule for perceiving direction of heading. Ecological Psychology, 11, 233–246. [CrossRef]
Land M. (2006). Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research, 25, 296–324. [CrossRef] [PubMed]
Land M. Furneaux S. (1997). The knowledge base of the oculomotor system. Philosophical Transactions of the Royal Society London B, 352, 1231–1239. [CrossRef]
Land M. Horwood J. (1995). Which parts of the road guide steering? Nature, 377, 339–340. [CrossRef] [PubMed]
Land M. Lee D. (1994). Where we look when we steer. Nature, 369, 742–744. [CrossRef] [PubMed]
Land M. Tatler B. (2001). Steering with the head: The visual strategy of a racing driver. Current Biology, 11, 1215–1220. [CrossRef] [PubMed]
Lappi O. Lehtonen E. (2013). Eye-movements in real curve driving: Pursuit-like optokinesis in vehicle frame of reference, stability in an allocentric reference coordinate system. Journal of Eye Movement Research, 6, 1–13.
Lappi O. Pekkanen J. Itkonen T. (2013). Pursuit eye-movements in curve driving differentiate between future path and tangent point models. PLoS ONE, 8 (7), e68326. [CrossRef] [PubMed]
Lehtonen E. Lappi O. Kotkanen H. Summala H. (2013). Look-ahead fixations in curve driving. Ergonomics, 56, 34–44. [CrossRef] [PubMed]
Lehtonen E. Lappi O. Summala H. (2012). Anticipatory eye movements when approaching a curve on a rural road depend on working memory load. Transportation Research Part F, 15, 369–377. [CrossRef]
Marple-Horvat D. Chattington M. Anglesea M. Ashford D. G. Wilson M. Keil D. (2005). Prevention of coordinated eye movements and steering impairs driving performance. Experimental Brain Research, 163, 411–420. [CrossRef] [PubMed]
Robertshaw K. Wilkie R. (2008). Does gaze influence steering around a bend? Journal of Vision, 8 (4): 18, 1–13, http://www.journalofvision.org/content/8/4/18, doi:10.1167/8.4.18. [PubMed] [Article] [CrossRef] [PubMed]
Salvucci D. Gray R. (2004). A two-point visual control model of steering. Perception, 33, 1233–1248. [CrossRef] [PubMed]
Summala H. Nieminen T. Punto M. (1996). Maintaining lane position with peripheral vision during in-vehicle tasks. Human Factors, 38, 442–451. [CrossRef]
Tatler B. W. Hayhoe M. Ballard D. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision, 11 (5): 5, 1–23, http://www.journalofvision.org/content/11/5/5, doi:10.1167/11.5.5. [PubMed] [Article] [CrossRef] [PubMed]
Tatler B. W. Land M. F. (2011). Vision and the representation of the surroundings in spatial memory. Philosophical Transactions of the Royal Society B, 366, 596–610. [CrossRef]
Underwood G. Chapman P. Crundall D. Cooper S. Wallén R. (1999). The visual control of steering and driving: Where do we look when negotiating curves? In Gale A. G. Brown I. D. Haslegrave C. M. Taylor S. P. (Eds.), Vision in vehicles VII. (pp. 245–252). Amsterdam: Elsevier.
Wann J. Land M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Sciences, 4, 319–324. [CrossRef] [PubMed]
Wann J. Swapp D. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647. [CrossRef]
Wann J. Wilkie R. (2004). How do we control high speed steering? In Vaina L. M. Beardsley S. A. Rushton S. K. (Eds.), Optic flow and beyond (pp. 371–389). Norwell, MA: Kluwer Academic Publishers.
Wilkie R. Kountouriotis G. K. Merat N. Wann J. P. (2010). Using vision to control locomotion: Looking where you want to go. Experimental Brain Research, 204, 539–547. [CrossRef] [PubMed]
Wilkie R. M. Wann J. P. (2003). Eye-movements aid the control of locomotion. Journal of Vision, 3 (11): 3, 677–684, http://www.journalofvision.org/content/3/11/3, doi:10.1167/3.11.3. [PubMed] [Article] [CrossRef]
Wilkie R. Wann J. P. Allison R. (2008). Active gaze, visual look-ahead, and locomotor control. Journal of Experimental Psychology: Human Perception and Performance, 34, 1150–1164. [CrossRef] [PubMed]
Footnotes
1  The basic result established by Land and Lee (1994) and replicated in many studies is that an AOI centered around the TP captures a "substantial" number of gaze observations. The reported gaze catch percentages have been of the order of 50%, with the exact number depending on road geometry, AOI size, and, potentially, other factors, such as driver experience or speed (the effect of which is not currently well understood). AOI sizes have ranged from 2° to 6°, depending on the study, with gaze catch values as high as 85% for a 2° AOI (Kandil et al., 2010) and as low as 48% for a 6° AOI (Underwood et al., 1999).
2  Following Chattington et al. (2007), we assumed that in left-hand turns the CL would act as the "TP" (rather than the driver making a saccade across the opposing lane to the road-edge TP). This is notwithstanding the fact that the centerline is not always marked with paint on the study road, in which case it is not as salient a visual feature. It nevertheless seems reasonable to assume that experienced drivers can determine the boundaries of their own lane even without the help of markings (markings are not mandatory for successful steering, for example, on gravel-surfaced roads), and inspection of the eye-movement visualizations suggests that drivers rarely look across the opposing lane at the TP on the road edge.
3  For the purpose of identifying a unique occlusion point in all conditions, a sharper definition than the one used in Lehtonen et al. (2012) is called for. There, the occlusion point was defined as the point where “the road” disappears from view. Because the road has width, there is no such unique point. For example, there is one occlusion point on the FP as well as two occlusion points on the road edges and one on the road centerline. When there are good sight lines, these points converge, but on a road with many blind turns and especially crests, they sometimes diverge quite considerably. (Imagine the road edges moving apart when approaching a crest.)
4  Summala, Nieminen, and Punto (1996) showed that experienced drivers can use peripheral vision to monitor the road edge—albeit this was on a straight road.
5  This is one operational definition of the curve “peak” or apex point. Other possibilities would be to use minimum path curvature or, if the driver cuts to the inside, minimum lateral distance from the lane edge.
6  Inspection of the gaze behavior shows that this (relatively inexperienced) participant is looking quite close in front of the vehicle: near the FPRP1 or even at the further edge of the near zone. However, whether this behavior really is the reason for the deviant pattern and, if so, why it should increase in more open curves, that is, in situations when the value of h is higher, is not clear.
7  It should be noted that both studies were carried out on motorway ramps. How driver behavior in these conditions compares to the narrow and winding roads used in the Land and Lee (1994) study and the present paper remains to be established. The FP has also been explicitly addressed in a number of simulator studies (e.g., Wilkie & Wann, 2003; Robertshaw & Wilkie, 2008). While these studies have failed to find extensive TP fixation (and consequently interpret the results in favor of FP targeting), the differences between the simulated driving task and real driving make it difficult to decide what to make of the reported absence of TP orientation. (The subjects steered a simple simulator at constant speed, without opportunity to adjust their speed, and with explicit instruction to maintain central lane position and not "cut" into the inside of the bend.)
8  The authors did not have available a quantitative estimate of the projection of the FP in the drivers' field of view in terms of visual angles. Instead, the gaze target was determined by visual inspection and the classification then coded manually into the data. They resolved the conflict arising from a fixation landing equally close to several reference locations (AOI overlap) by inspecting previous and subsequent frames in cases that they found unclear. What methods were employed is not explained in detail, which makes it difficult to compare the results.
9  We do not present FPRP2 as an alternative "steering point" intended to replace the TP as a general account of where we look when we steer. We simply use it as a reference point on the FP in the far zone between FPRP1 and the OP; this is a methodological point. Using the full FP representation (e.g., in clustering, computing the proportion of gaze falling closer to some point on the FP than to the TP) would be an unfair comparison because the TP is only a single point.
Figure 1
 
Variability of gaze position in the road scene illustrated by images from a forward-looking scene camera. Images are taken 1 s apart as a driver is entering a blind, uphill, right-hand bend. The sequence illustrates the way drivers scan the road scene with small saccades directed toward the inside of the bend (cf. Underwood et al., 1999; Green, 2002; Kandil et al., 2009, 2010; Land & Furneaux, 1997). The green cross indicates the estimated gaze direction; the red circle (approximately 3° radius) represents an AOI around the TP. Visual elements of the road scene are labeled by hand. Scale bar is about 3°. Top: Gaze is within 3° of the TP, and the driver is "TP oriented." Middle: The fixation lands on the "road ahead." Bottom: The driver is looking "further up the road." Which, if any, of the images depict visual orientation toward a steering point is at present an unresolved question. The results that form the empirical basis for the TP hypothesis are expressed as the percentage of gaze falling into the TP AOI (the percentage of observations in which gaze has been found to be "TP oriented").
Figure 2
 
Schematic illustration of three frameworks for representing gaze position in the visual scene. Top: 3° TP AOI in the vehicle frame of reference (cf. Figure 1). Middle: Scene geometry decomposed into geometric reference points and curves representing lane boundaries and the FP, as used in this study. The dotted blue line indicates the FP, the visual projection of the trajectory the vehicle will follow. The coordinate origin is at FPRP1, but in this metric representation, any point (including points on the FP) can be selected as the origin. Bottom: Sequencing of the vehicle trajectory in world coordinates into distinct phases (approach-entry-exit) by waypoints on the trajectory that are associated with specific dynamic events (turn point, maximum yaw rate point, exit point).
Figure 3
 
The route used in the study (Velskolantie, Espoo: N 60.273951, E 24.654733). Turns were identified from vehicle yaw rate data and assigned GPS coordinates and a running index (1–52 northbound and 1–52 southbound). The analyzed turns are highlighted in red from turn point to exit point, although only the entry phase was analyzed here because the TP disappears, and the TP of the next turn often appears, during the exit phase. The road was driven in both south-north and north-south directions for a total of four runs in each direction.
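Figure 3 states only that turns were identified from vehicle yaw rate data; the exact criteria are not reproduced here. The sketch below illustrates one plausible way to perform such segmentation (thresholding a smoothed yaw-rate signal and keeping sufficiently long runs). The sampling rate, threshold, smoothing window, minimum duration, and sign convention are all illustrative assumptions, not the study's actual parameters.

    import numpy as np

    def find_turns(yaw_rate, fs=50.0, thresh_deg_s=3.0, min_dur_s=1.0, win_s=0.5):
        """Segment turns from a yaw-rate signal (deg/s) sampled at fs Hz.

        A sample belongs to a turn when the smoothed |yaw rate| exceeds thresh_deg_s;
        runs shorter than min_dur_s are discarded. Returns (start, end, direction)
        index tuples; direction assumes positive yaw rate = leftward rotation.
        """
        yaw_rate = np.asarray(yaw_rate, dtype=float)
        win = max(1, int(win_s * fs))
        smooth = np.convolve(yaw_rate, np.ones(win) / win, mode="same")
        active = np.abs(smooth) > thresh_deg_s

        turns, start = [], None
        for i, a in enumerate(np.append(active, False)):  # sentinel closes a trailing run
            if a and start is None:
                start = i
            elif not a and start is not None:
                if (i - start) / fs >= min_dur_s:
                    turns.append((start, i, "L" if smooth[start:i].mean() > 0 else "R"))
                start = None
        return turns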
Figure 4
 
Example frames of Bézier curves fitted to reference points identified manually in the video frame. (The images are single frames from the video of Subject 1, run 1, northbound; see Supplementary Movies SM1 and SM2.) Horizontal and vertical scales are gaze-position angular coordinates in the vehicle frame of reference (zero is straight ahead). The TP is red. The FP estimate from the spline curve representation is the thin dotted line. The OP is black, and FPRP1 is green. The colored circles represent gaze position measurements at the same route location on successive runs. (The order is B-G-R-Y.)
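The caption refers to Bézier/spline curves fitted to manually marked reference points; the actual fitting procedure is not detailed here. As a purely illustrative sketch, the snippet below evaluates a cubic Bézier curve from four control points (Bernstein form), one simple way to obtain a smooth curve of this kind. The control-point values and their correspondence to the study's reference points are assumptions; note also that a Bézier curve passes through its end control points only, so it approximates rather than interpolates the interior points.

    import numpy as np

    def cubic_bezier(p0, p1, p2, p3, n=100):
        """Evaluate a cubic Bezier curve (Bernstein form) at n parameter values.

        p0..p3: length-2 control points in gaze-angle coordinates (degrees).
        Returns an (n, 2) array of points along the curve.
        """
        p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
        t = np.linspace(0.0, 1.0, n)[:, None]
        return ((1 - t) ** 3 * p0
                + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2
                + t ** 3 * p3)

    # Hypothetical control points running from a near point up toward the occlusion point.
    fp_curve = cubic_bezier([0.0, -8.0], [2.0, -4.0], [4.0, -1.5], [5.0, 0.5])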
Figure 5
 
“TP orientation” (left) and “FP orientation” (right), quantified according to the traditional “AOI gaze catch” method. Left: Bar plot showing gaze catch in 3° radius AOIs around lane edge reference points when entering a bend. Left-hand and right-hand bends analyzed separately. Data aggregated across all subjects. Right: Using a representation of the FP, a similar analysis can be performed for reference points on the FP. The plot shows gaze catch in 3° radius AOIs around three points used in this study: FPRP1, FPRP2, and OP. FPRP1 and FPRP2 are reference points on the FP determined in relation to the TP (see Methods for definitions of the reference points).
Figure 6
 
Top: Between-subjects mean gaze catch shares of reference points when clustering by nearest point (CL for left-hand bends, TP for right-hand bends, FPRP2, or OP; clustering observations falling into a 6° window). Bottom: Diagram explaining the 6° window used to select data for clustering. Gaze observations not falling within 6° of any of the three reference points CL/TP, FPRP2, or OP were classified as “outside” of the window. Gaze observations falling within the window were clustered into reference points by assigning each observation to its nearest reference point. Note that the shape and size of the “window” will change as the reference points' relative locations in the road scene vary according to curve geometry.
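A minimal sketch of the nearest-reference-point assignment described in the caption is given below; the dictionary layout and names are assumptions, and the per-frame reference-point coordinates are taken as given.

    import numpy as np

    def cluster_by_nearest(gaze, refs, window_deg=6.0):
        """Assign each gaze sample to its nearest reference point within a window.

        gaze: (N, 2) gaze positions (degrees); refs: dict mapping a name (e.g. 'TP',
        'FPRP2', 'OP') to an (N, 2) array of per-frame reference-point positions.
        Samples farther than window_deg from every reference point are labelled
        'outside'. Returns an (N,) array of labels.
        """
        names = list(refs)
        gaze = np.asarray(gaze, dtype=float)
        # Distance of every sample to every reference point: shape (N, n_refs).
        dists = np.stack(
            [np.linalg.norm(gaze - np.asarray(refs[n], dtype=float), axis=1) for n in names],
            axis=1)
        labels = np.array(names, dtype=object)[np.argmin(dists, axis=1)]
        labels[dists.min(axis=1) > window_deg] = "outside"
        return labels

Gaze catch shares of the kind reported in Table 2 then follow from the relative frequencies of each label.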
Figure 7
 
Dependence between gaze–TP vertical displacement and curve height. Top: Parameter h is the vertical angular subtense of the visible road measured from the TP level. Circles indicate 6° AOIs around the CL (left) and the TP (right). The schematic example frames illustrate relatively extreme values of h (on the left, h is about 1°; on the right, about 6°). Bottom: Robust regression fits of each individual participant's gaze observations within 6° AOIs centered on the TP (CL in left-hand turns) as a function of h. A positive dependence indicates that as the view into the curve “opens up” (or in steeper uphill curves), gaze inside the TP AOI also rises relative to the elevation of the TP.
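The robust regression in the bottom panel can be reproduced in outline with standard tooling; the specific estimator used below (a Huber M-estimator via statsmodels) is our assumption, since the caption does not name the method.

    import numpy as np
    import statsmodels.api as sm

    def robust_slope(h, dv):
        """Robust linear fit of vertical gaze displacement from the TP (dv, degrees)
        on curve height h (degrees). Returns (intercept, slope)."""
        X = sm.add_constant(np.asarray(h, dtype=float))
        fit = sm.RLM(np.asarray(dv, dtype=float), X, M=sm.robust.norms.HuberT()).fit()
        return fit.params  # [intercept, slope]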
Figure 8
 
Heat map visualizations of the distribution of all participants' raw data overlaid on the wire frame representation: sample frames from Movie 1. Gaze is seen to concentrate in the far zone (the region between FPRP1, green, and OP, black).
Table 1
 
AOI overlap for 3° radius AOIs centered on different reference points. Note: The percentages indicate the relative frequency of cases in which a gaze position observation in the reference point's AOI also falls into the AOI of at least one of the other reference points.
Left Overlap Right Overlap
CL 94% TP 85%
TP 17% CL 63%
FPRP1 56% FPRP1 75%
FPRP2 79% FPRP2 63%
OP 19% OP 53%
Table 2
 
Gaze catch percentage of reference points when clustering by nearest point. Notes: CL for left-hand bends, TP for right-hand bends, FPRP2, or OP; clustering observations falling into a 6° window. Individual subjects' averages by curve direction.
Left CL FPRP2 OP Right TP FPRP2 OP
S1 4% 47% 34% S1 20% 50% 26%
S2 14% 70% 2% S2 8% 68% 18%
S3 20% 60% 10% S3 18% 62% 10%
S4 45% 12% 17% S4 43% 16% 28%
S5 4% 34% 33% S5 3% 31% 38%
S6 34% 33% 9% S6 49% 36% 8%
S7 13% 25% 40% S7 59% 14% 5%
Table 3
 
Median gaze distance of all cluster-assigned gaze-position observations from their respective reference points (degrees). Individual subjects' averages by curve direction.
Left CL FPRP2 OP Right TP FPRP2 OP
S1 2.6° 2.5° 2.9° S1 1.7° 1.6° 1.6°
S2 2.0° 2.3° 5.0° S2 2.5° 2.1° 2.5°
S3 2.2° 2.3° 3.8° S3 2.9° 2.3° 2.5°
S4 2.8° 3.5° 3.8° S4 2.1° 2.0° 2.1°
S5 1.4° 3.2° 3.4° S5 3.5° 2.6° 2.3°
S6 2.4° 3.3° 4.3° S6 2.0° 1.7° 2.4°
S7 2.8° 3.1° 2.7° S7 3.1° 1.9° 2.6°
Table 4
 
Individual subjects' Spearman correlations between vertical displacement of gaze position from the TP (in left-hand turns from the centerline TP) and curve height.
Left Right
S1 0.06 0.48
S2 0.15 0.36
S3 −0.15 0.26
S4 0.21 0.32
S5 0.31 0.15
S6 0.10 0.30
S7 0.03 0.13
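The per-subject Spearman correlations reported in Table 4 can be computed with scipy, grouping observations by subject and curve direction; the record layout used below is an assumption.

    from scipy.stats import spearmanr

    def spearman_by_subject(samples):
        """samples: iterable of dicts with keys 'subject', 'direction' ('L' or 'R'),
        'h' (curve height, deg) and 'dv' (vertical gaze-TP displacement, deg).
        Returns {(subject, direction): Spearman rho}, one value per cell of Table 4."""
        groups = {}
        for s in samples:
            hs, dvs = groups.setdefault((s["subject"], s["direction"]), ([], []))
            hs.append(s["h"])
            dvs.append(s["dv"])
        result = {}
        for key, (hs, dvs) in groups.items():
            rho, _p = spearmanr(hs, dvs)
            result[key] = rho
        return result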