Open Access
Article  |   July 2024
Fork in the road: How self-efficacy related to walking across terrain influences gaze behavior and path choice
Author Affiliations
  • Vinicius da Eira Silva
    Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada
    Institute for Neuroscience and Neurotechnology, Simon Fraser University, Burnaby, BC, Canada
    vdsilva@sfu.ca
  • Daniel S. Marigold
    Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada
    Institute for Neuroscience and Neurotechnology, Simon Fraser University, Burnaby, BC, Canada
    daniel_marigold@sfu.ca
Journal of Vision, July 2024, Vol. 24, 7. https://doi.org/10.1167/jov.24.7.7
Abstract

Decisions about where to move occur throughout the day and are essential to life. Different movements may present different challenges and affect the likelihood of achieving a goal. Certain choices may have unintended consequences, some of which may cause harm and bias the decision. Movement decisions rely on a person gathering necessary visual information via shifts in gaze. Here we sought to understand what influences this information-seeking gaze behavior. Participants chose between walking across one of two paths that consisted of terrain images found in either hiking or urban environments. We manipulated the number and type of terrain in each path, which altered the amount of available visual information. We recorded gaze behavior during the approach to the paths and had participants rate their confidence in their ability to walk across each terrain type (i.e., self-efficacy) as though it were real. Participants did not direct gaze more to the path with greater visual information, regardless of how we quantified information. Rather, we show that a person's perception of their motor abilities predicts how they visually explore the environment as well as their choice of action. The greater the self-efficacy in walking across one path, the more participants directed gaze to it and the more likely they were to choose to walk across it.

Introduction
Decisions require information about available choices. In complex environments, like a busy street, shopping mall, or hiking trail, there is a multitude of information that competes for attention and can collectively inform the eventual choice of action. Shifts in gaze allow a person to extract timely, high-resolution visual information necessary to make a goal-directed decision. But what drives the decision of where and for how long to direct gaze in these natural behaviors? And how does this information affect the decision about how to act? 
In goal-directed, natural behaviors, gaze is directed to task-relevant features of the environment (Land, Mennie, & Rusted, 1999; Marigold & Patla, 2007; Rothkopf, Ballard, & Hayhoe, 2007). More specifically, gaze is drawn to areas of uncertainty, thus allowing for increased information gain (Daddaoua, Lopes, & Gottlieb, 2016; Domínguez-Zamora, Gunn, & Marigold, 2018; Hayhoe, 2017; Sprague & Ballard, 2003; Sprague, Ballard, & Robinson, 2007; Sullivan, Johnson, Rothkopf, Ballard, & Hayhoe, 2012; Tong, Zohar, & Hayhoe, 2017). However, people can exhibit different preferences for information gain, which predicts their subsequent choice of action (Domínguez-Zamora & Marigold, 2021). When decisions are made during movement, such as walking or driving, there is limited time to gather information, and the information available may change during the decision process. Consequently, this requires a trade-off between exploring several areas with gaze and fixating on a restricted area of the environment. The decision to acquire further information may depend on a person's confidence in the information they have currently collected. In support, previous work shows that decision confidence can influence further information-seeking behavior, and that decisions are sensitive to the cost of obtaining the additional information (Boldt, Blundell, & De Martino, 2019; Desender, Boldt, & Yeung, 2018; Desender, Murphy, Boldt, Verguts, & Yeung, 2019; Schulz, Fleming, & Dayan, 2023). 
When the task involves deciding between different movements, obtaining information about the environment is only one of many considerations. Certain action choices may have undesirable consequences. With walking, this can include a loss of balance and potential injury, getting lost, and/or expending greater energy due to the added distance to travel. The decision of which action to choose may depend, in part, on the perception of one's task-specific abilities, or self-efficacy (Bandura, 1977; Cramer, Neal, & Brodsky, 2009). For instance, it is reasonable to expect that a person will favor an action they perceive poses less danger or believe they are more capable of successfully completing. If gaze serves to gather relevant information about action alternatives, and a person can use this information to evaluate their ability to perform each action, then might self-efficacy affect gaze decisions in addition to action choice? 
To address this question, we had participants decide between walking across one of two paths that we projected on the ground. Both paths had images of terrain commonly found in either hiking or urban settings. Participants indicated their chosen path by walking across it. Because of the role of gaze in gaining information, we designed two protocols that varied in the information available in each walking path. In one, the two paths had a different number of terrain patches; we assumed that the path with three types of terrain contained more visual information than the path with one type of terrain. In the other, both paths had an equal number of terrain patches but differed in terms of the information contained within the visual images. In this protocol, we analyzed pixel-level metrics to quantify visual information. This included a measure of the information gained by looking at neighboring pixels in the terrain images (Proulx & Parrott, 2008). If information-seeking alone explains gaze behavior, we would expect a greater number of fixations and/or longer gaze time on the path with greater information, thus allowing for more information on which to base a subsequent walking decision. However, we propose that self-efficacy in walking across a path modifies information-seeking behavior. In this sense, we are interested in how confidence in one's abilities relates to decision-making. This differs from typical decision-making studies that focus on confidence in one's decision, where decision confidence is defined as either the probability (or certainty) of being correct (Desender et al., 2018; Kiani & Shadlen, 2009; Kiani, Corthell, & Shadlen, 2014; Pouget, Drugowitsch, & Kepecs, 2016) or an estimate of decision reliability (Boundy-Singer, Ziemba, & Goris, 2023; Caziot & Mamassian 2021; Mamassian & de Gardelle 2022). Thus we tested the hypothesis that self-efficacy related to walking across a path affects where and for how long gaze is directed to each path in addition to the choice of which path to take. We had participants rate the confidence in their ability to walk across each type of terrain (i.e., self-efficacy) as though they had to step on it in real life. We predicted that participants would have a greater number of fixations and longer gaze times on walking paths where they had greater confidence in their abilities to walk across the presented terrain. We show that a simple information-seeking perspective is insufficient to explain gaze patterns when choosing between walking paths; participants did not direct gaze more to the path with greater information, regardless of how we quantified information. Rather, we show that the greater the self-efficacy in walking across one path, the more participants directed gaze to it. These results support the notion that a person's perception of their abilities impacts gaze decisions. 
Methods
Participants
Sixteen healthy adults participated in this study (five women and 11 men; mean age = 26 ± 3 years). We chose this sample size based on the higher end of typical sample sizes used in experiments on gaze during naturalistic walking behavior. Because of problems with recording gaze, we excluded one participant from one block of trials. Participants did not have any known visual, neurological, muscular, or joint disorder that could affect their walking or gaze behavior. The Office of Research Ethics at Simon Fraser University approved the study protocol (no. 30000870), and participants provided informed written consent before participating. 
Experimental design
Participants performed a visually guided walking paradigm that required them to walk across one of two paths that we projected on the ground. We used images of terrain to provide greater control over the design of the paths and because they eliminate the energetic and stability costs associated with walking across real terrain, which are known to affect gaze (Domínguez-Zamora & Marigold, 2019; Domínguez-Zamora & Marigold, 2021; Matthis, Yates, & Hayhoe, 2018). We created the path images with Photoshop (Adobe Inc., San Jose, CA, USA). Each path was approximately 2 m long and 55 cm wide, and the two paths joined at the start (see Figure 1). We used two different environmental themes to create our paths: hiking and urban. For the hiking environment, we used terrain patches commonly found on hiking trails, such as mud, tree roots, dirt, and rocks. For the urban environment, we used asphalt, cobblestones, and concrete. Paths in the same projection always shared a common theme, so participants never faced a scenario with a hiking path and an urban path in the same environment. 
Figure 1.
 
Examples of the environments used in this study. (A) An example of one of four urban environments used in the 1vs3 protocol. Terrain types include potholes (bottom left path), uneven sidewalk (middle left path), cracked sidewalk (top left path), and cobblestones with puddles (right path). None of the four hiking environments are shown. (B) An example of one of two hiking environments used in the 3vs3 protocol. Terrain types include damp dirt (bottom left path), tree roots (middle left path), rocks (top left path), dry dirt (bottom right path), mud (middle right path), and wooden bridge (top right path). Neither of the two urban environments is shown. See Figure A1 for the terrains used in the other environments in each protocol.
We configured the environments in MATLAB (The MathWorks, Natick, MA, USA) with the Psychophysics Toolbox, version 3 (Brainard, 1997). An LCD projector (Epson PowerLite 5535U, brightness of 5500 lumens) displayed the environments on a uniform black mat. To diminish the effect of environmental references and increase image visibility, participants walked under reduced light conditions (range of 1.1 to 3.5 lux, similar to a moonlit night). We recorded kinematic data at 100 Hz using two Optotrak Certus motion capture cameras (Northern Digital, Waterloo, ON, Canada) positioned perpendicular to the walking path. This involved recording infrared-emitting position markers placed on the participant's head and chest and bilaterally on each midfoot (second to third metatarsal head). We also recorded gaze data at 100 Hz using a high-speed mobile eye tracker (Tobii Pro Glasses 3; Tobii Technology Inc., Reston, VA, USA) mounted on the participant's head and synchronized with the motion capture system. We calibrated the eye tracker, following the instructions in the Tobii Pro Glasses user manual, before each of the two protocols. 
Experimental protocol
Before the start of the experimental trials, we presented participants with images, projected on the ground, of the 12 types of terrain we used to construct the environments. This allowed the participants to become familiar with the different terrains. Subsequently, we had participants complete two different protocols (see Figure 1), the order of which was counterbalanced. In one protocol, referred to as 1vs3, we used eight different environments (four hiking and four urban), and participants completed 32 walking trials (four trials of each environment in random order). Environments had two different paths of the same setting (hiking or urban), where one path consisted of one type of terrain (uniform path) and the other consisted of three types of terrain (nonuniform path). In the other protocol, referred to as 3vs3, we used four different environments (two hiking and two urban), and participants completed 20 walking trials (five trials of each environment in random order). Environments had two different paths of the same setting (hiking or urban), each of which consisted of three types of terrain. See Appendix Figure A1 for the terrains used in the environments in each protocol. 
In both protocols, for each walking trial, participants started from a standing position approximately 1.5 m from the projected walking paths. We projected a fixation cross at the center of the projection area, approximately 2 m from the starting point, and instructed participants to maintain their gaze on it until the image of the environment appeared. After one second, the cross disappeared and one of the environments appeared. We asked participants to remain stationary and visually explore the terrains freely for two seconds. In real life, a person would likely see their path choices well in advance and could visually explore as they approached from a distance. Our choice to allow participants two seconds to explore from a stationary position substituted for this ability, since we did not have the walking space for participants to complete more than two steps before reaching one of the paths. After the two seconds, an auditory cue signaled participants to begin walking. We asked participants to pretend the terrains were real, to choose the path they would normally take as if outside of the lab, and to step where they would normally step if they faced that terrain in real life. We instructed participants to walk at a self-selected speed and stop two steps after walking across their chosen path. An experimenter recorded which path the participant chose on each trial during both protocols. 
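For illustration, the following is a hypothetical Psychtoolbox-3 sketch of one trial's timeline (1-s fixation cross, 2-s free visual exploration, then an auditory go cue); the window setup, placeholder image, and tone parameters are our assumptions rather than the authors' stimulus code.

```matlab
% Hypothetical trial-timeline sketch with Psychophysics Toolbox 3.
% The placeholder image and tone parameters are assumptions.
win = Screen('OpenWindow', max(Screen('Screens')), 0);   % black background
envImg = uint8(255 * rand(600, 800, 3));                 % placeholder environment image
envTex = Screen('MakeTexture', win, envImg);
DrawFormattedText(win, '+', 'center', 'center', 255);    % fixation cross
Screen('Flip', win);
WaitSecs(1);                                             % cross shown for 1 s
Screen('DrawTexture', win, envTex);                      % the two projected paths appear
Screen('Flip', win);
WaitSecs(2);                                             % 2 s of free visual exploration
Beeper(800, 0.5, 0.2);                                   % auditory cue to begin walking (assumed tone)
% ... participant approaches and walks across the chosen path ...
Screen('CloseAll');
```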
After completing all experimental walking trials, we had each participant rate the confidence in their ability to walk across each type of terrain (i.e., self-efficacy rating) a single time. Specifically, we asked participants: “For each type of terrain, please indicate how confident (or certain) you are of walking across it without losing balance, as though you had to step on it in real life outside. Please use a scale of 1 to 10 (where 1 is not at all confident and 10 is extremely confident).” 
Data and statistical analyses
We filtered kinematic data using a 6-Hz low-pass Butterworth filter. We used these data to calculate the approach phase (defined as the time between the start of the trial and the participant's first foot contact with the fork in the joined paths). Because we were interested in understanding the decision-making process, we only analyzed gaze behavior during the approach phase. We used JMP software, version 16 (SAS Institute Inc., Cary, NC, USA), for all statistical analyses, with an alpha level of 0.05 (except where noted). 
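As a minimal sketch of this preprocessing step, the snippet below applies a low-pass Butterworth filter at the reported 6 Hz cutoff; the filter order (4th) and the use of zero-phase filtering are assumptions, as the paper specifies only the cutoff. It requires the Signal Processing Toolbox, and markerRaw is a placeholder trajectory.

```matlab
% Low-pass filtering of one marker's trajectory (samples x coordinates).
fs = 100;                                   % motion capture sampling rate (Hz)
fc = 6;                                     % low-pass cutoff (Hz)
markerRaw = cumsum(randn(500, 3));          % placeholder data standing in for recorded positions
[b, a] = butter(4, fc/(fs/2));              % assumed 4th-order filter, normalized cutoff
markerFilt = filtfilt(b, a, markerRaw);     % zero-phase (forward-backward) filtering
```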
To analyze gaze data, we used GlassesViewer (Niehorster, Hessels, & Benjamins, 2020). We defined fixations as the times during which a target or region on the ground stabilized on the retina and detected them based on the slow-phase classifier described in Hessels, van Doorn, Benjamins, Holleman, & Hooge (2020). This classifier uses an adaptive velocity threshold based on estimated gaze velocity. We used the following classifier parameters: 5000°/s start velocity threshold; 50 ms minimum fixation duration; lambda slow/fast separation threshold of 2.5; 8 s moving window. We used the 30 Hz eye tracker video with the gaze location superimposed on the image to verify the presence and location of fixations. To quantify gaze behavior, we calculated the number of fixations and gaze time (i.e., sum of fixation times) on each terrain. We normalized gaze time on each terrain by the approach time to control for any differences in gait initiation and speed across trials and participants. 
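The two gaze measures can be illustrated with the following hypothetical single-trial sketch; the fixation onset/offset times, their path labels, and the approach-phase boundaries are placeholder values standing in for the classifier and kinematic outputs.

```matlab
% Hypothetical quantification of gaze behavior during the approach phase.
fixOn  = [0.12 0.55 1.10 1.80 2.60];        % fixation start times (s), assumed classifier output
fixOff = [0.40 0.95 1.60 2.30 3.00];        % fixation end times (s)
onLeft = logical([1 0 1 1 0]);              % whether each fixation landed on the left path
tStart = 0;  tForkContact = 2.9;            % approach phase boundaries (s), from kinematics
inApproach = fixOn >= tStart & fixOff <= tForkContact;
nFixLeft = sum(inApproach & onLeft);                                  % number of fixations
gazeLeft = sum(fixOff(inApproach & onLeft) - fixOn(inApproach & onLeft));
gazeLeftNorm = gazeLeft / (tForkContact - tStart);                    % gaze time normalized by approach time
```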
For both protocols, we first sought to demonstrate a link between gaze behavior and path choice (decision-making). Specifically, we compared the number of fixations (or gaze time) during the approach phase on the chosen path with the non-chosen path using separate paired t tests. We predicted a greater number of fixations and gaze times on the chosen path. 
We manipulated the number and type of terrain in each path to alter the amount of visual information present using two different approaches (1vs3 protocol and 3vs3 protocol). For the 1vs3 protocol, we assumed that the nonuniform path (with three types of terrain) contains more visual information than the uniform path (with one type of terrain). Thus we compared the number of fixations (or gaze time) on the uniform and nonuniform paths using paired t tests. Next, we tested whether self-efficacy associated with a path predicted gaze behavior. Specifically, we fit a linear mixed-effects model, with the number of fixations on the nonuniform path as the response variable and the self-efficacy rating on the uniform path as the predictor variable. We included participant as a random effect. We also fit a linear regression, with gaze time on the nonuniform path as the response variable and the self-efficacy rating on the uniform path as the predictor variable. In this model, we removed the random effect of participant because the estimated variance of this effect contributed zero percent to the total. For each model, we included each participant's own self-efficacy ratings, thus accounting for individual perceptions. We used ratings on the uniform path, rather than the nonuniform path, because only one type of terrain is present, making the rating easier to interpret. We used a chi-squared test to determine whether participants more frequently chose the path that they were more confident in their ability to walk across. 
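A sketch of these models, using hypothetical variable names and illustrative values (the authors ran the analyses in JMP; this version assumes MATLAB's Statistics and Machine Learning Toolbox):

```matlab
% Illustrative data: one row per participant-environment combination (values invented).
tbl1 = table(categorical([1;1;2;2;3;3]), [7;6;4;5;8;7], [5;6;9;8;4;5], ...
             [0.31;0.35;0.52;0.47;0.25;0.30], ...
             'VariableNames', {'Participant','SelfEffUniform','FixNonuniform','GazeNonuniform'});
% Mixed-effects model: fixations on the nonuniform path predicted by the
% self-efficacy rating for the uniform path, with participant as a random effect.
lme = fitlme(tbl1, 'FixNonuniform ~ SelfEffUniform + (1|Participant)');
covarianceParameters(lme)    % inspect the estimated random-effect variance
% If that variance is ~0 (as reported for gaze time), drop the random effect
% and use ordinary linear regression instead.
lm = fitlm(tbl1, 'GazeNonuniform ~ SelfEffUniform');
```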
For the 3vs3 protocol, we quantified the visual information contained in the image of each patch of terrain using several different metrics (Proulx & Parrott, 2008). Specifically, we calculated the marginal (or Shannon) entropy (ME) as: 
\[
H(y) = -\sum_{i=1}^{N} p(y_i)\,\log p(y_i) \qquad (1)
\]
where \(p(y_i)\) is the probability of observing a pixel value independent of its position in the image and \(N\) is the number of bins of pixel values. In addition, we calculated the mean information gain (MIG), as described in detail in Proulx and Parrott (2008) and based on Andrienko, Brilliantov, & Kurths (2000). MIG requires calculation of the joint entropy, which is defined as: 
\[
H(x) = -\sum_{i=1}^{N^4} p(x_i)\,\log p(x_i) \qquad (2)
\]
where \(p(x_i)\) is the probability of finding a 2 × 2 color combination \(x_i\) in the image and \(N^4\) is the number of theoretical combinations. MIG is the joint entropy minus the marginal entropy and is specified as: 
\[
\mathrm{MIG} = \frac{H(x) - H(y)}{\log N^{4} - \log N} \qquad (3)
\]
where we used N = 8. MIG is zero for uniform (ordered) patterns and 1 for random (disordered) patterns; higher values of MIG represent greater information gain. Because of the curvature of each path, we extracted the maximum rectangular area of each terrain patch in each environment on which to run the information analysis. This yielded 24 terrain patches (6 terrains/environment × 4 environments). 
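The following MATLAB function is a simplified sketch of Equations 1 to 3 for a single-channel image; the binning scheme and the use of overlapping 2 × 2 neighborhoods are assumptions, as the published analysis used the MIG_simple.m routine from Proulx and Parrott (2008).

```matlab
function [ME, MIG] = terrain_info(img, N)
% Simplified sketch of marginal entropy (ME, Equation 1) and mean information
% gain (MIG, Equations 2-3) for one single-channel terrain image.
img = double(img);
edges = linspace(min(img(:)), max(img(:)), N + 1);
q = discretize(img, edges);                            % quantize pixel values into N bins (1..N)

% Marginal (Shannon) entropy, Equation 1
p = histcounts(q(:), 0.5:1:N + 0.5, 'Normalization', 'probability');
p = p(p > 0);
ME = -sum(p .* log(p));

% Joint entropy of 2 x 2 pixel combinations, Equation 2
[r, c] = size(q);
codes = q(1:r-1, 1:c-1)           + N   * (q(2:r, 1:c-1) - 1) + ...
        N^2 * (q(1:r-1, 2:c) - 1) + N^3 * (q(2:r, 2:c) - 1);    % unique code per 2 x 2 block
pj = histcounts(codes(:), 0.5:1:N^4 + 0.5, 'Normalization', 'probability');
pj = pj(pj > 0);
Hjoint = -sum(pj .* log(pj));

% Mean information gain, Equation 3 (0 = uniform pattern, 1 = random pattern)
MIG = (Hjoint - ME) / (log(N^4) - log(N));
end
```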
We determined ME and MIG for each patch of terrain in each environment based on its greyscale image, the three components of the CIE-L*a*b* color space, and the three components—hue, saturation, value—of the HSV color space using the MIG_simple.m routine from Proulx and Parrott (2008). The CIE-L*a*b* color space is based on the human visual system and the opponent process theory of color vision, whereas the HSV color space is based on how humans perceive color. Next, we used a series of linear regressions or linear mixed-effects models, with either the mean number of fixations or the gaze time (with a square root transformation to ensure normality) on the patch of terrain as the response variable, one of the information metrics as the predictor variable, and participant as a random effect if the estimated variance of this effect contributed > 0 percent to the total. Due to the number of tests with this analysis (14 related to number of fixations and 14 related to gaze time), we used a more conservative alpha level of 0.01, as opposed to the 0.05 used for all other statistical analyses. We found two statistically significant models, for ME and MIG associated with hue (see Results), but only used the latter in our subsequent analysis because the model had a lower Akaike information criterion (AIC). Next, we asked whether gaze is directed more to the path with greater information. Specifically, we determined whether the left or right path of each of the four environments had greater information based on the average (or sum of) information across the three terrain types, the most informative terrain, and the least informative terrain. We used these three methods to quantify the amount of information in each path because we did not know a priori which, if any, the brain might use. Subsequently, we performed separate paired t tests comparing the mean number of fixations or gaze time on the left and right paths for each environment. 
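A hedged sketch of this channel decomposition for one terrain patch, reusing the terrain_info sketch above; the placeholder image, table, and column names are assumptions, and rgb2lab requires the Image Processing Toolbox.

```matlab
% Decompose one (placeholder) terrain patch into the seven analyzed channels
% and compute ME and MIG for each with N = 8 bins, as in the paper.
rgb = rand(200, 300, 3);                                % placeholder RGB patch (stand-in for a cropped image)
lab = rgb2lab(rgb);
hsv = rgb2hsv(rgb);
channels = {rgb2gray(rgb), lab(:,:,1), lab(:,:,2), lab(:,:,3), ...
            hsv(:,:,1), hsv(:,:,2), hsv(:,:,3)};
names = {'grey', 'L*', 'a*', 'b*', 'hue', 'saturation', 'value'};
for k = 1:numel(channels)
    [ME, MIG] = terrain_info(channels{k}, 8);
    fprintf('%-10s ME = %.3f, MIG = %.3f\n', names{k}, ME, MIG);
end
% Candidate mixed-effects models could then be compared by AIC, e.g. (tbl is a
% hypothetical table with one row per participant x terrain patch):
% mMIG = fitlme(tbl, 'NumFixations ~ MIGhue + (1|Participant)');
% mME  = fitlme(tbl, 'NumFixations ~ MEhue  + (1|Participant)');
% deltaAIC = mME.ModelCriterion.AIC - mMIG.ModelCriterion.AIC;
```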
For this (3vs3) protocol, for each environment, we calculated the difference in self-efficacy ratings between the two paths. We reasoned that people care more about (or have different perceptions of their ability to handle) a certain type of terrain based on previous experience with it. Thus a person can form a perception of their ability to walk across a given path based on how they rate their ability to walk across each type of terrain present. This can include an average rating of the three terrain types, the highest-rated terrain, or the lowest-rated terrain. Again, because we did not know a priori which method the brain might use, we tested models of each and compared statistically significant ones using the AIC. Specifically, we used linear mixed-effects models, with the number of fixations (or gaze time) on the chosen path as the response variable and the difference in self-efficacy ratings between paths as the predictor variable. In all cases, we included participant as a random effect. We used chi-squared tests to determine whether participants more frequently chose the path that they were more confident in their ability to walk across (based on an average rating of the three terrain types, the highest-rated terrain, or the lowest-rated terrain). 
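For illustration, a hypothetical worked example of the three candidate path-level summaries and their between-path differences (the ratings below are invented, not a participant's actual scores):

```matlab
% Self-efficacy ratings (1-10) for the three terrains in each path, one trial.
rChosen = [7 4 9];                                   % chosen path's terrains (illustrative)
rOther  = [8 3 6];                                   % non-chosen path's terrains (illustrative)
dMean = mean(rChosen) - mean(rOther);                % difference in mean rating
dMin  = min(rChosen)  - min(rOther);                 % difference in lowest-rated terrain
dMax  = max(rChosen)  - max(rOther);                 % difference in highest-rated terrain
% Each difference served as the predictor in a separate mixed-effects model,
% e.g., fitlme(tbl3, 'FixChosen ~ dMin + (1|Participant)'), with statistically
% significant models compared by AIC.
```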
Data availability
Data used for statistical analyses and figures are available at: https://osf.io/3c9r6/
Results
Gaze behavior is related to path choice
To show that gaze behavior is involved in decision-making in our task, we compared the number of fixations and gaze time on the chosen and non-chosen paths. For the 1vs3 protocol, participants had a greater number of fixations on the chosen (7.8 ± 1.1) versus the non-chosen path (3.9 ± 0.9) (paired t test: t(14) = −10.6, p = 4.77e-8, Cohen's d = 2.72). Participants also showed longer gaze time on the chosen (0.48 ± 0.05) versus the non-chosen path (0.20 ± 0.05) (paired t test: t(14) = −13.6, p = 1.91e-9, Cohen's d = 3.50). We found similar results for the 3vs3 protocol, where participants had a greater number of fixations on the chosen (7.5 ± 1.2) versus the non-chosen path (4.1 ± 1.0) (paired t test: t(15) = −9.8, p = 6.59e-8, Cohen's d = 2.45), and longer gaze time on the chosen (0.45 ± 0.07) versus the non-chosen path (0.22 ± 0.05) (paired t test: t(15) = −8.4, p = 4.96e-7, Cohen's d = 2.09). These results suggest that gaze is related to path choice. 
Gaze behavior is related to information when looking at individual terrains, but not whole paths
We manipulated the number and type of terrain in each path to alter the amount of visual information present using two different approaches (1vs3 protocol and 3vs3 protocol); this allowed us to test whether a simple information-seeking strategy explains gaze decisions or whether the perception of one's abilities modifies information-seeking gaze behavior. We first asked whether gaze is directed to the terrain images with the greatest amount of visual information, regardless of the path chosen. To address this question, we used the 3vs3 protocol, where we determined the ME and MIG for each patch of terrain in each environment based on its greyscale image, the three components of the CIE-L*a*b* color space, and the three components—hue, saturation, value—of the HSV color space (Proulx & Parrott, 2008). A greater amount of visual information contained within a terrain image equates to greater values of MIG and ME. Appendix Tables A1 and A2 show MIG and ME values, respectively, for each terrain of each environment. Examples of different terrains and their respective MIGhue values are shown in Figure 2A. We determined the relationship between these information metrics and the number of fixations and gaze time on each terrain. We found that increases in the MIG of the hue component (MIGhue) of a terrain were significantly associated with increases in the number of fixations (R² = 0.22, β = 4.82 [95% CI = 3.89, 5.75], p = 1.40e-21) and gaze time (Figure 2B; R² = 0.19, β = 1.17 [95% CI = 0.93, 1.41], p = 9.89e-20). Greater ME of the hue component (MEhue) of a terrain was also significantly associated with increases in the number of fixations (R² = 0.13, β = 1.79 [95% CI = 1.30, 2.28], p = 2.86e-12) and gaze time (R² = 0.09, β = 0.40 [95% CI = 0.28, 0.53], p = 8.79e-10). However, the statistical model for MIGhue had a lower AIC value than the model with MEhue for the number of fixations (ΔAIC = 43.7) and gaze time (ΔAIC = 45.3). Consequently, we used MIGhue for subsequent analyses. No other color-space information metric predicted gaze behavior (see Appendix Tables A3 and A4). Thus there is only limited evidence to indicate that gaze is attracted to regions with greater visual information within the paths. 
Figure 2.
 
Relationship between mean information gain (MIG) and gaze behavior during the approach (decision-making) phase in the 3vs3 protocol. (A) Example terrain image with low (top), medium (middle), and high (bottom) MIG based on hue. (B) Scatterplots of the number of fixations (top) or gaze time (bottom) and MIG based on hue in a terrain image. Gaze time is normalized to approach phase duration. In each scatterplot, solid black lines show the linear fits obtained from the models and gray shaded regions represent the 95% confidence intervals.
We next asked whether the amount of visual information across the entire path influences gaze. For the 1vs3 protocol, we defined information based on the number of terrains in each path. Specifically, we assumed that the path with three types of terrain (nonuniform) contained more visual information than the path with a single type of terrain (uniform). From a simple information-seeking perspective, we would predict that gaze is directed more frequently (and for longer duration) to the nonuniform path. We ran separate paired t tests comparing the number of fixations and gaze time on each type of path (Figure 3A). Participants made a similar number of fixations on both paths (uniform path = 5.84 ± 0.80; nonuniform path = 5.90 ± 0.97; paired t test: t(14) = −0.21, p = 0.835, Cohen's d = 0.05). Participants also had similar gaze times on each path (uniform path = 0.34 ± 0.06; nonuniform path = 0.34 ± 0.04; paired t test: t(14) = 0.35, p = 0.733, Cohen's d = 0.09). Thus the amount of information did not attract gaze in this protocol. 
Figure 3.
 
How the amount of information (number of terrain types or MIGhue) within a path choice relates to gaze behavior during the approach (decision-making) phase. (A) Group mean ± SE number of fixations (top) or gaze time (bottom) on the path with low information (1 terrain type; uniform path) and the path with high information (3 terrain types; nonuniform path). (B) Group mean ± SE number of fixations (top) or gaze time (bottom) on the left and right path options. Inset: the chart shows which path, in each environment (env.), had greater MIGhue based on the mean of each terrain, the minimum (min.) terrain, or the maximum (max.) terrain. Gaze time is normalized to approach phase duration. Mean individual participant (n = 15 or 16, depending on the protocol) data values are superimposed. Asterisk indicates a statistically significant difference between the left and right path (p < 0.05).
For the 3vs3 protocol, if gaze is biased to the path with greater information, we would expect a greater frequency (and longer duration) of fixations on the left or right path depending on which one contained more visual information (i.e., greater MIG). For a given path, a person may rely on the average (or sum of) information across the three terrain types, the most informative terrain, or the least informative terrain to make decisions. Because we did not know a priori which method the brain might use, we used the MIGhue for each terrain and determined for each environment which path had the greatest amount of information based on these three approaches. The inset chart in Figure 3B summarizes which path contained the most information for each environment. Except for environment 4 (Figure 3B), we did not detect significant differences between the left and the right path in the number of fixations (paired t tests: environment 1, t(15) = −1.89, p = 0.078; environment 2, t(15) = −1.17, p = 0.260; environment 3, t(15) = 0.75, p = 0.467) or gaze time (paired t tests: environment 1, t(15) = −1.80, p = 0.092; environment 2, t(15) = −1.61, p = 0.129; environment 3, t(15) = 0.25, p = 0.802). For environment 4, participants fixated the left path more frequently (paired t test: t(15) = −5.85, p = 3.17e-5) and for longer duration (paired t test: t(15) = −5.65, p = 4.63e-5) than the right path. Overall, when considering each entire walking path, our results demonstrate that visual information, by itself, does not account for the observed gaze behavior. 
Although the amount of information did not have a great impact on gaze behavior, we tested whether participants were more likely to choose the path with more information or less information. In the 1vs3 protocol, we assumed the nonuniform path contained more information. In this protocol, participants chose the path with less information 63% of the time (χ²: p = 0.009). In the 3vs3 protocol, we used the average information (MIGhue) across each path. In this protocol, we did not detect a significant bias in the preference of path (χ²: p = 0.110), with participants choosing the path with more information only 58% of the time. 
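For illustration, a chi-squared goodness-of-fit test of a choice proportion against a 50/50 split can be computed as follows; the trial counts are invented for the example and are not the study's data.

```matlab
% Illustrative chi-squared test of path choice against chance (50/50).
% chi2cdf requires the Statistics and Machine Learning Toolbox.
nChoseLess = 300;  nTotal = 480;                 % assumed numbers of trials (not the study's counts)
observed = [nChoseLess, nTotal - nChoseLess];
expected = [nTotal/2, nTotal/2];
chi2 = sum((observed - expected).^2 ./ expected);
p = 1 - chi2cdf(chi2, 1);                        % 1 degree of freedom
```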
Self-efficacy in walking across the terrain influences gaze behavior
As reported above, gaze is directed more to the chosen path. Here we asked whether participants select the path based on self-efficacy; participants rated each terrain type after the experiment. Participants chose the path with the highest averaged self-efficacy rating (trials with equal path ratings excluded) on average 77% of the time for the 1vs3 protocol (χ²: p = 6.7e-8) and 69% of the time for the 3vs3 protocol (χ²: p = 1.4e-4). Individual self-efficacy scores are shown in Appendix Tables A5 and A6. When participants chose the path with a higher self-efficacy rating in the 1vs3 and 3vs3 protocols, the chosen path was the one with less information 58% and 42% of the time, respectively. 
Given that gaze is directed more to the chosen path, and participants more frequently select the path they have greater confidence in their ability to walk across, we tested the hypothesis that self-efficacy related to the path affects where and for how long gaze is directed. For the 1vs3 protocol, we determined the relationship between the self-efficacy rating on the uniform path and the number of fixations and gaze time on the nonuniform path. We expected participants to explore the nonuniform path via gaze less when self-efficacy in walking across the uniform path was higher. This is because they would be more likely to choose to walk across the uniform path in this case and thus require less information about the nonuniform path. Self-efficacy ratings on the uniform path predicted the number of fixations made on the nonuniform path (R² = 0.31, β = −0.26 [95% CI = −0.43, −0.10], p = 0.002), but not gaze time (R² = 0.03, β = −0.008 [95% CI = −0.019, 0.002], p = 0.129). Specifically, participants made a greater number of fixations on the nonuniform path as self-efficacy ratings on the uniform path decreased (Figure 4A). 
Figure 4.
 
How self-efficacy related to walking across a path influences gaze behavior during the approach (decision-making) phase. (A) Scatterplots of the number of fixations (top) or gaze time (bottom) on the nonuniform path versus self-efficacy rating on the uniform path. (B) Scatterplots of the number of fixations (top) or gaze time (bottom) on the chosen path versus the difference in self-efficacy rating between the two paths. Inset: illustrations of how the difference is calculated. Each section represents a terrain patch, with a self-efficacy score inside (larger scores equal greater self-efficacy related to walking across it). The arrows point toward the chosen path (which has gray sections as opposed to black outlined sections). The colored terrain numbers indicate which terrain is used in the calculation, which is based on the lowest-rated terrain (left), the highest-rated terrain (middle), or the mean rating of the three terrain types within a path (right). Gaze time is normalized to approach phase duration. In each scatterplot, solid lines show the linear fits obtained from the models and shaded regions represent the 95% confidence intervals.
For the 3vs3 protocol, both paths consisted of three different types of terrain. For each environment, we calculated the difference in self-efficacy ratings between the chosen and non-chosen paths to determine whether people consider the difference in how confident they are in their ability to walk across the terrain in each path when making gaze decisions. We used this metric because the brain can assign and compare values for each choice when making decisions (Rangel, Camerer, & Montague, 2008), and changes in gaze behavior (i.e., saccade vigor) occur as a function of the difference in the subjective value that a participant assigns to each of two options (Reppert, Lempert, Glimcher, & Shadmehr, 2015). As with information, a person may form a perception of their ability to walk across a given path based on their average self-efficacy rating of the three terrain types, the highest-rated terrain, or the lowest-rated terrain of each path. Again, because we did not know a priori which method the brain might use, we tested models of each. The difference in the lowest-rated terrain between paths predicted gaze behavior (left column of Figure 4B; number of fixations: R² = 0.51, β = 0.29 [95% CI = 0.11, 0.46], p = 0.002; gaze time: R² = 0.48, β = 0.023 [95% CI = 0.010, 0.035], p = 0.0006). The difference in the mean terrain rating in each path also predicted gaze behavior (right column of Figure 4B; number of fixations: R² = 0.51, β = 0.41 [95% CI = 0.13, 0.70], p = 0.005; gaze time: R² = 0.49, β = 0.029 [95% CI = 0.011, 0.046], p = 0.002). For the number of fixations, the linear mixed-effects model using the difference in the lowest-rated terrain better explained the data (ΔAIC = 92.1). For gaze time, however, the linear mixed-effects model using the difference in mean terrain rating better explained the data (ΔAIC = 44). In both cases, a greater difference (i.e., a higher rating for the eventually chosen path compared with the non-chosen path) was associated with more fixations and longer gaze times on the chosen path. Taken together, our results suggest that the brain considers one's ability to walk across terrain when deciding where to direct gaze. 
Discussion
The brain must decide which sources of information to sample via gaze to make the best possible decision about an action. Previous work shows that confidence in making the correct choice in perceptual decision or multi-arm bandit tasks influences information-seeking behavior (Boldt et al., 2019; Desender et al., 2018; Desender et al., 2019; Pescetelli, Hauperich, & Yeung, 2021). However, when dealing with decisions related to motor behavior, a person's confidence in their ability to successfully execute the action (i.e., self-efficacy) is an important consideration in the decision-making process. Here we tested the hypothesis that self-efficacy related to walking across a path affects where and for how long gaze is directed to each path, in addition to the choice of which path to take. We used a forced-choice paradigm in which we manipulated the number and types of terrain in each path. Participants looked more at the path they eventually chose to walk across, which was usually the path they had greater confidence in their ability to walk across. Although the MIG of hue in the terrain predicted the frequency and duration of gaze on a given terrain patch, participants did not direct gaze more to the overall path with greater information. Rather, we found that less confidence in one's ability to walk across the uniform (single-terrain) path was associated with a greater number of fixations on the nonuniform (three-terrain) path. With two nonuniform paths, greater confidence in one's ability to walk across the eventually chosen path, relative to the alternative, was associated with a greater number of fixations and longer gaze time on it during the approach. Taken together, our results suggest that the brain uses self-efficacy to guide gaze and walking decisions in complex environments. 
A potential limitation of our study is that we projected images of terrain on the ground rather than having participants walk across real terrain. We chose this method for two reasons. One, our lab-based, simulated terrain provided us with greater control over the design of the walking paths. Two, the projected images eliminate (or reduce) any energetic and/or stability cost associated with walking across real terrain. This latter point is important since both motor costs influence gaze behavior (Domínguez-Zamora & Marigold, 2019, Domínguez-Zamora & Marigold, 2021; Matthis et al., 2018; Moskowitz et al., 2023), and we wanted to demonstrate a role for self-efficacy. We believe our results generalize to the real world for at least two reasons. First, participants usually chose the path they had more confidence in their ability to walk across (>69% of trials). Second, participants often avoided stepping on the images of hazards when choosing where to place their feet on the paths. Specifically, we observed participants avoiding the rocks (1vs3 protocol: 100% of the time; 3vs3 protocol: 86% of the time), potholes (both protocols: 100% of the time), puddles (both protocols: 100% of the time), and mud (1vs3 protocol: 75% of the time; 3vs3 protocol: 86% of the time). Note that this quantification is based on only four (1vs3 protocol) and six participants (3vs3 protocol) because of limitations in the visual field of the scene camera of the eye tracker (e.g., visibility of the foot); other participants, however, had similar behavior, as witnessed during the experiments. 
The amount of available visual information contained within a patch of terrain, but not within the walking path as a whole, predicted gaze allocation. In natural behavior, the brain appears to direct gaze to gain information and resolve task-relevant uncertainties about the environment (Hayhoe, 2017; Sprague & Ballard, 2003; Sprague et al., 2007). Indeed, Domínguez-Zamora et al. (2018) found that having to step to the center of targets led to longer gaze times when the target was blurred (and thus the center location more uncertain). Furthermore, Tong et al. (2017) found a greater number of fixations to task-relevant obstacles to avoid when their random motion increased the uncertainty of their position. Thus we reasoned that one possibility in our study was that participants would direct gaze more frequently (or for longer duration) to individual patches of terrain (or the entire walking path) with greater amounts of visual information to reduce uncertainty associated with that location. We defined the amount of information as either the MIG (or ME) across different color spaces of each terrain (3vs3 protocol) or the number of terrain patches along a path (1vs3 protocol). At the level of individual terrain (3vs3 protocol), we found a significant association between MIGhue and the number of fixations and gaze time on a patch of terrain. Hue is a representative measure of how the human visual system perceives wavelengths in the light signal (Proulx & Parrott, 2008). The brain may initially perceive each object/surface in the environment based on its dominant hue (Neitz & Neitz, 2008; Stoughton & Conway, 2008). By simplifying surfaces to their hue, the brain can differentiate between them more quickly (Bornstein & Korda, 1984), aiding object recognition and estimates of the number of different surfaces in an environment (Valberg, 2001). However, we found no clear relationship between gaze and our metrics of information (with either protocol) when considering the entire path. It is important to acknowledge that there are other ways to quantify visual information besides ME and MIG, which might show a relationship to gaze metrics. We opted to use MIG and ME because they are common techniques for quantifying information in images of natural environments (Proulx & Parrott, 2008). Taken together, our results suggest that other factors beyond information also mediate gaze decisions. 
In broad terms, there are two decisions to make in our walking task: where to look (thus ensuring appropriate information is gathered) and which path to take. One view is that self-efficacy affects path choice and gaze reflects that decision such that a person looks more to the path they intend to take. Indeed, it is well known from past research that people look where they intend to step/walk (e.g., Marigold & Patla, 2007). And our results show that gaze is biased to the chosen path. In this sense, self-efficacy may indirectly affect gaze. Another view is that self-efficacy is one of the inputs into a gaze decision algorithm as well as a path choice algorithm. Under this perspective, gaze behavior should vary with the difference in self-efficacy between paths across different environments. Could an explanation for our results be simply that participants look more at the path they intend to take? Figure 4B shows that people change how much and how long they look at the chosen path (y-axis) as a function of the difference in self-efficacy between two path options (chosen minus non-chosen; x-axis). That is, all data points are from chosen paths. If gaze was simply directed to the path a person intends to take, then we would expect relatively consistent gaze values on the chosen path regardless of relative self-efficacy ratings, leading to near-zero (non-significant) slopes in our regressions. However, this was not the case. 
Self-efficacy in walking on specific terrain may influence how people direct gaze to gain information. Specifically, when there was an equal number of different terrains within each of two path options, we showed that people considered the difference in self-efficacy between paths (the difference between the lowest-rated terrains or between the mean ratings across the terrains). The greater the confidence in their ability to walk across the path they eventually chose relative to the non-chosen path, the more people directed gaze to the chosen path. When one path option had a greater number of terrain types, we demonstrated that people were more likely to visually explore this path when they had low self-efficacy with the alternative. If self-efficacy related to an option is high, there is less need to seek out additional information to inform the decision of which path to take. In this sense, people will exploit (i.e., direct gaze more to) the path option they have greater confidence in their ability to walk across. These results are in line with previous research that operationally defined decision confidence as the certainty of a correct response (Desender et al., 2018; Pescetelli et al., 2021). In these perceptual decision-making paradigms, participants were more likely to view a simpler version of a stimulus again (Desender et al., 2018) or seek advice about the correct answer (Pescetelli et al., 2021) when they were less confident about their initial judgement. Thus confidence in one's ability to perform a motor action (self-efficacy) and confidence in a decision appear to influence behavior in a similar manner and, as a result, may share similar neural substrates. 
How do our results relate to models of gaze and decision making? Like with a choice between two foot-placement targets (Domínguez-Zamora & Marigold, 2021), we found that participants looked more to the chosen walking path. This is consistent with predictions of the attentional drift-diffusion and gaze-weighted linear accumulator models (Krajbich, Armel, & Rangel, 2010; Thomas, Molter, Krajbich, Heekeren, & Mohr, 2019). These models assume that momentary gaze directed to an item introduces a choice bias for that option, and when not looking at an item, its decision value is discounted. However, these models were not designed to address motor behavior, where choices may change as movement unfolds, movements have consequences (which may affect balance), and where a person must decide how to act based on accumulated information from gaze. A more suitable model to compare our results with is one developed by Sprague and colleagues (Sprague & Ballard, 2003; Sprague et al., 2007), which aims to explain gaze allocation during a motor behavior that is divided into subtasks. The model assigns a value to a saccade based on the expected negative consequences of the uncertainty created when a fixation is not made to a specific task-relevant location. Gaze is directed to reduce the uncertainty (or gain information) about aspects of the environment where it serves to maximize a reward associated with an action or task goal. In our study, however, participants did not direct gaze more to the path with greater visual information and thus, would not have reduced task-relevant uncertainty as much as possible. Neither the evidence accumulation models described above, nor this reinforcement learning-based model address the fact that self-efficacy can bias gaze. Our results suggest that decision-making models of gaze should incorporate the idea of self-efficacy, at least when trying to explain motor behavior. 
Overall, our results are consistent with the idea that confidence in one's motor abilities predicts how information about the environment is gathered via gaze, in addition to one's choice of walking path. Thus we suggest that self-efficacy is one of the brain's inputs in the decision-making process that guides gaze and walking behaviors. Decisions related to motor behavior are ubiquitous. Because movement choices have consequences, which may be benign or potentially harmful, self-efficacy is likely to impact one's decision about where or how to move. Thus our work has broad implications for how we gather information and make decisions. Future work should determine additional factors that can modify information-seeking gaze behavior and how they interact with self-efficacy. 
Acknowledgments
The authors thank Ian Bercovitz for advice on statistical analyses and Laura Gimenes for help with terrain illustrations. 
Supported by the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN-2019-04440, D.S.M.). 
Commercial relationships: none. 
Corresponding author: Daniel Marigold. 
Email: daniel_marigold@sfu.ca. 
Address: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada. 
References
Andrienko, Y. A., Brilliantov, N. V., & Kurths, J. (2000). Complexity of two-dimensional patterns. The European Physical Journal B-Condensed Matter and Complex Systems, 15, 539–546, https://doi.org/10.1007/s100510051157.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215, https://doi.org/10.1037//0033-295x.84.2.191.
Boldt, A., Blundell, C., & De Martino, B. (2019). Confidence modulates exploration and exploitation in value-based learning. Neuroscience of Consciousness, 2019, niz004, https://doi.org/10.1093/nc/niz004.
Bornstein, M. H., & Korda, N. O. (1984). Discrimination and matching within and between hues measured by reaction times: Some implications for categorical perception and levels of information processing. Psychological Research, 46(3), 207–222, https://doi.org/10.1007/BF00308884.
Boundy-Singer, Z. M., Ziemba, C. M., & Goris, R. L. T. (2023). Confidence reflects a noisy decision reliability estimate. Nature Human Behaviour, 7, 142–154, https://doi.org/10.1038/s41562-022-01464-x.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.
Caziot, B., & Mamassian, P. (2021). Perceptual confidence judgements reflect self-consistency. Journal of Vision, 21(12):8, 1–15, https://doi.org/10.1167/jov.21.12.8.
Cramer, R. J., Neal, T. M. S., & Brodsky, S. L. (2009). Self-efficacy and confidence: Theoretical distinctions and implications for trial consultation. Consulting Psychology Journal: Practice and Research, 61, 319–334, https://doi.org/10.1037/a0017310.
Daddaoua, N., Lopes, M., & Gottlieb, J. (2016). Intrinsically motivated oculomotor exploration guided by uncertainty reduction and conditioned reinforcement in non-human primates. Scientific Reports, 6, 20202, https://doi.org/10.1038/srep20202.
Desender, K., Boldt, A., & Yeung, N. (2018). Subjective confidence predicts information seeking in decision making. Psychological Science, 29(5), 761–778, https://doi.org/10.1177/0956797617744771.
Desender, K., Murphy, P., Boldt, A., Verguts, T., & Yeung, N. (2019). A postdecisional neural marker of confidence predicts information-seeking in decision-making. Journal of Neuroscience, 39(17), 3309–3319, https://doi.org/10.1523/JNEUROSCI.2620-18.2019.
Domínguez-Zamora, F. J., & Marigold, D. S. (2019). Motor cost affects the decision of when to shift gaze for guiding movement. Journal of Neurophysiology, 122(1), 378–388, https://doi.org/10.1152/jn.00027.2019.
Domínguez-Zamora, F. J., & Marigold, D. S. (2021). Motives driving gaze and walking decisions. Current Biology, 31(8), 1632–1642, https://doi.org/10.1016/j.cub.2021.01.069.
Domínguez-Zamora, F. J., Gunn, S. M., & Marigold, D. S. (2018). Adaptive gaze strategies to reduce environmental uncertainty during a sequential visuomotor behaviour. Scientific Reports, 8, 14112, https://doi.org/10.1038/s41598-018-32504-0.
Hayhoe, M. M. (2017). Vision and action. Annual Review of Vision Science, 3, 389–413, https://doi.org/10.1146/annurev-vision-102016-061437.
Hessels, R. S., van Doorn, A. J., Benjamins, J. S., Holleman, G. A., & Hooge, I. T. C. (2020). Task-related gaze control in human crowd navigation. Attention, Perception, & Psychophysics, 82(5), 2482–2501, https://doi.org/10.3758/s13414-019-01952-9.
Kiani, R., Corthell, L., & Shadlen, M. N. (2014). Choice certainty is informed by both evidence and decision time. Neuron, 84(6), 1329–1342, https://doi.org/10.1016/j.neuron.2014.12.015.
Kiani, R., & Shadlen, M. N. (2009). Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928), 759–764, https://doi.org/10.1126/science.1169405.
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience, 13(10), 1292–1298, https://doi.org/10.1038/nn.2635.
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28(11), 1311–1328, https://doi.org/10.1068/p2935.
Mamassian, P., & de Gardelle, V. (2022). Modeling perceptual confidence and the confidence forced-choice paradigm. Psychological Review, 129, 976–998, https://doi.org/10.1037/rev0000312.
Marigold, D. S., & Patla, A. E. (2007). Gaze fixation patterns for negotiating complex ground terrain. Neuroscience, 144(1), 302–313, https://doi.org/10.1016/j.neuroscience.2006.09.006.
Matthis, J. S., Yates, J. L., & Hayhoe, M. M. (2018). Gaze and the control of foot placement when walking in natural terrain. Current Biology, 28(8), 1224–1233, https://doi.org/10.1016/j.cub.2018.03.008.
Moskowitz, J. B., Berger, S. A., Fooken, J., Castelhano, M. S., Gallivan, J. P., & Flanagan, J. R. (2023). The influence of movement-related costs when searching to act and acting to search. Journal of Neurophysiology, 129(1), 115–130, https://doi.org/10.1152/jn.00305.2022.
Neitz, J., & Neitz, M. (2008). Colour vision: The wonder of hue. Current Biology, 18(16), R700–R702, https://doi.org/10.1016/j.cub.2008.06.062.
Niehorster, D. C., Hessels, R. S., & Benjamins, J. S. (2020). GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behavior Research Methods, 52(3), 1244–1253, https://doi.org/10.3758/s13428-019-01314-1.
Pescetelli, N., Hauperich, A.-K., & Yeung, N. (2021). Confidence, advice seeking and changes of mind in decision making. Cognition, 215, 104810, https://doi.org/10.1016/j.cognition.2021.104810.
Pouget, A., Drugowitsch, J., & Kepecs, A. (2016). Confidence and certainty: Distinct probabilistic quantities for different goals. Nature Neuroscience, 19(3), 366–374, https://doi.org/10.1038/nn.4240.
Proulx, R., & Parrott, L. (2008). Measures of structural complexity in digital images for monitoring the ecological signature of an old-growth forest ecosystem. Ecological Indicators, 8(3), 270–284, https://doi.org/10.1016/j.ecolind.2007.02.005.
Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9(7), 545–556, https://doi.org/10.1038/nrn2357.
Reppert, T. R., Lempert, K. M., Glimcher, P. W., & Shadmehr, R. (2015). Modulation of saccade vigor during value-based decision making. Journal of Neuroscience, 35(46), 15369–15378, https://doi.org/10.1523/JNEUROSCI.2621-15.2015.
Rothkopf, C. A., Ballard, D. H., & Hayhoe, M. M. (2007). Task and context determine where you look. Journal of Vision, 7(14):16, 1–20, https://doi.org/10.1167/7.14.16.
Schulz, L., Fleming, S. M., & Dayan, P. (2023). Metacognitive computations for information search: Confidence in control. Psychological Review, 130(3), 604–639, https://doi.org/10.1037/rev0000401.
Sprague, N., & Ballard, D. (2003). Eye movements for reward maximization. Advances in Neural Information Processing Systems, 16, 1467.
Sprague, N., Ballard, D., & Robinson, A. (2007). Modeling embodied visual behaviors. ACM Transactions on Applied Perception, 4, 11, https://doi.org/10.1145/1265957.1265960.
Stoughton, C. M., & Conway, B. R. (2008). Neural basis for unique hues. Current Biology, 18(16), R698–R699, https://doi.org/10.1016/j.cub.2008.06.018.
Sullivan, B. T., Johnson, L., Rothkopf, C. A., Ballard, D., & Hayhoe, M. (2012). The role of uncertainty and reward on eye movements in a virtual driving task. Journal of Vision, 12(13):19, 1–17, https://doi.org/10.1167/12.13.19.
Thomas, A. W., Molter, F., Krajbich, I., Heekeren, H. R., & Mohr, P. N. C. (2019). Gaze bias differences capture individual choice behaviour. Nature Human Behaviour, 3(6), 625–635, https://doi.org/10.1038/s41562-019-0584-8.
Tong, M. H., Zohar, O., & Hayhoe, M. M. (2017). Control of gaze while walking: Task structure, reward, and uncertainty. Journal of Vision, 17(1):28, 1–19, https://doi.org/10.1167/17.1.28.
Valberg, A. (2001). Unique hues: An old problem for a new generation. Vision Research, 41(13), 1645–1657, https://doi.org/10.1016/s0042-6989(01)00041-4. [PubMed]
Appendix
Figure A1. Terrain layout for each environment for the 3vs3 and 1vs3 protocols.
Table A1. Mean information gain (MIG) for each terrain and environment. Env = environment (1 to 4); p = position along path (p1 = proximal terrain on left path; p2 = middle terrain on left path; p3 = distal terrain on left path; p4 = proximal terrain on right path; p5 = middle terrain on right path; p6 = distal terrain on right path); grey = greyscale; L* = perceptual lightness of CIE L*a*b* color space; a* = red/green scale of CIE L*a*b* color space; b* = blue/yellow scale of CIE L*a*b* color space; hue = hue component of HSV color space; saturation = saturation component of HSV color space; value = value component of HSV color space.
Table A2. Marginal entropy (ME) for each terrain and environment. Env = environment (1 to 4); p = position along path (p1 = proximal terrain on left path; p2 = middle terrain on left path; p3 = distal terrain on left path; p4 = proximal terrain on right path; p5 = middle terrain on right path; p6 = distal terrain on right path); grey = greyscale; L* = perceptual lightness of CIE L*a*b* color space; a* = red/green scale of CIE L*a*b* color space; b* = blue/yellow scale of CIE L*a*b* color space; hue = hue component of HSV color space; saturation = saturation component of HSV color space; value = value component of HSV color space.
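For readers who want a concrete sense of these two image metrics, the sketch below shows one way they could be computed for a single image channel in Python. This is a minimal illustration, not the authors' code: the bin count and block geometry are assumptions, with ME taken as the Shannon entropy of the binned intensity histogram and MIG as the normalized conditional entropy of a pixel given the other three pixels of its 2x2 block (one common reading of the definition used by Proulx & Parrott, 2008).

```python
# Illustrative sketch: marginal entropy (ME) and mean information gain (MIG)
# for one image channel (e.g., the hue channel of a terrain image).
# The 16-bin discretization and 2x2 block geometry are assumptions.
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a vector of counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def bin_channel(channel, nbins=16):
    """Discretize a 2D channel into nbins intensity levels."""
    edges = np.linspace(channel.min(), channel.max(), nbins + 1)[1:-1]
    return np.digitize(channel, edges)

def marginal_entropy(channel, nbins=16):
    """ME: entropy of the binned intensity histogram of the channel."""
    binned = bin_channel(channel, nbins)
    counts = np.bincount(binned.ravel(), minlength=nbins)
    return shannon_entropy(counts)

def mean_information_gain(channel, nbins=16):
    """MIG: H(2x2 block) minus H(block without one pixel), normalized to [0, 1]."""
    binned = bin_channel(channel, nbins)
    a = binned[:-1, :-1].ravel()  # top-left pixel of each 2x2 block
    b = binned[:-1, 1:].ravel()   # top-right
    c = binned[1:, :-1].ravel()   # bottom-left
    d = binned[1:, 1:].ravel()    # bottom-right (the "new" pixel)
    _, counts4 = np.unique(np.stack([a, b, c, d], axis=1), axis=0, return_counts=True)
    _, counts3 = np.unique(np.stack([a, b, c], axis=1), axis=0, return_counts=True)
    h4 = shannon_entropy(counts4)
    h3 = shannon_entropy(counts3)
    return (h4 - h3) / np.log2(nbins)  # 0 = fully predictable, 1 = maximally novel

# Example usage on the hue channel of an RGB image scaled to [0, 1]:
# import matplotlib.colors as mcolors
# hue = mcolors.rgb_to_hsv(rgb_image)[..., 0]
# print(marginal_entropy(hue), mean_information_gain(hue))
```

Intuitively, ME captures how varied a terrain image's intensity values are overall, whereas MIG captures how unpredictable a pixel is given its immediate neighbors, so a smooth gradient can have high ME but low MIG.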
Table A3. Relationship between the number of fixations (response variable) and the information metrics (predictor variables). AIC = Akaike information criterion; ME = marginal entropy; MIG = mean information gain; grey = greyscale; L* = perceptual lightness of CIE L*a*b* color space; a* = red/green scale of CIE L*a*b* color space; b* = blue/yellow scale of CIE L*a*b* color space; hue = hue component of HSV color space; saturation = saturation component of HSV color space; value = value component of HSV color space. For the statistical analysis, data were square root transformed to ensure normality. Only the models with MEhue and MIGhue included participant as a random effect, because only for these models did the variance estimate of this effect contribute >0% of the total estimate.
Table A4. Relationship between normalized gaze time (response variable) and the information metrics (predictor variables). AIC = Akaike information criterion; ME = marginal entropy; MIG = mean information gain; grey = greyscale; L* = perceptual lightness of CIE L*a*b* color space; a* = red/green scale of CIE L*a*b* color space; b* = blue/yellow scale of CIE L*a*b* color space; hue = hue component of HSV color space; saturation = saturation component of HSV color space; value = value component of HSV color space. For the statistical analysis, data were square root transformed to ensure normality.
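The models summarized in Tables A3 and A4 relate a square-root-transformed gaze measure to each information metric, adding participant as a random effect only when its variance estimate was non-zero. The snippet below is a hypothetical sketch of that general model structure using statsmodels; it is not the authors' analysis code, and the data file and column names (gaze_by_terrain.csv, fixations, mig_hue, participant) are invented for illustration.

```python
# Hypothetical sketch of the model structure implied by Tables A3 and A4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gaze_by_terrain.csv")          # assumed long-format data (one row per terrain patch)
df["sqrt_fixations"] = np.sqrt(df["fixations"])  # square root transform to improve normality

# Mixed-effects model with a random intercept per participant
# (as for the MEhue and MIGhue models in Table A3).
mixed = smf.mixedlm("sqrt_fixations ~ mig_hue", data=df,
                    groups=df["participant"]).fit()
print(mixed.summary())

# Fixed-effects-only alternative (as for metrics whose participant
# variance component was essentially zero); AIC values like those in the
# tables can be compared across candidate predictors.
ols = smf.ols("sqrt_fixations ~ mig_hue", data=df).fit()
print(ols.aic)
```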
Table A5. Self-efficacy ratings for hiking-related terrain. We asked participants: “for each type of terrain, please indicate how confident (or certain) you are of walking across it without losing balance, as though you had to step on it in real life outside. Please use a scale of 1 to 10, where 1 is not at all confident and 10 is extremely confident.”
Table A6. Self-efficacy ratings for urban-related terrain. We asked participants: “for each type of terrain, please indicate how confident (or certain) you are of walking across it without losing balance, as though you had to step on it in real life outside. Please use a scale of 1 to 10, where 1 is not at all confident and 10 is extremely confident.”
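In the analyses, the three terrain ratings within a path were summarized by their lowest, highest, or mean value, and the difference between the two paths' summaries served as a predictor of gaze and choice. The following is a hypothetical sketch of that bookkeeping step; the terrain names come from the example environments, but the ratings and function names are made up for illustration.

```python
# Hypothetical sketch (not the authors' code): collapse per-terrain
# self-efficacy ratings (1-10, as in Tables A5 and A6) into a path-level
# score and a between-path difference. Example ratings are invented.
left_path = {"tree roots": 6, "rocks": 5, "mud": 4}
right_path = {"dry dirt": 9, "wooden bridge": 8, "damp dirt": 7}

def path_summary(ratings, how="mean"):
    """Summarize a path by its lowest-rated, highest-rated, or mean-rated terrain."""
    values = list(ratings.values())
    if how == "min":
        return min(values)
    if how == "max":
        return max(values)
    return sum(values) / len(values)

for how in ("min", "max", "mean"):
    diff = path_summary(right_path, how) - path_summary(left_path, how)
    print(f"{how}: right minus left self-efficacy difference = {diff:.2f}")
```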