Journal of Vision
July 2025, Volume 25, Issue 8
Open Access Article
The effects of simulated central and peripheral vision loss on naturalistic search
Author Affiliations
  • Kirsten Veerkamp
    Amsterdam Movement Sciences & Institute for Brain and Behavior Amsterdam, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    [email protected]
  • Daniel Müller
    Amsterdam Movement Sciences & Institute for Brain and Behavior Amsterdam, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    [email protected]
  • Gwyneth A. Pechler
    Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    [email protected]
  • David L. Mann
    Amsterdam Movement Sciences & Institute for Brain and Behavior Amsterdam, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    [email protected]
  • Christian N. L. Olivers
    Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    [email protected]
Journal of Vision July 2025, Vol.25, 6. doi:https://doi.org/10.1167/jov.25.8.6
Abstract

Worldwide, millions of people experience central or peripheral vision loss. The consequences for daily visual functioning are not completely known, in particular because previous studies lacked real-life representativeness. Our aim was to examine the effects of simulated central or peripheral impairment on a range of measures underlying performance in a naturalistic visual search task in a three-dimensional (3D) environment. The task was performed in a 3D virtual reality (VR) supermarket environment while participants were seated in a swivel chair. We used gaze-contingent masks to simulate vision loss. Participants were allocated to one of three conditions in a between-subjects design: full vision, central vision loss (a 6° mask), or peripheral vision loss (a 6° aperture). Each participant performed four search sequences, each consisting of four target products from a memorized shopping list, under varying contrast levels. Besides search time and accuracy, we tracked navigational, oculomotor, head, and torso movements to assess which cognitive and motor components contributed to performance differences. Results showed increased task completion times with both simulated central and peripheral vision loss, but more so with peripheral loss. With central vision loss, navigation was less efficient and it took longer to verify targets; furthermore, participants made more and shorter fixations. With peripheral vision loss, navigation was even less efficient, it took longer to find and verify a target, and saccadic amplitudes were reduced. Low contrast particularly affected search with peripheral vision loss. Memory failure, an indicator of cognitive load, did not differ between conditions. Thus, we demonstrate that simulations of central and peripheral vision loss lead to differential search profiles in a naturalistic 3D environment.

Introduction
Many of the approximately 300 million people in the world with vision impairment (World Report on Vision, 2019) face significant challenges in their daily lives. The majority of those with vision impairment (∼80%) are partially sighted, rather than fully blind, and must make the most of their remaining vision. When part of the visual field is intact, functionality will differ according to the area of the visual field that is spared (Bishop, 1996). The central part of the visual field has the highest resolution, allowing regions of interest to be analyzed in fine detail (see Tuten & Harmening, 2021, for an overview). The central field can be damaged, for instance, as a result of macular degeneration. In contrast, the functionality of the peripheral visual field remains much less clear. It is characterized by reduced acuity, increased crowding, and poorer color perception, but is better suited to rapid scene gist recognition, motion detection, and spatial awareness (Rosenholtz, 2016; Vater, Wolfe, & Rosenholtz, 2022). Yet, we are often unaware, at least at a conscious level, of the limitations of the peripheral field, including its reduced acuity and poorer color perception. The peripheral field can be damaged in conditions such as retinitis pigmentosa and glaucoma. Although central and peripheral vision loss can lead to different consequences, both can affect general daily functioning (Bookwala & Lawson, 2011; Kempen, Ballemans, Ranchor, Van Rens, & Zijlstra, 2012; Lamoureux, Hassell, & Keeffe, 2004), such as when navigating a busy train station or finding a product in a supermarket. However, there is little detailed knowledge about how central and peripheral impairments affect visual behavior in naturalistic settings. Our longer-term aim is to develop a visual task that is more predictive of daily visual functioning than standard clinical tests of acuity and visual field. 
Those tests may fail to pick up on potentially crucial aspects of vision that impact daily functioning: they do not capture the complexity and structural richness of naturalistic visual environments, nor the dynamic interactions between the observer and that environment fostered by people's ability to move their eyes, head, and body. Here, our primary objective was to provide a first test of such a naturalistic environment by assessing whether it reveals meaningful differences between different types of simulated vision loss. 
One significant consequence of vision loss is a reduced ability to find relevant objects. Previous studies have compared the effects of simulated central and peripheral vision loss on visual search using setups in which the stimuli were presented on standard two-dimensional (2D) monitors (Cornelissen, Bruin, & Kooijman, 2005; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022). The general finding is that search accuracy decreases and search time increases with either central or peripheral vision loss, with the latter typically having the stronger effect. Furthermore, eye movements are also affected. Both central and peripheral vision loss generally result in longer fixation durations, which has been attributed to increased task difficulty in the case of central vision loss, and to prolonged saccade planning in the case of peripheral vision loss (Cornelissen et al., 2005). At the same time, central vision loss has been found to lead to larger saccade amplitudes, whereas peripheral vision loss results in smaller saccade amplitudes (Cornelissen et al., 2005; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022). Cornelissen et al. (2005) argued that central vision loss results in less systematic search behavior, with saccades becoming larger with increases in the size of the central scotoma, and consecutive saccades showing larger changes in direction, whereas peripheral vision loss results in more systematic search behavior, with smaller saccades and smaller changes in the direction of consecutive saccades. Additionally, vision loss not only affects processing at the level of the eyes, but may also affect subsequent sensory, cognitive, and motor processes. For example, vision loss has been shown to impede the implicit learning of repeated spatial context in visual arrays, which in turn affects the efficient allocation of attention (Geringswald, Baumgartner, & Pollmann, 2012; Geringswald & Pollmann, 2015). 
Moreover, also for non-impaired observers, the relative reliance on central versus peripheral vision depends on the task and discriminability of the relevant visual features across the visual field (Hulleman & Olivers, 2015; Rosenholtz, Huang, & Ehinger, 2012; Rosenholtz, Huang, Raj, Balas, & Ilie, 2012), and it is likely that this balance changes with visual impairment. 
Although screen-based studies have been useful for understanding behavior in two-dimensional tasks, they may not be fully representative of, and thus have limited generalizability to, visual behavior in tasks involving large-scale three-dimensional environments. For example, there is evidence from eye tracking studies in sports that gaze behavior differs when football goalkeepers attempt to “save” penalties against a real kicker as opposed to a kicker seen on a video screen (Dicks, Button, & Davids, 2010). Similarly, gaze behavior may also differ between real-life and on-screen situations under conditions of vision impairment. A limitation of on-screen studies of visual impairment has been that head and body movements have typically been restricted, whereas these movements may provide effective ways of compensating for vision loss in naturalistic three-dimensional (3D) environments (e.g., Nieboer, Svensen, van Paridon, van Biesen, & Mann, 2025). For example, it has been shown that people with peripheral vision loss who passed a driving test made more head movements than people who did not pass the test (Coeckelbergh, Cornelissen, Brouwer, & Kooijman, 2002). Furthermore, daily life 3D search tasks often involve navigation through, and learning of, an environment, but these aspects have rarely been considered as part of on-screen tasks. Last but not least, on-screen tasks typically cover only a very limited part of the visual field (up to a diameter of 40° horizontally and 30° vertically; Coeckelbergh et al., 2002), and findings may thus underestimate the role of peripheral vision in daily life. 
Virtual reality (VR) offers a way to mitigate many of the constraints of 2D on-screen tasks. Even though differences exist between behavior in real-life environments and in VR, real-life environments allow only limited experimental control over stimuli and conditions, make eye tracking challenging, and make mimicking gaze-contingent vision loss even more so. Instead, by allowing free viewing and a wider field of view, while still allowing considerable control over stimuli and conditions, VR provides a setting in which to evaluate the consequences of vision loss in 3D naturalistic tasks. Recently, David, Beitner, and Võ (2020, 2021) examined the roles of central and peripheral vision in a VR search task. In their study, participants were instructed to find an item in virtual everyday rooms, while either central or peripheral vision was masked. Interestingly, search performance was only slightly affected by central vision loss, leading the authors to conclude that central vision loss has a smaller impact in naturalistic 3D settings than in on-screen situations. In contrast, peripheral vision loss was suggested to have more serious consequences than previously assumed on the basis of on-screen studies, as the field of view needed to perform the task normally in VR is likely to be larger than on-screen (David et al., 2021; Nuthmann, 2013). These results emphasize the need for naturalistic tasks and environments to better understand the consequences of vision loss on everyday life. However, David et al. (2021) focused on eye and head movements in their naturalistic search task, and it remains unclear to what degree simulations of central and peripheral vision loss might impact wider navigational behavior, including the nature of the path traveled and obstacle collisions. 
Moreover, it remains unclear to what degree central and peripheral vision loss might be differentially impacted by low-level factors such as contrast, and higher-level factors such as cognitive load and attentional filtering. 
The aim of our study was to investigate the effects of simulated central and peripheral vision loss on visual-motor search and navigational behavior in a naturalistic visual search task in a 3D environment. Specifically, we sought to replicate and extend the findings of David et al. (2020, 2021) by providing a detailed analysis of the visual-motor behavior underlying impairment-related differences in the performance of a task in which observers were asked to find products in a 3D virtual supermarket. The combination of task and environment not only allowed for the study of gaze behavior, but also of navigation behavior, and of the potential contribution of low-level factors such as contrast, and higher-level factors such as cognitive load and attentional filtering. To this end, participants were asked to find products from a memorized shopping list in our VR supermarket (see Figure 1) while being allocated to one of three conditions: full vision, central vision loss (simulated by a gaze-contingent central mask), or peripheral vision loss (simulated by a mask with a gaze-contingent aperture). On the basis of previous studies, we hypothesized that central and peripheral vision loss would result in worse overall search performance, but would do so through different mechanisms. In the central vision loss condition, not being able to use high-acuity vision should affect target verification when product details are important. For the same reason, we expected longer fixations, as previously found by David et al. (2020). In contrast, peripheral vision loss was expected to weaken performance primarily by producing less efficient navigation toward target-relevant areas in the supermarket due to reduced awareness of one's position in the environment (Vater et al., 2022). Based on the results of an earlier VR study (David et al., 2020), we also expected altered gaze behavior in the form of shorter saccades. 
In addition to the motor parameters, we also assessed the feature selectivity of gaze, to determine when during the search process observers started to look for target-related features (specifically color and shape). Note that it is adaptive to only start looking for the specific product after having navigated to the correct area within the supermarket, and the moment of this transition from navigation to local search and inspection may change with vision loss. Furthermore, in all conditions, participants were asked to remember a shopping list consisting of the target products, and they were allowed to check the list again when unsure about what product to search for next. We intentionally included this re-checking feature as a potential measure of cognitive load, such that, under the high mental strain induced by the vision loss, observers might more often forget what they were looking for. Finally, we manipulated contrast levels between searches to provide a simulation of high and low contrast search. Given that contrast sensitivity reduces with retinal eccentricity (Ashraf, Mantiuk, & Chapiro, 2024; Virsu & Rovamo, 1979), particularly at high spatial frequencies, we expected that central vision would be more sensitive to changes in contrast, and therefore that performance would be worse in the peripheral vision loss condition, where only central vision remained. Conversely, the remaining peripheral vision in the central vision loss condition was expected to be less sensitive to, and therefore less affected by, changes in contrast. 
Figure 1.
 
(A) The VR supermarket environment seen from the starting position. (B) A participant while performing the experiment. (C) The VR supermarket environment while central vision loss is simulated by a central mask. (D) The VR supermarket environment while peripheral vision loss is simulated by a peripheral mask. (E) An overview of the different phases within the search sequence, and the outcome measures obtained from each product search.
Figure 2.
 
Overall task performance across the three vision simulation conditions. Task completion time differed significantly between conditions. Here and in all other plots, the asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Method
Participants
A total of 84 participants took part in exchange for course credits or money. The protocol was approved by the Scientific and Ethics Review Board of the Faculty of Behavioural and Movement Sciences at the Vrije Universiteit Amsterdam. Exclusion criteria were color blindness, flash-induced epilepsy, and known VR-related motion sickness. Each participant had normal or corrected-to-normal vision (10 wore glasses, 18 wore contact lenses). Nine participants were excluded due to either a technical issue (n = 6), a low percentage of valid gaze data (n = 2), or not being familiar with a target product that they were required to search for in our environment (n = 1). Hence, a sample of 75 participants remained (mean age: 21 years, range 18–30; 52 females), of whom 25 were assigned to each of the three simulated vision conditions: full vision, central vision loss, or peripheral vision loss.
VR environment
Virtual supermarket
The virtual supermarket was developed in Unity (version 2023.2.1f1) to resemble a local supermarket from a popular Dutch chain (Figure 1A). We modified the VirtuMart as developed and kindly provided by van der Laan, Papies, Ly, and Smeets (2022). We added more products, resulting in a total of 719 different products. After arranging products in a structured way in the supermarket, small randomized deviations in position and rotation were added to each individual product to increase realism. Shelves were placed following a typical supermarket layout, incorporating areas and aisles with certain themes (e.g., fruit and vegetables, bread, soft drinks, dairy products, cleaning products). The two ends of each aisle contained a variety of products on discount (i.e., special offers) not linked to the aisle's theme, and which were thus placed out of context (though still consistent with the general context of a supermarket). Avatars pivoting around their position were placed more or less randomly around the supermarket, in addition to trolleys and boxes, to increase realism. 
A Pico Neo 3 Pro Eye headset (Pico Interactive, Beijing, China) with integrated Tobii eye tracking (VR4 Platform; Tobii Technology, Stockholm, Sweden) was used to display the virtual environment. The headset has a binocular field of view of 98°. Participants were seated on a swivel chair while wearing the headset and holding one controller (Figure 1B). Locomotion was provided by means of stepwise teleportation to prevent motion sickness. Pressing the trigger button on the back of the controller moved the participant 0.5 m in the horizontal plane in the direction in which the controller was pointing. If the movement caused the participant to collide with an object, the participant was moved 0.5 m backward, and haptic feedback was provided by a controller vibration. (Note that the supermarket was too large for participants to walk around in real space. Furthermore, we planned to use the same task and environment with elderly or otherwise less mobile people.) A product could be selected by pointing toward it while pressing and holding the trigger button on the controller for three seconds. Another controller was strapped to the chest of the participant to register body rotation independently of head rotation. 
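The teleport-and-bounce-back locomotion described above can be sketched as follows. This is a minimal illustration, not the study's Unity implementation: obstacles are modeled here as discs in the horizontal plane, and the participant's collision radius is a hypothetical parameter of this sketch (Unity's own collision system was used in the experiment).

```python
import numpy as np

def teleport_step(pos, controller_dir, obstacles, step=0.5, radius=0.3):
    """One trigger press: move `step` meters in the horizontal plane in
    the controller's pointing direction; on a collision, move the same
    distance backward instead. `obstacles` is a list of (center, r) discs;
    `radius` is an assumed participant collision radius.
    Returns (new_position, collided)."""
    d = np.asarray(controller_dir, dtype=float).copy()
    d[1] = 0.0                          # restrict to the horizontal plane (y-up)
    d /= np.linalg.norm(d)
    new_pos = np.asarray(pos, dtype=float) + step * d
    for center, r in obstacles:
        if np.linalg.norm(new_pos - np.asarray(center, dtype=float)) < r + radius:
            # Collision: bounce 0.5 m backward from the original position.
            return np.asarray(pos, dtype=float) - step * d, True
    return new_pos, False
```

In the experiment, a collision additionally triggered haptic feedback; collisions per meter traveled later served as one of the navigation measures.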
Test protocol
The test protocol consisted of three steps: calibration, familiarization, and testing (which itself consisted of four search sequences). First, the eye tracker was calibrated using a custom calibration routine created with the Tobii Ocumen SDK to make it suitable for testing people with and without vision impairment. The background color was set to black, and the moving calibration stimulus consisted of a large grey dot and a large bright “X” spanning the entire screen. Calibration quality was deemed successful according to the standard Tobii algorithm criteria. A calibration check was performed using five points before each search sequence. In case of failure, or if the headset had been taken off in the meantime, the calibration was repeated. 
Next, participants were familiarized with the VR supermarket using a layout different from that used in the experiment proper and with empty shelves. In this environment, participants were free to explore and to become familiar with the means of locomotion and product selection. To this end, we also gave them the assignment of finding a checkered ball somewhere in the empty supermarket. Furthermore, each participant was shown one shelf containing all the target products that they would be required to search for in the experiment proper, to allow them to become familiar with the products (but not with their locations). 
During the experiment itself, participants performed four search sequences in the virtual supermarket. At the start of each sequence, participants were instructed to search for four target products in the order specified on the shopping list (so 16 searches in total; see Table 1 for an overview). The shopping list was provided through a verbal recording at the beginning of each sequence, triggered by pressing a button on the controller. Participants were encouraged to memorize the shopping list. However, if needed, the shopping list could be recalled by pressing the same button during the search. Each sequence started at the supermarket entrance and ended with participants moving to the check-out register area. In between, participants searched for the designated products. An individual product search ended only when the correct target product was selected. If the wrong product was selected, auditory feedback was provided and the search continued. When an individual product search took longer than five minutes, the participant was guided toward the correct product by the experimenter. Eight different target products were distributed over the first two and the last two sequences; hence, each product needed to be searched for twice. Each sequence contained one product that was on discount (i.e., placed out of context). Also, half of the searches were performed at normal (high) contrast, and half at low contrast, imposed by means of a “smoky” mask (RGBA: [48 48 48 100]) displayed in front of the camera. These conditions alternated systematically across searches. The supermarket layout and product placement remained the same throughout the four sequences. 
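The effect of the “smoky” RGBA [48 48 48 100] mask on image contrast can be approximated by alpha-blending that layer over the rendered frame. This sketch assumes standard "over" compositing; the engine's exact blending may differ.

```python
import numpy as np

def apply_contrast_overlay(rgb, overlay=(48, 48, 48), alpha=100 / 255):
    """Alpha-blend a uniform 'smoky' layer (RGBA [48 48 48 100]) over an
    image (H x W x 3 array, values 0-255) to approximate the study's
    low-contrast condition. Standard over-compositing is assumed."""
    rgb = np.asarray(rgb, dtype=float)
    out = (1.0 - alpha) * rgb + alpha * np.asarray(overlay, dtype=float)
    return out.astype(np.uint8)
```

Under this blend, the luminance range shrinks from 0-255 to roughly 18-173, i.e., contrast is scaled by about 0.61 and shifted toward the grey of the overlay.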
Table 1.
 
An overview of all four search sequences and their set-up.
Vision loss simulations
Vision loss was simulated for each eye separately using a gaze-contingent mask. Central vision loss was modeled as a circular mask of 6° radius in front of each eye (Figure 1C). Peripheral vision loss was modeled as a mask with a circular aperture of 6° radius (Figure 1D). For both simulations, the edge was softened with a 1.5° fade. The 6° size of the mask/aperture was consistent with what has been used in previous VR studies as a suitable trade-off between central and peripheral vision loss (David et al., 2020; David et al., 2021). Moreover, around 50% of the visual cortex is devoted to the central 6° of the visual field, whereas the other 50% covers the remaining peripheral part of the visual field (Horton & Hoyt, 1991), resulting in an equal distribution of cortical coverage between our two vision loss conditions. 
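The radial opacity profile of the two masks, including the 1.5° softened edge, can be sketched as follows. This is a minimal illustration under stated assumptions: the fade is taken to be linear and to sit just inside the 6° boundary for the central mask and just outside it for the peripheral mask; the exact shader used in the experiment may place and shape it differently.

```python
import numpy as np

def mask_opacity(ecc_deg, mode, radius=6.0, fade=1.5):
    """Opacity (0 = transparent, 1 = fully opaque) of the vision-loss mask
    at a given retinal eccentricity in degrees.
    'central': opaque inside the 6-degree radius (simulated scotoma),
    fading linearly to transparent over the last `fade` degrees.
    'peripheral': transparent inside the 6-degree aperture, fading to
    opaque over `fade` degrees beyond it."""
    ecc = np.asarray(ecc_deg, dtype=float)
    if mode == "central":
        return np.clip((radius - ecc) / fade, 0.0, 1.0)
    elif mode == "peripheral":
        return np.clip((ecc - radius) / fade, 0.0, 1.0)
    raise ValueError(f"unknown mode: {mode}")
```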
A limitation of gaze-contingent simulations is a potential delay between a change in gaze and the corresponding gaze-contingent update. Although we did everything possible to keep the delay short, we had no means of estimating its exact value. Also, because of some complexities in the VR computations that we later simplified, the frame rate was 23 Hz on average for the first 35 participants, whereas it was 67 Hz for the remaining 40 participants. This probably also affected the gaze-contingent delay. Importantly, participants with the lower frame rate were evenly distributed across conditions (full vision: n = 12; central mask: n = 12; peripheral mask: n = 11). 
Data analyses and statistics
Data were measured at the same frequency as the frame rate. Participants who received the lower frame rate were evenly distributed across all three conditions, and we conducted post-hoc checks that confirmed that the conclusions were not affected by frame rate (Supplementary material A). 
Overall task performance
Overall task performance was assessed through task completion time, frequency of wrong product selection, and frequency of failed searches. Task completion time for each sequence was defined as the time elapsed from the moment of the first teleport (i.e., the initial instance in which a participant presses the trigger button to change their virtual position) after memorizing the shopping list to the moment the participant reached the register after having found and selected all four search targets. Frequency of wrong product selection was defined as the number of product searches in which the hand was more than 2 m away from the center of the target product when first holding the selection button. Initially, a threshold of 50 cm was set for the experiment to accept a selection. However, sometimes participants had clearly found the target product and tried to select it from further away. When that happened, they needed to get closer to the product and select it again while being close enough to terminate the search. We did not want outcomes to be affected by this handling issue, and therefore, in the analysis, we decided to terminate searches when a target selection was performed within 2 m of the center of the target, and to count these as correct. Ninety-five percent of selection responses were within 2 m. The remaining responses (made at larger distances) were labeled as incorrect. Product searches with incorrect product selection were removed from further analysis, because the dependent variables were then influenced by the incorrect product search rather than by condition. Frequency of failed searches was counted as the number of product searches that took longer than five minutes and so required the assistance of the experimenter. We then analyzed several aspects of the search process across the different search phases (Figure 1) to assess their contribution to overall task performance. 
Navigation
Navigation efficiency was assessed throughout the whole search sequence and was measured by quantifying the movement speed, traveled path efficiency, and frequency of obstacle collisions. Movement speed was calculated by dividing the Euclidean distance traveled by the body within the virtual world during each search by the search completion time (i.e., the duration of the exploration and homing in phase for the first search of each sequence, the duration of initiation, exploration and homing in phase for the second through fourth searches, and duration of the complete search back to the register). Travelled path efficiency was calculated by dividing the length of the traveled path by the shortest path possible. The shortest path was obtained by exporting the supermarket's navigation mesh and target product center positions from Unity, and using an A* algorithm in MATLAB. This was done for each product search, and separately for going back to the register. The frequency of obstacle collisions was defined as the number of collisions with trolleys or boxes in the supermarket, divided by the traveled distance (i.e., collisions per meter). 
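The traveled path efficiency measure above can be illustrated with the sketch below, which computes a shortest path with A* and divides the traveled path length by it. This is a simplified Python stand-in: the study exported the supermarket's navigation mesh from Unity and ran A* in MATLAB, whereas here a hypothetical 8-connected occupancy grid with Euclidean step costs takes the navigation mesh's place.

```python
import heapq
import numpy as np

def astar_path_length(grid, start, goal):
    """Shortest path length over a 2D occupancy grid (True = walkable),
    with 8-connected moves and Euclidean step costs. Returns np.inf if
    no path exists. A grid-based stand-in for navigation-mesh A*."""
    rows, cols = grid.shape
    steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    h = lambda p: np.hypot(p[0] - goal[0], p[1] - goal[1])  # admissible heuristic
    g = {start: 0.0}
    frontier = [(h(start), start)]
    visited = set()
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            return g[cur]
        if cur in visited:
            continue
        visited.add(cur)
        for dr, dc in steps:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if not grid[nxt]:
                continue  # blocked cell
            cost = g[cur] + np.hypot(dr, dc)
            if cost < g.get(nxt, np.inf):
                g[nxt] = cost
                heapq.heappush(frontier, (cost + h(nxt), nxt))
    return np.inf

def path_efficiency(positions, shortest_len):
    """Traveled path length divided by the shortest possible length."""
    pos = np.asarray(positions, dtype=float)
    traveled = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    return traveled / shortest_len
```

A path efficiency of 1 indicates the participant took the shortest possible route; larger values indicate detours.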
Initiation
The search initiation phase spanned the period from the moment the shopping list was presented, or the moment the previous product was found, until the first teleport for the next search was made. Here we measured the time to start and number of shopping list retrievals. Both outcome measures were distinguished for the first search of each sequence, in which the shopping list was memorized, and for all three next searches within each sequence (second to fourth product searches). 
Exploration
The exploration phase spanned the period from the first teleport until the first fixation on the target product. Here we measured time to target fixation, target detection distance, number of shopping list retrievals, gaze behavior (i.e., fixation rate, fixation duration, and saccade amplitude), other orienting behavior (i.e., body rotation, head rotation, eye rotation, and gaze rotation), and feature selectivity (i.e., shape similarity, color similarity). The first target fixation was defined as the moment the gaze vector first intersected the target product for longer than 80 ms. The time to target fixation was calculated from the first teleport up to the first target fixation. The target detection distance was calculated as the distance of the body (as registered by the chest-strapped controller) to the target product at first fixation. The number of shopping list retrievals was also quantified in the exploration phase. To analyze gaze behavior, eye and gaze data (i.e., head + eye direction in the world) were converted to degrees and differentiated to obtain velocity over time, and this signal was smoothed with a Savitzky-Golay filter. Fixations were identified as sequences of samples with less than 3° position change or velocity below 30°/s, lasting longer than 80 ms. All other samples were classified as saccades. Fixation rate (i.e., number of fixations per second) and fixation duration were obtained from the gaze data; saccade amplitude was obtained from the eye data. Other orienting behavior was quantified by converting the body, head, eye, and gaze direction unit vectors into angular changes over time, summing these rotation time series over time, and dividing the summed rotation by the search completion time. This approach resulted in body, head, eye, and gaze rotation, each normalized for time. 
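The fixation classification above can be sketched as follows. This is a simplified Python rendition of the MATLAB analysis: only the velocity criterion (below 30°/s for at least 80 ms) is implemented, not the additional 3° dispersion criterion, and a moving average stands in for the Savitzky-Golay smoothing; the thresholds follow the text, the window size is an assumption.

```python
import numpy as np

def detect_fixations(gaze_deg, fs, vel_thresh=30.0, min_dur=0.080):
    """Classify gaze samples (N x 2 array, degrees) into fixations using
    a velocity criterion: smoothed speed below vel_thresh (deg/s)
    sustained for at least min_dur seconds. Returns a list of
    (onset, offset) sample indices. Simplified sketch: the study also
    used a 3-degree dispersion criterion and Savitzky-Golay smoothing."""
    vel = np.gradient(np.asarray(gaze_deg, dtype=float), axis=0) * fs  # deg/s
    speed = np.linalg.norm(vel, axis=1)
    win = 5                                    # ~50-75 ms at 67-100 Hz (assumed)
    speed = np.convolve(speed, np.ones(win) / win, mode="same")
    slow = speed < vel_thresh
    fixations, start = [], None
    # A trailing False sentinel closes a run that lasts to the end.
    for i, s in enumerate(np.append(slow, False)):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if (i - start) / fs >= min_dur:
                fixations.append((start, i))
            start = None
    return fixations
```

Fixation rate and mean fixation duration then follow directly from the returned intervals; samples outside them are treated as saccades.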
To assess feature selectivity, which indicates how selectively participants were looking for the target product's features, we quantified for each time point the shape and color similarity between the product the participant was fixating on and the target product. Time points at which the participant was not looking at a product were excluded. For shape similarity, the 3D mesh of each product was extracted, and its vertices were scaled along the largest dimension such that each mesh had a similar size, fitting within a unit cube. The scaled mesh was then converted to a point cloud capturing the product's surface geometry using Matlab's pointCloud function. The scaled point clouds of the fixated object and the target were registered to each other using an Iterative Closest Point algorithm, which aligns point clouds by iteratively minimizing the distance between pairs of closest points from the two clouds. To ensure robustness, registration was performed in both directions (product-to-target and target-to-product), and the registration with the smaller error was used for further analysis. In that analysis, the Euclidean distance between each point in one point cloud and its nearest neighbor in the other was calculated, and the root mean square error across all of these point-pair distances provided the measure of shape similarity between the surfaces of the fixated product and the target product. For color similarity, the pixel values (RGB) of each product's texture image were converted to CIELAB space. A k-means cluster analysis was then performed to extract the most prominent colors, defined by the number of clusters capturing more than 50% of the variance in the color distribution (resulting in two or three clusters for each product). 
The color clusters were then compared between the product fixated on and the target product by calculating the pairwise Euclidean distances in CIELAB space between the cluster centroids of each product. For each cluster, the closest color match was identified by finding the minimum distance in the pairwise comparisons. This process was performed bidirectionally (product-to-target and target-to-product). The color similarity between the two products was quantified as the average of the mean minimum distances in both directions, providing a measure of how closely the prominent colors in the product fixated on matched those in the target product. For both shape and color similarity, the average time point of the first fixation on the target product (i.e., end of exploration phase and start of homing in phase) was also obtained. 
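The shape and color comparisons can be sketched as follows. This Python sketch is illustrative and not the authors' Matlab pipeline: the ICP registration step is skipped (clouds are only normalized to a unit cube, keeping the smaller of the two directional errors to mirror the bidirectional logic), the k-means step is assumed to have already produced the centroid arrays, and all function names are hypothetical.

```python
import numpy as np

def _nn_dists(a, b):
    """Distance from each point (row) in a to its nearest neighbor in b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise
    return d.min(axis=1)

def shape_similarity(cloud_a, cloud_b):
    """RMSE between two point clouds after uniform scaling to a unit cube;
    lower = more similar. ICP alignment is omitted here; as in the paper,
    both directions are evaluated and the smaller error is kept."""
    def unit_scale(c):
        c = np.asarray(c, float)
        c = c - c.min(axis=0)        # move to the origin
        return c / c.max()           # scale by the largest dimension
    a, b = unit_scale(cloud_a), unit_scale(cloud_b)
    rmse = lambda d: np.sqrt(np.mean(d ** 2))
    return min(rmse(_nn_dists(a, b)), rmse(_nn_dists(b, a)))

def color_similarity(cent_a, cent_b):
    """Average of the two directional mean minimum distances between
    color-cluster centroids (rows, e.g., in CIELAB); lower = more similar."""
    return 0.5 * (_nn_dists(cent_a, cent_b).mean()
                  + _nn_dists(cent_b, cent_a).mean())
```

Both measures return 0 for identical inputs and grow as the fixated product diverges from the target, matching the interpretation used in Figure 8.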
Homing in
The homing in phase spanned from the first target fixation until the target product was selected using a correct button press. The time after first target fixation, gaze behavior, other orienting behavior, and feature selectivity were again calculated as they were in the exploration phase. 
Each outcome variable was also evaluated for contrast effects by comparing the high and low contrast searches: the outcome for low contrast searches was divided by the outcome for high contrast searches to obtain a ratio. 
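The ratio computation is simple arithmetic; a hypothetical Python helper (names and array shapes are illustrative, not the authors' code):

```python
import numpy as np

def contrast_ratios(low, high):
    """Per-participant ratio of an outcome in low contrast searches to the
    same outcome in high contrast searches. Ratios > 1 mean the outcome
    increased (e.g., took longer) under low contrast."""
    return np.asarray(low, float) / np.asarray(high, float)
```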
Outcome measures were averaged across all searches and all sequences, except for the overall task performance measures, which were summed across searches and sequences. Outliers were identified at the participant level as values less than the first quartile minus 1.5 times the interquartile range, or greater than the third quartile plus 1.5 times the interquartile range. These outliers were excluded from the statistical analyses, but are displayed and indicated in the figures for transparency. Outcomes were compared between the three vision conditions using ANOVA (Matlab 2024a; MathWorks, Inc., Natick, MA, USA), followed, when significant, by pairwise comparisons using two-tailed t-tests. Feature selectivity time series were compared by statistical parametric mapping (SPM; Pataky, 2012). Contrast effect ratios were also compared between the three conditions using ANOVA, followed by pairwise comparisons when significant. Bonferroni corrections were applied at the outcome level to correct for multiple comparisons. Alpha was set at 0.05. 
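The outlier rule above can be sketched in a few lines of Python (illustrative, not the authors' Matlab code; the function name is hypothetical):

```python
import numpy as np

def iqr_outlier_mask(values):
    """Boolean mask flagging values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR],
    the participant-level outlier rule described above."""
    v = np.asarray(values, float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return (v < q1 - 1.5 * iqr) | (v > q3 + 1.5 * iqr)
```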
Results
Table 2 includes the results for all outcome variables. We first report overall task performance across all four search sequences for each vision condition. We then report the factors potentially contributing to differences in overall task performance: navigation during the task, cognitive load, gaze behavior, other orienting behavior, and feature selectivity across the different search phases (i.e., the initiation, exploration, and homing in phases). Finally, we report the effects of the contrast manipulation on these measures. 
Table 2.
 
An overview of the results for all outcome variables, averaged across participants per condition. Outliers were excluded from the reported means and standard deviations.
Overall task performance
Overall task completion time was longer for the simulated vision loss conditions than for full vision (F(2,48) = 49.83, mean squared error (MSE) = 14034.1, p < 0.001, Table 2, Figure 2). Specifically, task completion time was significantly longer with a central mask (mean ± standard deviation = 646.1 ± 100.1 sec) than with full vision (527.1 ± 75.4 sec, t(48) = −2.87, p < 0.01), and significantly longer with a peripheral mask (913.1 ± 160.3) than with full vision (t(48) = −9.77, p < 0.001) and with a central mask (t(48) = −6.45, p < 0.001). 
Participants very rarely selected the wrong product (2.6% of product searches), and this frequency did not differ significantly between conditions (F(2,71) = 1.0, MSE = 0.4, p = 0.36, Table 2). There was only one failed search during the entire experiment, a search that was excluded from analysis because the participant had searched for the wrong product (statistical tests could not be performed; Table 2). 
Navigation
Navigation was less efficient in the simulated vision loss conditions, particularly with a peripheral mask. Movement speed differed significantly between conditions (F(2,68) = 33.9, MSE = 0.007, p < 0.001, Table 2, Figure 3A). With a peripheral mask, participants moved significantly slower (0.6 ± 0.1 m/sec) than in the full vision condition (0.8 ± 0.1 m/sec, t(68) = −7.5, p < 0.001) and in the central mask condition (0.8 ± 0.1 m/sec, t(68) = −6.8, p < 0.001). Movement speed in the central mask condition did not differ significantly from that in the full vision condition (t(68) = 0.8, p = 1.0). 
Figure 3.
 
Navigation performance, with (A) movement speed, (B) traveled path efficiency for the product searches, (C) traveled path efficiency back to the register, and the (D) frequency of obstacle collisions. Each variable differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
The traveled path efficiency for product searches also differed between conditions (F(2,69) = 12.7, MSE = 0.2, p < 0.001, Table 2, Figure 3B). The routes participants took were less efficient with a peripheral mask (2.2 ± 0.5) than with full vision (1.6 ± 0.2, t(69) = −5.0, p < 0.001). Efficiency was also poorer with a central mask (1.9 ± 0.4) than with full vision (t(69) = −2.83, p = 0.02). Path efficiency did not differ between the central mask and peripheral mask conditions (t(69) = −2.29, p = 0.08). 
Traveled paths back to the register also differed significantly between conditions (F(2,68) = 13.3, MSE = 0.01, p < 0.001, Table 2, Figure 3C). With a peripheral mask, participants again took a less efficient route back to the register (1.26 ± 0.19) than in both other conditions (full vision: 1.1 ± 0.07, t(68) = −5.0, p < 0.001; central mask: 1.1 ± 0.06, t(68) = −3.6, p < 0.01). In the central mask condition, path integration back to the register was not significantly worse than in the full vision condition (t(68) = −1.4, p = 0.5). 
Finally, the frequency of obstacle collisions per meter differed significantly between conditions (F(2,72) = 14.6, MSE = 0.00003, p < 0.001, Table 2, Figure 3D). With a peripheral mask, participants collided with more obstacles (0.01 ± 0.01 collisions per meter) than with full vision (0.00 ± 0.00, t(72) = −4.8, p < 0.001) and with a central mask (0.00 ± 0.00, t(72) = −4.6, p < 0.001), with no difference between the latter two conditions (t(72) = −0.25, p = 1.0). 
Initiation phase
At the start of a sequence, participants heard and memorized the shopping list and then started the first search. The time to start the first search did not differ across conditions (F(2,70) = 2.1, MSE = 57.7, p = 0.13, Table 2, Figure 4A). For the subsequent searches, a new search started at the moment the participant moved after successfully selecting the preceding target. This time did differ across conditions (F(2,71) = 42.4, MSE = 1.6, p < 0.001, Table 2, Figure 4B). The time to initiate subsequent searches was significantly longer with a peripheral mask (6.2 ± 1.8 sec) than with full vision (3.3 ± 0.9, t(71) = −7.9, p < 0.001) and with a central mask (3.3 ± 0.9, t(71) = −8.06, p < 0.001). There was no difference between the full vision and central mask conditions (t(71) = 0.1, p = 1.0). This increase was not caused by more frequent shopping list retrievals, as retrieval frequency did not differ significantly between conditions for either the first search (F(2,69) = 1.1, MSE = 0.4, p = 0.33, Table 2) or the subsequent searches (no statistical tests could be performed; retrievals occurred so rarely that all instances were considered outliers, Table 2). 
Figure 4.
 
Initiation phase. (A) The time to start the first search of each sequence, in which the shopping list was memorized, did not differ between conditions; however, (B) the time to start the second, third, and fourth searches did differ significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Exploration phase
The exploration phase spanned the period from the first teleport up to the first fixation on the target product. Here, both the timing and distance of the first fixation on the target product differed between conditions (F(2,71) = 44.5, MSE = 43.6, p < 0.001; F(2,71) = 11.5, MSE = 0.6, p < 0.001, respectively, Table 2, Figures 5A and 5B). The time to first fixation on the target product was significantly longer with a peripheral mask (37.1 ± 8.2 sec) than with full vision (20.1 ± 4.0, t(71) = −9.0, p < 0.001) and a central mask (24.3 ± 6.9, t(71) = −6.9, p < 0.001), and did not differ significantly between the central mask and full vision conditions (t(71) = −2.2, p = 0.10). Furthermore, the distance from which the target product was first fixated was shorter with a central mask (3.9 ± 0.8 m) than with full vision (4.6 ± 0.7, t(71) = 3.0, p = 0.010) and a peripheral mask (5.0 ± 0.8, t(71) = −4.7, p < 0.001), and did not differ significantly between the full vision and peripheral mask conditions (t(71) = −1.7, p = 0.26). The number of shopping list retrievals did not differ between conditions during this phase (p = 0.44, Table 2), largely because retrievals were so rare. 
Figure 5.
 
Exploration phase, with (A) time to target fixation and (B) the target detection distance, which both differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Gaze behavior during the exploration phase differed between the vision conditions, with the central and peripheral masks each producing a distinct gaze profile: a central mask resulted in more but briefer fixations, whereas a peripheral mask resulted in smaller saccades. In detail, the fixation rate differed between conditions (F(2,70) = 20.7, MSE = 0.08, p < 0.001, Table 2, Figure 6A): it was significantly higher with a central mask (2.5 ± 0.3 fixations per second) than with full vision (2.1 ± 0.3, t(70) = −6.3, p < 0.001) and a peripheral mask (2.1 ± 0.3, t(70) = 5.9, p < 0.001), with no difference between the peripheral mask and full vision conditions (t(70) = 0.55, p = 1.0). This coincided with differences in fixation duration (F(2,70) = 17.8, MSE = 0.009, p < 0.001, Table 2, Figure 6B): fixations were significantly shorter with a central mask (0.3 ± 0.1 sec) than with full vision (0.5 ± 0.1, t(70) = 4.5, p < 0.001) and a peripheral mask (0.5 ± 0.1, t(70) = −5.7, p < 0.001), with no difference between the peripheral mask and full vision conditions (t(70) = −1.1, p = 1.0). Conversely, saccade amplitude differed between conditions (F(2,70) = 63.8, MSE = 5.9, p < 0.001, Table 2, Figure 6C): saccades were significantly smaller with a peripheral mask (7.8° ± 1.5°) than with full vision (14.1° ± 2.6°, t(70) = 9.0, p < 0.001) and a central mask (15.3° ± 2.9°, t(70) = 10.5, p < 0.001), with no difference between the central mask and full vision conditions (t(70) = −1.6, p = 0.34). 
Figure 6.
 
Exploration phase, with (A) fixation rate, (B) fixation duration, and (C) saccade amplitude, each differing significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Regarding other orienting behavior during the exploration phase, the time-normalized rotations differed across conditions for the eyes and gaze (F(2,72) = 21.8, MSE = 109.5, p < 0.001; F(2,72) = 19.8, MSE = 115.7, p < 0.001, respectively, Table 2, Figures 7C and 7D), but not for the body or head (F(2,71) = 0.8, MSE = 13.3, p = 0.46; F(2,72) = 0.6, MSE = 17.9, p = 0.54, respectively, Table 2). The eyes rotated more with a central mask (62.6°/sec ± 11.8°/sec) than with full vision (53.6°/sec ± 10.0°/sec, t(72) = −3.0, p < 0.01), and less with a peripheral mask (43.1°/sec ± 9.4°/sec) than with full vision (t(72) = 3.6, p < 0.01) and with a central mask (t(72) = 6.6, p < 0.001), consistent with the smaller saccades observed in the peripheral mask condition in the previous analysis. Correspondingly, gaze rotation was lower with a peripheral mask (42.6°/sec ± 9.4°/sec) than with full vision (52.5°/sec ± 10.6°/sec, t(72) = 6.3, p < 0.01), yet higher with a central mask (62.6°/sec ± 11.8°/sec) than for both the full vision (t(72) = 3.1, p < 0.01) and peripheral mask conditions (t(72) = 3.2, p < 0.001). 
Figure 7.
 
Exploration phase, with (A) body rotation and (B) head rotation, which did not differ between conditions, and (C) eye rotation and (D) gaze (head + eye) rotation, which did differ significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Next, we assessed feature selectivity across the exploration phase. Feature selectivity was computed as the similarity in shape and color between the target product and the products fixated on during the search: for color similarity, the products' most prominent colors were compared, and for shape similarity, the products' meshes were compared. For both outcomes, displayed in Figure 8, a smaller value indicates fixation on a product more similar to the target product. Similarity did not differ significantly between conditions at any moment across the exploration phase, for either color (Figure 8A) or shape (Figure 8B). 
Figure 8.
 
Feature selectivity for (A) color and (B) shape. Products fixated on during the product search were compared to the target product, with lower values indicating higher similarity to the target product. Time was normalized from the first teleport until the first fixation on the target (i.e., exploration phase), and then from the first fixation on the target to the target product selection (i.e., homing in phase). The dashed vertical line indicates the average time point of the first fixation on the target product (i.e., end of exploration phase and start of homing in phase).
Homing in phase
Next, we turned to the homing in phase, the period between the first fixation on the target product and the selection of that product with the controller. The duration of this phase differed between conditions (F(2,70) = 33.8, MSE = 6.3, p < 0.001, Table 2, Figure 9). The total time after the first target fixation was significantly longer with a central mask (9.8 ± 2.5 sec) than with full vision (8.0 ± 1.2, t(70) = −2.5, p = 0.04), and longer with a peripheral mask (13.8 ± 3.3) than with full vision (t(70) = −8.0, p < 0.001) and a central mask (t(70) = −5.6, p < 0.001). 
Figure 9.
 
Homing in phase duration, as measured by the time after the first target fixation, differed between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Gaze behavior also differed between the three conditions in the homing in phase, with the central and peripheral masks again each producing a distinct gaze profile. With a central mask, in addition to more and briefer fixations, saccades were also larger. With a peripheral mask, besides the smaller saccades, there were fewer fixations, but they lasted longer. Fixation rate differed between conditions (F(2,70) = 59.1, MSE = 0.08, p < 0.001, Table 2, Figure 10A): it was significantly higher with a central mask (2.2 ± 0.2 fixations/sec) than with full vision (1.8 ± 0.3, t(70) = −5.9, p < 0.001) and a peripheral mask (1.4 ± 0.3, t(70) = 10.9, p < 0.001), and significantly lower with a peripheral mask than with full vision (t(70) = 5.1, p < 0.001). Correspondingly, fixation duration differed across conditions (F(2,72) = 35.3, MSE = 0.05, p < 0.001, Table 2, Figure 10B): fixations were significantly shorter with a central mask (0.4 ± 0.1 sec) than with full vision (0.6 ± 0.2, t(72) = 3.3, p < 0.01) and a peripheral mask (0.9 ± 0.3, t(72) = −8.3, p < 0.001), and significantly longer with a peripheral mask than with full vision (t(72) = −5.1, p < 0.001). Saccade amplitudes also differed between conditions (F(2,70) = 18.9, MSE = 3.0, p < 0.001, Table 2, Figure 10C): saccades were larger with a central mask (11.0° ± 1.7°) than with full vision (9.6° ± 1.7°, t(70) = −2.8, p = 0.02) and a peripheral mask (7.9° ± 1.7°, t(70) = 6.1, p < 0.001), and smaller with a peripheral mask than with full vision (t(70) = 3.4, p < 0.01). 
Figure 10.
 
Homing in phase, with (A) fixation rate, (B) fixation duration, and (C) saccade amplitude, each differing significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Other orienting behavior in the homing in phase differed not only for the eyes (F(2,70) = 18.3, MSE = 59.6, p < 0.001, Table 2, Figure 11C) and gaze (F(2,69) = 16.4, MSE = 60.8, p < 0.001, Table 2, Figure 11D), but also for the head (F(2,70) = 10.6, MSE = 8.7, p < 0.001, Table 2, Figure 11B). The rate of body rotation did not differ between the three conditions (F(2,72) = 0.60, MSE = 7.4, p = 0.56, Table 2, Figure 11A). The head rotated more with a central mask (14.3°/sec ± 2.9°/sec), as well as with a peripheral mask (16.1°/sec ± 3.6°/sec), than with full vision (12.2°/sec ± 2.1°/sec, t(70) = −2.5, p = 0.049; t(70) = −4.6, p < 0.001, respectively). Head rotation did not differ significantly between the central and peripheral mask conditions (t(70) = −2.2, p = 0.10). The eyes rotated more with a central mask (47.8°/sec ± 9.6°/sec) than with full vision (39.8°/sec ± 6.0°/sec, t(70) = −3.6, p < 0.01) and with a peripheral mask (34.5°/sec ± 7.2°/sec, t(70) = 6.0, p < 0.001), consistent with the use of larger saccades. With a peripheral mask, eye rotation was not significantly less than with full vision, although there was a trend in that direction (t(70) = 2.4, p = 0.054), consistent with the use of smaller saccades. Correspondingly, gaze rotation was higher with a central mask (47.3°/sec ± 9.9°/sec) than for both full vision (40.2°/sec ± 5.6°/sec, t(69) = −3.1, p < 0.01) and a peripheral mask (34.6°/sec ± 7.2°/sec, t(69) = 5.7, p < 0.001), and significantly lower with a peripheral mask than with full vision (t(69) = 2.5, p = 0.04). 
Figure 11.
 
Homing in phase, with (A) body rotation, (B) head rotation, (C) eye rotation, and (D) gaze (head + eye) rotation each normalized by time. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentile, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant, black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
In the homing in phase, simulated vision loss did not affect feature selectivity. The products fixated on did not differ between conditions in their similarity to the target product's color (Figure 8A) or shape (Figure 8B). For shape similarity, SPM did detect two clusters in which there was a significant difference, but each lasted less than 1% of the total time and was therefore considered negligible. 
Contrast effects
Finally, the outcome measures were compared between searches that were performed in the low and high contrast conditions by calculating the ratio of each variable in the low contrast and high contrast conditions. Regarding overall task performance, the ratio for task completion time differed between conditions (F(2,67) = 6.8, MSE = 0.08, p < 0.01, Figure  12A). The ratio was significantly higher with a peripheral mask (1.2 ± 0.3) when compared to full vision (1.0 ± 0.3, t(78) = −2.9, p = 0.02) and to a central mask (0.9 ± 0.2, t(67) = −3.4, p < 0.01), indicating that low contrast impacted task completion time more for a peripheral mask than it did for the other two vision conditions. The frequency of wrong product selection and frequency of failed searches could not be compared between conditions (due to division by zero). 
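The ratio computation described above can be sketched as follows: one per-participant ratio of each outcome under low versus high contrast, with ratios that would require division by zero (as for the failed-search counts) left undefined rather than computed. The variable names and values are illustrative, not the authors' code or data.

```python
import math

def contrast_ratio(low, high):
    """Ratio of an outcome under low vs. high contrast; NaN when undefined."""
    if high == 0:
        return math.nan  # e.g., zero wrong selections under high contrast
    return low / high

# Hypothetical task-completion times (seconds) as (low, high) contrast pairs.
completion = [(60.0, 50.0), (45.0, 45.0), (80.0, 64.0)]
ratios = [contrast_ratio(lo, hi) for lo, hi in completion]

# Average over the defined ratios only; a ratio > 1 means low contrast
# slowed that participant down relative to high contrast.
valid = [r for r in ratios if not math.isnan(r)]
mean_ratio = sum(valid) / len(valid)
```

A mean ratio near 1 indicates no net contrast cost, which is the pattern reported for full vision, whereas the peripheral-mask condition showed ratios above 1.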
Figure 12.
 
Contrast effects, where different outcomes were compared by calculating ratios between performance under low contrast and under high contrast, for (A) task completion time; for the navigation outcomes: (B) the movement speed and (C) the traveled path efficiency for product searches; for the initiation phase: (D) the time to start the second to fourth search; for the exploration phase: (E) the time to target fixation, and (F) saccade amplitude; and for the homing in phase: (G) the time after the first target fixation. The asterisk indicates a significant omnibus effect (see Table 2 for actual p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered to be outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
From here on we only report outcome measures that revealed a significant contrast effect. These are displayed in Figure  12. Supplementary material B shows all contrast-related outcomes and their ratios. 
Regarding navigation, the ratio for the movement speed and the traveled path efficiency for product searches differed between conditions (F(2,69) = 4.3, MSE = 0.005, p = 0.02, Figure  12B; F(2,68) = 7.0, MSE = 0.1, p < 0.01, Figure  12C, respectively). The ratio for the movement speed was significantly lower with a peripheral mask (0.9 ± 0.1) than with a central mask (1.0 ± 0.1, t(69) = 2.9, p = 0.02). The ratio for traveled path efficiency for product searches was significantly higher with a peripheral mask (1.3 ± 0.5) than with full vision (1.0 ± 0.3, t(68) = −3.37, p < 0.01) and with a central mask (1.0 ± 0.2, t(68) = −3.26, p < 0.01). 
In the initiation phase, the ratio for initiation time for the second through fourth search differed significantly between conditions (F(2,70) = 4.5, MSE = 0.2, p = 0.01, Figure  12D). The ratio for the time to start for the second through fourth search was significantly higher with a peripheral mask (1.7 ± 0.5) than with a central mask (1.3 ± 0.3, t(69) = 2.9, p = 0.02). 
In the exploration phase, the ratio for the time to target fixation differed between conditions (F(2,70) = 4.5, MSE = 0.2, p = 0.01, Figure 12E), as did the ratio for saccade amplitude (F(2,71) = 3.1, MSE = 0.02, p = 0.05, Figure 12F). The ratio for time to target fixation was higher with a peripheral mask (1.2 ± 0.5) than with a central mask (0.9 ± 0.3, p = 0.02). The ratio for saccade amplitude was significantly larger with a peripheral mask (1.0 ± 0.1) than with full vision (0.9 ± 0.2, t(71) = −2.5, p = 0.05). 
In the homing in phase, only the ratio for the time after first target fixation differed between conditions (F(2, 63) = 4.2, MSE = 0.1, p = 0.02, Figure  12G). The ratio was higher with a peripheral mask (1.3 ± 0.5) than with full vision (1.1 ± 0.2, t(63) = −2.86, p = 0.02). To summarize, reduced contrast selectively affected the peripheral mask condition and did so during all phases of the search. 
Discussion
The aim of this study was to investigate the effects of simulated central and peripheral vision loss on visual-motor search and navigational behavior in a naturalistic visual search task in a 3D environment. Both central and peripheral vision loss, simulated by gaze-contingent masks, decreased overall task performance, with peripheral vision loss generally having a more severe effect. The contributions of navigation, cognitive load, various orienting behaviors (eye, head, and body), attentional selectivity, and the effects of contrast to these performance differences were assessed. 
Behavior was more affected by peripheral vision loss than by central vision loss. Navigation was less efficient: with peripheral vision loss, participants moved more slowly, took longer paths, and collided with more obstacles. Also, the initiation phase, in which navigation did not play a role, lasted longer with peripheral vision loss than with central vision loss. One explanation could be that the peripheral vision loss simulation masked a larger area of the visual field than the central vision loss simulation. Although we could have attempted to match for overall area, we chose not to for several reasons. For one, here we focused on the functional division between central and peripheral vision, and by its very nature this division is not matched for visual field area: foveal vision simply covers a smaller area of the retina than extrafoveal vision. Instead, we chose to match approximately for cortical area, thus taking into account cortical magnification (Horton & Hoyt, 1991). Second, and related, matching the masked areas would not be representative of typical vision impairment conditions, particularly central vision loss. For example, in age-related macular degeneration, the scotoma is typically between 10° and 20° in diameter (Cheung & Legge, 2005), and the scotoma in our experiment (12° in diameter) falls within that range. For peripheral vision loss conditions such as retinitis pigmentosa, the scotoma size can span a large range depending on disease progression, also including the size used in our experiment (a remaining window of 12° in diameter). In advanced disease stages, the remaining visual field can become even smaller when the scotoma also starts to invade central vision (Hartong, Berson, & Dryja, 2006; Hirakawa, Iijima, Gohdo, Imai, & Tsukahara, 1999). 
Third, properly matching for visual field area would require intensive individual field measurements beyond the 30° eccentricity that typical perimetry allows for (Broadway, 2012). Hence, even though the two types of vision loss cannot be matched at every level, they do occur in reality and may require different interventions; comparing them therefore still provides valuable information about the role of each type of vision in the current task. The more severe effects of peripheral vision loss are likely the result of peripheral vision being more important for the current search task than central vision. In particular, our study shows the significantly greater effects that peripheral vision loss has on navigational behavior. Conversely, the loss of central vision may be more manageable in a task such as ours. Navigation was not affected as much with central vision loss as with peripheral vision loss, allowing faster task completion. Furthermore, besides navigation, the visual search process itself may also be affected less by central vision loss than by peripheral vision loss. In a structured environment like a supermarket, peripheral vision alone may be sufficient, particularly given that the location of products can be relatively predictable based on the products around them (Loschky, Nuthmann, Fortenbaugh, & Levi, 2017; Loschky, Szaffarczyk, Beugnet, Young, & Boucart, 2019). Additionally, the target products were all commonly known and familiar, so participants were not required to read the product labels with their central vision, and identifying target product colors and shapes through peripheral vision may have been sufficient to find them. Hence, our results emphasize how helpful peripheral vision can be in a naturalistic 3D environment and, conversely, how damaging its loss is. Nevertheless, we need to be careful when directly comparing the importance of central and peripheral vision loss. 
In other naturalistic tasks central vision loss may be more detrimental, warranting further research that implements a wider range of everyday tasks. Also, in our experiment we only assessed one fixed scotoma size. It would be interesting to further explore how varying scotoma sizes affect task performance by performing the same experiment for a wider range of sizes. Scotoma sizes (and shapes) are highly variable across persons with vision impairment, and are progressive in many cases (Bertera, 1988; Grover, Fishman, & Brown, 1998). First results from a virtual reality study have suggested that even modest changes in scotoma size and location affect search times when looking for a product in a supermarket aisle (Reddingius, Crabb, & Jones, 2024). Such experiments might help identify what types of scotoma impact daily life, providing guidance for when help should be offered. 
Several changes in behavior were identified that help to explain the reduced task performance with peripheral vision loss compared to full vision, with most largely consistent with reduced scene/navigational guidance. Movement speed was lower, as has also been observed in persons with retinitis pigmentosa (Authié, Berthoz, Sahel, & Safran, 2017), and is likely caused by reduced situational awareness and therefore increased caution (Jansen, Toet, & Werkhoven, 2011). However, despite this slowness, navigation remained impaired, reflected by a higher number of obstacle collisions and less efficient traveled paths during product searches and when going back to the register. Search initiation also took longer, probably because participants with peripheral vision loss needed more time to orient themselves within the supermarket before starting the next search. Interestingly, the homing-in phase also took longer with peripheral vision loss than with full vision, even though it may be expected that the intact central vision would be sufficient to discriminate the target in this phase. However, the homing in phase was initiated by the first target fixation, and participants still had to move on average 4-5 m by then. Additionally, sometimes participants did not realize they had seen the target and continued exploring the environment. This remaining navigation combined with the slower movement speed with peripheral vision loss can explain the longer duration of the homing in phase for this condition. Not having peripheral vision also resulted in smaller saccades in both the exploration and homing in phases, which aligns with our hypothesis and with the findings from a previous VR study (David et al., 2020), indicating that this effect generalizes across different environments and tasks. 
The small remaining area of central vision necessitates smaller saccades when sampling a scene, most likely because it is unclear where to fixate next (as this information is normally provided by the periphery; Ryu, Abernethy, Mann, & Poolton, 2015; Ryu, Cooke, Bellomo, & Woodman, 2020; Ryu, Mann, Abernethy, & Poolton, 2016; Vater et al., 2022), and so saccadic selection is impaired. This in turn limits the integration of scene information across larger areas (Vater et al., 2022). Interestingly, with peripheral vision loss, task completion times increased more under low contrast relative to high contrast than they did with full vision. In the peripheral vision loss condition, the remaining central vision is sensitive to contrast, and so the reduction in contrast may have selectively impaired central vision. In particular, it is likely that the reduction in contrast selectively impacted sensitivity at higher spatial frequencies (Ashraf et al., 2024; Virsu & Rovamo, 1979), making central vision particularly susceptible to reductions in contrast. In the end, low-contrast search in the absence of peripheral vision resulted in a significantly larger increase in homing in phase duration, and a significantly larger drop in traveled path efficiency, compared with full vision. Indeed, it is a known problem that persons with retinitis pigmentosa, for instance, have more trouble under low-contrast conditions, including in mobility tasks (Black et al., 1997). 
Performance and behavior were also altered under central vision loss, but in different ways. Overall navigation was less efficient than for the full vision group, indicating that the ability to integrate information from the environment was reduced here too. Movement speed remained unchanged despite the lack of central vision, and there was no increase in the number of collisions with obstacles. Moreover, the durations of the initiation and exploration phases remained unchanged. Likely, the remaining peripheral vision provided sufficient information to navigate towards the right area. However, even though the exploration phase itself did not take longer, participants with central vision loss stood closer to the target upon first target fixation, probably due to a reduced ability to discriminate it. The homing in phase clearly did take longer than with full vision, indicating the trouble in verifying a product without high-acuity vision. Fixation durations were shorter with central vision loss in both the exploration and homing in phases, likely because fixating on an object meant that that object was masked by the simulated scotoma. Without a stimulus to fixate on, the eyes would be drawn more quickly towards the periphery, as it is more salient (Henderson, Mcclure, Pierce, & Schrock, 1997). For the homing in phase, the finding of reduced fixation durations aligns with shorter fixations in the "verification" phase of a previous VR search task (David et al., 2020). However, David et al. found longer fixation durations during their "scanning" phase, and thus our finding of shorter fixations in the exploration phase was unexpected. Perhaps the delay in the gaze-contingent mask plays a role in this finding, as quickly changing fixation points could theoretically allow a brief preview of the item before it was masked. However, the system used by David et al. was also described as having a delay. 
Instead, we speculate that this discrepancy may be explained by differences in the experimental environment. The environment used in our experiment occupied a larger area within a single supermarket, enhancing the need to build up a spatial map to navigate efficiently towards the target area. The remaining peripheral vision may have been sufficient for this spatial mapping. By additionally increasing fixation rate (also reflected in increased eye and gaze rotations over time), the environment could still be sampled adequately (as reflected by no increase in time until the first target fixation with central vision loss). In contrast, the environment used by David et al. was smaller, consisting of a discrete everyday room for each search, and navigation was not as important in the scanning phase. In their study, fixation durations may have been increased in the scanning phase to enhance the processing of information necessary to first detect the target through peripheral vision. These findings highlight how visual search behavior adapts to task demands and environmental constraints. 
Additionally, we included the need to memorize a shopping list as an index of cognitive load, to test for any interaction between vision loss and working memory failures; participants could retrieve the shopping list during the task whenever they needed. However, memory failures did not differ between conditions, even though it has previously been found that navigation with degraded vision requires more attention (Rand, Creem-Regehr, & Thompson, 2015), and that compensating for limited visual input comes with cognitive load (Schakel et al., 2017). It could be that the cognitive load added by dealing with the vision loss simulations was not enough to reduce the working memory resources available for remembering the shopping list. 
We also assessed when observers started looking for target-related features. Interestingly, during the exploration phase, participants fixated on products that were increasingly similar to the target product's shape, but not so much to the target's color (see Figure 8). This likely reflects the fact that product families similar in shape, but different in color, are grouped together in the store, such as different brands of candy bar or different fragrances of washing-up liquid. After having first fixated the target, fixations were driven by both color and shape similarity. One might expect perfect similarity to the target's color and shape in the homing in phase, as participants had already fixated the target product once. However, this similarity was not perfect, reflecting that participants still needed to verify they had found the correct target product by comparing it to other products. Nevertheless, this feature selectivity did not differ significantly at any (time-normalized) time point between vision conditions, in either the exploration phase or the homing in phase. This may indicate that the strategy of looking for products of similar color or shape to the target product did not change due to vision loss. However, it should be noted that in this analysis only products fixated on were compared to the target product. The role of covert attention, which allows processing of information outside of fixation (Posner, 1980), was not considered. The direction of covert attention can alternatively be inferred from the direction of microsaccades (Hafed & Clark, 2002), but, unfortunately, the sampling rate of our eye tracker was not sufficient to measure microsaccades. Therefore, we could only consider the products fixated on, and assume that these products were the ones being processed. 
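Comparing feature similarity at time-normalized time points, as described above, requires resampling each participant's per-fixation similarity trace onto a common 0% to 100% timeline before group comparison. A minimal sketch, with hypothetical similarity values (the function name and data are illustrative, not the authors' code):

```python
import numpy as np

def normalize_trace(similarity, n_points=101):
    """Resample a per-fixation similarity trace onto 0-100% of phase duration."""
    src = np.linspace(0.0, 1.0, len(similarity))  # fixation positions in the phase
    dst = np.linspace(0.0, 1.0, n_points)         # common percentage grid
    return np.interp(dst, src, similarity)

# Two participants with different numbers of fixations in the same phase.
p1 = normalize_trace([0.2, 0.4, 0.9])             # 3 fixations
p2 = normalize_trace([0.1, 0.3, 0.5, 0.8, 1.0])   # 5 fixations

# After resampling, traces are comparable point-by-point across participants,
# so a condition mean (and point-wise tests such as SPM) can be computed.
group_mean = np.mean([p1, p2], axis=0)
```

This makes traces of unequal length comparable, at the cost of stretching or compressing each participant's temporal dynamics to the common grid.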
Additionally, for participants with a central mask, the products fixated on do not reflect actual foveation of these products, as the mask covered the product at fixation. Nevertheless, we assume that these fixations were directed toward the products detected as most relevant through the remaining peripheral vision during the previous fixation, and therefore still index attentional selectivity. One important contribution of using an extended 3D space such as our VR environment is that navigation can be evaluated. It allows a broader evaluation of how vision loss affects functioning, on top of the altered gaze behavior that has been evaluated in previous 2D on-screen studies. On-screen experiments often involve static displays watched from a static position, whereas our VR environment requires the user to integrate scenes while freely moving to create a model of the environment. Hence, with free navigation in 3D, the integration of scene information may be performed differently than when doing a task on a 2D screen. Indeed, some gaze outcomes deviated from typical on-screen experiments (see also David et al., 2020). On-screen studies previously showed an increased fixation duration and higher saccade amplitude with a central mask compared to full vision (Bertera, 1988; Cornelissen et al., 2005; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022), but in our study fixation duration was reduced and saccade amplitudes were unaltered. With a peripheral mask, previous studies showed an increased fixation duration (Cornelissen et al., 2005; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022), which was unaltered in our study. Moreover, head rotation increased during the homing in phase for both central and peripheral vision loss, reflecting potential adaptations in behavior that might be missed in on-screen studies. 
This increased head rotation is particularly relevant given that overall gaze rotation increased with central vision loss but decreased with peripheral vision loss. This speaks to the coordinative and compensatory role of eye and head movements in real-life visually guided tasks (e.g., Nieboer et al., 2025). Apparently, visual attention is guided differently when navigating freely, with a larger field of view and unconstrained head motion. The body was also free to move, but body rotation did not differ between conditions, indicating that mostly head and eye movements were used to compensate for vision loss. 
In our analyses, we distinguished between the exploration phase and the homing in phase of the task, allowing us to also compare gaze and other orienting behavior across task phases. Regarding gaze behavior, participants in each of the vision conditions made fewer and longer fixations in the homing in phase than in the exploration phase. In the homing in phase, participants likely used longer fixations to gain more detailed information from the target product and its surrounding products, to verify they were going to select the correct product. Additionally, saccades were smaller in the homing in phase than in the exploration phase, but only with full vision and central vision loss. In the homing in phase, participants no longer needed to scan around as much, as they were already close to the correct shelf and the correct target product. Saccade amplitude did not reduce in the same way with peripheral vision loss, likely reflecting a floor effect, as saccades were already relatively small in the exploration phase. Furthermore, body, head, eye, and gaze all rotated less in the homing in phase than in the exploration phase, for each of the vision conditions. This can be explained by the fact that, in the homing in phase, participants had already fixated the target product once and only needed to confirm it was the correct product, which did not require as much scanning as looking for the correct shelf in the exploration phase. These findings confirm that distinguishing the exploration and homing in phases was important, as each phase reflected different behavior. 
Even though VR technology enables experiments that are more naturalistic for 3D tasks than on-screen experiments, it still comes with limitations compared to the real world. For example, the visual field in VR is still only about 50% of the real visual field. Moreover, in our study, body movements were partly restricted, as the experiment was performed seated. From previous research we know that navigational behavior and gait differ between real life and VR (Hollman, Brey, Robb, Bang, & Kaufman, 2006; Kalantari, Mostafavi, Xu, Lee, & Yang, 2024). Really walking through our VR environment would have been more realistic, but this would have required too large a space for a search task of this extent. Furthermore, we seek to expand this task to elderly populations with real vision loss, for whom actual walking could be a challenge and even a risk. Another option could have been to perform the study in a real supermarket, but this would make gaze-contingent vision loss a considerable challenge, and would limit standardization and manipulation of the environment, as well as potential future diagnostic applications. Importantly, we were not interested in actual supermarket behavior per se, but rather in the effects of vision impairment on behavior. Simulating vision loss in VR has been shown to result in performance comparable to that with an actual impairment (Jones, Somoskeöy, Chow-Wing-Bom, & Crabb, 2020; Neugebauer et al., 2024). Even though behavior in VR differs from real-life behavior, this does not mean that the effects of vision impairment would not go in the same direction in both cases (Jones et al., 2020; Neugebauer et al., 2024). Further studies could expand upon this work to better understand the consequences of different types of vision loss on daily life. We are currently testing the same environment with people with actual vision impairment. 
It is unknown how the current findings with simulated vision loss generalize to people with vision impairment, also because these people are often older than the currently tested student population. Moreover, although the current sample was new to having their vision affected, people with real vision impairment have usually developed long-term strategies to deal with it. Therefore further testing in people with vision impairment is needed to gain further insights into the daily life consequences of vision loss. 
We have shown that simulations of central and peripheral vision loss both decrease task performance in a naturalistic search task in a 3D environment, with dissociable profiles in terms of navigational and orienting movements, thus contributing towards enhanced understanding of behavioral adaptations caused by vision impairment. The dissociable results underpin the distinct roles of central and peripheral vision in navigation and visual search. 
Acknowledgments
The authors thank Solon Lappas for his help in data collection. 
Supported by James S McDonnell Foundation grant https://doi.org/10.37717/2022-3889 awarded to CNLO and DLM. 
Commercial relationships: none. 
Corresponding author: Kirsten Veerkamp. 
Address: Department of Experimental and Applied Psychology & Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, 1081 HV, The Netherlands. 
References
Ashraf, M., Mantiuk, R. K., & Chapiro, A. (2024). castleCSF—A contrast sensitivity function of color, area, spatiotemporal frequency, luminance and eccentricity. Journal of Vision, 24(4), 1–38, https://doi.org/10.1167/jov.24.4.5.
Authié, C. N., Berthoz, A., Sahel, J. A., & Safran, A. (2017). Adaptive gaze strategies for locomotion with constricted visual field. Frontiers in Human Neuroscience, 11, 387, https://doi.org/10.3389/fnhum.2017.00387. [PubMed]
Bertera, J. H. (1988). The effect of simulated scotomas on visual search in normal subjects. Investigative Ophthalmology & Visual Science, 29(3), 470–475. [PubMed]
Bishop, V. (1996). Causes and functional implications of visual impairment. In Corn, A. L. & Koenig, A. J. (Eds.), Foundations of low vision: Clinical and functional perspectives (pp. 86–114). New York: AFB Press.
Black, A., Lovie-Kitchin, J. E., Woods, R. L., Arnold, N., Byrnes, J., & Murrish, J. (1997). Mobility performance with retinitis pigmentosa. Clinical and Experimental Optometry, 80(1), 1–12, https://doi.org/10.1111/j.1444-0938.1997.tb04841.x.
Bookwala, J., & Lawson, B. (2011). Poor vision, functioning, and depressive symptoms: A test of the activity restriction model. Gerontologist, 51(6), 798–808, https://doi.org/10.1093/geront/gnr051. [PubMed]
Broadway, D. C. (2012). Visual field testing for glaucoma - a practical guide. Community Eye Health Journal, 25(79 & 80), 66–70.
Cheung, S. H., & Legge, G. E. (2005). Functional and cortical adaptations to central vision loss. Visual Neuroscience, 22(2), 187–201, https://doi.org/10.1017/S0952523805222071. [PubMed]
Coeckelbergh, T. R. M., Cornelissen, F. W., Brouwer, W. H., & Kooijman, A. C. (2002). The effect of visual field defects on eye movements and practical fitness to drive. Vision Research, 42, 669–677. [PubMed]
Cornelissen, F. W., Bruin, K. J., & Kooijman, A. C. (2005). The influence of artificial scotomas on eye movements during visual search. Optometry and Vision Science, 82(1), 27–35.
David, E. J., Beitner, J., & Võ, M. L. (2021). The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. Journal of Vision, 21(7), 1–17, https://doi.org/10.1167/jov.21.7.3.
David, E. J., Beitner, J., & Võ, M. L.-H. (2020). Effects of transient loss of vision on head and eye movements during visual search in a virtual environment. Brain Sciences, 10(11), 1–26, https://doi.org/10.3390/brainsci10110841.
Dicks, M., Button, C., & Davids, K. (2010). Examination of gaze behaviors under in situ and video simulation task constraints reveals differences in information pickup for perception and action. Attention, Perception, and Psychophysics, 72(3), 706–720, https://doi.org/10.3758/APP.72.3.706.
Geringswald, F., Baumgartner, F., & Pollmann, S. (2012). Simulated loss of foveal vision eliminates visual search advantage in repeated displays. Frontiers in Human Neuroscience, 6, 134, https://doi.org/10.3389/fnhum.2012.00134. [PubMed]
Geringswald, F., & Pollmann, S. (2015). Central and peripheral vision loss differentially affects contextual cueing in visual search. Journal of Experimental Psychology: Learning Memory and Cognition, 41(5), 1485–1496, https://doi.org/10.1037/xlm0000117. [PubMed]
Grover, S., Fishman, G. A., & Brown, J. (1998). Patterns of visual field progression in patients with retinitis pigmentosa. Ophthalmology, 105(6), 1069–1075. [PubMed]
Hafed, Z. M., & Clark, J. J. (2002). Microsaccades as an overt measure of covert attention shifts. Vision Research. 42(22), 2533–2545, https://doi.org/10.1016/S0042-6989(02)00263-8. [PubMed]
Hartong, D. T., Berson, E. L., & Dryja, T. P. (2006). Retinitis pigmentosa. Lancet, 368, (9549), 1795–1809, https://doi.org/10.1016/S0140-6736(06)69740-7. [PubMed]
Henderson, J. M., Mcclure, K. K., Pierce, S., & Schrock, G. (1997). Object identification without foveal vision: Evidence from an artificial scotoma paradigm. Perception & Psychophysics, 59(3), 323–346. [PubMed]
Hirakawa, H., Iijima, H., Gohdo, T., Imai, M., & Tsukahara, S. (1999). Progression of defects in the central 10-degree visual field of patients with retinitis pigmentosa and choroideremia. American Journal of Ophthalmology, 127(4), 436–442. [PubMed]
Hollman, J. H., Brey, R. H., Robb, R. A., Bang, T. J., & Kaufman, K. R. (2006). Spatiotemporal gait deviations in a virtual reality environment. Gait and Posture, 23(4), 441–444, https://doi.org/10.1016/j.gaitpost.2005.05.005.
Horton, J. C., & Hoyt, W. F. (1991). The representation of the visual field in human striate cortex: A revision of the classic Holmes map. Archives of Ophthalmology, 109, 816–824.
Hulleman, J., & Olivers, C. N. L. (2015). The impending demise of the item in visual search. Behavioral and Brain Sciences, 40, e132, https://doi.org/10.1017/S0140525×15002794.
Jansen, S. E. M., Toet, A., & Werkhoven, P. J. (2011). Human locomotion through a multiple obstacle environment: Strategy changes as a result of visual field limitation. Experimental Brain Research, 212(3), 449–456, https://doi.org/10.1007/s00221-011-2757-1. [PubMed]
Jones, P. R., Somoskeöy, T., Chow-Wing-Bom, H., & Crabb, D. P. (2020). Seeing other perspectives: evaluating the use of virtual and augmented reality to simulate visual impairments (OpenVisSim). Npj Digital Medicine, 3(1), 32, https://doi.org/10.1038/s41746-020-0242-6. [PubMed]
Kalantari, S., Mostafavi, A., Xu, T. B., Lee, A. S., & Yang, Q. (2024). Comparing spatial navigation in a virtual environment vs. an identical real environment across the adult lifespan. Computers in Human Behavior, 157, 108210, https://doi.org/10.1016/j.chb.2024.108210.
Kempen, G. I. J. M., Ballemans, J., Ranchor, A. V., Van Rens, G. H. M. B., & Zijlstra, G. A. R. (2012). The impact of low vision on activities of daily living, symptoms of depression, feelings of anxiety and social support in community-living older adults seeking vision rehabilitation services. Quality of Life Research, 21(8), 1405–1411, https://doi.org/10.1007/s11136-011-0061-y.
Lamoureux, E. L., Hassell, J. B., & Keeffe, J. E. (2004). The determinants of participation in activities of daily living in people with impaired vision. American Journal of Ophthalmology, 137(2), 265–270, https://doi.org/10.1016/j.ajo.2003.08.003. [PubMed]
Loschky, L. C., Nuthmann, A., Fortenbaugh, F. C., & Levi, D. M. (2017). Scene perception from central to peripheral vision. Journal of Vision, 17(1), 6–6, https://doi.org/10.1167/17.1.6. [PubMed]
Loschky, L. C., Szaffarczyk, S., Beugnet, C., Young, M. E., & Boucart, M. (2019). The contributions of central and peripheral vision to scenegist recognition with a 180° visual field. Journal of Vision, 19(5), 1–21, https://doi.org/10.1167/19.5.15.
Neugebauer, A., Castner, N., Severitt, B., Stingl, K., Ivanov, I., & Wahl, S. (2024). Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision. Virtual Reality, 28(2), 97, https://doi.org/10.1007/s10055-024-00987-0.
Nieboer, W., Svensen, C. M., van Paridon, K., van Biesen, D., & Mann, D. L. (2025). How people with vision impairment use their gaze to hit a ball. Translational Vision Science and Technology, 14(1), 1–1.
Nuthmann, A. (2013). On the visual span during object search in real-world scenes. Visual Cognition, 21(7), 803–837, https://doi.org/10.1080/13506285.2013.832449.
Nuthmann, A. (2014). How Do the Regions of the Visual Field Contribute to Object Search in Real-World Scenes? Evidence From Eye Movements. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 342–360, https://doi.org/10.1037/a0033854.supp. [PubMed]
Nuthmann, A., & Canas-Bajo, T. (2022). Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. Journal of Vision, 22(1), https://doi.org/10.1167/JOV.22.1.10.
Pataky, T. C. (2012). One-dimensional statistical parametric mapping in Python. Computer Methods in Biomechanics and Biomedical Engineering, 15(3), 295–301, https://doi.org/10.1080/10255842.2010.527837. [PubMed]
Posner, M. I. (1980). Orienting of attention. The Quarterly Journal of Experimental Psychology, 32(1), 3–25, https://doi.org/10.1080/00335558008248231. [PubMed]
Rand, K. M., Creem-Regehr, S. H., & Thompson, W. B. (2015). Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 649–664, https://doi.org/10.1037/xhp0000040. [PubMed]
Reddingius, P., Crabb, D. P., & Jones, P. (2024, June). Shopping with sight loss: Using a virtual reality shopping task to systematically quantify the impact of a scotoma size and eccentricity on everyday quality of life. Investigative Ophthalmology & Visual Science, 65(7), 1849–1849.
Rosenholtz, R. (2016). Capabilities and Limitations of Peripheral Vision. Annual Review of Vision Science ,2, 437–457, https://doi.org/10.1146/annurev-vision-082114-035733. [PubMed]
Rosenholtz, R., Huang, J., & Ehinger, K. A. (2012). Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology, 3, 13, https://doi.org/10.3389/fpsyg.2012.00013. [PubMed]
Rosenholtz, R., Huang, J., Raj, A., Balas, B. J., & Ilie, L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12(4), 1–17, https://doi.org/10.1167/12.4.14.
Ryu, D., Abernethy, B., Mann, D. L., & Poolton, J. M. (2015). The contributions of central and peripheral vision to expertise in basketball: How blur helps to provide a clearer picture. Journal of Experimental Psychology: Human Perception and Performance, 41(1), 167–185, https://doi.org/10.1037/a0038306. [PubMed]
Ryu, D., Cooke, A., Bellomo, E., & Woodman, T. (2020). Watch out for the hazard! Blurring peripheral vision facilitates hazard perception in driving. Accident Analysis and Prevention, 146, 105755, https://doi.org/10.1016/j.aap.2020.105755. [PubMed]
Ryu, D., Mann, D. L., Abernethy, B., & Poolton, J. M. (2016). Gaze-contingent training enhances perceptual skill acquisition. Journal of Vision, 16(2), 1–21, https://doi.org/10.1167/16.2.2.
Schakel, W., Bode, C., Van Der Aa, H. P. A., Hulshof, C. T. J., Bosmans, J. E., Van Rens, G. H. M. B., ... Van Nispen, R. M. A. (2017). Exploring the patient perspective of fatigue in adults with visual impairment: A qualitative study. BMJ Open, 7(8), e015023, https://doi.org/10.1136/bmjopen-2016-015023. [PubMed]
Tuten, W. S., & Harmening, W. M. (2021). Foveal vision. Current Biology, 31(11), R701–R703, https://doi.org/10.1016/j.cub.2021.03.097.
van der Laan, L. N., Papies, E. K., Ly, A., & Smeets, P. A. M. (2022). Examining the neural correlates of goal priming with the NeuroShop, a novel virtual reality fMRI paradigm. Appetite, 170, 105901, https://doi.org/10.1016/j.appet.2021.105901. [PubMed]
Vater, C., Wolfe, B., & Rosenholtz, R. (2022). Peripheral vision in real-world tasks: A systematic review. Psychonomic Bulletin and Review, 29(5), 1531–1557, https://doi.org/10.3758/s13423-022-02117-w. [PubMed]
Virsu, V., & Rovamo, J. (1979). Visual Resolution, Contrast Sensitivity, and the Cortical Magnification Factor. In Experimental brain research, 37(3), 475–494.
World Health Organization. World report on vision. Geneva; World Health Organization. (2019).
Figure 1.
 
(A) The VR supermarket environment seen from the starting position. (B) A participant performing the experiment. (C) The VR supermarket environment with central vision loss simulated by a central mask. (D) The VR supermarket environment with peripheral vision loss simulated by a peripheral mask. (E) An overview of the phases within the search sequence and the outcome measures obtained from each product search.
Figure 2.
 
Overall task performance across the three vision simulation conditions. Task completion time differed significantly between conditions. Here and in all other plots, the asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 3.
 
Navigation performance: (A) movement speed, (B) traveled path efficiency for the product searches, (C) traveled path efficiency back to the register, and (D) the frequency of obstacle collisions. Each variable differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
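The path-efficiency measure in panels (B) and (C) is not defined in the caption. A common formulation, assumed here for illustration and not necessarily the authors' exact metric, is the ratio of the straight-line distance between start and goal to the distance actually traveled, so that 1.0 indicates a perfectly direct route:

```python
import math

def path_efficiency(positions, start, goal):
    """Ratio of straight-line distance to traveled path length.

    positions: list of (x, y) waypoints actually visited, in order.
    start, goal: (x, y) endpoints of the search.
    Returns a value in (0, 1]; 1.0 = perfectly direct path.
    (Hypothetical helper; the paper's metric may instead use the
    shortest walkable route around the supermarket shelves.)
    """
    traveled = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    shortest = math.dist(start, goal)
    return shortest / traveled if traveled > 0 else 1.0
```

For example, walking two unit-length legs of a right angle to reach a diagonally opposite corner gives an efficiency of √2 / 2 ≈ 0.71.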
Figure 4.
 
Initiation phase. (A) The time to start the first search of each sequence, during which the shopping list was memorized, did not differ between conditions; (B) the time to start the second, third, and fourth searches did differ significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 5.
 
Exploration phase: (A) time to target fixation and (B) target detection distance, both of which differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 6.
 
Exploration phase: (A) fixation rate, (B) fixation duration, and (C) saccade amplitude each differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 7.
 
Exploration phase: (A) body rotation and (B) head rotation did not differ between conditions, whereas (C) eye rotation and (D) gaze (head + eye) rotation did differ significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 8.
 
Feature selectivity for (A) color and (B) shape. Products fixated during the product search were compared with the target product, with lower values indicating higher similarity to the target product. Time was normalized from the first teleport until the first fixation on the target (i.e., the exploration phase), and then from the first fixation on the target until the target product selection (i.e., the homing in phase). The dashed vertical line indicates the average time point of the first fixation on the target product (i.e., the end of the exploration phase and the start of the homing in phase).
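The two-segment time normalization used for these feature-selectivity curves (first teleport to first target fixation, then first target fixation to product selection) can be sketched as below. Resampling each phase to a fixed number of points is an assumption about the implementation, made so that trials of different durations can be averaged on a common axis:

```python
import numpy as np

def normalize_two_phases(t, values, t_fix, n=50):
    """Resample a time series onto a normalized two-phase axis.

    t, values : sample times and the measured feature (e.g., color
                similarity of the currently fixated product to the target).
    t_fix     : time of the first fixation on the target, splitting the
                trial into exploration and homing-in phases.
    Each phase is linearly resampled to n points, giving a 2n-point
    trace that is comparable across trials and participants.
    """
    t = np.asarray(t, dtype=float)
    values = np.asarray(values, dtype=float)
    explore = np.interp(np.linspace(t[0], t_fix, n), t, values)
    homing = np.interp(np.linspace(t_fix, t[-1], n), t, values)
    return np.concatenate([explore, homing])
```

The averaged traces then share a dashed boundary at sample n, corresponding to the first target fixation in every trial.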
Figure 9.
 
Homing in phase duration, as measured by the time after the first target fixation, differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 10.
 
Homing in phase: (A) fixation rate, (B) fixation duration, and (C) saccade amplitude each differed significantly between conditions. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 11.
 
Homing in phase: (A) body rotation, (B) head rotation, (C) eye rotation, and (D) gaze (head + eye) rotation, each normalized by time. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
Figure 12.
 
Contrast effects, in which outcomes were compared by calculating the ratio of performance under low contrast to performance under high contrast, for (A) task completion time; for the navigation outcomes, (B) movement speed and (C) traveled path efficiency for product searches; for the initiation phase, (D) the time to start the second to fourth searches; for the exploration phase, (E) the time to target fixation and (F) saccade amplitude; and for the homing in phase, (G) the time after the first target fixation. The asterisk indicates a significant omnibus effect (see Table 2 for exact p values). The horizontal black lines indicate significant pairwise effects (again, see Table 2 for details). Boxplots are displayed for each condition, with the central line indicating the median, the edges indicating the 25th and 75th percentiles, and the whiskers extending to the minimum and maximum values not considered outliers. Each dot represents an individual participant; black plus signs indicate outliers. Some of the earlier participants were tested with a lower frame rate; they are indicated by a lighter color shade.
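The contrast-effect measure described in this caption reduces, per participant and outcome, to low-contrast performance divided by high-contrast performance. A minimal sketch (variable names are illustrative, not the authors'):

```python
import numpy as np

def contrast_ratio(low, high):
    """Per-participant ratio of low- to high-contrast performance.

    low, high : arrays of one outcome (e.g., task completion time),
                one value per participant, under the two contrast levels.
    For time-based outcomes, ratios > 1 mean worse (slower) performance
    under low contrast; for speed or efficiency outcomes, ratios < 1 do.
    """
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    return low / high
```

A ratio formulation has the advantage of removing between-participant baseline differences, so the conditions can be compared on relative cost alone.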
Table 1.
 
An overview of all four search sequences and their set-up.
Table 2.
 
An overview of the results for all outcome variables, averaged across participants per condition. Outliers were excluded from the reported means and standard deviations.