Open Access
Article  |   December 2019
Predicting artificial visual field losses: A gaze-based inference study
Journal of Vision December 2019, Vol.19, 22. doi:https://doi.org/10.1167/19.14.22
      Erwan Joël David, Pierre Lebranchu, Matthieu Perreira Da Silva, Patrick Le Callet; Predicting artificial visual field losses: A gaze-based inference study. Journal of Vision 2019;19(14):22. doi: https://doi.org/10.1167/19.14.22.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehabilitation therapies, and monitoring. In this experiment, 54 participants took part in a free-viewing task of visual scenes while experiencing artificial scotomas (central and peripheral) of varying radii in a gaze-contingent paradigm. We studied the importance of a set of gaze features as predictors to best differentiate between artificial scotoma conditions. Linear mixed models were utilized to measure differences between scotoma conditions. Correlation and factorial analyses revealed redundancies in our data. Finally, hidden Markov models and recurrent neural networks were implemented as classifiers in order to measure the predictive usefulness of gaze features. The results show separate saccade direction biases depending on scotoma type. We demonstrate that saccade relative angle, amplitude, and peak velocity are the best features on the basis of which to distinguish between artificial scotomas in a free-viewing task. Finally, we discuss the usefulness of our protocol and analyses as a gaze-feature identifier tool that discriminates between artificial scotomas of different types and sizes.

Introduction
Visual field loss, which affects predominantly older people, is growing in prevalence. The Eye Diseases Prevalence Research Group (Friedman et al., 2004) reported that macular degeneration affected 1.47% of the U.S. population. Tham et al. (2014) conducted a meta-analysis and reported that glaucoma affected 3.54% of individuals aged between 40 and 80 worldwide. Finally, the Rotterdam study (Skenduli-Bala et al., 2005) examined 3,761 individuals and reported the leading causes of visual field loss to be open-angle glaucoma, stroke, and macular degeneration. As life spans lengthen and the elderly population grows, the prevalence of visual field loss is rising dramatically. 
Glaucoma affects peripheral vision, leaving only central vision untouched in the most advanced cases (Weinreb, Aung, & Medeiros, 2014). Age-related macular degeneration (AMD; Jager, Mieler, & Miller, 2008) induces central vision loss. These diseases are observed predominantly in older populations (Coleman, Chan, Ferris, & Chew, 2008; King, Azuara-Blanco, & Tuulonen, 2013). Glaucoma and macular degeneration can remain unnoticed and develop for a while before vision losses become apparent enough for a person to seek medical attention. Unfortunately, the progress of such diseases can be slowed or stopped but not reversed; therein lies the importance of detecting them as early as possible. As they concern predominantly older people, it is important to detect and treat visual field loss in order to maintain individual independence and an optimum quality of life for as long as possible (Mitchell & Bradley, 2006; Fea, Hengerer, Lavia, & Au, 2017). 
In this context we are interested in developing a set of analyses suitable for identifying which eye-movement characteristics are best suited to differentiating and studying populations with visual field losses. We wish to design a methodology comprising a visual task and a set of analyses. This is meant to produce results suited for monitoring the progress of scotomas, coping mechanisms, or visual therapies (S. T. Chung, 2011; Seiple, Grant, & Szlyk, 2011; Sabel & Gudlin, 2014; Livengood & Baker, 2015), or for detecting the early onset of visual field defects. For example, it was observed that patients with glaucoma performed better when they voluntarily looked toward areas masked by their scotoma before a saccade (Luo, Vargas-Martin, & Peli, 2008; Kasneci et al., 2014; Sippel et al., 2014), and that patients with macular degeneration are able to develop superior pseudofoveas (Nilsson, Frennesson, & Nilsson, 2003). Both of these behaviors can be trained in order to improve one's quality of life. The literature lacks formal measures based on gaze movements with regard to the four aforementioned applications. We believe that this methodology would prove useful in characterizing and comparing gaze movements, and therefore help in the development of these applications. 
We ran an experiment with normal participants in order to gather gaze data. Participants experienced visual field defects via online simulation of central and peripheral scotoma (localized visual field loss) of varying radii during free viewing of real-world scenes. The experimental results were used to extract gaze features useful for differentiating between different types of scotomas. We performed various analyses in order to report measures of “usefulness.” 
Inference from gaze data
In 1967, Yarbus demonstrated that scene exploration could differ greatly according to the task at hand (Tatler, Wade, Kwan, Findlay, & Velichkovsky, 2010). He concluded that a visual task has an effect on someone's gaze. Indeed, cognitive processes can be observed through eye movements and offer a wealth of information related to internal processes (Itti, 2015; Coutrot, Hsiao, & Chan, 2018). Inference from gaze data consists in deducing subjective characteristics solely from ocular data, such as age (Le Meur et al., 2017b), gender (Coutrot, Binetti, Harrison, Mareschal, & Johnston, 2016; Sammaknejad, Pouretemad, Eslahchi, Salahirad, & Alinejad, 2017), mental states and traits (Liao, Zhang, Zhu, & Ji, 2005; Hoppe, Loetscher, Morey, & Bulling, 2015; Yamada & Kobayashi, 2017; Hoppe, Loetscher, Morey, & Bulling, 2018), expertise and skill proficiency (Eivazi & Bednarik, 2011; Boccignone, Ferraro, Crespi, Robino, & de'Sperati, 2014; Tien et al., 2014; Kolodziej, Majkowski, Francuz, Rak, & Augustynowicz, 2018), and neurological disorders (Kupas, Harangi, Czifra, & Andrassy, 2017; Terao, Fukuda, & Hikosaka, 2017). It has proven useful in identifying autism spectrum disorder (Pierce et al., 2016), fetal alcohol spectrum disorder (Tseng, Paolozza, Munoz, Reynolds, & Itti, 2013), dementia (Zhang et al., 2016; Beltrán, García-Vázquez, Benois-Pineau, Gutierrez-Robledo, & Dartigues, 2018), dyslexia (Benfatto et al., 2016), anxiety (Abbott, Shirali, Haws, & Lack, 2017), mental fatigue (Yamada & Kobayashi, 2017), and other disorders. It has also been applied to task detection (Borji & Itti, 2014; Haji-Abolhassani & Clark, 2014; Kanan, Ray, Bseiso, Hsiao, & Cottrell, 2014; Boisvert & Bruce, 2016). In addition, gaze is utilized as a biometric cue (Holland & Komogortsev, 2011; Cantoni, Galdi, Nappi, Porta, & Riccio, 2015). 
The current state of research shows that patients with visual field defects behave abnormally during visual tasks. Compared with subjects of the same age group, patients suffering from glaucoma (i.e., peripheral vision loss) appear to conserve fixation stability (Longhin et al., 2013), but in a visual-search task (N. D. Smith, Glen, & Crabb, 2012) they show a decreased number of fixations. In a simulated car driving task (Crabb et al., 2010), patients seem to compensate for their defects by an increased number of saccades and a reduced duration of fixation, perhaps in an effort to cover a wider range of the scene. Inversely, in a free-viewing task, N. D. Smith, Crabb, Glen, Burton, and Garway-Heath (2012) rationalized a reduced number of saccades and an increased duration of fixations as a coping mechanism. 
Finally, glaucoma patients engaged in a visual search task (Wiecek, Pasquale, Fiser, Dakin, & Bex, 2012) did not behave differently from an age-matched control group in relation to saccade amplitudes and fixation durations. The different outcomes among these experiments may be due to the different tasks, but also to the high intersubject variability that arises when studying glaucoma. Several factors are at play here, such as the small sample sizes, the progress and nonhomogeneity of retinal lesions, the availability and cognitive idiosyncrasies of elderly subjects, and the uneven evolution of visual coping mechanisms. 
Patients suffering from macular degeneration, on the other hand, are left relying on their peripheral vision to perform tasks and explore their surroundings. Because of the lack of foveal vision, they learn to rely on pseudofoveas or preferred retinal loci (Cheung & Legge, 2005; Crossland, Engel, & Legge, 2011). Pseudofoveas are areas outside of the defective central region that are used in lieu of the fovea. Extra-foveal points of fixation are the reason why this population shows more unstable fixations, seemingly as a result of poor motor control (Macedo, Crossland, & Rubin, 2011). In scene-categorization tasks, AMD patients demonstrated good performance (above 75%), yet below the performance of healthy participants (Tran, Rambaud, Despretz, & Boucart, 2010). In an object and scene identification task (Thibaut, Delerue, Boucart, & Tran, 2016), AMD patients exhibited reduced performance, as well as an increased number of saccades and shorter fixation durations. 
Few authors applied gaze-based inference to visual field losses. Crabb, Smith, and Zhu (2014) reported encouraging results in a detection task: identifying glaucoma patients among healthy subjects from eye movement data obtained while participants watched videos. After applying a novel method of scanpath processing, they used kernel principal component analysis to extract a set of features. These features were input into a naive Bayes linear classification algorithm tasked with learning to separate control from glaucoma patients. Plotting a receiver operating characteristic curve, the authors reported a 76% sensitivity at 90% specificity. Kübler, Rothe, Schiefer, Rosenstiel, and Kasneci (2017) obtained above-chance–level results when separating patients (glaucoma and hemianopia) who failed at a driving task from patients who succeeded at the same task, as well as a healthy control group. 
In a review of the literature, Coutrot et al. (2018) reported 17 gaze features that can be extracted from ocular data—four related to fixations (duration, dispersion, location, and clusters) and five related to saccades (amplitude, duration, latency, direction, and velocity). Microsaccades are also potentially informative and have the same features as saccades. Finally, one can measure pupil dilation as well as blink frequency and duration. With regard to the gaze features of individuals suffering from visual field loss, there is no agreement on which gaze characteristics are best suited to the study of their ocular movements and patterns. This is why we elected to study differences between gaze features during a free-viewing task with artificial scotomas as a first step in that direction. 
Saccade direction biases during scene exploration
Numerous oculomotor biases drive the exploration of natural scenes (Tatler & Vincent, 2009; Clarke & Tatler, 2014). In this study, we were particularly interested in three biases related to saccadic directions. The horizontal bias describes a tendency to produce more saccades parallel to the horizontal than to the vertical axis (Foulsham, Kingstone, & Underwood, 2008; Tatler & Vincent, 2009). Foulsham et al. (2008) argued that this effect is dependent on the orientation statistics of the scene. The authors also mentioned that this effect may originate from a bias in eye tracking experiments: Stimuli are often presented on computer monitors in landscape orientation, which displays more information horizontally than vertically. Nonetheless, one could argue that humans' natural environment is similarly asymmetric. Le Meur and Coutrot (2016b) demonstrated that this bias is dependent on the content of the scene (e.g., natural scenes, webpages, dynamic landscapes, and conversational videos). 
Another bias, the “saccadic momentum” (T. J. Smith & Henderson, 2009, 2011), refers to an inclination to plan saccades in the same direction and of approximately the same distance as the previous saccade (forward saccades). Finally, T. J. Smith and Henderson (2009) defined “facilitation of return” as a mechanism that encourages the planning of saccades toward the previous point of fixation (backward saccades), in spite of the inhibition of return (Posner & Cohen, 1984). That does not mean that inhibition of return is ruled out as a mechanism in scene viewing; for example, Engbert, Trukenbrod, Barthelmé, and Wichmann (2015) and Rothkegel, Trukenbrod, Schütt, Wichmann, and Engbert (2016) showed that inhibitory tagging is a valid and necessary mechanism for explaining scanpath dynamics. 
Gaze-contingent paradigm
In the present experiment, we chose to rely on a gaze-contingent paradigm to simulate artificial scotomas in normal participants with the purpose of studying ocular behaviors in cases where central or peripheral vision is unavailable. A gaze-contingent protocol (Duchowski, Cournia, & Murphy, 2004; Aguilar & Castet, 2011) is an experimental protocol in which a stimulus displayed on a screen is updated in real time according to gaze data sent from an eye tracker. Such a paradigm has been used to study both central and peripheral vision (e.g., Foulsham, Teszka, & Kingstone, 2011; Cajar, Engbert, & Laubrock, 2016). Central masking prevents sampling of the scene with central vision: a mask modifies the stimulus on screen where a participant is fixating; it was first used by Rayner and Bertera (1979) in a reading task. Peripheral masking only allows sampling of stimuli by central vision: a mask centered on a participant's gaze modifies information peripherally, leaving the center of the field of view intact; it was first implemented by McConkie and Rayner (1975) in a reading task. The nature of the mask can include, for instance, total obstruction of the visual field or low/high-pass frequency filters. Low-pass filtering a stimulus preserves low spatial frequencies (i.e., coarse-grained information) while attenuating high spatial frequencies (i.e., fine-grained information); high-pass filters, on the other hand, preserve high spatial frequencies and attenuate low spatial frequencies. The size and shape of the mask can be varied by the experimenter to assess the size and shape of the perceptual span of the task at hand (Foulsham et al., 2011; Nuthmann, 2013). Finally, it is possible to create a gaze-contingent mask inspired by natural scotomas from the measurement of naturally defective visual fields (Glen, Smith, Jones, & Crabb, 2016). 
The principal shortcomings of this paradigm are related to eye tracker data quality (Reingold, 2014) and latency (Aguilar & Castet, 2011; see Arabadzhiyska, Tursun, Myszkowski, Seidel, & Didyk, 2017, for a review pertaining to this issue applied to saccade landing position prediction). 
Artificial scotomas: Simulation of retinal defects
Simulating peripheral and central visual field defects in visual tasks shows interesting results. Saccades are directed toward areas where information is preserved at the time of planning (Foulsham et al., 2011). As such, saccade amplitude will increase when masking central vision, while masking the peripheral field of view will tend to reduce saccade amplitudes. These effects have particularly been verified during visual search and exploration of natural scenes, and with complete removal of visual information (van Diepen & d'Ydewalle, 2003), as well as with low-pass (Loschky & McConkie, 2002; Loschky, McConkie, Yang, & Miller, 2005; Foulsham et al., 2011; Laubrock, Cajar, & Engbert, 2013; Nuthmann, 2013, 2014; Cajar, Engbert, & Laubrock, 2016; Cajar, Schneeweiß, Engbert, & Laubrock, 2016) and high-pass filtering of scenes (Laubrock et al., 2013; Nuthmann, 2013, 2014; Cajar, Engbert, & Laubrock, 2016; Cajar, Schneeweiß, et al., 2016). Nuthmann and Malcolm (2016) demonstrated this effect by removing colors from scenes. This effect shows a clear correlation with mask sizes (Loschky & McConkie, 2002; Loschky et al., 2005; Foulsham et al., 2011; Nuthmann, 2013, 2014; Cajar, Engbert & Laubrock, 2016; Geringswald, Porracin, & Pollmann, 2016). In an object identification task with central scotomas, Henderson, McClure, Pierce, and Schrock (1997) observed an increased number of saccades toward objects of interest, but shorter fixations once said objects, now masked, were foveated. In a visual search task, Cornelissen, Bruin, and Kooijman (2005) also reported an increase in return saccades with central scotomas. Conversely, when studying peripheral scotomas, they noticed a decrease in return saccades. 
In the present study, we examined ocular behaviors by removing all peripheral or central information using a gaze-contingent paradigm. We also varied the size of the masks in order to measure the effects of the amount of available visual information on ocular patterns. We first present our experimental set-up and method, followed by a series of analyses dedicated to extracting a measure of overall suitability for using gaze features to differentiate between artificial scotomas. Finally, we discuss our general interpretations before presenting our conclusions. 
We recruited healthy participants instead of patients suffering from visual field defects in order to first validate our methodology on results replicated consistently in the literature. A free-viewing task was chosen to confirm that strong results could be produced with low attentional and communicative requirements. This task is easy to understand and requires little concentration and no input from the observers. We chose this task because the methodology described in this article should be applicable to any population (young children, patients with dementia, noncooperating individuals, etc.). 
Eight eye movement measures were selected as scanpath features. These are readily available from a saccade/fixation-identifying algorithm and require little data processing. With regard to fixations, duration, dispersion (mean gaze dispersion to fixation centroid), and location (two-dimensional position) were selected. 
As observed in previous studies (Cornelissen et al., 2005; Laubrock et al., 2013; Nuthmann, 2014; Cajar, Schneeweiß, et al., 2016; Nuthmann & Malcolm, 2016), we expect artificial scotomas to increase fixation durations, since participants take longer to analyze foveated contents and plan for new saccades in the presence of a scotoma. To our knowledge, no gaze-contingent study undertaken to date has reported on the effect of artificial scotomas on fixation dispersion. We believe it to be important in this context, since several studies have reported an increase in fixation instability in patients with glaucoma (Henson, Evans, Chauhan, & Lane, 1996; Shi, Liu, Wang, Zhang, & Huang, 2013) and macular degeneration (Macedo et al., 2011). We expect fixation dispersion to increase with central scotoma to reflect a lack of foveal stimulation on which to properly fixate. 
Saccades are characterized by amplitude (the distance traveled by the gaze on screen during a saccade), peak velocity (the maximum velocity observed during a saccade), peak acceleration (the maximum acceleration observed during a saccade), absolute angle (the angle between the saccade and the horizontal axis), and relative angle (the angle between the direction of two consecutive saccades). 
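As a rough illustration of how the saccade descriptors above can be derived from raw gaze samples, the sketch below computes amplitude, peak velocity, and peak acceleration for a single saccade. The function name, the sample format (positions in degrees of visual angle, timestamps in seconds), and the use of the start-to-end Euclidean distance for amplitude are our assumptions, not the authors' implementation.

```python
import numpy as np

def saccade_features(x, y, t):
    """Amplitude (deg), peak velocity (deg/s), and peak acceleration
    (deg/s^2) of one saccade from gaze samples. Hypothetical helper:
    x, y are positions in degrees; t is time in seconds."""
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    step = np.hypot(dx, dy)                      # sample-to-sample displacement
    velocity = step / dt                         # instantaneous velocity, deg/s
    acceleration = np.abs(np.diff(velocity) / dt[1:])
    amplitude = np.hypot(x[-1] - x[0], y[-1] - y[0])  # start-to-end distance
    return amplitude, velocity.max(), acceleration.max()
```

Absolute and relative angles, the two remaining saccade features, additionally require the direction of the previous saccade and are defined formally in the Analyses section (Equations 1 and 2).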
With regard to the data from the literature, we expect artificial scotomas to have the most pronounced impact on saccade amplitudes, which will increase with central scotomas and decrease with peripheral ones. Saccade peak velocity and acceleration will follow the same patterns as saccade amplitude owing to the strong relationship between these variables (Bahill, Clark, & Stark, 1975). 
In relation to saccade direction biases, we expect the horizontal bias to be preserved despite the presence of scotomas. We expect an increase in return saccades when exploring with a central scotoma (Henderson et al., 1997; Cornelissen et al., 2005). We found no prior results regarding the effects of gaze-contingent masking on forward saccades. In relation to the previous hypothesis, we expect a reduction in forward saccades when central perception is lacking. Regarding forward and backward saccades with peripheral masking, we expect fewer return saccades, as was observed by Cornelissen et al. (2005). We expect this decrease in return saccades to increase the rate of forward saccades. 
Method
Participants
In total 60 participants were recruited (39 women, mean age: 28 years old, minimum: 19, maximum: 48) via a mailing list reaching mostly students of Nantes University. All participants were compensated for their time. Normal or corrected-to-normal vision was verified with a Monoyer test, and normal color discrimination with the Ishihara color blindness test. Individuals wearing glasses were excluded from the study because of the difficulty eye trackers can encounter in tracking their gaze. The dominant eye was also determined. All participants gave their written consent before beginning the experiment. Trials from six participants were removed because of eye tracking–related difficulties. Our analyses are based on the 54 remaining individuals. This experiment conformed to the Declaration of Helsinki and was approved by the Ethics Committee of the French Society of Ophthalmology (IRB 00008855 Société Française d'Ophtalmologie IRB#1). 
Apparatus
Stimuli were displayed on a screen of 1920 × 1080 pixels (23-in., 144 Hz). In order to measure participants' gaze position in real time we relied on a SensoMotoric Instruments GmbH (Teltow, Germany) eye tracker (Hi-Speed, 500 Hz). Data acquisition was binocular, but only the positions of the dominant eye were used to update the stimuli in the present paradigm. This experimental set-up requires two independent computers. The first is operated by the experimenter; it is linked to the eye tracker to retrieve and process gaze data. The second displays stimuli to the participants, updating them according to online data sent from the first computer. The display computer ran an NVIDIA GTX 1080 GPU and an Intel E5-1650 CPU. We designed this set-up to reduce latency between gaze movements and the update of the on-screen stimulus, achieving a maximum latency of 13 ms. 
Stimuli
Twenty-one pictures of indoor and outdoor scenes were used as stimuli. All photographs are licensed under Creative Commons and are in color and high definition (1920 × 1080 pixels, 31.2° × 17.7° of visual field). Six images were set aside for a training phase. Stimuli were selected for their varied characteristics (scene complexity, number of objects of interest). 
Experimental design
A gaze-contingent paradigm was implemented to study ocular movements pertaining to central and peripheral vision. Stimuli shown to participants were updated dynamically with gaze data from the eye tracker. We created a masking experimental condition comprising two modalities: a central mask centered on gaze, preventing sampling of the scene with central vision, and a peripheral mask preventing sampling of the scene with peripheral vision. In order to reduce the effect of sharp mask edges on visual attention (Reingold & Loschky, 2002) the periphery of the mask was smoothed with a linear gradient (10 arcmin) blending the mask with the stimuli. 
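A gaze-contingent mask of this kind can be pictured as an alpha map recomputed at each gaze sample; the minimal sketch below builds such a map with a linear ramp softening the edge, analogous to the 10-arcmin gradient described above. All names and the pixel-based parameterization are illustrative assumptions, not the experiment's actual rendering code.

```python
import numpy as np

def radial_mask(shape, center, radius_px, ramp_px, central=True):
    """Alpha map for a gaze-contingent scotoma: 1 = fully masked,
    0 = fully visible. A linear ramp of width `ramp_px` blends the
    mask edge with the stimulus. Illustrative sketch only."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(xs - center[0], ys - center[1])  # distance to gaze, px
    # linear gradient from masked (inside the radius) to visible (outside)
    alpha = np.clip((radius_px + ramp_px - d) / ramp_px, 0.0, 1.0)
    return alpha if central else 1.0 - alpha      # invert for peripheral mask
```

At every eye-tracker sample, the stimulus would be composited against a uniform field using this alpha map centered on the current gaze position.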
In order to study the relationship between the size of the scotoma and ocular behaviors, a second condition varied the radius of the masks (1.5°, 2.5°, 3.5°, 4.5°; Figure 1), from approximately the size of the fovea to the size of the perifovea. In a control condition, participants saw the original stimuli without gaze-contingent masking. Masking conditions were varied randomly within participants, whereas mask size varied between participants: they were divided into four groups, with each group experiencing one of the four mask sizes. 
Figure 1
 
Presentation of mask types (top row: central masks, bottom row: peripheral masks) and radii (columns from left to right: 1.5°, 2.5°, 3.5°, 4.5°). Mask radii depicted here are proportionally accurate for stimuli encompassing 31.2° by 17.7° of visual field.
Procedure
Participants were instructed to freely explore each image. Following a training phase, 15 images were displayed in three consecutive runs separated by breaks of at least 1 min. Each image was displayed once per run in one of the two masking conditions or the control condition (see Figure 2; the order of the images was unique to each participant). The eye tracker was calibrated at the start of each run. The calibration was validated before each trial, triggering a new calibration if the mean Euclidean distance from gaze points to validation points exceeded 2.5°, the average human fixation stability (Longhin et al., 2013), taking into account the precision of the eye tracker and the size of the validation dots. Successful validation phases showed an acceptable mean dispersion, in degrees, over all three validation dots (95% CI [1.03, 1.04]). Participants were instructed to fixate a cross at the center of the screen for at least 500 ms for a stimulus to appear, ensuring that all participants started exploring from a central fixation. Failing this fixation check triggered a calibration phase. If fixation was successful, a stimulus was displayed for 10 s, then disappeared before a 1.5-s intertrial rest. 
Figure 2
 
Progress of a trial, beginning with a set of validation points to check the eye tracker's accuracy. A fixation cross then appears, disappearing after approximately 1.5 s. An image appears next for 10 s in one of the three mask type conditions and one of the four mask size conditions (not represented here). A trial ends with a resting period of 1.5 s.
Data preparation
Data were acquired and saved in unfiltered form. A denoising filter was applied to raw data in order to obtain a better segmentation into fixations, saccades, and blinks using a velocity-based parsing algorithm (Salvucci & Goldberg, 2000). A total of 2,190 fixations were removed for lasting less than 80 ms, the approximate minimum amount of time required to process foveal information, plan, and proceed with a new saccade (Salthouse & Ellis, 1980; Manor & Gordon, 2003; Leigh & Zee, 2015). A further 579 fixations were removed for lasting longer than 1.3 s (4 SDs away from mean duration). A total of 3,894 blinks were identified and removed. Seven saccades of an amplitude longer than the diagonal of the screen were deemed aberrant and removed from our data set. The following analyses are based on the remaining 55,912 fixations (95.3%) and 52,427 saccades (89.3%). 
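The event-cleaning step above reduces to three thresholds: fixation duration between 80 ms and 1.3 s, and saccade amplitude no longer than the screen diagonal. A minimal sketch follows; the event dictionaries and function name are hypothetical, not the format of the authors' pipeline.

```python
import numpy as np

# Thresholds taken from the text; the screen spans 31.2 x 17.7 deg of visual field
MIN_FIX_MS, MAX_FIX_MS = 80, 1300
DIAG_DEG = np.hypot(31.2, 17.7)      # screen diagonal in degrees

def clean_events(fixations, saccades):
    """Drop implausible events: fixations shorter than 80 ms or longer
    than 1.3 s, and saccades longer than the screen diagonal.
    Hypothetical event format: dicts with 'duration' (ms) for
    fixations and 'amplitude' (deg) for saccades."""
    fix = [f for f in fixations if MIN_FIX_MS <= f['duration'] <= MAX_FIX_MS]
    sac = [s for s in saccades if s['amplitude'] <= DIAG_DEG]
    return fix, sac
```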
Analyses and discussion
In order to determine the most significant and distinguishing scanpath characteristics in relation to the comparison and classification of scotomas, we devised a set of analyses. First, we used linear mixed models (LMMs) to show which variables report the largest mean differences between experimental conditions. Secondly, correlation and principal axis factoring analyses were produced to assess variance redundancy between variables. In a third step, a series of hidden Markov models (HMMs) and recurrent artificial neural networks (RNNs) were built as classifiers of scotoma types and/or sizes in order to select the features most capable of categorizing the scotoma condition of a scanpath. 
Saccade directions are calculated as follows: Considering saccades as Euclidean vectors on a two-dimensional plane, an absolute angle is the angle between a vector and the horizontal axis, while a relative angle is the angle between two consecutive vectors (Figure 3; see also David, Perreira Da Silva, Lebranchu, & Le Callet, 2018). Angles are obtained with the following equation (output values between −π and π):  
\begin{equation}\tag{1}a = \arctan 2\left( {u \times v,u \cdot v} \right){\rm ,}\end{equation}
where \(u\) and \(v\) are vectors: \(u = [f_{i,x} - f_{i-1,x},\ f_{i,y} - f_{i-1,y}]\); \(v = [f_{i-1,x} - f_{i-2,x},\ f_{i-1,y} - f_{i-2,y}]\) to obtain relative angles, and \(v = [1, 0]\) for absolute angles. \(f_{i,x}\) and \(f_{i,y}\) are the positions on the x- and y-axes of fixation \(i\). The relationship between absolute and relative angles is given by the following equation:  
\begin{equation}\tag{2}{\gamma _t} = abs\left( {{\alpha _t} - {\alpha _{t - 1}}} \right){\rm ,}\end{equation}
where γ is a relative angle and α an absolute angle. γ must be reversed (i.e., 2π − γ) according to the circular quadrant of the absolute angles. Because relative angles are calculated between two saccades, a scanpath with N saccades will show N absolute angles and N − 1 relative angles.  
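The angle computation of Equation 1 can be sketched in Python with `atan2` (a minimal illustration; function and variable names are ours, not taken from the study's code). Screen coordinates are assumed, with angles folded into [0, 2π):

```python
import math

def absolute_angle(f_prev, f_curr):
    """Absolute angle (Eq. 1 with v = [1, 0]): saccade direction vs. the horizontal axis."""
    u = (f_curr[0] - f_prev[0], f_curr[1] - f_prev[1])
    v = (1.0, 0.0)
    # atan2(u x v, u . v) yields the signed angle between u and v
    a = math.atan2(u[0] * v[1] - u[1] * v[0], u[0] * v[0] + u[1] * v[1])
    return a % (2 * math.pi)  # fold into [0, 2*pi)

def relative_angle(f0, f1, f2):
    """Relative angle (Eq. 1): turn between two successive saccades (three fixations).
    f0, f1, f2 correspond to fixations i-2, i-1, and i."""
    u = (f2[0] - f1[0], f2[1] - f1[1])
    v = (f1[0] - f0[0], f1[1] - f0[1])
    a = math.atan2(u[0] * v[1] - u[1] * v[0], u[0] * v[0] + u[1] * v[1])
    return a % (2 * math.pi)
```

A straight continuation of gaze yields a relative angle of 0, while a full reversal toward the previous fixation yields π.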
Figure 3
 
(a) Absolute angles are shown in green between a saccade direction (black arrows) and the horizontal axis (orange dashed lines). The HSR reports the proportion of left-directed saccades within horizontal saccades; the HVR measures the proportion of horizontal saccades among all saccades observed. (b) Relative angles (green arcs) are angles between two saccade directions (black arrows and orange dashed lines). The BSR informs about the proportion of backward-directed saccades among backward and forward saccades. The SRR (Asfaw et al., 2018) measures the number of backward saccades falling between 170° and 190° as a proportion of the total number of saccades.
Throughout the following series of analyses, the variables must be transformed to better suit the constraints of the analyses. In particular, circular data (relative and absolute angles) are challenging as regressors in LMMs (Jammalamadaka & Sengupta, 2001); additionally, relative and absolute angles are multimodal (Figures 6 and 7) and would fit neither a von Mises distribution nor a Gaussian one if sin- or cos-transformed. For these reasons, absolute and relative angle variables are transformed into four ratios (Figure 3). The first angle ratio is the proportion of horizontal saccades over all saccades (horizontal to vertical saccades ratio [HVR]). Horizontal saccades have an angle difference to the horizontal axis below 45°, and vertical saccades have an angle difference to the vertical axis of at most 45°. The horizontal saccades ratio (HSR), on the other hand, reports the number of saccades directed between 135° and 225° (directed to the left of the screen for absolute angles) divided by the number of saccades in the following angle intervals: 135°–225° (left-directed saccades) and 315°–45° (right-directed saccades). Oculomotor biases while exploring natural scenes have been thoroughly explored (Tatler & Vincent, 2009; Clarke & Tatler, 2014; Le Meur & Liu, 2015). In particular, the horizontal bias describes a preference for left- and right-directed saccades over vertical saccades (this effect is horizontally symmetric). Therefore, when exploring stimuli without masks, we expect participants to explore predominantly via horizontal saccades, with no left or right bias. With the horizontal to vertical saccades ratio, we check whether horizontal dominance is preserved; with the horizontal saccades ratio, we check whether the horizontal bias remains horizontally symmetric. 
As for relative angles, backward saccade ratios (BSR) represent the proportion of backward saccades (directed in the general direction of the preceding fixation: 135°–225°) as a proportion of the number of backward and forward saccades (135°–225° and 315°–45°). Relative angles in the north and south quadrants (neither backward nor forward) are not integral to our analyses, as they point in directions that are of no interest in the context of scene exploration. We are also interested in the saccadic reversal rate (SRR) described by Asfaw, Jones, Mönter, Smith, and Crabb (2018). The authors defined SRR as the proportion of relative angles occurring between 170° and 190°, that is, the number of saccades in a 20° backward window, which are therefore directed precisely toward a previous fixation's location. This count is then divided by the total number of saccades in a scanpath. Asfaw et al. created this measure as a proxy for temporal exploratory patterns. BSR and SRR are computed over each scanpath. 
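The four ratios can be computed from lists of angles in degrees using ±45° windows about the horizontal axis. This is a minimal sketch under those window definitions, not the authors' code; for simplicity, the SRR here divides by the number of relative angles rather than the total saccade count:

```python
def in_window(angle, lo, hi):
    """True if angle (degrees, in [0, 360)) falls in the window lo..hi, wrapping at 360."""
    return lo <= angle < hi if lo < hi else angle >= lo or angle < hi

def hvr(absolute_angles):
    """Horizontal to vertical saccades ratio: horizontal saccades over all saccades."""
    horizontal = [a for a in absolute_angles
                  if in_window(a, 315, 45) or in_window(a, 135, 225)]
    return len(horizontal) / len(absolute_angles)

def hsr(absolute_angles):
    """Horizontal saccades ratio: left-directed saccades among horizontal saccades."""
    left = sum(in_window(a, 135, 225) for a in absolute_angles)
    right = sum(in_window(a, 315, 45) for a in absolute_angles)
    return left / (left + right)

def bsr(relative_angles):
    """Backward saccades ratio: backward among backward plus forward saccades."""
    backward = sum(in_window(a, 135, 225) for a in relative_angles)
    forward = sum(in_window(a, 315, 45) for a in relative_angles)
    return backward / (backward + forward)

def srr(relative_angles):
    """Saccadic reversal rate: reversals (170-190 deg) over all relative angles."""
    return sum(in_window(a, 170, 190) for a in relative_angles) / len(relative_angles)
```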
Linear mixed model analyses
To measure the significance of each variable in revealing the differences between experimental conditions, we first relied on LMMs (R package lme4; R Core Team, 2018; Bates, Mächler, Bolker, & Walker, 2014) following the methods detailed in Nuthmann and Malcolm (2016) and Cajar, Schneeweiß, et al. (2016). We chose to study two types of models (with stimuli and subjects as random factors): (a) the primary effect of scotoma type and (b) the interaction effect between scotoma type and size. We report in Supplementary Table S1 the coefficient of determination for each model, calculated using Nakagawa and Schielzeth's (2013) method (conditional R2: proportion of variance explained by fixed and random factors). Comparisons between experimental groups are achieved with contrasts, using dummy variable coding: b values reported below indicate the estimated difference between means. Because the significance of p values applied to LMMs is problematic, we report b values (and their standard errors, SEs) as well as t values; absolute t values exceeding 1.96 can be considered significant (Baayen, Davidson, & Bates, 2008), though we report p values calculated via the Satterthwaite method (R package lmerTest; Kuznetsova, Brockhoff, & Christensen, 2017; see Supplementary Tables S3 and S4). Finally, because statistical significance is not necessarily informative ("with a large enough sample, n, the null hypothesis will always be rejected"; Demidenko, 2016, p. 33), we provide the effect size of comparisons via Cohen's d (difference in means divided by the pooled standard deviation; Cohen, 1988; Cumming, 2008). An absolute Cohen's d value below 0.01 is deemed very small, below 0.2 small, below 0.5 medium, below 0.8 large, below 1.2 very large, and below 2 huge (Sawilowsky, 2009). Full result tables are set out in Supplementary Tables S3 and S4. Means and confidence intervals of scotoma conditions per gaze feature are reported in Figure 4. 
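The effect size used above, Cohen's d with a pooled standard deviation, can be sketched as follows (an illustrative implementation, not the authors' code):

```python
import math

def cohens_d(x, y):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx = sum(x) / nx
    my = sum(y) / ny
    # unbiased sample variances
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd
```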
Figure 4
 
Subplots (a) through (i) show mean and 95% CI of variables involved in LMM analyses. Log-transformed variables are presented on a logarithmic scale. Control mean is shown as a solid black line (dashed lines report 95% CI). Data obtained with central masks are in red and peripheral masks in blue. The legend is located at the bottom of the figure: Scotoma radii are presented according to the decreasing amount of information left by the central (first) and peripheral (second) masks.
After assessing each model's residuals with Q-Q plots and their density distributions, we chose to log-transform all variables aside from saccade angle ratios in order to improve the normality of the residuals, although this normality assumption is relaxed for large sample sizes (Lumley, Diehr, Emerson, & Chen, 2002; Schmidt & Finan, 2018). 
In order to test whether scotoma size has an indirect effect on control trials, we conducted LMM analyses between control trials from participants belonging to different size conditions, across all variables tested here. We report no sizeable effect of size on control data, as well as a negligible Cohen's d mean (M = 0.08, SD = 0.08). 
Bias in our analyses due to age was investigated by splitting our data set into two groups: below 27 years old (34 subjects, 23 women, mean age: 22.6 years) and 27 years old or above (20 subjects, 12 women, mean age: 35 years). We used LMMs to compare the two age groups for each scotoma subcondition. After applying the Bonferroni correction, we report a significant mean difference between age groups only in relation to the effect of peripheral masks on fixation duration (Supplementary Figure S2). 
Saccade amplitude, saccade peak velocity, and acceleration
Saccade amplitudes increase when a central scotoma is simulated (b = 0.23, SE = 0.01, t = 31.78, d = −0.32) and decrease when simulating peripheral scotomas (b = −0.41, SE = 0.01, t = −54.07, d = 0.56). An estimated mean difference of 0.65° of amplitude on the log scale is observed between central and peripheral masks, in favor of central masking (b = −0.65, SE = 0.01, t = −87.95, d = 0.94). We report significant differences between all type and radius modalities apart from the two smallest peripheral masks, as well as between the two largest peripheral masks (Supplementary Table S4). 
Figure 5 illustrates the saccade amplitude distributions, which are positively skewed. In the case of central masking, we observe that, as mask size increases, skewness and kurtosis decrease while the mode of the distributions remains outside of the mask. As for peripheral masking, the high kurtosis displayed and the localization of the mode place a high density of saccades very close to the edge of the mask for mask radii 1.5° and 2.5°, and well within the visible area of the scene for larger radii. In essence, we notice that distribution modes fall within the available information or very close to the mask in the case of the smallest peripheral masks. It is possible that when experiencing a peripheral loss of information, participants chose to maximize data sampling by planning saccades to target locations at the edge of the mask. Removing central information increases saccade amplitude as a function of the radius of the mask. On the other hand, a peripheral loss of visual data sets the general upper boundary of saccade amplitude. Indeed, in Figures 4a and 5, we observe a strong dependency between the types and radii of scotomas in relation to saccade amplitude. As predicted, saccades are directed within areas where information is available at the time of saccade planning (Foulsham et al., 2011; Nuthmann, 2014). 
Figure 5
 
Probability density functions of saccade amplitude (degrees) as a function of mask types and sizes (central masking red, peripheral masking blue, control data green). Mask radii are displayed as black dashed lines.
As expected from the literature on visual perception with artificial scotomas, we notice a modulation of saccade amplitude according to mask types and sizes (Foulsham et al., 2011; Figure 4b and c). Central masks elicit longer saccades, with saccade destinations usually falling outside of masked areas; peripheral masking induces shorter saccades, which also predominantly fall within unmasked areas. This information-availability effect in saccade target planning has been demonstrated numerous times in visual search (Loschky & McConkie, 2002; Nuthmann, 2013, 2014; Geringswald et al., 2016; Nuthmann & Malcolm, 2016) and scene exploration (Foulsham et al., 2011; Laubrock et al., 2013; Cajar, Engbert, & Laubrock, 2016; Cajar, Schneeweiß, et al., 2016) tasks with natural images. Studies implementing varying mask sizes have also reported saccade amplitude modulated according to the mask size (Loschky & McConkie, 2002; Nuthmann, 2013, 2014; Cajar, Engbert, & Laubrock, 2016). Harris and Wolpert (2006) showed that the distance traveled by the eyes during a saccade and the velocity or acceleration of the saccade are strongly correlated (Bahill et al., 1975). This is why, with regard to saccade velocity and acceleration peaks, we report the same main and interaction effects as for saccade amplitude (Supplementary Tables S3 and S4). 
Fixation duration
Fixation duration reflects visual attention processing (Nuthmann, Smith, Engbert, & Henderson, 2010). It is influenced by central information while sampling with the fovea (van Diepen & d'Ydewalle, 2003; Nuthmann & Malcolm, 2016), but also by peripheral information when analyzing the peripheral scene in order to plan a saccade (Cajar, Engbert, & Laubrock, 2016). Compared to control conditions, the presence of a mask caused shorter fixations (central: b = −0.1, SE = 0.01, t = −18.81, d = 0.2; peripheral: b = −0.03, SE = 0.01, t = −6.11, d = 0.08). A peripheral mask elicited longer fixations than a central mask (b = 0.07, SE = 0.01, t = 12.48, d = −0.11). Although we observed shorter fixation durations compared to control data, with the shortest durations for central masking, these effects are small (Figure 4d). Comparison results within type and radius interactions are presented in Supplementary Tables S3 and S4. Notably, we report no significant differences between the peripheral mask of radius 3.5° and any of the central mask size conditions. Moreover, the peripheral mask of radius 4.5° shows no difference from central masks of 1.5° and 2.5°. The largest effect sizes are observed for the two smallest peripheral masks, which resulted in longer fixations compared to the two largest central masks. 
We observed longer fixation durations than the control group only in the case of the peripheral mask of radius 1.5° (least amount of information remaining), though from the literature (e.g., Cornelissen et al., 2005; Nuthmann, 2014; Cajar, Engbert, & Laubrock, 2016; Cajar, Schneeweiß, et al., 2016) we expected to observe this effect across scotoma types and radii. This effect is meant to increase as a function of the reduction of visual information (Nuthmann, 2014). Studies have shown the expected increase in fixation duration with central and peripheral scotomas both larger and smaller than those used in this experiment; therefore, we cannot attribute the differences in effects observed here to smaller or bigger scotoma sizes compared to the literature. Henderson et al. (1997) reported shorter fixation durations with a central mask. Although that may be an effect of the visual search task, top-down requirements may demand more time to analyze a scene before relocating overt attention. Studies using a scene exploration task also reported such an increased fixation duration effect, albeit less pronounced: see Laubrock et al. (2013) with a central high-pass filter mask, and Cajar, Schneeweiß, et al. (2016) and Cajar, Engbert, and Laubrock (2016) with central low- and high-pass filter masks. As a matter of interest, we notice that in Laubrock et al. (2013) and Cajar, Schneeweiß, et al. (2016) scene exploration trials end with questions aimed at testing a participant's phasic attention. This may explain the difference observed in our results: It is possible that such testing of attention to the details of a scene requires a degree of top-down control exceeding that of a pure free-viewing task. We hypothesize that participants did not feel pressured to memorize the entire scene and its details in anticipation of questions. As a result, they did not prioritize exploring as much of the scene as possible, which could have resulted in fewer return saccades. 
Fixation dispersion
Gaze dispersion while fixating was smaller than in the control condition with both types of mask (central: b = −0.11, SE = 0.01, t = −16.33, d = 0.17; peripheral: b = −0.02, SE = 0.01, t = −2.79, d = 0.04). Peripheral masks resulted in larger dispersions compared to central masks (b = 0.09, SE = 0.01, t = 13.43, d = −0.13). Within the central mask radius condition, no differences are reported among the four mask sizes. There are no differences between the largest peripheral mask and any of the central mask sizes. Finally, we observed no difference in dispersion between a peripheral mask of 2.5° and the control data. Peripheral masks measuring 1.5° resulted in fixation dispersion exceeding that found in the control group. In Figure 4g, we observe the same patterns of differences (relative to control data) for fixation duration and fixation dispersion. In the section entitled "Correlation and principal factor analysis," we also show a strong correlation between fixation duration and dispersion. Unfortunately, few studies have explored the link between these two gaze features. The shared effects and variance could be moderated by microsaccadic movements (Otero-Millan, Troncoso, Macknik, Serrano-Pedraza, & Martinez-Conde, 2008; Mergenthaler & Engbert, 2010). With central masks, we observe a reduction in fixation dispersion, although we would expect higher fixation dispersion on account of fixation instability due to the lack of central information to sample. The fact that no particular effect pattern of central mask sizes emerges here hints at a single effect related to fixation duration: Shorter fixations translate to fewer fixational eye movements (e.g., microsaccades), and consequently less opportunity for eye movements to drift and increase the fixation radius. 
Horizontal to vertical saccades ratio and horizontal saccades ratio
Absolute angles describe gaze exploration patterns relative to the horizontal axis. In free-viewing explorations of natural scenes, an oculomotor bias resulting in a characteristically larger proportion of horizontal saccades has been observed (Foulsham et al., 2008; Tatler & Vincent, 2009; Le Meur & Liu, 2015). In this experiment, the proportion of horizontal saccades (HVR) was higher with a central mask (b = 0.04, SE = 0.01, t = 4.86, d = −0.21) and lower with a peripheral mask (b = −0.03, SE = 0.01, t = −4.49, d = 0.18) compared to control data (Figure 4e and h). Accordingly, we saw globally fewer horizontal saccades in peripheral mask than in central mask conditions (b = −0.07, SE = 0.01, t = −9.35, d = 0.38). We observed a noteworthy interaction effect between mask type and radius related to central scotomas: The smallest radii (1.5° and 2.5°) do not differ from control data. Interestingly, as central mask radius increases to 3.5° and 4.5°, so does the proportion of horizontal saccades. Regarding peripheral scotomas, we observe a singular effect related to the smallest scotoma radius (1.5°). While HVR results are close to control data in the case of the three largest peripheral masks, the smallest one resulted in fewer horizontal saccades. As can be seen in Figure 6, a high density of saccades is directed downward just outside of the mask limit. 
Figure 6
 
Density probability distributions of absolute angle and saccade amplitude (degrees) represented as polar plots, as a function of mask types and sizes. Red circle at the center of plots indicates mask dimensions.
In relation to the horizontal saccadic symmetry (HSR), no significant effects can be reported between the control condition and central masks (b = 0, SE = 0.01, t = −0.56, d = 0.04), between the control and peripheral masks (b = 0, SE = 0.01, t = 0.53, d = −0.02), and between central and peripheral masks (b = 0.01, SE = 0.01, t = 1.08, d = −0.05). Only small mean differences are observed (95% CI of absolute Cohen's d values [0.11, 0.12]) within the interaction effect of scotoma types and sizes. 
The results show an increase in horizontal saccades as central masks increase in size. We observe in Figure 6 that the control data show vertical saccade modes positioned at approximately 2.5° amplitude. We hypothesize that, in addition to rarely planning saccades within masked areas, participants do not adapt their horizontal oculomotor bias in accordance with mask sizes. Therefore, as central scotomas increase in size, masks cover part of the scene where vertical saccades would be directed (as in the control condition). Instead of planning longer vertical saccades, observers in our study appeared to produce fewer. Finally, it appears that horizontal saccadic symmetry while exploring static natural scenes is generally preserved in spite of scotoma conditions. 
Backward saccades ratio and saccadic reversal rate
We expected differences to play out in exploratory patterns at more than a single saccade/fixation level; relative angle is a first step in that direction, as it aggregates information about two successive saccades (three fixations). Simulating a central scotoma resulted in moderately more backward saccades (BSR) compared with control data (b = 0.04, SE = 0.01, t = 4.43, d = −0.18); a peripheral scotoma produced significantly fewer backward saccades, and thus more forward saccades, than the control condition (b = −0.32, SE = 0.01, t = −38.7, d = 1.9; Figures 4f and i and 7). Compared to a central mask, experiencing a peripheral mask produced substantially more forward saccades (b = −0.36, SE = 0.01, t = −43.15, d = 1.86). The largest effect sizes are observed when comparing all peripheral mask radius conditions with control and central mask data. No difference in backward saccade rate is revealed between control data and central masking of radii 1.5° and 2.5°, nor between central mask radii 1.5° and 2.5° themselves. As the central mask radius increases, so does the number of backward saccades. A peripheral mask of 1.5° radius elicited marginally fewer forward saccades than peripheral masks of sizes 2.5° and 3.5°. 
The saccadic reversal rate, another soft spatiotemporal indicator related to saccadic behavior, indicates how often participants gaze back precisely toward the previous fixation. We report an increase in SRR with a central mask (b = 0.05, SE = 0, t = 9.87, d = −0.39) and a decrease with a peripheral mask (b = −0.08, SE = 0, t = −16.77, d = 0.99). Between masking conditions, a peripheral mask will result in a substantial reduction of SRR (b = −0.13, SE = 0, t = −26.65, d = 1.2). 
Relative angles obtained from control data replicate the saccade direction bias reported by T. J. Smith and Henderson (2009, 2011) and Bays and Husain (2012). In Figure 7 we observe a clear forward saccadic bias: the saccadic momentum. The facilitation-of-return bias is present in the opposite direction, with a narrower angular spread. 
Figure 7
 
Density probability distributions of relative angle and saccade amplitude (degrees) represented as polar plots, as a function of mask types and sizes. Red circle at the center of plots indicates mask dimensions.
As was observed in BSR and SRR, a central mask provokes more backward saccades as mask size increases (+5% on average and +15% with the largest scotoma; see Supplementary Table S2), presumably in a persistent effort to sample a region of interest that could not be analyzed because of the lack of information owing to the mask. On the other hand, masking peripheral information is at the origin of a particular pattern of exploration: an increase in saccades directed away from the previous landing point (+27% on average). Participants essentially explored the scenes presented to them in a procedural and sequential manner, via a sequence of saccades bound by the artificial scotoma radius, each probing further away from the previous fixation (see Supplementary Movies S1 and S2 for scanpath examples of participants with central and peripheral masks, respectively). 
With the relative angle measures, we replicated the results of studies by Henderson et al. (1997) and Cornelissen et al. (2005), demonstrating an impact of mask types on saccade direction. To be precise, we observed a sharp increase in backward saccades during central masking trials. Our proposed interpretation of this observation is that participants fail to sample salient areas of interest with their fovea and repeatedly attempt new fixations. As Henderson et al. (1997) revealed with central artificial scotomas, once an object of interest was fixated on, participants seldom made additional fixations on that (now hidden) object, preferring to fixate outside of the mask before producing a return saccade to the object of interest. On this topic, we note a marked decline in the number of backward saccades (angle interval 135°–225°) during peripheral masking trials. The average proportion of backward saccades in peripheral masking conditions was 16.4% (95% CI [15.3, 17.5]) compared with 48.8% (95% CI [47.5, 50]) in the control condition and 52.6% (95% CI [51, 54.1]) in the central condition. 
Correlation and principal factor analysis
Absolute and relative angle ratio metrics cannot be analyzed here because they characterize whole scanpaths, contrary to the other features, which represent a single element of a scanpath. Notably, we observe very strong positive correlations between saccade amplitude, peak velocity, and peak acceleration (Table 1). This is unsurprising, as the saccade main sequence (Bahill et al., 1975; Harris & Wolpert, 2006) dictates a power law relationship between saccade amplitude and peak velocity. Fixation duration and dispersion are strongly positively correlated, which might be explained by the observation that gazing at a region of interest for a longer period of time increases the likelihood of one's gaze drifting; therefore, both variables reflect the same cognitive processes (Blignaut & Beelders, 2009). 
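The pairwise correlations reported in Table 1 are standard Pearson coefficients; a minimal sketch (illustrative only, not the authors' analysis code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```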
Table 1
 
Correlation coefficient and significance levels are reported between all gaze features. Notes: Variable names prepended with “S” derive from saccades, “F” derive from fixations. Significance levels: Ø p ≥ 0.05; · p < 0.05; * p < 0.01; ** p < 0.001; *** p < 0.0001.
We ran a factorial analysis to study variance redundancy between the same seven variables. We report four factors (Factor I: saccade amplitude, peak velocity, and peak acceleration; Factor II: fixation duration and dispersion; Factor III: relative angle; Factor IV: absolute angle), the number necessary to achieve more than 80% of cumulative variance in the model (83%). As expected from the previous correlation analysis, two factors capture the two strong correlation clusters presented above, while the two remaining factors contain the remaining variables. Because these factors share little variance, combining them in classifier models might prove useful, as they show little redundancy and possibly account for different facets of eye movements during scene exploration. 
We clearly identify four groups of features that are largely uncorrelated with one another. These denote different aspects of gaze movement: The first group (saccade amplitude, peak velocity, and peak acceleration) refers to the general distance between two fixations; the second group is related to fixation dynamics (duration and dispersion); and the last two groups are connected to spatiotemporal exploration. 
Classifier models
Probabilistic models have demonstrated their usefulness in gaze modeling, because scanpaths can be modeled as stochastic processes (Boccignone, 2015; Coutrot et al., 2018). For instance, such a model can predict the probability of a sequence (here, a scanpath) belonging to a certain experimental group (e.g., Voisin, Yoon, Tourassi, Morin-Ducote, & Hudson, 2013). It can be used to generate scanpaths (e.g., saccadic models: Le Meur & Coutrot, 2016a; Le Meur et al., 2017a), and it can also prove useful in studying bottom-up and top-down visual attention processes (Rai, Le Callet, & Cheung, 2016; Coutrot et al., 2018). In this section we take on the task of classifying the types (three classes; Model Type 1), types and sizes (nine classes; Type 2), and sizes of scotomas (four classes; Type 3). This is done as a proof of concept to evaluate the raw potential of predicting the nature of a scotoma from the scanpath features extracted from a 10-s free-viewing task. The seven scanpath characteristics used in the factorial analysis are used in this task, to which we add fixation location (X, Y positions on screen). HMMs and RNNs are implemented to receive a scanpath as input. The features of said sequence are a combination of the eight selected scanpath features. Because we want a measure of which feature (and combination of features) is the best predictor, we create HMMs and RNNs as classifiers for each combination of said features (255 combinations, from a single gaze feature to all eight within the same model). 
We selected these two types of models for their dissimilarities as classifiers, which may lead us to learn separate information about our experimental data. Markov models have shown their effectiveness when applied to gaze data (e.g., Simola, Salojärvi, & Kojo, 2008; Kanan, Bseiso, Ray, Hsiao, & Cottrell, 2015; Coutrot et al., 2016; Rai et al., 2016; Sammaknejad et al., 2017) and have been extensively used for modeling time series in general (Camastra & Vinciarelli, 2008). Parameters and outputs of an HMM are easily interpreted. On the other hand, RNNs are fairly recent models that have shown very promising performances (Gamboa, 2017) but, as far as we know from the literature, have not yet been applied to gaze data. They are more flexible and powerful than HMMs, but require more data samples to learn efficiently. In the Appendix (Figures A1 and A2), accuracy (M and 95% CI) is displayed for all 255 classifiers, ordered by increasing accuracy within increasing feature count. To complement the descriptive statistics, we computed two measures meant to characterize the performance of a feature. 
The first measure is the rank of a feature when alone in a model (ranked according to mean accuracy; first eight models in Appendix, Figures A1 and A2). The second metric measures the mean improvement observed by adding feature F to a model. To obtain this measure, we retrieve the accuracy difference between model X, containing feature F, and model X′, which includes all features present in X aside from feature F. The final improvement score is the arithmetic mean of the seven values obtained (difference between count Subgroups 1 and 2, up to the difference between count Subgroups 7 and 8). 
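The improvement score can be sketched as follows, assuming a hypothetical mapping from feature subsets to mean classifier accuracy (names and data structure are ours; gains are averaged within each feature-count subgroup before averaging across subgroups):

```python
from itertools import combinations
from statistics import mean

def mean_improvement(accuracies, feature, all_features):
    """Mean accuracy gain from adding `feature`, averaged over feature-count subgroups.

    `accuracies` maps a frozenset of feature names to a model's mean accuracy.
    For each subset X containing `feature`, compare it with X' = X minus `feature`.
    """
    others = [f for f in all_features if f != feature]
    subgroup_means = []
    for k in range(1, len(all_features)):  # X' has k features, X has k + 1
        gains = []
        for subset in combinations(others, k):
            x_prime = frozenset(subset)
            x = x_prime | {feature}
            gains.append(accuracies[x] - accuracies[x_prime])
        subgroup_means.append(mean(gains))
    return mean(subgroup_means)
```

With eight features this yields seven subgroup differences (models of one vs. two features, up to seven vs. eight).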
Angle ratio measures (HSR, HVR, BSR, and SRR) cannot be modeled here, as they characterize an entire scanpath rather than only a part thereof (i.e., a fixation or saccade). To circumvent this issue, we chose to transform relative and absolute angles into their sine and cosine counterparts. 
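A minimal sketch of this encoding (radians assumed): mapping an angle onto the unit circle removes the discontinuity at 0/2π, so nearly identical directions get nearly identical feature values.

```python
import math

def encode_angle(angle):
    """Map a circular angle (radians) to a (cos, sin) pair so that angles near
    0 and 2*pi become close in feature space."""
    return (math.cos(angle), math.sin(angle))
```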
Hidden Markov models
HMMs are probabilistic models. Through a forward-backward procedure, HMMs are trained to model a set of input data (sequences) until convergence. Data are modeled by a set number of hidden states, their emission probabilities, their transition probabilities, and their prior probabilities. As a Markovian process, an HMM assumes a one-element dependency between samples of a sequence; that is to say, a sequence element Si is assumed to depend only on Si−1, so HMMs cannot model complex dependencies spanning more than two samples. Each classifier (related to a combination of features) is a model composed of n HMMs (where n is the number of scanpaths in each class, plus the number of classes). 
HMMs are created with the Python library pomegranate (Schreiber, 2017). We empirically chose eight states modeled by Gaussian or gamma (fixation duration and dispersion) distributions. This number of states was chosen to reduce the complexity of the analyses, though a variational approach to finding the ideal number of hidden states is advised (McGrory & Titterington, 2009; Coutrot et al., 2018). The HMMs are ergodic: all states are linked, and one can transition from any state to any other. Gaussian distributions are initialized with μ = 0, σ² = 1, and gamma distributions with α = 2, β = 2. Prior and transition matrices are initialized with uniform probabilities and trained along with the state distributions (Gaussian/gamma emission densities). 
Procedure
For each combination of features and prediction classes, a set of HMMs is trained in a leave-one-out process. Such a model predicts the probability of a sequence of gaze feature(s) belonging to each class. The “winning” class is the one with the highest output probability (Boccignone, 2015; Coutrot et al., 2018). A comprehensive description of the procedure is reported in the Supplementary File S1 (“HMM procedure”). The mean performance of a classifier model is obtained by dividing the number of successful classifications by the total number of scanpaths. 
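The paper's classifiers are built with pomegranate; purely as an illustration of the "winning class" rule, the minimal sketch below scores a discrete observation sequence against one toy HMM per class (scaled forward algorithm) and picks the class with the highest likelihood. All parameters and class names are invented, not fitted values from the experiment:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence (symbol indices) under an HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

def classify(obs, class_hmms):
    """Winning class = the HMM assigning the highest sequence likelihood."""
    return max(class_hmms, key=lambda c: forward_loglik(obs, *class_hmms[c]))

# Two toy 2-state HMMs over a binary saccade code (0 = forward, 1 = backward)
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
hmms = {
    "peripheral": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),
    "central":    (pi, A, np.array([[0.2, 0.8], [0.1, 0.9]])),
}
print(classify([0, 0, 1, 0, 0, 0], hmms))  # mostly forward saccades → peripheral
```

A scanpath dominated by forward saccades is best explained by the "peripheral" model, mirroring the forward-saccade bias reported for peripheral masks later in the article.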
Results
In Figure A1 we notice that, in the prediction of a scotoma's type and size, the best overall model achieves a mean accuracy of 56.34% (95% CI [54.52, 58.17]; chance level = 11.11%) and contains all features but saccade amplitude. The model that best predicts the type of scotoma achieves a mean accuracy of 65.77% (95% CI [63.93, 67.61]; chance level = 33.33%). In light of the two aforementioned measures (rank and mean improvement), we can report (Table 2, Figure A1) that relative angles, peak velocities, and saccade amplitudes are the most useful features, both alone and in combination with one another. 
Table 2
 
Rank, accuracy, and mean improvement for each gaze feature are reported for hidden Markov models classifying (a) types of scotoma and (b) types and sizes of scotoma. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with "S" derive from saccades, with "F" from fixations.
Tasked with predicting a scotoma's type at the origin of a scanpath, the best overall model implements six features (missing: fixation amplitude and saccade peak acceleration) for a mean accuracy of 76.37% (95% CI [75.21, 77.52]; chance level = 33.33%). The accuracy increases significantly when predicting only the type of scotoma, mostly because confusion between scotomas of the same type but of different radii no longer exists. There is an evident advantage to models including information regarding the relative angle, which results in a mean ranking position of 76% and a mean improvement in accuracy of 8.42%. Surprisingly, saccade amplitude, peak velocity, and peak acceleration appear slightly decorrelated here: in predicting type and size, mean rankings and improvements for these features differ. Despite their highly correlated nature, saccade peak velocity sees a mean improvement 3.67% higher than that of saccade amplitude. 
When predicting scotoma radii alone (Model Type 3), relative angle, absolute angle, and fixation position emerge as the three best features (Table 3). The best models achieve 48.57% (95% CI [46.99, 50.15]; chance level = 25%) for central scotomas and 50.46% (95% CI [49.06, 51.85]) for peripheral scotomas. In Figure 4 we can see that saccade angle measures (predominantly relative angles) vary significantly as mask radii increase. With fixation position, HMMs converge to hidden states modeling areas of interest (as ellipses) where fixations fall most frequently; this feature may therefore be redundant with relative angle, as both may separate classes on the basis of backward saccade rates. HMMs predicting peripheral mask sizes again leverage mainly relative angles and fixation position. Both Type 3 models appear to do better with the spatiotemporal dynamics of the scanpaths. 
Table 3
 
Mean rank and mean improvement for each gaze feature are reported for hidden Markov models classifying (a) central mask and (b) peripheral mask radii. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with "S" derive from saccades, with "F" from fixations.
Recurrent neural networks
RNNs are a subtype of artificial neural networks that implement feedback connections. A simple three-layer recurrent network (Elman, 1990) injects the activity computed during previous steps as input alongside information from new steps to be processed. RNNs thereby extract and propagate information relevant to accomplishing a task across an entire sequence, outputting, for instance, the probability of belonging to each of a set of classes, the highest probability designating the "winning" class. The Supplementary File S1 ("RNN specification") and Supplementary Figure S1 detail the full specifications of the models implemented in this experiment. 
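A minimal NumPy sketch of an Elman-style forward pass over a scanpath may clarify the feedback mechanism. Dimensions, weights, and the softmax readout are illustrative only; the actual architecture is the one specified in Supplementary File S1:

```python
import numpy as np

rng = np.random.default_rng(0)

def elman_forward(X, Wxh, Whh, Why, bh, by):
    """Elman (1990) network: the hidden state from the previous step is
    fed back alongside each new input vector; the final hidden state is
    mapped to class probabilities via a softmax."""
    h = np.zeros(Whh.shape[0])
    for x in X:                       # one gaze-feature vector per step
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    logits = Why @ h + by
    e = np.exp(logits - logits.max())
    return e / e.sum()                # probability per scotoma class

# Toy dimensions: 3 features per saccade/fixation, 8 hidden units, 3 classes
n_in, n_hid, n_cls = 3, 8, 3
params = (rng.normal(size=(n_hid, n_in)) * 0.1,
          rng.normal(size=(n_hid, n_hid)) * 0.1,
          rng.normal(size=(n_cls, n_hid)) * 0.1,
          np.zeros(n_hid), np.zeros(n_cls))
probs = elman_forward(rng.normal(size=(20, n_in)), *params)
```

Because the hidden state is carried across all twenty steps, the output can in principle depend on patterns spanning the whole sequence, unlike the one-step dependency of an HMM.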
Procedure
A model's average accuracy is determined in a cross-validation process, where, for each image stimulus, two scanpaths are chosen at random to be removed from training and used solely for testing purposes (resulting in 30 test scanpaths per class). This process is repeated in 15 runs to obtain average performances for each combination of features in predicting classes of Model Types 1 and 2. A run is defined as the training and testing of a network through 4,000 epochs (forward and backward passes of the entire data set); model parameters are reset before a new training phase begins. In a training phase, the network is updated via a stochastic gradient descent method (Adam optimizer, Kingma & Ba, 2014; learning rate = 0.01). Updating takes place after each batch from the training data set (the data set is shuffled at the start of every epoch). A performance value of a network is output by a forward pass on the testing data set every 100 epochs. The best accuracy for a network is determined by the epoch producing the most accurate categorization (averaged over all classes). 
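As an illustration of the optimizer only, a single Adam update (Kingma & Ba, 2014) can be sketched as follows, applied here to a toy one-dimensional loss rather than to the actual network; everything besides the learning rate of 0.01 and Adam's standard defaults is invented:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected running means of the gradient (m)
    and of its square (v) adapt the step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy problem: minimize (w - 3)^2, standing in for the network loss
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2.0 * (w - 3.0)
    w, m, v = adam_step(w, grad, m, v, t)
```

In the experiment this update is applied after every batch, with the full data set shuffled at the start of each of the 4,000 epochs.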
Results
In Figure A2 we notice that, tasked with predicting a scotoma's type and size, the best overall model achieves a mean accuracy of 41.01% (95% CI [40.25, 41.93]) and contains all scanpath features. Compared with HMMs, confusion between classes appears to be located predominantly within type modalities rather than between them: the model that best predicts types of scotoma achieves a mean accuracy of 81.60% (95% CI [80.94, 82.27]). We report that the best features are related to saccade amplitudes and directions. 
Tasked with predicting a scotoma's type at the origin of a scanpath, the best overall model implements six features (missing: absolute angle and saccade peak acceleration) for a mean accuracy of 82.59% (95% CI [81.95, 83.24]). Accuracy increases significantly when predicting only the type of scotomas, again thanks to the absence of confusion between scotomas of the same type but of different radii. In Table 4, we observe a clear advantage of models including saccade amplitude, resulting in a mean ranking position of 75%, and a mean improvement in accuracy of 5.29%. 
Table 4
 
Rank, accuracy, and mean improvement for each gaze feature are reported for recurrent neural networks classifying (a) types of scotoma and (b) types and sizes of scotoma. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with "S" derive from saccades, with "F" from fixations.
Type 3 RNN models (Table 5) appear to rely predominantly on saccade amplitudes even with peripheral masks where the difference in amplitudes is subtler (Figure 4). The best models reach 45.56% (95% CI [44.42, 46.7]) accuracy when predicting central masks and 54.58% (95% CI [53.44, 55.72]) accuracy in relation to peripheral masks. 
Table 5
 
Mean rank and mean improvement for each gaze feature are reported for recurrent neural networks classifying (a) central mask and (b) peripheral mask sizes. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with "S" derive from saccades, with "F" from fixations.
General discussion
Scanpath dynamics
While experiencing peripheral masks, participants preferred to explore scenes through a series of saccades of amplitudes bound by the mask radius and directed largely in the approximate direction of the previous saccades (away from previously fixated areas), in line with the saccadic momentum phenomenon described by T. J. Smith and Henderson (2009, 2011). This resulted in a succession of short saccades in a scanning pattern that we hypothesize is an effort at sampling most of the scene surface in the allocated trial duration. When experiencing central masks, participants were much more prone to look back toward an area of interest where their attention had been directed immediately prior. We noticed in some participants a strong, sometimes irresistible, urge to look back and forth between areas of interest that could not be properly sampled on account of the scotoma. In effect, the saccadic momentum appears nullified when the expected foveal information is masked. More experimentation is required to understand the interplay between the facilitation of return and the saccadic momentum mechanisms. 
Henderson et al. (1997) reported a similar increase in return saccades. In that study, participants had part of their central field of view masked by an artificial scotoma. The authors reported that after fixating on an object of interest (masked), observers would quickly fixate outside of the mask before gazing back at the object: "data suggests that the absence of foveal information leads the eyes to move along to a new (currently extrafoveal) source of information as quickly as possible" (p. 334). According to Henderson et al. (1997), participants looked away from the masked region of interest not exactly because there was no information to sample, but because areas outside of the mask were more salient in comparison and drew the eyes. We see in Figure 5 that saccade amplitude is predominantly bounded by the information at hand during a fixation, as was expected from the literature describing how visual information shapes saccades (Foulsham et al., 2011; Nuthmann, 2014). Saccades appear to target the regions of the field of view containing the best visual information available, as was demonstrated by modifying visual data instead of outright removing it, for instance by removing color (Nuthmann & Malcolm, 2016) or by using high- or low-pass spatial frequency filtering (Loschky & McConkie, 2002; Nuthmann, 2013, 2014; Cajar, Engbert, & Laubrock, 2016; Cajar, Schneeweiß, et al., 2016). Therefore, the effect of artificial scotomas on saccade targets is not due to an absolute loss of information (as in our experiment) but rather to a disparity of visual information quality perceived within one's visual field. These effects may be observable in patients suffering from glaucoma and macular degeneration as well, when masking their better eye in order to impede the coping and compensation mechanisms afforded by binocular vision. 
Henderson et al.'s (1997) experiment explains the shorter fixation durations observed with central masks as a desire to quickly transition to an area with better information to sample. We observe the same reduction in fixation duration with larger peripheral masking. To explain this second outcome, it bears repeating that the effect sizes pertaining to fixation duration are small; this family of effects may not deserve much attention here. Nonetheless, we hypothesize that participants internalized the trial duration constraint and, as a result, produced shorter fixations in an effort to sample as much of the scene as possible in the allocated time. In the case of peripheral masks leaving a very small area of information available (1.5° and 2.5° masks), it is possible that these masks did not leave sufficient data to analyze (which should result in shorter fixations), but they did not leave any peripheral information to process either, which might be the main reason why we observe an increase in duration (uncertainty linked to saccade planning). 
Concerning absolute angles, we observe an effect with central masking: The proportion of vertical saccades decreases with an increase in scotoma size. Although we did not observe an effect related to peripheral scotomas, it must be noted that Crabb et al. (2014) managed to separate glaucoma patients from control participants on the basis of two-dimensional histograms of absolute angles. 
Saccade exploration biases
Polar plots of relative angles and saccade amplitudes indicate clear oculomotor biases on a par with the horizontal bias observed via absolute angles (Foulsham et al., 2008; Tatler & Vincent, 2009; Le Meur & Liu, 2015). In particular, we present such a bias as a function of the type of scotoma. A peripheral scotoma results in a clear bias toward forward saccades, whereas central masks of 3.5° and 4.5° radii resulted in an explicit backward saccade bias. 
We show the same saccadic direction biases as T. J. Smith and Henderson (2009, 2011) originally did, here in a mask-free, free-viewing exploration of natural scenes: The main mode of the density distribution shows a bias in favor of forward saccades (the saccadic momentum), located within a 45° arc centered at 0° and peaking at 4° of amplitude. We hypothesize that this first mode accounts for a particular exploratory behavior: coupling the horizontal biases from absolute and relative angles, it appears that participants explore a scene mainly via small forward saccades parallel to the horizontal axis. The second mode falls within backward saccades, in a 30° arc centered at 180° and peaking at 4.5° of amplitude. This hints at a behavior where an individual swiftly looks back at an object of interest in order to study it further (the facilitation of return). 
Feature identification
Across all the analyses produced here, the most important features are related to saccades rather than fixations. Information about trajectory patterns in relation to stimuli appears more relevant. In particular, relative angles characterize very well the exploratory pattern associated with peripheral scotomas, though they are not sufficient to adequately separate smaller central scotomas from control data; indeed, 1.5° and 2.5° central scotomas are very similar to control data in terms of relative angles and amplitudes. We propose two nonexclusive hypotheses to explain this effect: (a) these radii are so small that they allow sufficient sampling from a single fixation (the fixated region is not masked enough to hinder foveal sampling significantly and to make peripheral information more salient), and (b) the protocol is at fault (the latency between a gaze movement and the update of a mask onscreen is too high). In the first hypothesis, central mask radii are roughly the size of the fovea, leaving enough information to be sampled from the rest of the macula. In addition, since the position of a mask is set with the gaze position of the dominant eye, and since disparity exists between the left and right eyes' gaze positions, it is possible that the nondominant eye was able to sample a significant portion of what was hidden from the other eye. To understand the second hypothesis, let us remember that latency is a critical component of gaze-contingent protocols: too high, and a participant will be able to sample information before a central mask catches up with their gaze. This effect is particularly important as central masks decrease in size (McConkie & Loschky, 2002; Loschky & Wolverton, 2007). We achieved a maximum latency of 13 ms (worst-case scenario) and a mean latency of approximately 6 ms, which, although quite low for a gaze-contingent protocol, may not be low enough for central masks of radii 2.5° and below. 
Classifiers
HMM and RNN modeling analyses show that categorizing scotoma types is substantially easier than categorizing both types and sizes. This is due to confusions between masks of the same type but of proximate radii, as well as between central masking and control data, for reasons explained in the previous paragraph. RNNs are superior to HMMs when categorizing scotoma types (three classes). RNNs show better results with fewer features combined, even when incorporating needless features, whereas HMMs will attempt to fit all features provided, even when disadvantageous. HMMs cannot model temporal information beyond the previous fixation; because of this constraint, they may miss behaviors or patterns spanning more than two elements in a scanpath. Modern recurrent neural networks do not suffer from this limitation. RNNs show poorer performance than HMMs when classifying sizes because of a general shortcoming of artificial neural networks: They require large amounts of data to work well. In this case, we divided the data set into nine subsets, simultaneously reducing the number of examples per class. 
HMMs made more classification errors between scotoma type conditions, whereas RNNs mistook masks of the same type but of different sizes. As a result, classifier models do not perform well enough to predict a scotoma's type and size at a satisfactory level; nonetheless, they achieve better-than-chance performance. 
In the context of a screening test where classifying performances matter the most, a model should be fine-tuned to a particular combination of features. Here, we need models whose meta-parameters (architecture specifications, etc.) permit generally acceptable performances irrespective of the combination of gaze features we provide. 
Effectiveness of the methodology
The methodology deployed in this article was able to identify the core features necessary for studying and identifying scotomas. However, the visual task should be chosen with care according to the target population. For example, we report that observers directed their gaze toward areas where salient information was located at the time of planning their saccades. It is possible that this effect is strong here because no scotoma training was used, and because the task was a free-viewing task that, in contrast to visual search, did not require participants to direct their attention to particular parts of the scenes. In a stimuli comparison task, Janssen and Verghese (2015) showed that subjects looked in the direction of relevant information masked by an artificial scotoma instead of a less relevant, yet observable, stimulus. Likewise, it has been reported that patients with peripheral field loss purposely looked toward objects hidden by their visual field defect before the saccade, in order to sample missing information; this effect was observed in real-life shopping (Sippel et al., 2014), walking (Luo et al., 2008), and driving (Kasneci et al., 2014) tasks. The inverse effect (patients avoiding planning saccades within visual field defects) was reported while walking (Vargas-Martín & Peli, 2006), during a visual search task (Wiecek et al., 2012), in scene viewing (N. D. Smith, Crabb, et al., 2012), and during simulated driving (Lee, Black, & Wood, 2017), whereas N. D. Smith, Glen, and Crabb (2012) reported no effect in a visual search task. In this context, more demanding tasks (e.g., visual search, driving) should be studied using this article's method, when patient condition permits, because top-down mechanisms influence gaze (Luo et al., 2008). 
Our analyses are redundant in different ways. Correlation and factorial analyses do not provide any information that we did not already observe elsewhere or use later. We do not recommend removing redundant features on this basis, however, as a classifier may leverage information that is not shared between features to improve its accuracy. 
Ranks can be interpreted as the raw classification potential of a feature considered in isolation, while mean improvements report the utility of a feature when combined with others. The fact that mean improvements are positive, together with the moderate correlation between the two measures (Pearson's r = .47), indicates that features must be combined to reach their full potential. Moreover, although our analyses are varied and address different aspects of the same question, a single final score should be devised. This score must be statistically grounded in order to allow between-feature comparisons. 
Although we singled out patients with glaucoma and macular degeneration for comparisons (because of their epidemiological prevalence) we aim for our methodology to be suitable for any disease causing visual field defects, while keeping in mind that the task should be tailored to the idiosyncrasy of the targeted population. 
The implementation of a training protocol (Kwon, Nandy, & Tjan, 2013; Liu & Kwon, 2016; Ryu, Mann, Abernethy, & Poolton, 2016) in this experiment may have enabled participants to adapt to the masks and develop coping mechanisms that could have yielded results closer to those obtained from patients suffering from real scotomas. We chose not to do so, because we wished to first validate our methodology on the abundant literature on artificial scotomas, and to avoid the confounding effects of training on the impact of masks conditions. 
Results obtained from normal adult participants may not be transferable to individuals with visual field defects; the object of future studies is to verify and adapt the methodology (tasks and stimuli) on a per population basis. 
Conclusion
This experiment is a first step toward a simple and light (in terms of cognitive and material resources) gaze-movement measuring tool, with applications in screening tests, the monitoring of scotoma progression and of the evolution of coping mechanisms, and the measurement of the effects of visual therapies. This preliminary step comprised a series of analyses whose purpose was to identify the gaze characteristics best suited to studying and distinguishing between artificial scotomas. It follows that saccade amplitude and relative angle are particularly well suited to describing the exploratory patterns associated with a scotoma. These two features are also very important for classifier models. Conveniently, they can be determined fairly robustly from relative two-dimensional gaze positions alone, and are therefore easily obtainable with low-cost eye tracking systems (see Ooms & Krassanakis, 2018, for a state-of-the-art assessment of low-cost eye tracking) or webcam-based eye tracking (e.g., Webgazer by Papoutsaki et al., 2016). 
Acknowledgments
The work of Erwan David has been supported by RFI Atlanstic2020. 
Commercial relationships: none. 
Corresponding author: Erwan Joël David. 
Address: University of Nantes, Nantes, France. 
References
Abbott, D., Shirali, Y., Haws, J. K., & Lack, C. W. (2017). Biobehavioral assessment of the anxiety disorders: Current progress and future directions. World Journal of Psychiatry, 7 (3), 133.
Aguilar, C., & Castet, E. (2011). Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies. Vision Research, 51 (9), 997–1012.
Arabadzhiyska, E., Tursun, O. T., Myszkowski, K., Seidel, H.-P., & Didyk, P. (2017). Saccade landing position prediction for gaze-contingent rendering. ACM Transactions on Graphics, 36(4), 50. New York, NY: ACM.
Asfaw, D. S., Jones, P. R., Mönter, V. M., Smith, N. D., & Crabb, D. P. (2018). Does glaucoma alter eye movements when viewing images of natural scenes? A between-eye study. Investigative Ophthalmology & Visual Science, 59 (8), 3189–3198.
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59 (4), 390–412.
Bahill, A. T., Clark, M. R., & Stark, L. (1975). The main sequence, a tool for studying human eye movements. Mathematical Biosciences, 24 (3–4), 191–204.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823.
Bays, P. M., & Husain, M. (2012). Active inhibition and memory promote exploration and search of natural scenes. Journal of Vision, 12 (8): 8, 1–18, https://doi.org/10.1167/12.8.8. [PubMed] [Article]
Beltrán, J., García-Vázquez, M. S., Benois-Pineau, J., Gutierrez-Robledo, L. M., & Dartigues, J.-F. (2018). Computational techniques for eye movements analysis towards supporting early diagnosis of Alzheimer's disease: A review. Computational and Mathematical Methods in Medicine, 2018: 2676409.
Benfatto, M. N., Seimyr, G. Ö., Ygge, J., Pansell, T., Rydberg, A., & Jacobson, C. (2016). Screening for dyslexia using eye tracking during reading. PLoS One, 11 (12): e0165508.
Blignaut, P., & Beelders, T. (2009). The effect of fixational eye movements on fixation identification with a dispersion-based fixation detection algorithm. Journal of Eye Movement Research, 2 (5).
Boccignone, G. (2015). Advanced statistical methods for eye movement analysis and modeling: A gentle introduction. arXiv: 1506.07194.
Boccignone, G., Ferraro, M., Crespi, S., Robino, C., & de'Sperati, C. (2014). Detecting expert's eye using a multiple-kernel relevance vector machine. Journal of Eye Movement Research, 7 (2).
Boisvert, J. F., & Bruce, N. D. (2016). Predicting task from eye movements: On the importance of spatial distribution, dynamics, and image features. Neurocomputing, 207, 653–668.
Borji, A., & Itti, L. (2014). Defending Yarbus: Eye movements reveal observers' task. Journal of Vision, 14 (3): 29, 1–22, https://doi.org/10.1167/14.3.29. [PubMed] [Article]
Cajar, A., Engbert, R., & Laubrock, J. (2016). Spatial frequency processing in the central and peripheral visual field during scene viewing. Vision Research, 127, 186–197.
Cajar, A., Schneeweiß, P., Engbert, R., & Laubrock, J. (2016). Coupling of attention and saccades when viewing scenes with central and peripheral degradation. Journal of Vision, 16 (2): 8, 1–19, https://doi.org/10.1167/16.2.8. [PubMed] [Article]
Camastra, F., & Vinciarelli, A. (2008). Markovian models for sequential data. London, UK: Springer.
Cantoni, V., Galdi, C., Nappi, M., Porta, M., & Riccio, D. (2015). Gant: Gaze analysis technique for human identification. Pattern Recognition, 48 (4), 1027–1038.
Cheung, S.-H., & Legge, G. E. (2005). Functional and cortical adaptations to central vision loss. Visual Neuroscience, 22 (2), 187–201.
Chung, S. T. (2011). Improving reading speed for people with central vision loss through perceptual learning. Investigative Ophthalmology & Visual Science, 52 (2), 1164–1170.
Clarke, A. D., & Tatler, B. W. (2014). Deriving an appropriate baseline for describing fixation behaviour. Vision Research, 102, 41–51.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Earlbaum.
Coleman, H. R., Chan, C.-C., Ferris, F. L., & Chew, E. Y. (2008). Age-related macular degeneration. The Lancet, 372 (9652), 1835–1845.
Cornelissen, F. W., Bruin, K. J., & Kooijman, A. C. (2005). The influence of artificial scotomas on eye movements during visual search. Optometry and Vision Science, 82 (1), 27–35.
Coutrot, A., Binetti, N., Harrison, C., Mareschal, I., & Johnston, A. (2016). Face exploration dynamics differentiate men and women. Journal of Vision, 16 (14): 16, 1–19, https://doi.org/10.1167/16.14.16. [PubMed] [Article]
Coutrot, A., Hsiao, J. H., & Chan, A. B. (2018). Scanpath modeling and classification with hidden Markov models. Behavior Research Methods, 50 (1), 362–379.
Crabb, D. P., Smith, N. D., Rauscher, F. G., Chisholm, C. M., Barbur, J. L., Edgar, D. F., & Garway-Heath, D. F. (2010). Exploring eye movements in patients with glaucoma when viewing a driving scene. PLoS One, 5 (3): e9710.
Crabb, D. P., Smith, N. D., & Zhu, H. (2014). What's on tv? Detecting age-related neurodegenerative eye disease using eye movement scanpaths. Frontiers in Aging Neuroscience, 6, 312.
Crossland, M. D., Engel, S. A., & Legge, G. E. (2011). The preferred retinal locus in macular disease: Toward a consensus definition. Retina, 31 (10), 2109–2114.
Cumming, G. (2008). Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspectives on Psychological Science, 3 (4), 286–300.
David, E., Perreira Da Silva, M., Lebranchu, P., & Le Callet, P. (2018). How are ocular behaviours affected by central and peripheral vision losses? A study based on artificial scotomas and gaze-contingent protocol. Electronic Imaging, 2018 (6), 1–6, https://doi.org/10.2352/ISSN.2470-1173.2018.14.HVEI-504.
Demidenko, E. (2016). The p-value you can't buy. The American Statistician, 70 (1), 33–38.
Duchowski, A. T., Cournia, N., & Murphy, H. (2004). Gaze-contingent displays: A review. CyberPsychology & Behavior, 7 (6), 621–634.
Eivazi, S., & Bednarik, R. (2011). Predicting problem-solving behavior and performance levels from visual attention data. In Proceedings of the 2nd workshop on eye gaze in intelligent human machine interaction at IUI (pp. 9–16). New York, NY: ACM.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14 (2), 179–211.
Engbert, R., Trukenbrod, H. A., Barthelmé, S., & Wichmann, F. A. (2015). Spatial statistics and attentional dynamics in scene viewing. Journal of Vision, 15 (1): 14, 1–17, https://doi.org/10.1167/15.1.14. [PubMed] [Article]
Fea, A. M., Hengerer, F., Lavia, C., & Au, L. (2017). Glaucoma quality of life. Journal of Ophthalmology, 2017: 4257151.
Foulsham, T., Kingstone, A., & Underwood, G. (2008). Turning the world around: Patterns in saccade direction vary with picture orientation. Vision Research, 48 (17), 1777–1790.
Foulsham, T., Teszka, R., & Kingstone, A. (2011). Saccade control in natural images is shaped by the information visible at fixation: Evidence from asymmetric gaze-contingent windows. Attention, Perception, & Psychophysics, 73 (1), 266–283.
Friedman, D. S., O'Colmain, B. J., Munoz, B., Tomany, S. C., McCarty, C., De Jong, P.,… Kempen, J. (2004). Prevalence of age-related macular degeneration in the United States. Archives of Ophthalmology, 122 (4), 564–572.
Gamboa, J. C. B. (2017). Deep learning for time-series analysis. arXiv preprint arXiv:1701.01887.
Geringswald, F., Porracin, E., & Pollmann, S. (2016). Impairment of visual memory for objects in natural scenes by simulated central scotomata. Journal of Vision, 16 (2): 6, 1–12, https://doi.org/10.1167/16.2.6. [PubMed] [Article]
Glen, F. C., Smith, N. D., Jones, L., & Crabb, D. P. (2016). ‘I didn't see that coming': Simulated visual fields and driving hazard perception test performance. Clinical and Experimental Optometry, 99 (5), 469–475.
Haji-Abolhassani, A., & Clark, J. J. (2014). An inverse Yarbus process: Predicting observers' task from eye movement patterns. Vision Research, 103, 127–142.
Harris, C. M., & Wolpert, D. M. (2006). The main sequence of saccades optimizes speed-accuracy trade-off. Biological Cybernetics, 95 (1), 21–29.
Henderson, J. M., McClure, K. K., Pierce, S., & Schrock, G. (1997). Object identification without foveal vision: Evidence from an artificial scotoma paradigm. Perception & Psychophysics, 59 (3), 323–346.
Henson, D. B., Evans, J., Chauhan, B. C., & Lane, C. (1996). Influence of fixation accuracy on threshold variability in patients with open angle glaucoma. Investigative Ophthalmology & Visual Science, 37 (2), 444–450.
Holland, C., & Komogortsev, O. V. (2011). Biometric identification via eye movement scanpaths in reading. In 2011 International joint conference on Biometrics (IJCB), (pp. 1–8). New York, NY: IEEE.
Hoppe, S., Loetscher, T., Morey, S., & Bulling, A. (2015). Recognition of curiosity using eye movement analysis. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (pp. 185–188). New York, NY: ACM.
Hoppe, S., Loetscher, T., Morey, S. A., & Bulling, A. (2018). Eye movements during everyday behavior predict personality traits. Frontiers in Human Neuroscience, 12, 105.
Itti, L. (2015). New eye-tracking techniques may revolutionize mental health screening. Neuron, 88 (3), 442–444.
Jager, R. D., Mieler, W. F., & Miller, J. W. (2008). Age-related macular degeneration. New England Journal of Medicine, 358 (24), 2606–2617.
Jammalamadaka, S. R., & Sengupta, A. (2001). Topics in circular statistics (Vol. 5). River Edge, NJ: World Scientific.
Janssen, C. P., & Verghese, P. (2015). Stop before you saccade: Looking into an artificial peripheral scotoma. Journal of Vision, 15 (5): 7, 1–19, https://doi.org/10.1167/15.5.7. [PubMed] [Article]
Kanan, C., Bseiso, D. N., Ray, N. A., Hsiao, J. H., & Cottrell, G. W. (2015). Humans have idiosyncratic and task-specific scanpaths for judging faces. Vision Research, 108, 67–76.
Kanan, C., Ray, N. A., Bseiso, D. N., Hsiao, J. H., & Cottrell, G. W. (2014). Predicting an observer's task using multi-fixation pattern analysis. In Spencer S. N. (Ed.), Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 287–290). New York, NY: ACM.
Kasneci, E., Sippel, K., Aehling, K., Heister, M., Rosenstiel, W., Schiefer, U., & Papageorgiou E. (2014). Driving with binocular visual field loss? A study on a supervised on-road parcours with simultaneous eye and head tracking. PLoS One, 9 (2): e87470.
King, A., Azuara-Blanco, A., & Tuulonen, A. (2013). Authors' reply to Georgalas and colleagues. British Medical Journal, 347, f4216.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kolodziej, M., Majkowski, A., Francuz, P., Rak, R. J., & Augustynowicz, P. (2018). Identifying experts in the field of visual arts using oculomotor signals. Journal of Eye Movement Research, 11 (3).
Kübler, T. C., Rothe, C., Schiefer, U., Rosenstiel, W., & Kasneci, E. (2017). Subsmatch 2.0: Scanpath comparison and classification based on subsequence frequencies. Behavior Research Methods, 49 (3), 1048–1064.
Kupas, D., Harangi, B., Czifra, G., & Andrassy, G. (2017). Decision support system for the diagnosis of neurological disorders based on gaze tracking. In 10th International Symposium on Image and Signal Processing and Analysis (ispa), 2017 (pp. 37–40). New York, NY: IEEE.
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82 (13), 1–26, https://doi.org/10.18637/jss.v082.i13
Kwon, M., Nandy, A. S., & Tjan, B. S. (2013). Rapid and persistent adaptability of human oculomotor control in response to simulated central vision loss. Current Biology, 23 (17), 1663–1669.
Laubrock, J., Cajar, A., & Engbert, R. (2013). Control of fixation duration during scene viewing by interaction of foveal and peripheral processing. Journal of Vision, 13 (12): 11, 1–20, https://doi.org/10.1167/13.12.11. [PubMed] [Article]
Le Meur, O., & Coutrot, A. (2016a). How saccadic models help predict where we look during a visual task? Application to visual quality assessment. Electronic Imaging, 2016 (13), 1–7.
Le Meur, O., & Coutrot, A. (2016b). Introducing context-dependent and spatially-variant viewing biases in saccadic models. Vision Research, 121, 72–84.
Le Meur, O., Coutrot, A., Liu, Z., Rämä, P., Le Roch, A., & Helo, A. (2017a). Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood. IEEE Transactions on Image Processing, 26 (10), 4777–4789.
Le Meur, O., Coutrot, A., Liu, Z., Rämä, P., Le Roch, A., & Helo, A. (2017b). Your gaze betrays your age. In 25th European Signal Processing Conference (EUSIPCO), (pp. 1892–1896).
Le Meur, O., & Liu, Z. (2015). Saccadic model of eye movements for free-viewing condition. Vision Research, 116, 152–164.
Lee, S. S.-Y., Black, A. A., & Wood, J. M. (2017). Effect of glaucoma on eye movement patterns and laboratory-based hazard detection ability. PLoS One, 12 (6): e0178876.
Leigh, R. J., & Zee, D. S. (2015). The neurology of eye movements (Vol. 90). Oxford, UK: Oxford University Press.
Liao, W., Zhang, W., Zhu, Z., & Ji, Q. (2005, September). A real-time human stress monitoring system using dynamic Bayesian network. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)-Workshops (pp. 70–70). New York, NY: IEEE.
Liu, R., & Kwon, M. (2016). Integrating oculomotor and perceptual training to induce a pseudofovea: A model system for studying central vision loss. Journal of Vision, 16 (6): 10, 1–21, https://doi.org/10.1167/16.6.10. [PubMed] [Article]
Livengood, H. M., & Baker, N. A. (2015). The role of occupational therapy in vision rehabilitation of individuals with glaucoma. Disability and Rehabilitation, 37 (13), 1202–1208.
Longhin, E., Convento, E., Pilotto, E., Bonin, G., Vujosevic, S., Kotsafti, O., … (2013). Static and dynamic retinal fixation stability in microperimetry. Canadian Journal of Ophthalmology, 48 (5), 375–380.
Loschky, L., & McConkie, G. W. (2002). Investigating spatial vision and dynamic attentional selection using a gaze-contingent multiresolutional display. Journal of Experimental Psychology: Applied, 8 (2), 99.
Loschky, L., McConkie, G., Yang, J., & Miller, M. (2005). The limits of visual resolution in natural scene viewing. Visual Cognition, 12 (6), 1057–1092.
Loschky, L., & Wolverton, G. S. (2007). How late can you update gaze-contingent multiresolutional displays without detection? ACM Transactions on Multimedia Computing, Communications, and Applications, 3 (4), 7.
Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the normality assumption in large public health data sets. Annual Review of Public Health, 23 (1), 151–169.
Luo, G., Vargas-Martin, F., & Peli, E. (2008). The role of peripheral vision in saccade planning: Learning from people with tunnel vision. Journal of Vision, 8 (14): 25, 1–8, https://doi.org/10.1167/8.14.25. [PubMed] [Article]
Macedo, A. F., Crossland, M. D., & Rubin, G. S. (2011). Investigating unstable fixation in patients with macular disease. Investigative Ophthalmology & Visual Science, 52 (3), 1275–1280.
Manor, B. R., & Gordon, E. (2003). Defining the temporal threshold for ocular fixation in free-viewing visuocognitive tasks. Journal of Neuroscience Methods, 128 (1–2), 85–93.
McConkie, G. W., & Loschky, L. (2002). Perception onset time during fixations in free viewing. Behavior Research Methods, 34 (4), 481–490.
McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17 (6), 578–586.
McGrory, C. A., & Titterington, D. (2009). Variational Bayesian analysis for hidden Markov models. Australian & New Zealand Journal of Statistics, 51 (2), 227–244.
Mergenthaler, K., & Engbert, R. (2010). Microsaccades are different from saccades in scene perception. Experimental Brain Research, 203 (4), 753–757.
Mitchell, J., & Bradley, C. (2006). Quality of life in age-related macular degeneration: A review of the literature. Health and Quality of Life Outcomes, 4 (1), 97.
Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining r2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4 (2), 133–142.
Nilsson, U. L., Frennesson, C., & Nilsson, S. E. G. (2003). Patients with AMD and a large absolute central scotoma can be trained successfully to use eccentric viewing, as demonstrated in a scanning laser ophthalmoscope. Vision Research, 43 (16), 1777–1787.
Nuthmann, A. (2013). On the visual span during object search in real-world scenes. Visual Cognition, 21 (7), 803–837.
Nuthmann, A. (2014). How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 40 (1), 342.
Nuthmann, A., & Malcolm, G. L. (2016). Eye guidance during real-world scene search: The role color plays in central and peripheral vision. Journal of Vision, 16 (2): 3, 1–16, https://doi.org/10.1167/16.2.3. [PubMed] [Article]
Nuthmann, A., Smith, T. J., Engbert, R., & Henderson, J. M. (2010). Crisp: A computational model of fixation durations in scene viewing. Psychological Review, 117 (2), 382.
Ooms, K., & Krassanakis, V. (2018). Measuring the spatial noise of a low-cost eye tracker to enhance fixation detection. Journal of Imaging, 4 (8). Available from http://www.mdpi.com/2313-433X/4/8/96, https://doi.org/10.3390/jimaging4080096.
Otero-Millan, J., Troncoso, X. G., Macknik, S. L., Serrano-Pedraza, I., & Martinez-Conde, S. (2008). Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator. Journal of Vision, 8 (14): 21, 1–18, https://doi.org/10.1167/8.14.21. [PubMed] [Article]
Papoutsaki, A., Sangkloy, P., Laskey, J., Daskalova, N., Huang, J., & Hays, J. (2016). Webgazer: Scalable webcam eye tracking using user interactions. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York City, US (pp. 3839–3845). New York, NY: IEEE.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z.,… Lerer, A. (2017). Automatic differentiation in pytorch. In Neural Information Processing Systems (NIPS) Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, Long Beach, CA, US. Long Beach, CA: Curran Associates, Inc.
Pierce, K., Marinero, S., Hazin, R., McKenna, B., Barnes, C. C., & Malige, A. (2016). Eye tracking reveals abnormal visual preference for geometric images as an early biomarker of an autism spectrum disorder subtype associated with increased symptom severity. Biological Psychiatry, 79 (8), 657–666.
Posner, M. I., & Cohen, Y. (1984). Components of visual orienting. Attention and Performance X: Control of Language Processes, 32, 531–556.
R Core Team. (2018). R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria: Author. Available from https://www.R-project.org/
Rai, Y., Le Callet, P., & Cheung, G. (2016). Quantifying the relation between perceived interest and visual salience during free viewing using trellis based optimization. In 12th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP) (pp. 1–5). New York, NY: IEEE.
Rayner, K., & Bertera, J. H. (1979, October 26). Reading without a fovea. Science, 206 (4417), 468–469.
Reingold, E. M. (2014). Eye tracking research and technology: Towards objective measurement of data quality. Visual Cognition, 22 (3-4), 635–652.
Reingold, E. M., & Loschky, L. C. (2002). Saliency of peripheral targets in gaze-contingent multiresolutional displays. Behavior Research Methods, Instruments, & Computers, 34 (4), 491–499.
Rothkegel, L. O., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., & Engbert, R. (2016). Influence of initial fixation position in scene viewing. Vision Research, 129, 33–49.
Ryu, D., Mann, D. L., Abernethy, B., & Poolton, J. M. (2016). Gaze-contingent training enhances perceptual skill acquisition. Journal of Vision, 16 (2): 2, 1–21, https://doi.org/10.1167/16.2.2. [PubMed] [Article]
Sabel, B. A., & Gudlin, J. (2014). Vision restoration training for glaucoma: A randomized clinical trial. JAMA Ophthalmology, 132 (4), 381–389.
Salthouse, T. A., & Ellis, C. L. (1980). Determinants of eye-fixation duration. The American Journal of Psychology, 93 (2) 207–234.
Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 71–78). New York, NY: ACM.
Sammaknejad, N., Pouretemad, H., Eslahchi, C., Salahirad, A., & Alinejad, A. (2017). Gender classification based on eye movements: A processing effect during passive face viewing. Advances in Cognitive Psychology, 13 (3), 232.
Sawilowsky, S. S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8 (2), 467–474.
Schmidt, A. F., & Finan, C. (2018). Linear regression and the normality assumption. Journal of Clinical Epidemiology, 98, 146–151.
Schreiber, J. (2017). Pomegranate: Fast and flexible probabilistic modeling in Python. The Journal of Machine Learning Research, 18 (1), 5992–5997.
Seiple, W., Grant, P., & Szlyk, J. P. (2011). Reading rehabilitation of individuals with AMD: Relative effectiveness of training approaches. Investigative Ophthalmology & Visual Science, 52 (6), 2938–2944.
Shi, Y., Liu, M., Wang, X., Zhang, C., & Huang, P. (2013). Fixation behavior in primary open angle glaucoma at early and moderate stage assessed by the microperimeter mp-1. Journal of Glaucoma, 22 (2), 169–173.
Simola, J., Salojärvi, J., & Kojo, I. (2008). Using hidden Markov model to uncover processing states from eye movements in information search tasks. Cognitive Systems Research, 9 (4), 237–251.
Sippel, K., Kasneci, E., Aehling, K., Heister, M., Rosenstiel, W., Schiefer, U., & Papageorgiou, E. (2014). Binocular glaucomatous visual field loss and its impact on visual exploration-a supermarket study. PLoS One, 9 (8): e106089.
Skenduli-Bala, E., de Voogd, S., Wolfs, R. C., van Leeuwen, R., Ikram, M. K., Jonas, J. B.,… de Jong, P. T. (2005). Causes of incident visual field loss in a general elderly population: The Rotterdam study. Archives of Ophthalmology, 123 (2), 233–238.
Smith, N. D., Crabb, D. P., Glen, F. C., Burton, R., & Garway-Heath, D. F. (2012). Eye movements in patients with glaucoma when viewing images of everyday scenes. Seeing and Perceiving, 25 (5), 471–492.
Smith, N. D., Glen, F. C., & Crabb, D. P. (2012). Eye movements during visual search in patients with glaucoma. BMC Ophthalmology, 12 (1), 45.
Smith, T. J., & Henderson, J. M. (2009). Facilitation of return during scene viewing. Visual Cognition, 17 (6-7), 1083–1108.
Smith, T. J., & Henderson, J. M. (2011). Does oculomotor inhibition of return influence fixation probability during scene search? Attention, Perception, & Psychophysics, 73 (8), 2384–2398.
Tatler, B. W., & Vincent, B. T. (2009). The prominence of behavioural biases in eye guidance. Visual Cognition, 17 (6–7), 1029–1054.
Tatler, B. W., Wade, N. J., Kwan, H., Findlay, J. M., & Velichkovsky, B. M. (2010). Yarbus, eye movements, and vision. i-Perception, 1 (1), 7–27.
Terao, Y., Fukuda, H., & Hikosaka, O. (2017). What do eye movements tell us about patients with neurological disorders?—An introduction to saccade recording in the clinical setting. Proceedings of the Japan Academy, Series B, 93 (10), 772–801.
Tham, Y.-C., Li, X., Wong, T. Y., Quigley, H. A., Aung, T., & Cheng, C.-Y. (2014). Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis. Ophthalmology, 121 (11), 2081–2090.
Thibaut, M., Delerue, C., Boucart, M., & Tran, T. (2016). Visual exploration of objects and scenes in patients with age-related macular degeneration. Journal Francais d'Ophtalmologie, 39 (1), 82–89.
Tien, T., Pucher, P. H., Sodergren, M. H., Sriskandarajah, K., Yang, G.-Z., & Darzi, A. (2014). Eye tracking for skills assessment and training: A systematic review. Journal of Surgical Research, 191 (1), 169–178.
Tran, T. H. C., Rambaud, C., Despretz, P., & Boucart, M. (2010). Scene perception in age-related macular degeneration. Investigative Ophthalmology & Visual Science, 51 (12), 6868–6874.
Tseng, P.-H., Paolozza, A., Munoz, D. P., Reynolds, J. N., & Itti, L. (2013). Deep learning on natural viewing behaviors to differentiate children with fetal alcohol spectrum disorder. In Yin, H., Tang, K., Gao, Y., Klawonn, F., Lee, M., Weise, T., Li, B., & Yao, X. (Eds.), International Conference on Intelligent Data Engineering and Automated Learning (pp. 178–185). Hefei, China: Springer.
van Diepen, P., & d'Ydewalle, G. (2003). Early peripheral and foveal processing in fixations during scene perception. Visual Cognition, 10 (1), 79–100.
Vargas-Martín, F., & Peli, E. (2006). Eye movements of patients with tunnel vision while walking. Investigative Ophthalmology & Visual Science, 47 (12), 5295–5302.
Voisin, S., Yoon, H.-J., Tourassi, G., Morin-Ducote, G., & Hudson, K. (2013). Personalized modeling of human gaze: Exploratory investigation on mammogram readings. In Biomedical Sciences and Engineering Conference, 2013, 1–4. New York, NY: IEEE.
Weinreb, R. N., Aung, T., & Medeiros, F. A. (2014). The pathophysiology and treatment of glaucoma: A review. Journal of the American Medical Association, 311 (18), 1901–1911.
Wiecek, E. W., Pasquale, L. R., Fiser, J., Dakin, S., & Bex, P. J. (2012). Effects of peripheral visual field loss on eye movements during visual search. Frontiers in Psychology, 3, 472.
Yamada, Y., & Kobayashi, M. (2017). Detecting mental fatigue from eye-tracking data gathered while watching video. In ten Teije, A., Popow, C., Holmes, J. H., & Sacchi, L. (Eds.), Conference on Artificial Intelligence in Medicine in Europe (pp. 295–304). Vienna, Austria: Springer.
Zhang, Y., Wilcockson, T., Kim, K. I., Crawford, T., Gellersen, H., & Sawyer, P. (2016). Monitoring dementia with automatic eye movements analysis. In Czarnowski, I., Caballero, A. M., Howlett, R. J., & Jain, L. C. (Eds.), Intelligent Decision Technologies 2016, Proceedings of the 8th KES International Conference on Intelligent Decision Technologies (pp. 299–309). Cham, Switzerland: Springer.
Appendix
Figure A1
 
Subplot (a) shows results for hidden Markov models classifying types of scotomas; subplot (b), for hidden Markov models classifying types and sizes of scotomas. The top part of each subplot presents the percentage test accuracy of all 255 models, ordered by number of gaze features and increasing accuracy. The bottom part indicates the presence or absence of each feature in a model by a colored square on the same vertical line as that model. Top and bottom parts are aligned: models in the top part match the feature combinations directly beneath. Dashed gray lines show chance levels: (a) 33.33%, (b) 11.11%.
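The 255 models correspond to every non-empty subset of eight gaze features (2^8 − 1 = 255). A minimal sketch of that enumeration, using hypothetical feature names (the actual feature set is defined in the body of the paper):

```python
from itertools import combinations

# Hypothetical feature names; "S" = saccade-derived, "F" = fixation-derived.
features = ["F_duration", "F_dispersion", "S_amplitude", "S_duration",
            "S_peak_velocity", "S_absolute_angle", "S_relative_angle", "S_latency"]

# Every non-empty feature combination, grouped by subset size.
subsets = [c for r in range(1, len(features) + 1)
           for c in combinations(features, r)]
assert len(subsets) == 2 ** len(features) - 1  # 255 candidate models
```

Each subset defines one classifier; ranking the 255 resulting test accuracies by subset size yields the ordering shown in the top part of the figure.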
Figure A2
 
Subplot (a) shows results for recurrent neural networks classifying types of scotomas; subplot (b), for recurrent neural networks classifying types and sizes of scotomas. The top part of each subplot presents the percentage test accuracy of all 255 models, ordered by number of gaze features and increasing accuracy. The bottom part indicates the presence or absence of each feature in a model by a colored square on the same vertical line as that model. Top and bottom parts are aligned: models in the top part match the feature combinations directly beneath. Dashed gray lines show chance levels: (a) 33.33%, (b) 11.11%.
 
Figure 1
 
Presentation of mask types (top row: central masks, bottom row: peripheral masks) and radii (columns from left to right: 1.5°, 2.5°, 3.5°, 4.5°). Mask radii depicted here are proportionally accurate for stimuli encompassing 31.2° by 17.7° of visual field.
Figure 2
 
Progress of a trial. It begins with a set of validation points to check the eye tracker's accuracy. A fixation cross then appears, disappearing after approximately 1.5 s. Next, an image appears for 10 s under one of the three mask type conditions and one of the four mask size conditions (not represented here). A trial ends with a resting period of 1.5 s.
Figure 3
 
(a) Absolute angles are shown in green between a saccade direction (black arrows) and the horizontal axis (orange dashed lines). The HSR reports the proportion of left-directed saccades among horizontal saccades; the HVR measures the proportion of horizontal saccades among all saccades observed. (b) Relative angles (green arcs) are angles between two saccade directions (black arrows and orange dashed lines). The BSR reports the proportion of backward-directed saccades among backward and forward saccades. The SRR (Asfaw et al., 2018) measures the number of backward saccades falling between 170° and 190° as a proportion of the total number of saccades.
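The ratios described in this caption can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the angular convention (degrees counter-clockwise from the rightward horizontal axis) and the ±45° horizontal and ±90° backward bands are assumptions; only the 170°–190° SRR band comes from the caption.

```python
import numpy as np

def saccade_direction_ratios(angles_deg):
    """Compute HSR, HVR, BSR, and SRR from absolute saccade directions.

    angles_deg: absolute saccade directions in degrees, counter-clockwise
    from the rightward horizontal axis (assumed convention).
    """
    a = np.asarray(angles_deg, dtype=float) % 360.0

    # Horizontal saccades: within +/-45 deg of either horizontal direction (assumed band).
    rightward = np.minimum(a, 360.0 - a) <= 45.0
    leftward = np.abs(a - 180.0) <= 45.0
    horizontal = rightward | leftward

    hvr = horizontal.mean()  # horizontal saccades among all saccades
    hsr = leftward[horizontal].mean() if horizontal.any() else np.nan  # left among horizontal

    # Relative angles between consecutive saccades.
    rel = np.diff(a) % 360.0
    backward = np.abs(rel - 180.0) <= 90.0  # assumed backward band; rest is forward
    bsr = backward.mean()  # backward among backward + forward saccades

    # SRR: backward saccades within 170-190 deg, over the total number of saccades.
    srr = (np.abs(rel - 180.0) <= 10.0).sum() / len(a)
    return hsr, hvr, bsr, srr
```

For example, a strict left-right scanning pattern such as `[0, 180, 0, 180]` yields HVR = 1 (all saccades horizontal), HSR = 0.5 (half of them leftward), and BSR = 1 (every saccade reverses the previous one).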
Figure 4
 
Subplots (a) through (i) show mean and 95% CI of variables involved in LMM analyses. Log-transformed variables are presented on a logarithmic scale. Control mean is shown as a solid black line (dashed lines report 95% CI). Data obtained with central masks are in red and peripheral masks in blue. The legend is located at the bottom of the figure: Scotoma radii are presented according to the decreasing amount of information left by the central (first) and peripheral (second) masks.
Figure 5
 
Probability density functions of saccade amplitude (degrees) as a function of mask types and sizes (central masking red, peripheral masking blue, control data green). Mask radii are displayed as black dashed lines.
Figure 6
 
Probability density distributions of absolute angle and saccade amplitude (degrees) represented as polar plots, as a function of mask types and sizes. The red circle at the center of each plot indicates the mask dimensions.
Figure 7
 
Probability density distributions of relative angle and saccade amplitude (degrees) represented as polar plots, as a function of mask types and sizes. The red circle at the center of each plot indicates the mask dimensions.
Table 1
 
Correlation coefficients and significance levels are reported between all gaze features. Notes: Variable names prepended with “S” derive from saccades, “F” from fixations. Significance levels: Ø p ≥ 0.05; · p < 0.05; * p < 0.01; ** p < 0.001; *** p < 0.0001.
Table 2
 
Rank, accuracy, and mean improvement for each gaze feature are reported for hidden Markov models classifying (a) types of scotoma and (b) types and sizes of scotoma. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with “S” derive from saccades, “F” from fixations.
Table 3
 
Mean rank and mean improvement for each gaze feature are reported for hidden Markov models classifying (a) central mask radii and (b) peripheral mask radii. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with “S” derive from saccades, “F” from fixations.
Table 4
 
Rank, accuracy, and mean improvement for each gaze feature are reported for recurrent neural networks classifying (a) types of scotoma and (b) types and sizes of scotoma. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with “S” derive from saccades, “F” from fixations.
Table 5
 
Mean rank and mean improvement for each gaze feature are reported for recurrent neural networks classifying (a) central mask sizes and (b) peripheral mask sizes. Notes: Gaze features are ordered by their rank (a function of accuracy). Variable names prepended with “S” derive from saccades, “F” from fixations.
Supplement 1