Journal of Vision | April 2025 | Volume 25, Issue 4 | Open Access
Distinctive pupil and microsaccade-rate signatures in self-recognition
Author Affiliations
  • Lisa Schwetlick
    Department of Psychology, University of Potsdam, Potsdam, Germany
    Laboratory of Psychophysics, EPFL, Lausanne, Switzerland
    [email protected]
  • Hendrik Graupner
    Bundesdruckerei GmbH, Berlin, Germany
    Hasso Plattner Institute and University of Potsdam, Potsdam, Germany
    [email protected]
  • Olaf Dimigen
    Faculty of Behavioural and Social Sciences, University of Groningen, Groningen, Netherlands
    [email protected]
  • Ralf Engbert
    Department of Psychology, University of Potsdam, Potsdam, Germany
    [email protected]
Journal of Vision April 2025, Vol. 25(4), 16. https://doi.org/10.1167/jov.25.4.16
Abstract

Pupil dynamics and fixational eye movements are primarily involuntary processes that actively support visual perception during fixations. Both measures are known to be sensitive to ongoing cognitive and affective processing. In a visual fixation experiment (N = 116), we demonstrate that self-recognition, familiar faces, and unfamiliar faces elicit specific responses in pupil dynamics and microsaccade rate. First, the pupil response to stimulus onset comprises an immediate constriction followed by a dilation. We observe attenuated constriction and greater dilation when faces are recognized compared to unknown faces; this effect is strongest for one's own face. Second, microsaccade rates, which are generally inhibited by incoming stimuli, show stronger inhibition for familiar than for unknown faces. Again, the strongest inhibition is observed in response to one's own face. Our results imply that eye-related physiological measures expose hidden knowledge in face memory and could contribute to biometric authentication and identity validation procedures.

Introduction
Eye movements are an essential component of visual perception. The most readily observed are rapid eye movements (saccades), which generate sequences of gaze shifts interleaved with periods of relative rest (fixations) and together give rise to the eye's trajectory during scene exploration (Henderson, 2003) or everyday activities. In the laboratory, we can also investigate microscopic eye movements that occur during fixations: microsaccades and changes in pupil size. Here we investigate how both systems are influenced by face recognition. 
Microsaccades are generated by the same machinery of extraocular muscles that drives gaze shifts. These small-amplitude fixational eye movements share their kinematic relations with larger saccades (Bahill, Clark, & Stark, 1975). Microsaccades are distinguished from larger saccades primarily by their size (but not exclusively; see Mergenthaler & Engbert, 2010), typically <1° of visual angle. During fixations, they serve vital functions for visual processing (e.g., counteracting perceptual fading; Martinez-Conde, Macknik, & Hubel, 2004) and support visual exploration at the foveal scale (Shelchkova, Tang, & Poletti, 2019). Microsaccades were initially described as involuntary motor behavior during steady fixation (Ratliff & Riggs, 1950), but during high-acuity tasks, they might be better described as primarily involuntary with voluntary components (Sinn & Engbert, 2016; Willeke et al., 2019). 
The second measure we investigate, the pupil response, is driven by intraocular muscles and primarily adapts pupil size to the surrounding illumination (pupillary light reflex, or PLR; Mathôt, 2018). Both microsaccades and pupil size are modulated by ongoing cognitive (Engbert, 2006) and emotional processing (Hess & Polt, 1960; Winterson & Collewijn, 1976). In the current study, we set out to investigate how the recognition of unfamiliar and familiar faces, including one's own face, affects microsaccades and pupil size. 
Face recognition is tightly coupled to visual, cognitive, and affective processing. Evidence from the literature suggests that faces are processed in the visual system by unique, specialized cognitive and neural mechanisms (McKone & Kanwisher, 2005; Yin, 1969) in the fusiform gyrus of the inferior temporal (IT) cortex (Kanwisher, McDermott, & Chun, 1997) and in prefrontal cortex (Rolls, Critchley, Browning, & Inoue, 2005). Response latency for face-sensitive cells in IT cortex is reported to be between 50 ms (Sugase, Yamane, Ueno, & Kawano, 1999) and 160 ms (Perrett, Rolls, & Caan, 1982). Moreover, recent findings highlight the role of the superior colliculus (SC) in detecting objects, including faces, at very early stages of visual processing. SC neurons respond robustly to facial stimuli and face-like patterns as early as 50 ms after stimulus onset (Nguyen et al., 2012; Nguyen et al., 2014; Nguyen et al., 2016). Specifically, Bogadhi and Hafed (2023) showed that SC neurons exhibit immediate visual bursts that differentiate objects from nonobjects, pointing toward a role of the SC in rapid, coarse visual categorization. 
The recognition of a specific person, or even oneself, engages the same visual processing circuitry as a new, unfamiliar face but additionally causes memory retrieval, context-dependent surprisal, and emotional responses (Johnston & Edmonds, 2009). In order to isolate visual processing components of recognition, previous studies typically used faces that were learned during the experiment (unfamiliar face recognition) or famous faces (familiar face recognition). We expect that semantic and emotional responses will be substantially stronger for faces of real-life acquaintances and friends, as well as one's own face (self-recognition). 
Pupil size, which varies between 1.5 and 9 mm in diameter (Sirois & Brisson, 2014), is a well-established physiological measure of ongoing cognition. Modern pupillometry distinguishes three types of pupil response: the PLR, the pupil near response (PNR), and psychosensory pupil responses (Mathôt, 2018). Of these, the PLR is the strongest component, causing pupil constriction within the first 200 ms after stimulus onset; the minimum pupil size is reached between 200 ms (Mathôt, 2018) and 1600 ms (Bradley, Miccoli, Escrig, & Lang, 2008). In addition to external factors, pupil size is also determined by cognition and arousal, modulated by variables such as interest or processing load. Physiologically, pupil size is connected to brain regions that control sleep–wake rhythms and nervous system activation (Sirois & Brisson, 2014). Activity in the locus coeruleus, which is involved in memory retrieval and selective attention, is highly correlated with pupil size in monkeys (Laeng, Sirois, & Gredebäck, 2012). Early cognitive effects on pupil size (within 1 s) are related to novelty and saliency (Hess & Polt, 1960; Mathôt, 2018; Sirois & Brisson, 2014), as well as surprisal, uncertainty, and prediction errors (Larsen & Waters, 2018). The effects of (positive and negative) arousal and mental effort are associated with longer delays (see Mathôt, 2018, for a review), while emotional responses peak last (>2 s after target onset) (Kinner et al., 2017). 
For microsaccades during cognitive tasks, the baseline rate is usually around 1 per second (1 Hz). Changes in visual input, however, can strongly modulate the rate (Engbert & Kliegl, 2003; Engbert, 2012; Martin, Davis, Riesenhuber, & Thorpe, 2020). In particular, any change in the display causes a temporary decrease in the microsaccade rate (MSR). The microsaccadic inhibition is followed by a rise in MSR, often exceeding the baseline rate; 500 to 1000 ms after stimulus onset, the MSR returns to its resting state. While early research suggested that display-change-related microsaccade inhibition is generated at the level of the SC (Laubrock, Cajar, & Engbert, 2013; Rolfs, 2009), more recent evidence shows that saccadic inhibition is unaffected by SC inactivation (Hafed & Ignashchenkova, 2013). As a generating mechanism, Hafed and Ignashchenkova (2013) proposed that microsaccadic inhibition results from phase resetting of ongoing oscillatory motor programs. Furthermore, Buonocore, Dietze, and McIntosh (2021) and Buonocore and Hafed (2023) highlighted the role of brainstem mechanisms, such as omnipause neurons, in initiating the transient cessation of saccades in response to visual transients, effectively serving as a “brake” that resets motor plans. The SC receives input from various regions potentially conveying top-down information (Schall, 1995; Sparks, 2002), responds specifically to real-life objects like faces with early visual bursts (Bogadhi & Hafed, 2023; Nguyen et al., 2012; Nguyen et al., 2014; Nguyen et al., 2016), and may influence microsaccades indirectly. In summary, both previous behavioral data and our current understanding of the underlying neurophysiology suggest that pupil size and MSR could vary substantially across conditions during face recognition. 
Recent studies have explored pupil dilation and MSR in the context of face recognition, using a paradigm measuring reactions to rapid presentation of faces. Chen et al. (2023) showed streams of unknown faces, target faces, and a familiar face (the participant's mother) and found that pupil dilation is modulated by face recognition but found no effects on MSR. Rosenzweig and Bonneh (2019) specifically probed for microsaccadic inhibition using a similar rapid presentation paradigm, where they compared (a) a prelearned target face (the “terrorist”) within a set of eight unknown faces (Rosenzweig & Bonneh, 2019) or (b) a famous face among unknown faces (Rosenzweig & Bonneh, 2020). They found evidence of stronger microsaccadic inhibition related to face familiarity. 
The goal of the present work was to establish in a large-scale eye-tracking experiment that pupil dilation and microsaccade dynamics reveal face recognition and, more specifically, also self-recognition (as an extreme example of a familiar face). We manipulated face familiarity by using pictures of the participants' classmates, yielding a well-controlled design with high statistical power. We were also interested in the influence of image repetition, individual differences, and the time course of both measures, which are highly relevant for potential applications. Compared to unfamiliar faces and prelearned faces, real-life familiar faces should be intrinsically relevant and meaningful to the observer, even in the absence of a specific task. We therefore hypothesized that recognition should produce a reliable oculomotor signature, akin to an oddball response. In an oddball paradigm, participants see a sequence of frequent (expected) stimuli, interspersed with infrequent (and therefore unexpected) but task-relevant targets (the oddball stimulus) (Sutton, Braren, Zubin, & John, 1965), while performing a task such as silently counting targets. This paradigm (see Polich, 2007, for an extensive review) has revealed that a particular event-related potential (the P300, peaking around 300 ms after presentation) is strongest when the participant is engaged in target detection (Picton, 1992; Squires, Squires, & Hillyard, 1975; Sutton et al., 1965) and is modulated by the meaningfulness of stimuli (Johnson & Rosenfeld, 1992), task difficulty (Picton, 1992), and the internal arousal level and availability of attentional resources (Kok, 2001). 
The oddball task has been found to affect pupil dilation (Kamp & Donchin, 2015; Murphy, O'Connell, O'Sullivan, Robertson, & Balsters, 2014; Strauch, Koniakowsky, & Huckauf, 2020), likely via the locus coeruleus (Murphy et al., 2014), as well as MSR (Valsecchi, Betta, & Turatto, 2006; Valsecchi, Dimigen, Kliegl, Sommer, & Turatto, 2009). We expect face recognition to comprise various reflexive, cognitive, and emotional components and to resemble the surprisal and increased attention modulated by target frequency in the oddball task. The recognition of familiar faces should therefore be related to a reduced MSR and increased pupil dilation compared to unfamiliar faces. Moreover, we expect recognition of one's own face to evoke the strongest response, compared to familiar and unfamiliar faces. 
From the literature reviewed above, we conclude that both pupil size and MSR are physiological measures that are functionally driven but modulated by ongoing cognition. The aim of the present study is to investigate whether pupil dynamics and MSRs are specifically affected by face recognition processes. Reliable differences in pupil dynamics and microsaccade statistics between self-recognition and recognition of other faces are relevant to applications in the context of viewer identification or exposing viewer knowledge. 
Methods
Participants
We recruited participants from two graduating high school classes in Potsdam, Germany. The advantage of this setup is that it allows a well-controlled design in which each participant personally knows a subset of the other participants. Thus, each participant was shown faces of classmates whom they knew personally (Peers), faces of students from the other high school (Strangers), and their own face (Self). Of the initial cohort of 127 students whose pictures were taken, 118 came to the eye-tracking session. Of these, two individuals could not be calibrated on the eye tracker, leaving 116 complete data sets. The participants were between 16 and 18 years of age; 55 were male, 63 female, and 1 identified as nonbinary. 
The eye-tracking data collection was followed by a questionnaire in which each participant saw the same faces they had seen during the experiment and was asked to confirm whether they did indeed know the person in each picture. Cases where a participant's response did not match the expectation (i.e., when they did know someone from the other class or did not know someone from their own class) were removed from the final data set. The final data set is therefore balanced in the sense that each image, in principle, appears in each condition. 
Photographs
In order to collect the required photographs of the participants, a professional photographer visited the schools. Each participating student received an anonymous code. The photographs were taken under consistent illumination and with the heads centered, in a quasi-biometric setup, in order to maximize consistency between the pictures. Each photograph was labeled with the anonymous code, so that the mapping between participant and photograph was possible without saving any identifying information alongside the image. The high-resolution photographs were further cropped and scaled to center each face inside a square image. For data-privacy reasons, the photographs of the faces were deleted upon completion of the study. In the data set, only the anonymous codes remain. 
We computed the relative luminance of each image and found no evidence of a deviation from a normal distribution according to a Shapiro–Wilk test of normality (Shapiro & Wilk, 1965; p = 0.24). As each image is shown in each condition (i.e., every image is seen as Peers, Strangers, and Self), differences in image statistics affect all groups equally and do not influence the main effects. 
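As an illustration, this normality check can be reproduced with base R; the data frame and column names below are placeholders, not the variables used in the study.

```r
# Hypothetical data frame with one row per photograph; 'rel_luminance' holds
# the relative luminance of each image (the study reports values between
# roughly 2.5% and 6.5%).
images <- data.frame(rel_luminance = runif(127, min = 0.025, max = 0.065))

# Shapiro-Wilk test of normality; a p value above .05 provides no evidence
# against normality (the study reports p = 0.24).
shapiro.test(images$rel_luminance)
```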
Eye-tracking data collection
For recording eye trajectories, we used an EyeLink 1000 eye tracker (SR Research, Osgoode, ON, Canada), which recorded both eyes at 500 Hz with an illumination level of 75%. The screen, a ViewPIXX monitor (VPixx Technologies, Saint-Bruno, Canada) with a resolution of 1920 × 1080 pixels, was placed at a 70-cm distance from the participant, with the head stabilized in a chin rest. The eyes were centered at three fourths of the height of the screen. We used a 5-point calibration grid, and subjects were recalibrated every 14 trials. As the experiment required only fixation at the center of the screen (and no exploration of its outlying regions), a 5-point calibration was sufficient to ensure data quality. 
Face images were shown at a resolution of 500 × 500 pixels, centered on the screen, so that each image subtended 11.4° of visual angle. The experimental session proceeded as follows. Participants were informed that they would be seeing pictures of faces and were asked to fixate the central fixation marker for the duration of the trial; no task was given apart from fixating the marker. The face photograph appeared under the fixation marker for a duration of 300 ms. Participants were asked not to blink and not to move their eyes. Figure 1 shows the time course of a single trial. 
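For orientation, the reported stimulus size can be checked with a short calculation; the pixel pitch below is an assumption (it is not stated in the article) chosen so that the result reproduces the reported 11.4°.

```r
# Assumed pixel pitch of the display (~0.28 mm per pixel); this value is not
# given in the article and is used here only to reproduce the reported 11.4 deg.
pixel_pitch_mm      <- 0.28
viewing_distance_mm <- 700     # 70 cm
image_size_px       <- 500

image_size_mm    <- image_size_px * pixel_pitch_mm
visual_angle_deg <- 2 * atan((image_size_mm / 2) / viewing_distance_mm) * 180 / pi
round(visual_angle_deg, 1)     # ~11.4
```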
Figure 1.
 
Time course of a trial. The 2-s fixation check ensures participants fixate the center of the screen. The brief face presentation (0.3 s) is then followed by a gray screen, during which the main measurement is taken.
The experiment was preceded by three training trials, in order to let participants get used to the procedure. The three faces for the training block were taken from the same image data set but were not used again in the experimental trials of the same subject. After the training block, the experimental trials followed. Each trial took roughly 5 s, split into three phases: (a) a fixation check of around 2 s (if the eyes were not centered on the marker, a recalibration was initiated), (b) an interval of 300 ms during which the face photograph was presented, and (c) a fixation cross for 3 s. After each trial, participants were encouraged to take a break and to blink. Using a key press, participants indicated when they were ready for the next trial. 
Phases (a) and (c) showed a gray background that covered the same area as the following image. The shade of gray of the background was adjusted to the luminance of the upcoming stimulus. The background luminance was proportional, but not equal, to the stimulus luminance, ranging from a relative luminance of 3.5% to 6%, while the images varied in relative luminance from 2.5% to 6.5%. As the distribution of images was well controlled by the experimental design (i.e., each image appeared in each condition), any differences between images can be accounted for by the random effect of image in a linear mixed model (LMM) analysis. 
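One plausible realization of this luminance matching, consistent with the reported ranges but not necessarily the exact mapping used in the experiment, is a linear rescaling of each image's relative luminance onto the narrower background range:

```r
# Hypothetical linear mapping from image luminance (2.5%-6.5%) to background
# luminance (3.5%-6%); the actual function used in the experiment is not specified.
background_luminance <- function(img_lum,
                                 img_range = c(0.025, 0.065),
                                 bg_range  = c(0.035, 0.060)) {
  scaled <- (img_lum - img_range[1]) / diff(img_range)
  bg_range[1] + scaled * diff(bg_range)
}

background_luminance(c(0.025, 0.045, 0.065))   # 0.0350 0.0475 0.0600
```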
Each subject saw the following 100 trials in random order: 
  • (a) 10 repetitions of a photograph of their own face (10 trials)
  • (b) 5 repetitions of photographs of 3 selected peers (15 trials)
  • (c) 5 repetitions of photographs of 3 selected strangers (15 trials)
  • (d) 1 photograph each of 30 strangers (30 trials)
  • (e) 1 photograph each of 30 peers (30 trials)
Categories (b) and (c) were introduced in order to control for repetition effects that occur during the repeated viewing of one's own face. In order to exclude a pure oddball effect in the Peers–Strangers comparison, every participant saw an equal number of familiar and unfamiliar faces. The resulting 100-trial list is sketched below. 
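A minimal sketch of how such a trial list could be assembled in R; the stimulus identifiers are placeholders, and the actual assignment of specific peers and strangers followed the design described above.

```r
set.seed(1)  # reproducible shuffling for this sketch

trials <- c(
  rep("self", 10),                              # (a) own face, 10 repetitions
  rep(paste0("peer_rep_", 1:3), each = 5),      # (b) 3 selected peers x 5 repetitions
  rep(paste0("stranger_rep_", 1:3), each = 5),  # (c) 3 selected strangers x 5 repetitions
  paste0("stranger_", 1:30),                    # (d) 30 strangers, shown once each
  paste0("peer_", 1:30)                         # (e) 30 peers, shown once each
)

length(trials)            # 100 trials per participant
trials <- sample(trials)  # random presentation order
```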
Pupillometry analyses
In order to prepare the data for analysis, we closely followed the preprocessing pipeline recommended by Mathôt and Vilotijević (2023). Pupil size data were down-sampled to 100 Hz, as a higher resolution is not informative for pupil responses. Missing samples were linearly interpolated and excluded from the statistical analyses. We converted the pupil diameter (which is given by the eye tracker in arbitrary units) to millimeters as our base unit and then computed the pupil response of each trial relative to the baseline. The baseline value for each trial was the average pupil size during the 50-ms window surrounding target onset (as proposed by Mathôt & Vilotijević, 2023). We ensured data quality by evaluating the pupil size during the prestimulus phase and found no indication that any trials or participants had to be excluded on the basis of the baseline values. Trials that included a blink after stimulus onset were excluded from the analyses. 
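A minimal sketch of the baseline step, assuming a long-format data frame with one row per sample (columns trial, time in ms relative to stimulus onset, and size_mm for pupil diameter in millimeters); the subtractive correction shown here follows the general recommendation of Mathôt and Vilotijević (2023) and is an assumption about the exact implementation.

```r
library(dplyr)

# Per-trial baseline: mean pupil size in the 50-ms window around target onset.
baseline <- pupil %>%
  filter(time >= -25, time <= 25) %>%
  group_by(trial) %>%
  summarise(base_mm = mean(size_mm, na.rm = TRUE))

# Subtractive baseline correction: response relative to the trial's baseline.
pupil <- pupil %>%
  left_join(baseline, by = "trial") %>%
  mutate(size_rel = size_mm - base_mm)
```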
First, as a qualitative analysis, we plotted the pupil response over time, aligned to stimulus onset. Second, to statistically support our findings, we used LMMs with pupil size as the dependent variable. While analyses that exploit the time-series structure more efficiently, such as cluster-based permutation tests, are available and recommended for pupillometry data (Mathôt & Vilotijević, 2023), they were not feasible for this use case, as our design had large differences in the number of samples per group (the Self, First Presentation condition occurs only once for each participant but many times for Strangers). We therefore analyzed both measures using LMMs in five time windows: Baseline (−50 to +50 ms), Constriction (200–600 ms), Dilation (600–1200 ms), Late Dilation (1200–2000 ms), and Stability (2000–3000 ms). The baseline window is included as a sanity check, to ensure no effects are found before target onset. 
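Assigning each 100-Hz sample to one of these windows can be done with a single cut; the data frame and column names are the same placeholders as above, and samples between 50 and 200 ms fall into a gap that belongs to no analysis window.

```r
# Window edges follow the definitions in the text; the "gap" level (50-200 ms)
# is dropped before fitting the per-window models.
pupil$window <- cut(
  pupil$time,
  breaks = c(-50, 50, 200, 600, 1200, 2000, 3000),
  labels = c("Baseline", "gap", "Constriction", "Dilation", "Late Dilation", "Stability"),
  right  = FALSE
)
```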
Within each window, we computed LMMs for the dependent variable “Pupil Size” using the lme4 package (Bates, Mächler, Bolker, & Walker, 2015) in R (R Core Team, 2019). We define the following (custom) contrasts (Schad, Vasishth, Hohenstein, & Kliegl, 2020) in our LMM to test our hypotheses: 
  • (1) We compare the Self condition to the Strangers and Peers conditions, resulting in a Self–Others comparison.
  • (2) We compare the Strangers and Peers conditions (Strangers–Peers).
  • (3) We compare the repetitions of individual images up to the fifth repetition, using a sliding difference contrast, meaning that we compare Presentation 1 to Presentation 2, Presentation 2 to Presentation 3, and so forth.
  • (4) We compare the interaction of Repetition and Face, which indicates whether the difference between the face conditions changes with Repetition.
Adding Time within the window and the ordinal trial number as covariates yields the following fixed-effects structure for the model formula:  
\begin{equation} Pupil\_Size \sim Face * Repetition + Time + Trial \end{equation}
(1)
 
The selection process for the random-effects structure is described in the Supplementary Material. Following Baayen, Davidson, and Bates (2008), we interpret all |t| > 2 as significant fixed effects. 
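A minimal sketch of one per-window model fit, assuming a data frame dat containing the baseline-corrected pupil size for one window together with the factors Face and Repetition; the contrast scaling and the by-participant and by-image random intercepts shown here are simplified placeholders (see Schad et al., 2020, and the Supplementary Material for the actual specification).

```r
library(lme4)
library(MASS)   # contr.sdif: sliding-difference (repeated) contrasts

# Hypothesis-driven contrasts for the face factor:
# Self vs. the mean of Peers and Strangers, and Strangers vs. Peers.
dat$Face <- factor(dat$Face, levels = c("Self", "Peers", "Strangers"))
contrasts(dat$Face) <- cbind(
  SelfVsOthers     = c( 2/3, -1/3, -1/3),
  StrangersVsPeers = c( 0,   -1/2,  1/2)
)

# Sliding-difference contrast over presentations 1-5 (2-1, 3-2, 4-3, 5-4).
dat$Repetition <- factor(dat$Repetition, levels = 1:5)
contrasts(dat$Repetition) <- contr.sdif(5)

fit <- lmer(Pupil_Size ~ Face * Repetition + Time + Trial +
              (1 | participant) + (1 | image),
            data = dat)
summary(fit)   # fixed effects with |t| > 2 are interpreted as significant
```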
Microsaccade rate analyses
Microsaccades were detected with millisecond accuracy from the raw data by applying a standard detection algorithm based on a velocity threshold (Engbert & Kliegl, 2003; Engbert, 2006). The MSR can be estimated with the help of a response function that applies filter kernels to the series of onset times (Engbert, 2006; Engbert, 2021). From the series of microsaccade onset times {t_1, t_2, ..., t_N}, the microsaccade rate at time t is approximated via  
\begin{equation} r_{approx}(t) = \int _{-\infty }^{+\infty } {\mathrm{d}}\tau w(\tau )\rho (t-\tau ) , \end{equation}
(2)
where the microsaccadic response function ρ(t) (Engbert, 2021) is defined as  
\begin{equation} \rho (t)=\sum _{i=1}^N \delta (t-t_i) \end{equation}
(3)
with Dirac’s δ-function δ(t). We applied a filter kernel known as a causal window, that is,  
\begin{equation} w(\tau )=\left[ \alpha ^2\tau \exp (-\alpha \tau ) \right]_+, \end{equation}
(4)
where parameter α = 1/30 and [.]+ vanishes for negative arguments. The microsaccade rate r(t) was computed by averaging over microsaccades from all trials of a participant in a specific experimental condition. The resulting MSR as a function of time is qualitatively similar to the time courses of pupil size or event-related potentials, which are all stimulus-locked, continuous response functions averaged over many experimental trials. 
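A minimal sketch of this rate estimator in R, assuming onset times in milliseconds and α in units of 1/ms (so that the kernel integrates to one per microsaccade and the factor 1000 converts the result to Hz); the onset values are placeholders.

```r
# Rate estimate r(t): each onset contributes a causal kernel
# w(tau) = alpha^2 * tau * exp(-alpha * tau) for tau >= 0 (Equations 2-4).
ms_rate <- function(onsets_ms, t_grid_ms, alpha = 1/30, n_trials = 1) {
  w <- function(tau) ifelse(tau >= 0, alpha^2 * tau * exp(-alpha * tau), 0)
  rate_per_ms <- sapply(t_grid_ms, function(t) sum(w(t - onsets_ms)))
  1000 * rate_per_ms / n_trials   # average over trials, convert to Hz
}

# Example: onsets pooled over 50 trials of one participant and condition.
t_grid <- seq(-300, 1500, by = 1)
onsets <- c(-210, 35, 480, 510, 950, 1320)   # placeholder onset times (ms)
rate   <- ms_rate(onsets, t_grid, n_trials = 50)
```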
We identified six relevant time windows for our analyses: Baseline (−300 to 0 ms), Target Onset (0 to 300 ms), Target Offset (300 to 600 ms), Return (600 to 900 ms), and Stability, which we divided into two 300-ms windows for consistency (900 to 1200 ms and 1200 to 1500 ms). The 300-ms window size is consistent with the finding that the inhibition of the MSR in response to a target onset lasts approximately 300 ms before the rate returns to baseline (Engbert & Kliegl, 2003). 
For the statistical analysis, we exploit the fact that microsaccade counts are Poisson-distributed (Engbert & Mergenthaler, 2006) and conduct Poisson rate tests in each time window. In this analysis, we compare the conditions Self–Others and Strangers–Peers and report the ratio of the estimated Poisson rates. If the estimated Poisson rate is the same in both groups, the ratio will be 1, indicating no difference in MSR. 
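A minimal sketch of one such comparison using the exact Poisson test in base R; the trial and microsaccade counts below are placeholders, not the values from the study, and the exposure times are the number of trials multiplied by the 300-ms window length.

```r
window_s     <- 0.3     # length of one analysis window in seconds
n_others     <- 8600    # placeholder: trials in the Others conditions
n_self       <- 950     # placeholder: trials in the Self condition
count_others <- 2680    # placeholder: microsaccades in the window, Others
count_self   <- 240     # placeholder: microsaccades in the window, Self

# Exact comparison of two Poisson rates; the estimated rate ratio corresponds
# to the ratios reported in Table 2 (here Others relative to Self).
poisson.test(x = c(count_others, count_self),
             T = c(n_others * window_s, n_self * window_s))
```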
Relating microsaccades and pupil size
As both the MSR and pupil size are reactive to our face conditions, we pose the following question in an explorative analysis: Are the two measures related (i.e., do trials with many microsaccades also show reduced dilation and vice versa)? A relationship between the two would be an indication of a shared origin. The microsaccade effect precedes the pupil effect in time: the drop in MSR occurs between 50 and 100 ms after target onset, whereas the effects on pupil size peak only around 1000 ms after target onset. In order to analyze the relationship between microsaccades and pupil size, we begin by selecting the most diagnostic window for identity based on microsaccades (i.e., the Target Offset phase, 300–600 ms after target onset) and use it as a predictor for pupil size. The distribution of microsaccade counts in this subset shows that the majority of trials (7919 trials) had a count of zero microsaccades, and 1992 trials had one or more microsaccades. This substantial imbalance in sample size may influence the results. We use a binary variable, which encodes the presence of one or more microsaccades in the diagnostic window, as an ad hoc predictor of pupil size in an explorative LMM using the formula  
\begin{equation} Pupil\_Size \sim 1 + MS\_in\_phase * Face + (1|VP). \end{equation}
(5)
 
The selection of the random-effects structure is detailed in the Supplementary Material. 
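A minimal sketch of this model fit, assuming a per-trial data frame with the pupil measure of interest, the face condition, the microsaccade count in the 300–600 ms window, and a participant identifier VP; column names other than those in Equation 5 are placeholders.

```r
library(lme4)

# Binary predictor: one or more microsaccades in the diagnostic window (300-600 ms).
dat$MS_in_phase <- as.integer(dat$ms_count_300_600 > 0)

# Explorative LMM of Equation 5 with a by-participant random intercept.
fit_ms <- lmer(Pupil_Size ~ 1 + MS_in_phase * Face + (1 | VP), data = dat)
summary(fit_ms)
```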
Results
In the present study, we investigate the effect of seeing one's own face, compared to familiar (Peers) and unfamiliar (Strangers) faces, on involuntary eye measures. A subset of faces, including the Self condition, was shown repeatedly in order to explore how stimulus repetition influences the effect and to control for novelty effects. Repetition of unfamiliar faces also yields a fourth recognition condition: recognition of a stranger's face. 
Pupil size
In Figure 2A as well as Figure 3, the average pupil reaction for each of the three face conditions reveals a qualitative difference between all three. In the following, we report the results of the LMM in each phase (refer also to Table 1; the coefficients in the table may be interpreted as the absolute difference in response in millimeters). 
Table 1.
 
Results of the pupil size LMMs. The analysis was conducted using the same model for five time windows: Baseline (−50 to 50 ms), Constriction (200–600 ms), Dilation (600–1200 ms), Late Dilation (1200–2000 ms), and Stability (2000–3000 ms). The t values that are marked in bold are considered significant.
Figure 2.
 
Pupil size and microsaccade rate over time. (A) The lines show the change in pupil size normalized to the size at stimulus onset. The thin ribbons represent the between-subject standard error. (B) The lines show the rate of microsaccades after face onset (time zero), calculated for each participant and then averaged across participants for each condition. The shading shows the between-subject standard error. The area highlighted in gray shows the time in which the target face was present on the screen.
Figure 3.
 
Pupil size over time. The top panel (A) shows the average measured pupil size for each category of images. The colored background separates the response signature into five time windows: Baseline (B, red), Constriction (C, yellow), Dilation (D, rose), Late Dilation (E, green), and Stability (F, blue). Panel A shows that the main effect of pupil size is evident from the Dilation phase onward, where the pupil size becomes significantly different for Strangers, Peers, and Self. The bottom panels, B, C, D, E, and F, show a detailed view of the pupil size over time in the respective time window. Each line in a range of shades of gray shows one repetition, with black being the first viewing of the face and white the 10th.
In the Baseline window, as expected, the face conditions behave identically (Figure 3B). This is confirmed by the LMM, as presented in Table 1; the only significant effect is a minimal dilation trend over time. The following window captures the initial constriction, which is a reflexive response to the stimulus presentation. In the Constriction phase, too, no significant differences for the main effects can be found, with the exception of the comparison between the fourth and fifth presentation. In all following time windows (i.e., the Dilation, Late Dilation, and Stability phases), we observe significant differences between the conditions. The effect size is largest in the Late Dilation phase for the Self–Others comparison and in the Stability phase for the Peers–Strangers comparison. We conclude that, starting at 600 ms after target onset, recognition, and particularly self-recognition, modulates pupil size. 
The repetition main effect is coded by a sliding-difference contrast, meaning that we compare subsequent presentations. We find a large number of these comparisons to be significant, showing that the pupil response becomes attenuated over repetitions. Note that we investigate the difference between each presentation and the next; a different contrast (e.g., a treatment contrast, comparing each level to the first presentation) would likely have resulted in more consistently significant effects. It is also worth pointing out that while the effect becomes more muted over the presentations, the last comparison (fifth vs. fourth presentation) actually indicates a reversal, perhaps pointing to an attenuation limit. 
The interaction terms show a varying pattern of significance. The interactions with the Self–Others comparison are consistently negative or absent, with the exception of the second-to-third comparison. The interaction term can be interpreted as the attenuation of the pupil-dilation effect over repetitions. A negative term indicates that the attenuation in the Self condition is stronger than in the Peers and Strangers conditions. The same interpretation applies to the Peers–Strangers and Repetition interactions; here, a negative term means stronger attenuation in the Peers condition. Note that in Figure 3 (e.g., in the Stability phase), it is evident that the second presentation of a stranger's face produces a stronger dilation than the first (see second to first: Peers–Strangers interaction), in contrast to the repetition-related attenuation found in the Peers and Self conditions. This is most likely caused by recognition of the face when it is shown for the second time (i.e., unfamiliar face recognition). For the full extent of these effects, please refer to Figure 3. 
As a further analysis, we were interested in how consistently individual participants exhibited the effect. We therefore compared the effect size of the Self–Others comparison for individual subjects, as estimated by the random effects in the LMM. The results are plotted in Figure 4. The majority of subjects show the effect in the expected direction. Some individuals show only a very minor expression of the effect, and some even mildly reverse the trend (i.e., exhibit less dilation in response to their own face instead of more). 
Figure 4.
 
Effect size of the Self–Others comparison, by participant. Mean values are computed from the estimated random effect terms. Zero in this plot means that there was no effect; negative values indicate a reversal (i.e., reacting less strongly to Self than to Others). Most participants show the effect (to the right of the zero line). A small subset shows a reversed effect.
Microsaccades
We calculated the MSR for each subject and condition over time (Figure 2). The time course of microsaccades is best described as two interacting processes. First, simple display changes cause an inhibition of microsaccades (Engbert & Kliegl, 2003). As our paradigm involved two changes in display (image onset and image offset), both display changes independently cause the MSR to drop. Second, microsaccades can also be inhibited by visual on- or offsets and can be modulated by cognitive factors such as exogenous or endogenous shifts of attention (Engbert, 2006). In a computational model, Engbert (2012) suggested that the combination of two processes, (a) the modulation of a threshold for the triggering of microsaccades and (b) a transient reduction of microsaccades, can explain most of the experimental findings for lower-level display changes and higher-level cognitive task manipulations. An alternative model suggests that microsaccades are modulated via phase resetting of neural oscillations at stimulus onset (Hafed & Ignashchenkova, 2013), which is further discussed in the context of microsaccadic inhibition (Buonocore & Hafed, 2023; Hafed, Yoshida, Tian, Buonocore, & Malevich, 2021). 
As suggested by previous research, we find that the MSR responds strongly to display changes. An inhibition of microsaccades is clearly visible in the rapid drop in MSR at image onset and image offset. Typically, the MSR recovers quickly, rising back to the baseline level. Figure 2 shows that in the Self condition, microsaccades are inhibited for a longer period of time. In all other conditions, the rate recovers to the baseline level before the second inhibition occurs at target offset. In the Self condition, the second inhibition occurs while the first is still active, leading to a further drop in the MSR and a recovery that is correspondingly longer than in the other conditions. Thus, the average time course of the MSR is qualitatively different in the Self condition. Our results are therefore compatible with earlier findings on microsaccadic inhibition in the oddball paradigm (Valsecchi et al., 2006). 
Table 2 shows the results of the Poisson rate tests. The estimate should be interpreted as the ratio of the Poisson rates in the compared conditions (e.g., the estimate of 1.23 in the Return phase when comparing Self and Others means that the rate in the Others condition is higher by a factor of 1.23 than in the Self condition). We find significant differences between the Self and Others conditions, as well as between Peers and Strangers, in the period between 300 and 900 ms. 
Table 2.
 
Results of the microsaccade Poisson rate tests. The test compared the Poisson rate of two groups. The p values marked in bold are considered significant.
Relationship of microsaccades and pupil size
We calculated a model using the presence of microsaccades in the most diagnostic window, as identified by the previous analysis, as a predictor for pupil size. The result of the LMM is presented in Table 3. First, consistent with the LMMs of pupil size alone, we find a significant effect of both Self–Others and Peers–Strangers in the Dilation, Late Dilation, and Stability time windows. Second, a predictive effect of microsaccades is present only in the earlier phases of the pupil response (Constriction, Dilation). 
Table 3.
 
Estimated terms of the LMM relating microsaccades and pupil size. Occurrence of microsaccades in the most diagnostic window (300–600 ms) is used as a predictor for pupil size. The predictive effect of microsaccades can only be found in the early phases of the pupil response. The t values that are marked in bold are considered significant.
Discussion
In a large-scale eye-tracking experiment, we investigated the behavioral correlates of face recognition in involuntary eye movements. We recorded pupil size and fixational eye movements in two groups of participants. Each participant saw their own face (Self), familiar faces of members of their own peer group (Peers), and unfamiliar faces (Strangers). Selected faces, including the Self condition, were repeated to investigate familiarization effects. This study design provides a well-controlled data set, where, across participants, each individual face appeared in each condition (Peers, Strangers, Self). We observe that face recognition, particularly self-recognition, modulates the response signatures of both pupil dilation and MSR. 
Recognition of familiar faces
First, we compared the recognition of familiar faces to completely unfamiliar faces (i.e., Peers vs. Strangers). Recognizing a familiar face is related to a stronger and longer-lasting inhibition of microsaccades, starting around 200 ms after target onset. Recognition effects on MSR were first documented by Rosenzweig and Bonneh (2019) and Rosenzweig and Bonneh (2020), but a study by Chen et al. (2023) did not reproduce this result. We also observed increased pupil dilation in response to familiar faces starting around 600 ms after target onset. Our findings regarding pupil dilation are in agreement with previous work (Chen et al., 2023). In accordance with the literature, we propose that early effects (microsaccade response, early stages of pupil response) are related purely to recognition and saliency (Larsen & Waters, 2018; Mathôt, 2018). Early recognition responses in the IT cortex occur 50 to 100 ms after target onset (Perrett et al., 1982; Sugase et al., 1999), and SC shows object detection responses even earlier (Bogadhi & Hafed, 2023; Nguyen et al., 2012; Nguyen et al., 2014; Nguyen et al., 2016). The data show effects on microsaccade inhibition less than 100 ms later, suggesting that feedback from the fusiform face area (FFA) may have a direct influence on the oculomotor system. The mid and late stages of the pupil response are likely related more to arousal and emotional responses (Mathôt, 2018). 
Self-recognition
Effects similar to those in the Peers–Strangers comparison are evident in the Self–Others comparison, albeit considerably larger in scale. We hypothesized that the faces of strangers capture little attention and are grouped together, while one's own face is qualitatively different, even compared to familiar faces. We find strong individual variability in the effect, which, we speculate, may be related to differences in the emotional response related to self-image. It is important to note that the Self–Others comparison is not perfectly balanced regarding stimulus frequency. While many different Peers and Strangers were shown, there is only ever one Self face, which is repeated. We controlled for repetition by also repeating certain Peers' and Strangers' faces. However, the relative frequency of Self images was still lower than that of the Others images. 
We find the largest effects for the first presentation of Self and an attenuation over repetitions. Thus, it is likely that genuine self-recognition effects are further enhanced by a stimulus-probability effect, similar to the Oddball paradigm. 
Repetition effects and recognition of unfamiliar faces
Repetitions of the same image led to reduced pupil dilation. We observed a strong attenuation of the effect over repetitions, such that the 10th Self presentation is comparable to the Strangers response (see Figure 3). The strongest repetition effects are visible in the Late Dilation and Stability phases. Unlike the neurophysiological novelty response, which occurs around 300 ms after target onset (compare the P300 component), this novelty effect on the pupil response begins much later, around 1000 ms. 
On its own, the attenuation over repetitions seems discouraging for real-world applications, where pictures may be presented repeatedly. However, this is not true of the early response: In the dilation phase, the faster and stronger dilation in the Self condition is still evidently different from the average Strangers condition. Whether this effect is short- or long-term (i.e., whether it would persist over multiple sessions) remains to be investigated. 
The first response to seeing one's own face is comparatively much larger than in any other condition and is much reduced in subsequent repetitions (see Figure 3, or “(Se-O) : (1st–2nd)” in the Stability phase in Table 1). Moreover, when a stranger's face is repeated (i.e., during the first recognition of a now-familiarized stranger), the dilation is stronger than during the very first viewing (see Figure 3, or “(F:St) : (1st–2nd)” in the Late Dilation phase in Table 1). This isolates the effect of the pure recognition of a face in the absence of any emotional reaction. Further recognitions of the stranger reduce the effect, in accordance with the repetition effect. 
The relationship of pupil size and microsaccade rate
We find that microsaccadic inhibition and pupil dilation are only weakly correlated. Trials with particularly few microsaccades do not serve well as predictors of strong dilation. This observation is surprising, as we expected both measures to share a common cause. Our results suggest that MSR and pupil dilation are controlled independently, via separate pathways, although both react to cognitive and emotional factors. The lack of correlation could be beneficial for models that use a conjunction of both features for prediction. 
Potential applications
The question of facial familiarity is crucial in security applications. Concealed information testing (CIT) serves to identify individuals with knowledge of a crime, or perpetrators themselves, without relying on verbal statements. The robust effects presented here provide a good basis for applications in continuous authentication procedures or identity validation. Initial explorations show the feasibility of using these measures as traits for biometric classification (Graupner, Schwetlick, Engbert, & Meinel, 2023). Possible real-world scenarios include the detection of face presentation attacks (e.g., video deepfakes) in online video conferences. As we found an attenuation of the effect over image repetitions, an important remaining consideration is whether the observed differences can be discerned at the level of individual trials and sustained over repetitions. More application-oriented studies will be required to validate the effectiveness of involuntary eye movements for practical use cases. 
Acknowledgments
The authors acknowledge support from Bundesdruckerei GmbH, Berlin, Germany. 
No animal studies are presented in this article. Ethical approval was not required for the studies involving humans because the local legislation and institutional requirements do not require specific approval for studies where no sensitive or identifiable information is published. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study in accordance with the General Data Protection Regulation (GDPR) of the European Union. Specifically, under Article 8 of the GDPR, the processing of personal data of people over the age of 16 does not require consent by the holder of parental responsibility. No potentially identifiable images or data are presented in this study. 
Commercial relationships: none. 
Corresponding author: Lisa Schwetlick. 
Address: EPFL SV BMI LPSY, AI 3200, Station 19, Lausanne 1015, Switzerland. 
References
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412, https://doi.org/10.1016/j.jml.2007.12.005. [CrossRef]
Bahill, A. T., Clark, M. R., & Stark, L. (1975). The main sequence, a tool for studying human eye movements. Mathematical Biosciences, 24(3), 191–204, https://doi.org/10.1016/0025-5564(75)90075-9.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48, https://doi.org/10.18637/jss.v067.i01. [CrossRef]
Bogadhi, A. R., & Hafed, Z. M. (2023). Express detection of visual objects by primate superior colliculus neurons. Scientific Reports, 13(1), 21730, https://doi.org/10.1038/s41598-023-48979-5. [CrossRef] [PubMed]
Bradley, M. M., Miccoli, L., Escrig, M. A., & Lang, P. J. (2008). The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology, 45(4), 602–607, https://doi.org/10.1111/j.1469-8986.2008.00654.x. [CrossRef] [PubMed]
Buonocore, A., Dietze, N., & McIntosh, R. D. (2021). Time-dependent inhibition of covert shifts of attention. Experimental Brain Research, 239(8), 2635–2648, https://doi.org/10.1007/s00221-021-06164-y. [CrossRef] [PubMed]
Buonocore, A., & Hafed, Z. M. (2023). The inevitability of visual interruption. Journal of Neurophysiology, 130(2), 225–237, https://doi.org/10.1152/jn.00441.2022. [CrossRef] [PubMed]
Chen, I. Y., Büchel, P., Karabay, A., van der Mijn, R., Mathot, S., & Akyurek, E. G. (2023). Concealed information detection in rapid serial visual presentation with oculomotor measures, https://osf.io/us7qc/.
Engbert, R. (2012). Computational modeling of collicular integration of perceptual responses and attention in microsaccades. Journal of Neuroscience, 32(23), 8035–8039, https://doi.org/10.1523/jneurosci.0808-12.2012. [CrossRef]
Engbert, R., & Mergenthaler, K. (2006). Microsaccades are triggered by low retinal image slip. Proceedings of the National Academy of Sciences, 103(18), 7192–7197, https://doi.org/10.1073/pnas.0509557103. [CrossRef]
Engbert, R. (2006). Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception. Progress in Brain Research, 154, 177–192, https://doi.org/10.1016/s0079-6123(06)54009-9. [CrossRef] [PubMed]
Engbert, R. (2021). Dynamical models in neurocognitive psychology. Cham (Switzerland): Springer Nature Publishing, https://doi.org/10.1007/978-3-030-67299-7.
Engbert, R., & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43(9), 1035–1045, https://doi.org/10.1016/s0042-6989(03)00084-1. [CrossRef] [PubMed]
Graupner, H., Schwetlick, L., Engbert, R., & Meinel, C. (2023). Unconventional biometrics: Exploring the feasibility of a cognitive trait based on visual self-recognition. IEEE International Joint Conference on Biometrics (IJCB), Ljubljana, Slovenia (pp. 1–10). IEEE, https://doi.org/10.1109/IJCB57857.2023.10448909.
Hafed, Z. M., & Ignashchenkova, A. (2013). On the dissociation between microsaccade rate and direction after peripheral cues: Microsaccadic inhibition revisited. The Journal of Neuroscience, 33(41), 16220–16235, https://doi.org/10.1523/jneurosci.2240-13.2013. [CrossRef]
Hafed, Z. M., Yoshida, M., Tian, X., Buonocore, A., & Malevich, T. (2021). Dissociable cortical and subcortical mechanisms for mediating the influences of visual cues on microsaccadic eye movements. Frontiers in Neural Circuits, 15, 638429. [CrossRef] [PubMed]
Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504, https://doi.org/10.1016/j.tics.2003.09.006. [CrossRef] [PubMed]
Hess, E. H., & Polt, J. M. (1960). Pupil size as related to interest value of visual stimuli. Science, 132(3423), 349–350, https://doi.org/10.1126/science.132.3423.349. [CrossRef] [PubMed]
Johnson, M. M., & Rosenfeld, J. (1992). Oddball-evoked p300-based method of deception detection in the laboratory II: Utilization of non-selective activation of relevant knowledge. International Journal of Psychophysiology, 12(3), 289–306, https://doi.org/10.1016/0167-8760(92)90067-l. [CrossRef]
Johnston, R. A., & Edmonds, A. J. (2009). Familiar and unfamiliar face recognition: A review. Memory, 17(5), 577–596, https://doi.org/10.1080/09658210902976969. [CrossRef] [PubMed]
Kamp, S.-M., & Donchin, E. (2015). Erp and pupil responses to deviance in an oddball paradigm. Psychophysiology, 52(4), 460–471, https://doi.org/10.1111/psyp.12378. [CrossRef] [PubMed]
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17(11), 4302–4311, https://doi.org/10.1523/jneurosci.17-11-04302.1997. [CrossRef]
Kinner, V. L., Kuchinke, L., Dierolf, A. M., Merz, C. J., Otto, T., & Wolf, O. T. (2017). What our eyes tell us about feelings: Tracking pupillary responses during emotion regulation processes. Psychophysiology, 54(4), 508–518, https://doi.org/10.1111/psyp.12816. [CrossRef] [PubMed]
Kok, A. (2001). On the utility of p3 amplitude as a measure of processing capacity. Psychophysiology, 38(3), 557–577, https://doi.org/10.1017/s0048577201990559. [CrossRef] [PubMed]
Laeng, B., Sirois, S., & Gredebäck, G. (2012). Pupillometry. Perspectives on Psychological Science, 7(1), 18–27, https://doi.org/10.1177/1745691611427305. [CrossRef]
Larsen, R. S., & Waters, J. (2018). Neuromodulatory correlates of pupil dilation. Frontiers in Neural Circuits, 12, 21, https://doi.org/10.3389/fncir.2018.00021. [CrossRef] [PubMed]
Laubrock, J., Cajar, A., & Engbert, R. (2013). Control of fixation duration during scene viewing by interaction of foveal and peripheral processing. Journal of Vision, 13(12), 11, https://doi.org/10.1167/13.12.11. [CrossRef] [PubMed]
Martin, J. G., Davis, C. E., Riesenhuber, M., & Thorpe, S. J. (2020). Microsaccades during high speed continuous visual search. Journal of Eye Movement Research, 13(5), 1–14, https://doi.org/10.16910/jemr.13.5.4. [CrossRef]
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5(3), 229–240, https://doi.org/10.1038/nrn1348. [CrossRef] [PubMed]
Mathôt, S. (2018). Pupillometry: Psychology, physiology, and function. Journal of Cognition, 1(1), 16, https://doi.org/10.5334/joc.18. [CrossRef] [PubMed]
Mathôt, S., & Vilotijević, A. (2023). Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis. Behavior Research Methods, 55, 3055–3077, https://doi.org/10.3758/s13428-022-01957-7. [CrossRef] [PubMed]
McKone, E., & Kanwisher, N. (2005). Does the human brain process objects of expertise like faces? A review of the evidence. In Dehaene, S., Duhamel, J.-R., Hauser,, M. D., & Rizzolatti, G. (Eds.), From monkey brain to human brain: A fyssen foundation symposium. Cambridge, MA: The MIT Press, https://doi.org/10.7551/mitpress/3136.003.0024.
Mergenthaler, K., & Engbert, R. (2010). Microsaccades are different from saccades in scene perception. Experimental Brain Research, 203(4), 753–757, https://doi.org/10.1007/s00221-010-2272-9. [CrossRef] [PubMed]
Murphy, P. R., O'Connell, R. G., O'Sullivan, M., Robertson, I. H., & Balsters, J. H. (2014). Pupil diameter covaries with BOLD activity in human locus coeruleus. Human Brain Mapping, 35(8), 4140–4154, https://doi.org/10.1002/hbm.22466. [CrossRef] [PubMed]
Nguyen, M. N., Hori, E., Matsumoto, J., Tran, A. H., Ono, T., & Nishijo, H. (2012). Neuronal responses to face-like stimuli in the monkey pulvinar. European Journal of Neuroscience, 37(1), 35–51, https://doi.org/10.1111/ejn.12020. [CrossRef]
Nguyen, M. N., Matsumoto, J., Hori, E., Maior, R. S., Tomaz, C., Tran, A. H., … Nishijo, H. (2014). Neuronal responses to face-like and facial stimuli in the monkey superior colliculus. Frontiers in Behavioral Neuroscience, 8, 85, https://doi.org/10.3389/fnbeh.2014.00085. [PubMed]
Nguyen, M. N., Nishimaru, H., Matsumoto, J., Van Le, Q., Hori, E., Maior, R. S., … Nishijo, H. (2016). Population coding of facial information in the monkey superior colliculus and pulvinar. Frontiers in Neuroscience, 10, 583, https://doi.org/10.3389/fnins.2016.00583. [CrossRef] [PubMed]
Perrett, D., Rolls, E., & Caan, W. (1982). Visual neurones responsive to faces in the monkey temporal cortex. Experimental Brain Research, 47(3), 329–342, https://doi.org/10.1007/bf00239352. [CrossRef] [PubMed]
Picton, T. W. (1992). The p300 wave of the human event-related potential. Journal of Clinical Neurophysiology, 9(4), 456–479, https://doi.org/10.1097/00004691-199210000-00002. [CrossRef]
Polich, J. (2007). Updating p300: An integrative theory of p3a and p3b. Clinical Neurophysiology, 118(10), 2128–2148, https://doi.org/10.1016/j.clinph.2007.04.019. [CrossRef]
R Core Team. (2019). R: A language and environment for statistical computing (Version 3.6.2) [Computer software]. Vienna, Austria. https://www.R-project.org/.
Ratliff, F., & Riggs, L. A. (1950). Involuntary motions of the eye during monocular fixation. Journal of Experimental Psychology, 40(6), 687–701, https://doi.org/10.1037/h0057754.
Rolfs, M. (2009). Microsaccades: Small steps on a long way. Vision Research, 49(20), 2415–2441, https://doi.org/10.1016/j.visres.2009.08.010.
Rolls, E. T., Critchley, H. D., Browning, A. S., & Inoue, K. (2005). Face-selective and auditory neurons in the primate orbitofrontal cortex. Experimental Brain Research, 170(1), 74–87, https://doi.org/10.1007/s00221-005-0191-y.
Rosenzweig, G., & Bonneh, Y. S. (2019). Familiarity revealed by involuntary eye movements on the fringe of awareness. Scientific Reports, 9(1), 3029, https://doi.org/10.1038/s41598-019-39889-6.
Rosenzweig, G., & Bonneh, Y. S. (2020). Concealed information revealed by involuntary eye movements on the fringe of awareness in a mock terror experiment. Scientific Reports, 10(1), 14355, https://doi.org/10.1038/s41598-020-71487-9.
Schad, D. J., Vasishth, S., Hohenstein, S., & Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language, 110, 104038, https://doi.org/10.1016/j.jml.2019.104038.
Schall, J. (1995). Neural basis of saccade target selection. Reviews in the Neurosciences, 6(1), 63–85, https://doi.org/10.1515/REVNEURO.1995.6.1.63.
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3–4), 591–611, https://doi.org/10.1093/biomet/52.3-4.591.
Shelchkova, N., Tang, C., & Poletti, M. (2019). Task-driven visual exploration at the foveal scale. Proceedings of the National Academy of Sciences, 116(12), 5811–5818, https://doi.org/10.1073/pnas.1812222116.
Sinn, P., & Engbert, R. (2016). Small saccades versus microsaccades: Experimental distinction and model-based unification. Vision Research, 118, 132–143, https://doi.org/10.1016/j.visres.2015.05.012.
Sirois, S., & Brisson, J. (2014). Pupillometry. WIREs Cognitive Science, 5(6), 679–692, https://doi.org/10.1002/wcs.1323.
Sparks, D. L. (2002). The brainstem control of saccadic eye movements. Nature Reviews Neuroscience, 3(12), 952–964, https://doi.org/10.1038/nrn986.
Squires, N. K., Squires, K. C., & Hillyard, S. A. (1975). Two varieties of long-latency positive waves evoked by unpredictable auditory stimuli in man. Electroencephalography and Clinical Neurophysiology, 38(4), 387–401, https://doi.org/10.1016/0013-4694(75)90263-1.
Strauch, C., Koniakowsky, I., & Huckauf, A. (2020). Decision making and oddball effects on pupil size: Evidence for a sequential process. Journal of Cognition, 3(1), 7, https://doi.org/10.5334/joc.96.
Sugase, Y., Yamane, S., Ueno, S., & Kawano, K. (1999). Global and fine information coded by single neurons in the temporal visual cortex. Nature, 400(6747), 869–873, https://doi.org/10.1038/23703.
Sutton, S., Braren, M., Zubin, J., & John, E. R. (1965). Evoked-potential correlates of stimulus uncertainty. Science, 150(3700), 1187–1188, https://doi.org/10.1126/science.150.3700.1187.
Valsecchi, M., Betta, E., & Turatto, M. (2006). Visual oddballs induce prolonged microsaccadic inhibition. Experimental Brain Research, 177(2), 196–208, https://doi.org/10.1007/s00221-006-0665-6.
Valsecchi, M., Dimigen, O., Kliegl, R., Sommer, W., & Turatto, M. (2009). Microsaccadic inhibition and P300 enhancement in a visual oddball task. Psychophysiology, 46(3), 635–644, https://doi.org/10.1111/j.1469-8986.2009.00791.x.
Willeke, K. F., Tian, X., Buonocore, A., Bellet, J., Ramirez-Cardenas, A., & Hafed, Z. M. (2019). Memory-guided microsaccades. Nature Communications, 10, 3710, https://doi.org/10.1038/s41467-019-11711-x.
Winterson, B. J., & Collewijn, H. (1976). Microsaccades during finely guided visuomotor tasks. Vision Research, 16(12), 1387–1390, https://doi.org/10.1016/0042-6989(76)90156-5.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81(1), 141–145, https://doi.org/10.1037/h0027474.
Figure 1.
 
Time course of a trial. The 2-s fixation check ensures participants fixate the center of the screen. The brief face presentation (0.3 s) is then followed by a gray screen, during which the main measurement is taken.
Figure 2.
 
Pupil size and microsaccade rate over time. (A) The lines show the change in pupil size normalized to the size at stimulus onset. The thin ribbons represent the between-subject standard error. (B) The lines show the rate of microsaccades after face onset (time zero), calculated for each participant and then averaged across participants for each condition. The shading shows the between-subject standard error. The area highlighted in gray shows the time in which the target face was present on the screen.
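To make the preprocessing summarized in Figure 2A concrete, the following is a minimal R sketch of normalizing each trial's pupil trace to its size at stimulus onset and then computing per-condition means with a between-subject standard error. It uses base R only; the data frame `d` and its column names (`subject`, `condition`, `trial_id`, `t_ms`, `pupil_raw`) are hypothetical and not taken from the published analysis code.

    # Hypothetical data frame `d`, one row per pupil sample:
    # columns: subject, condition, trial_id, t_ms (time from face onset), pupil_raw.

    # 1. Normalize each trial to the pupil size at stimulus onset (t_ms == 0);
    #    a ratio is used here, the paper may instead report a difference.
    baseline <- with(d[d$t_ms == 0, ], tapply(pupil_raw, trial_id, mean))
    d$pupil_norm <- d$pupil_raw / baseline[as.character(d$trial_id)]

    # 2. Average within each participant and condition at every time point ...
    subj_means <- aggregate(pupil_norm ~ subject + condition + t_ms,
                            data = d, FUN = mean)

    # 3. ... then across participants; the plotted ribbon is the between-subject SE.
    grand <- aggregate(pupil_norm ~ condition + t_ms, data = subj_means,
                       FUN = function(x) c(mean = mean(x), se = sd(x) / sqrt(length(x))))

Averaging within participants first, and computing the standard error across participant means, is what makes the ribbons in Figure 2 between-subject rather than between-trial error bands.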
Figure 3.
 
Pupil size over time. The top panel (A) shows the average measured pupil size for each category of images. The colored background separates the response signature into five time windows: Baseline (B, red), Constriction (C, yellow), Dilation (D, rose), Late Dilation (E, green), and Stability (F, blue). Panel A shows that the main effect of pupil size is evident from the Dilation phase onward, where the pupil size becomes significantly different for Strangers, Peers, and Self. The bottom panels, B, C, D, E, and F, show a detailed view of the pupil size over time in the respective time window. Each line in a range of shades of gray shows one repetition, with black being the first viewing of the face and white the 10th.
Figure 4.
 
Effect size of the Self–Others comparison, by participant. Mean values are computed from the estimated random effect terms. Zero in this plot means that there was no effect; negative values indicate a reversal (i.e., reacting less strongly to Self than to Others). Most participants show the effect (to the right of the zero line). A small subset shows a reversed effect.
Table 1.
 
Results of pupil size LMMs. The analysis was conducted using the same model for five time windows: Baseline (−50 to 50 ms), Constriction (200–600 ms), Dilation (600–1200 ms), Late Dilation (600–1200 ms), and Stability (2000–3000 ms). The t values that are marked in bold are considered significant.
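As an illustration of the kind of model summarized in Table 1, a minimal R sketch is given below. It assumes the lme4 package and hypothetical names (`window_data`, `pupil_mean`, `category`, `repetition`, `subject`); the actual contrast coding (cf. Schad et al., 2020) and random-effects structure used in the paper may differ.

    library(lme4)

    # Hypothetical data: one row per trial, with the pupil size averaged over one
    # of the five analysis windows (e.g., the Dilation window, 600-1200 ms).
    # `category` codes Self / Peer / Stranger; `repetition` is the 1st-10th viewing.
    m_dilation <- lmer(pupil_mean ~ category + repetition + (1 | subject),
                       data = window_data)
    summary(m_dilation)  # |t| > 2 is the common rule of thumb for significance

The same model formula would be refit separately to the data from each time window, which is how a single model specification yields the five columns of estimates in Table 1.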
Table 2.
 
Results of the microsaccade Poisson rate tests. The test compared the Poisson rate of two groups. The p values marked in bold are considered significant.
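One way to implement the two-group comparison reported in Table 2 is the exact comparison of two Poisson rates provided by base R's poisson.test(); the counts and observation times below are invented purely for illustration.

    # Microsaccade counts and total observation time (in seconds) for two conditions,
    # e.g., Self vs. Stranger trials pooled over the inhibition window (invented numbers).
    counts <- c(self = 180, stranger = 260)
    time_s <- c(self = 450, stranger = 450)

    # Exact test comparing the two Poisson rates (H0: rate ratio = 1).
    poisson.test(x = counts, T = time_s)

The test returns an estimated rate ratio with a confidence interval and a p value, which maps directly onto the pairwise condition comparisons listed in the table.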
Table 3.
 
Estimated terms of the LMM relating microsaccades and pupil size. Occurrence of microsaccades in the most diagnostic window (300–600 ms) is used as a predictor for pupil size. The predictive effect of microsaccades can only be found in the early phases of the pupil response. The t values that are marked in bold are considered significant.
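The model behind Table 3 can be sketched in the same way: a binary indicator of whether a microsaccade occurred in the diagnostic 300–600 ms window enters the pupil-size model as an additional predictor. As above, lme4 and all variable names are assumptions for illustration, not the authors' code.

    library(lme4)

    # `ms_present` is 1 if at least one microsaccade was detected between
    # 300 and 600 ms after face onset on that trial, and 0 otherwise.
    m_ms <- lmer(pupil_mean ~ category + ms_present + (1 | subject),
                 data = window_data)
    summary(m_ms)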