Open Access
Article | January 2025
Effect of sign language learning on temporal resolution of visual attention
Serpil Karabüklü, Sandra Wood, Chuck Bradley, Ronnie B. Wilbur, Evie A. Malaia

Footnote: SK and SW should be regarded as joint first authors.

Journal of Vision, January 2025, Vol. 25, 3. https://doi.org/10.1167/jov.25.1.3
Abstract

The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.

Introduction
American Sign Language (ASL) is used by up to 7 million people (both hearing and Deaf) for everyday visual-linguistic communication (Mitchell & Young, 2022); worldwide, estimates put the number of Deaf signers at close to 70 million. Those who use sign languages in everyday life have very different visual experiences compared to non-signers. The sign language signal is characterized by high information content (entropy) (Borneman, Malaia, & Wilbur, 2018; Malaia, Borneman, & Wilbur, 2016) because it is a dynamic three-dimensional signal that combines multiple independent articulators (hands, face, body) at high temporal resolution. This also means that the visual signal in sign language carries high inherent ambiguity.
Language comprehension, including sign language comprehension, relies heavily on predictive processing. This cognitive mechanism allows individuals to generate expectations for upcoming sensory input based on prior experiences and earlier input (Blumenthal-Dramé & Malaia, 2019; Malaia, Borneman, Borneman, Krebs, & Wilbur, 2023; Radošević, Malaia, & Milković, 2022). In the context of sign language, resolving the inherent uncertainty in the visual signal is crucial for accurate comprehension. Proficient signers use their experience and linguistic knowledge to predict and interpret rapidly changing visual stimuli: they resolve ambiguities in the perceptual (sensory) visual input based on their knowledge (memory) of the phonotactics, vocabulary, and syntactic structures of their sign language, while taking into consideration any previous discourse context.
New learners of sign language, however, face multiple challenges. First, they lack the memory stores of linguistic tokens needed to compare and identify items. Additionally, they have difficulty noticing the subtle changes in rapid sign movement and replicating them in their own production (Gurbuz et al., 2020). Adult learners of sign language must therefore resolve uncertainty in the visual domain about the temporal resolution of perception that the signal requires. This process of adaptation and learning is critical for becoming proficient in sign language and efficiently processing its dynamic visual components.
Studying how the resolution of dynamic visual uncertainty changes with ASL experience can provide valuable insights into how perception under uncertainty is shaped by learning complex linguistic sensorimotor skills. By examining how learners at different proficiency levels handle visual temporal resolution, researchers can better understand the interplay between learning, perception, and cognitive adaptation. This knowledge can contribute to broader theories of perceptual learning and predictive processing, particularly in the context of high-entropy visual communication systems like sign language. 
Temporal attentional resolution in signers and non-signers
While non-signing adults cannot isolate and identify the individual flashes that compose flickers beyond frequencies in the range of 10–20 Hz (Farzin, Rivera, & Whitney, 2011; Stockman, Williams, & Smithson, 2004; Tyler, 1989a; Tyler, 1989b), multiple studies have shown that sign language users are able to make rapid, accurate judgments on quickly moving stimuli (Bottari, Caclin, Giard, & Pavani, 2011; Dye & Bavelier, 2013; Schotter, Johnson, & Lieberman, 2020). Several researchers also noted that the age of sign language acquisition and the length of exposure to sign language appear to affect the sign production patterns and perceptual abilities of proficient signers (Bosworth, Wright, & Dobkins, 2019; Malaia, Krebs, Roehm, & Wilbur, 2020). The perceptual enhancements in processing of dynamic visual stimuli in signers have been traced to several possible causes: differences in the spatial allocation of visual attention (peripheral enhancement); sign language knowledge (known as the "sign superiority effect," or predictive processing); finer temporal resolution of visual attention compared to non-signers; or any combination of the three (Proksch & Bavelier, 2002).
Peripheral perceptual enhancements of motion processing in signers
Attentional allocation and perceptual modifications in the visual modality resulting from exposure to and use of sign languages in native signers have been well documented (Bavelier, Dye, & Hauser, 2006; Quandt, Kubicek, Willis, & Lamberton, 2021). The current understanding is that these perceptual enhancements arise from the cognitive demands that ASL processing places on the allocation of attention. The requirement that the recipient of signed communication pay attention to the face (which provides semantic and syntactic markers), body position (pragmatics), and the hands as primary articulators leads, over time, to an enhanced ability to process information in the peripheral visual field in sign language users (Dye & Bavelier, 2013; Neville & Lawson, 1987a; Neville & Lawson, 1987b; Neville & Lawson, 1987c; Siple, Hatfield, & Caccamise, 1978). Even when linguistically meaningful stimuli are non-dynamic and not related to sign language, deaf signers evidence a wider perceptual span, for example, in reading written texts (Bélanger, Lee, & Schotter, 2018; Bélanger, Slattery, Mayberry, & Rayner, 2012). This might be due to the everyday need for signers to incorporate manual signs produced ∼6.5° away, on average, from the center of the visual field (Bosworth et al., 2019). Indeed, when stimuli are presented at eccentricities exceeding those of typical signing (about 30°, cf. Bosworth et al., 2019), signers' accuracy in identifying signs drops by 50% (Swisher, Christie, & Miller, 1989).
Linguistic prediction versus temporal resolution
Although learning ASL appears to drive peripheral perceptual enhancement, it is not clear whether the enhancement affects only sign detection. For example, Schotter et al. (2020) identified what they termed a "sign superiority effect": enhanced accuracy in identifying signs versus non-signs presented at near (∼8° off-center) and far (∼12° off-center) eccentricities. Interestingly, the effect held for both the signers and the non-signers in the study, although non-signers were ∼10% less accurate than signers across all categories. Because the non-signers were completely ASL-naïve, the results could not be explained entirely by sign superiority; Schotter et al. (2020) therefore suggested that the motion characteristics of the stimuli might have additionally facilitated the processing of signs in both populations.
Additional difficulty comes from the fact that proficient sign language users rely on their linguistic experience to generate expectations for the upcoming dynamic input. Generally, this mechanism is described as neural predictive processing, in which the brain generates predictions for upcoming sensory input based on earlier input and prior experiences, including linguistic ones (Radošević et al., 2022). In sign language processing, the effects of predictive processing have been found to be especially prominent for natural (ecologically valid) dynamic stimuli, including fingerspelling (Krebs, Malaia, Wilbur, & Roehm, 2022; Leannah, Willis, & Quandt, 2022; Malaia, Borneman, Krebs, & Wilbur, 2021).
However, the research on enhanced temporal resolution of attention in signers is not limited to sign language stimuli. For example, in a speeded visual detection task (Bottari et al., 2011), deaf native signers have shown faster reaction times to dynamic stimuli, which were driven by faster neural detection as measured by ERPs, occurring as rapidly as 80 msec after the change in the stimulus. Early cross-linguistic sign language research by Klima et al. (1999) indicated that signers and non-signers differ drastically in their perception of rapidly changing visual stimuli, especially those that incorporate motion. 
The current study
Previous studies examining the effects of deafness and sign language proficiency on visual attention have not adequately dissociated the effects of sign language experience, deafness, and spatial (eccentricity) versus temporal factors. One approach to addressing this limitation is to assess hearing sign language learners at varying levels of proficiency. Additionally, although some prior studies utilized dynamic sign language stimuli (Klima et al., 1999; Schotter et al., 2020), the motion parameters were not well controlled, making it difficult to characterize the temporal resolution of visual attention. Finally, using semantically meaningful signs makes it difficult to isolate the effects of sign language experience from those of perceptual learning. We addressed these gaps by investigating the timeline of change in the temporal resolution of visual attention in sign language learners at various levels of experience.
In the current study, we manipulated spatial eccentricity and temporal resolution (frequency) of simple square-wave flicker stimuli, to probe temporal resolution of visual attention in college-level learners of ASL at various stages of proficiency. Comparing perceptual thresholds across the first six semesters of ASL learning allowed for direct inquiry into how visual attention is shaped by perceptual learning due to sign language experience. Of particular interest were the learning effects on (1) temporal isolation of objects presented in rapid succession and (2) processing stimuli presented with varying levels of eccentricity (foveal vs. parafoveal presentation) across a range of frequencies. 
Based on the inferences from previous research, the following hypotheses were investigated: 
  • 1. Sign language proficiency was expected to affect temporal resolution of visual attention in signers. Two counteracting processes were expected to affect the results: first, experience with sign language articulator motion, with speeds reaching up to 4 m/s (Bosworth et al., 2019; Malaia et al., 2022), might increase temporal resolution of visual attention. However, predictive processing based on sign language proficiency would likely make continuous monitoring of the ASL signal unnecessary, thus decreasing the need for high temporal resolution of visual attention in everyday communication. To model this potential non-linearity in ASL proficiency level effects, ASL proficiency was coded as a nominal variable in the models.
  • 2. In general, skilled signers rely on information in parafoveal presentation more than non-signers (Bosworth & Dobkins, 2002a; Proksch & Bavelier, 2002). Thus we expected that learners of ASL would have higher accuracy of responses to centrally presented small blocks than to large blocks (more peripheral presentation) during the first few months (ASL level 1).
Methods
Participants
Seventy-three participants (56 female; age range 17 to 43 years; age M = 22 years, SD = 4) took part in the study at two sites: Purdue University (IRB no. 1212012994) and University of Southern Maine (IRB no. 18-10-1146). The IRBs of Purdue University and the University of Southern Maine approved the study procedures, which adhered to the Declaration of Helsinki. The experiments were undertaken with the understanding and written consent of each participant. Each semester, the participants had in-person, small-group ASL courses three times a week for 50 minutes at a time. ASL level was coded as the participant's current semester of enrollment in an ASL class at the time of testing (semester 1: n = 14; semester 2: n = 17; semester 3: n = 7; semester 4: n = 20; semester 5: n = 9; semester 6: n = 9).
Materials and design
The stimuli were presented on a 15-inch LCD monitor with a 60-Hz refresh rate. Participants were seated approximately 40 cm from the monitor. The task was programmed and presented using Presentation software (Neurobehavioral Systems). Stimuli were composed of four blocks around a fixation point on a screen with a neutral (gray) background (Figure 1A). Blocks changed colors between white and black, creating a flicker (Figure 1B). In each trial, one of the blocks flickered in counter-phase with the others: that is, when three blocks were white, the fourth was black, and vice versa. Block color change rates used in the experiment were 10 Hz, 20 Hz, 30 Hz, and 60 Hz. Sixteen trials were presented at each temporal frequency, in random order, for a total of 64 trials. Target location was counterbalanced across trials.
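To make the trial structure concrete, the following is a minimal sketch of one counter-phase flicker trial in Python using PsychoPy. This is an illustrative re-implementation, not the authors' Presentation script; the function name run_trial, the block positions and sizes in pixels, and the trial duration are assumptions. Color is toggled by counting monitor refreshes, so every change rate used here divides the 60-Hz refresh rate evenly.

```python
# Illustrative counter-phase flicker trial (a sketch, not the authors' code).
from psychopy import visual, event

REFRESH_HZ = 60  # monitor refresh rate reported in the Methods

def run_trial(win, blocks, target_idx, change_hz, n_changes=60):
    """Flicker four blocks; the block at target_idx flips in counter-phase."""
    frames_per_state = round(REFRESH_HZ / change_hz)  # 6, 3, 2, or 1 frames
    state = False
    for frame in range(n_changes * frames_per_state):
        if frame % frames_per_state == 0:
            state = not state  # color change at the requested rate
        for i, block in enumerate(blocks):
            in_phase = (i != target_idx)
            block.fillColor = "white" if state == in_phase else "black"
            block.draw()
        win.flip()  # one monitor refresh
    # Four-alternative forced choice: one response key per block location.
    keys = event.waitKeys(keyList=["1", "2", "3", "4"])
    return int(keys[0]) - 1 == target_idx

win = visual.Window(size=(1024, 768), color="gray", units="pix")
positions = [(-60, 60), (60, 60), (-60, -60), (60, -60)]  # around fixation
blocks = [visual.Rect(win, width=80, height=80, pos=p) for p in positions]
correct = run_trial(win, blocks, target_idx=2, change_hz=10)
win.close()
```

At the 60-Hz change rate, frames_per_state is 1, so block color alternates on every refresh, the fastest change a 60-Hz monitor can display.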
Figure 1.
 
Stimuli presentation on the screen, flicker paradigm, and associated response keys.
Figure 2.
 
Accuracy as a function of ASL level for different block sizes (large and small). The solid line represents the large block size; the dashed line represents the small block size. Error bars indicate the standard error of the mean, capped at 100% accuracy. Violin plots overlay the line graph to show the distribution of accuracy scores at each ASL level: the outline of each violin is the kernel density estimate of the data, and a wider section indicates a higher density of data points at that accuracy level. (A) 10 Hz response accuracy. (B) 20 Hz response accuracy. (C) 30 Hz response accuracy. (D) 60 Hz response accuracy.
Two versions of the stimuli were developed. For high visual frequency presentation (small blocks), block size was 2 cm on the screen, corresponding to 0.01 cyc/mrad visual frequency and subtending ∼3° of visual angle (foveal presentation); for lower visual frequency presentation (large blocks), block size was 4 cm on the screen, corresponding to 0.005 cyc/mrad visual frequency and subtending ∼6° of visual angle (parafoveal presentation). The order of stimuli presentation (small vs. large, i.e., higher vs. lower visual frequency) was counterbalanced among participants, such that half the time a session began with small block presentation. The experiment took approximately 30 minutes per participant. For one participant, only large block data were collected because of an equipment malfunction; these data were excluded from analysis.
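As a quick check on the reported geometry, a flat stimulus of size s viewed from distance d subtends a visual angle of 2·arctan(s/2d). An illustrative computation (the helper name is ours, not from the paper):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float = 40.0) -> float:
    """Visual angle subtended by a stimulus of size_cm viewed at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(round(visual_angle_deg(2), 2))  # small block: 2.86 deg (~3 deg, foveal)
print(round(visual_angle_deg(4), 2))  # large block: 5.72 deg (~6 deg, parafoveal)
```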
Procedure
The experiment was conducted as a four-alternative forced-choice paradigm: participants were asked to look at the fixation point, decide which block was flickering out of sync with the others, and press the corresponding key on the keyboard (Figure 1C). At slower rates of flicker (10 Hz), the target block was easier to identify because participants were able to individuate the alternating black and white states. At frequencies above an individual participant's threshold for phase individuation, all blocks appeared to change color at the same time and the out-of-phase block could not be identified, which was expected to yield chance performance (25% correct).
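With 16 trials per frequency and a 25% guessing rate, the implied above-chance criterion can be computed from the binomial distribution. This is an illustrative check, not a criterion reported in the paper:

```python
from scipy.stats import binom

# Smallest number of correct responses (out of 16, chance = 0.25) whose
# one-tailed binomial p-value falls below .05.
n, chance = 16, 0.25
for k in range(n + 1):
    p = binom.sf(k - 1, n, chance)  # P(X >= k)
    if p < 0.05:
        print(k, round(p, 4))  # 8 of 16 correct -> p ~ 0.027
        break
```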
Results
Because very little is known about the temporal resolution of visual attention above ∼20 Hz, we modeled response accuracy separately at each frequency (10, 20, 30, and 60 Hz). Statistical analyses for the two dependent variables, accuracy and response time, were conducted using mixed-effects models in the JASP 0.19 software package (JASP Team, 2024).
Accuracy
Separate models were constructed for each frequency of presentation (10 Hz, 20 Hz, 30 Hz, and 60 Hz); each included fixed effects for ASL proficiency level and block size (large and small, i.e., parafoveal and foveal presentation). Random intercepts were modeled for each participant. Sign language proficiency level was treated as a categorical variable with six levels. Restricted maximum likelihood (REML) estimation was used, with the Satterthwaite method for denominator degrees of freedom in hypothesis testing. For each model, the model likelihood, Akaike information criterion (AIC), and Bayesian information criterion (BIC) were examined to assess model fit; estimated marginal means for accuracy at each ASL proficiency level and block presentation size were computed along with 95% confidence intervals.
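The analyses were run in JASP; for readers who prefer code, a rough equivalent of the accuracy model is sketched below using Python's statsmodels, with synthetic stand-in data (the column names and data generation are ours). Note that statsmodels, unlike JASP, does not provide Satterthwaite degrees of freedom for mixed models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the trial-level data (illustration only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(30), 8),
    "asl_level": np.repeat(rng.integers(1, 7, size=30), 8),
    "block_size": np.tile(["large", "small"], 120),
    "accuracy": rng.uniform(0.25, 1.0, size=240),
})

# Fixed effects: ASL level (categorical, six levels) crossed with block size;
# random intercept per participant; REML estimation, as in the paper.
model = smf.mixedlm("accuracy ~ C(asl_level) * C(block_size)",
                    data=df, groups=df["participant"])
result = model.fit(reml=True)
print(result.summary())
```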
Across all frequency-level models, level 3 of ASL proficiency (participants with 12 to 18 months of ASL learning experience) manifested the highest response accuracy (see Figure 2). Within each model, there was also a significant interaction between proficiency level and block size (all p < 0.015). Thus, the accuracy of performance appears to follow an inverted U-shaped curve with experience: accuracy across all temporal and spatial frequencies peaks at ASL level 3 and is lower both before (ASL levels 1 and 2) and after (ASL levels 4-6).
Response time
After removal of outlier data points (2 SDs above or below the mean), reaction times averaged 13.4 seconds (SD = 8.3 seconds). For the linear mixed-effects (LME) analysis, reaction times were log-transformed. Separate linear mixed models were constructed for each frequency of presentation (10 Hz, 20 Hz, 30 Hz, and 60 Hz), parallel to the accuracy models. The factors included fixed effects for ASL proficiency level and the block size of the stimuli (large for parafoveal and small for foveal presentation). Random intercepts were modeled for each participant. Sign language proficiency level was treated as a categorical variable with six levels. REML estimation was used with the Satterthwaite method for denominator degrees of freedom in hypothesis testing (see Tables 1A, 2A, 3A, and 4A). For each model, the model likelihood, AIC, and BIC were examined to assess model fit; estimated marginal means for response time at each ASL proficiency level and block presentation size were computed along with 95% confidence intervals (see Tables 1B, 2B, 3B, and 4B). None of the linear mixed model analyses revealed any significant effects.
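The response-time preprocessing described above (trimming at 2 SDs, then log-transforming) can be sketched as follows, assuming per-trial reaction times in a pandas Series; the helper name is ours:

```python
import numpy as np
import pandas as pd

def preprocess_rt(rt: pd.Series) -> pd.Series:
    """Drop trials beyond 2 SDs of the mean, then log-transform the rest."""
    mean, sd = rt.mean(), rt.std()
    kept = rt[(rt - mean).abs() <= 2 * sd]
    return np.log(kept)  # log RTs enter the linear mixed models

rts = pd.Series([10, 12, 14, 9, 11, 13, 12, 10, 11, 60.0])  # seconds
print(preprocess_rt(rts))  # the 60.0 s trial falls outside 2 SDs and is dropped
```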
Discussion
Sign languages utilize extremely rapid changes of handshape, hand orientation, and place of articulation in visual production. The visual environment of a signer or sign learner thus differs drastically from that of everyday life. The data suggest that sign language learners are able to temporarily increase their visual sampling rate, likely as it becomes necessary to accurately and reliably identify the visual parameters of articulation by proficient signers. This study is the first to characterize the sampling rate, or temporal resolution, of visual attention in sign language learners across proficiency levels.
We investigated whether the temporal resolution of visual attention, or attentional sampling rate at two different spatial frequencies, can be affected by the process of learning American Sign Language. The results indicate that the attentional sampling rate at which visual events can be individuated increases rapidly during the first three semesters of ASL learning, and then decreases. Third-semester ASL learners demonstrated a marked increase in temporal resolution of visual sampling at both lower and higher spatial frequencies, performing flicker detection with above 70% accuracy at 60 Hz. This adaptive behavior is consistent with what is required by the spatiotemporal properties of visual events in sign language (Bosworth et al., 2019). 
Temporal resolution changes due to sign language experience
By measuring the temporal frequency thresholds at which ASL learners could identify an out-of-phase flickering stimulus, we establish that the temporal resolution of visual attention in sign learners increases as a result of exposure to sign language: third-semester ASL students could individuate alternating states of flicker up to a rate of 60 Hz with above-chance accuracy, while the above-chance flicker threshold for first-semester learners was 10 Hz. The temporal resolution of attention, or visual sampling rate, of third-semester students was six times finer than that observed in first-semester learners, or attested in neurotypical adults in the literature (Farzin et al., 2011). Our findings point to a rapid increase in the temporal resolution of attention in sign language learners, to levels commensurate with recorded speeds of sign language articulator motion in everyday signing (Malaia, Wilbur, & Milkovic, 2013; Malaia & Wilbur, 2012; Malaia et al., 2022).
Interestingly, the spatial frequency of the stimuli affected response accuracy across proficiency levels. Stimuli with higher spatial frequency (small squares) elicited higher response accuracy at all levels of proficiency than stimuli with lower spatial frequency. Previous research indicated that high spatial frequencies dominate sign language experience (Bosworth, Bartlett, & Dobkins, 2006), which might help explain the shifting of attentional focus to higher frequency stimuli. However, although the temporal resolution of visual attention does appear to be higher for higher spatial frequencies, our data suggest that it is also malleable, increasing for lower spatial frequencies over the course of exposure to sign language.
The finding that learners do increase their attentional resolution has implications for sign language teaching and learning, as well as for understanding sign language acquisition in infants, whose attentional temporal resolution lags far behind that of adults (Farzin et al., 2011). The human ability to sample the visual environment to identify individual events (such as change in handshape, or contact at the place of articulation) plays a crucial role in identifying phonological, semantic, and syntactic features in sign languages. Identification has to happen before perceptual binding of events across different parts of the visual field can take place for sign language comprehension (for example, a syntactic non-manual marker in sign language, such as eyebrow lowering, scopes over multiple manual signs in a signed sentence, and is critical for correct comprehension of ASL). 
It might be instructive to compare these results to what is known about the impact of learning to read through traditional methods. In simplified terms, beginning readers are taught to focus on the printed letters to associate them with their pronunciation by various means (phonics instruction; cf. Ehri, 2022; O'Leary & Ehri, 2020; Stein, 2001). This helps the word become recognizable when spoken aloud. As readers advance, they are encouraged to increase their reading speed and comprehension by ignoring print details and refraining from reading aloud or silently articulating the words to themselves. Various methods, such as text-fading, have been tested to determine the most efficient technique for achieving this transition. Multiple studies (cf. Nagler et al., 2015, for a review) have shown that increased reading rate enhances sentence processing speed (within reasonable limits) by freeing up cognitive resources that can be reallocated to comprehension.
A by-product of traditional reading instruction is that while learning to read, individuals focus on letters. However, once they recognize a word, they no longer pay attention to the individual letters. Healy (1981) reported that when scanning text for misspellings, proofreaders rely heavily on the overall shape and envelope of words. They use additional features only to discriminate among similar letters. Essentially, if the word's overall envelope (low-frequency visual outline) fits with expectations/predictions, proofreaders overlook small strokes (high-frequency visual components) that differentiate letters, such as the difference between a lowercase “e” and “c” or “c” and “o.” Healy's speculation, supported by Lupker (1979), suggests that peripheral vision may suffice for recognizing the letter envelope, while foveal vision is necessary for identifying smaller letter features. This indicates that non-signers performing skilled visual tasks, like proofreading, rely on peripheral vision in a manner similar to proficient signers (although, of course, the primary difference between signing and print is that of dynamicity). 
Biological motion perception in humans in general and signers in particular
The study suggests a mechanism by which, in the course of learning a sign language, learners might also alter their perception of biological motion. Biological motion is a critical component of understanding human movement and social interaction. As individuals up-regulate their temporal visual resolution for sign language learning, they may also develop enhanced sensitivity to rapid changes in other, noncommunicative biological motion. Experienced signers exhibit superior abilities in recognizing and predicting human actions, given their extensive practice in discerning subtle differences in movement patterns.
Multiple behavioral studies have demonstrated that experienced signers have remarkable abilities in recognizing and predicting human actions (Klima et al., 1999). For example, Deaf signers can both parse and synthesize complex dynamic movements, even when these are presented as abstract point-light displays devoid of contextual cues (Klima et al., 1999; Leannah et al., 2022). This suggests that experience with sign language, which involves decoding subtle rapid changes in motion at multiple scales (from sign to sentence dynamics), leads to a broader enhancement in processing human motion, possibly also at a variety of scales (Blumenthal-Dramé & Malaia, 2019). Neurophysiological evidence further supports the notion of enhanced biological motion processing in signers; for example, Quandt et al. (2021) found that deaf native signers demonstrated earlier and more consistent differentiation between biological motion and scrambled motion displays, reflected in faster engagement of sensorimotor regions.
The convergence of behavioral and neural evidence suggests that the extensive visual and spatial processing demands of sign language result in signers having a different processing strategy for both biological motion and sign language. Importantly, hearing individuals fluent in sign language show a similar (though less pronounced) motion processing advantage compared to non-signers, indicating that experience with sign language, rather than deafness, is the primary driver behind the change in neurobehavioral strategy for visual processing observed in signers (Quandt et al., 2021).
Central vs. peripheral vision in sign language
Prior research in proficient signers, both Deaf and hearing, indicates a robust effect of speeded attention for visual processing in the peripheral visual field, for both linguistic and non-linguistic stimuli (Bosworth & Dobkins, 2002b; Schotter et al., 2020; Stoll & Dye, 2019). The peripheral motion perception enhancement, which differs drastically between signers and non-signers, has been attributed to both auditory deprivation and experience with sign language (Proksch & Bavelier, 2002). Among signers (on the basis of sign language stimuli), the enhancement appears to be correlated with early exposure to ASL, and greater ASL fluency is correlated with improved performance (Leannah et al., 2022). Both early deafness and sign language experience appear to lead to a redistribution of visual attention, with increased resources allocated to peripheral vision (Proksch & Bavelier, 2002; Stoll & Dye, 2019).
In the present study, participants were significantly more accurate in identifying flicker stimuli presented in small blocks (foveal vision) than in large blocks (peripheral vision). This finding corresponds to the typical pattern in early stages of ASL acquisition, where learners are primarily focused on decoding small, detailed handshapes and movements central to the signing space, and reflects learners' adaptation to the demands of the visual statistics of sign language (Bosworth et al., 2006). The fact that we did not see enhancements in peripheral processing among learners in the first three years of sign language exposure might indicate that the shift of attentional processing to the visual periphery is more likely to happen during the critical period for language acquisition. Alternatively, the shift toward peripheral visual attention might require higher proficiency, or take longer to establish than the span of ASL experience sampled in this study.
Predictive processing framework in sign languages
The neural mechanisms underlying the comprehension of sign language involve cortical tracking of the visual signal, much like the way neural activity tracks acoustic envelopes in spoken language comprehension (Ford, Borneman, Krebs, Malaia, & Ames, 2021). This cortical tracking relies on lower frequencies of the electroencephalographic response (0.2-4 Hz), which align with predictive processing based on prior language experience (Blumenthal-Dramé & Malaia, 2019). The difference between tracking biological motion and sign language is that sign language is characterized by higher entropy compared to non-linguistic human motion (Borneman et al., 2018; Malaia et al., 2016); this means that, in real time, sign language signals are less predictable (there are many more potential states that a specific motion may be a part of). This difference in information content may explain why real-time sign language comprehension is driven by predictive processing (Malaia et al., 2023; Radošević et al., 2022).
In the present experiment, the finding that the temporal resolution of attention in sign language learners appears to decrease after approximately the third semester of studying ASL points to the initial stages of the ability to predict the incoming signal based on sign language experience. This may suggest a shift from bottom-up to top-down processing as learners become more proficient. While visual strategies in the initial stages of ASL learning are similar to those in biological motion processing and rely heavily on high-resolution sampling of visual stimuli (bottom-up processing), learners begin to employ top-down predictive mechanisms as proficiency increases, integrating learned linguistic patterns across multiple scales (Blumenthal-Dramé & Malaia, 2019).
Limitations and future work
The primary limitation of this study is that it does not directly link the changes in the temporal resolution of non-linguistic perception (flicker) to the expected changes in the temporal resolution of sign language perception. Although our findings suggest that learners likely adapt both the perception and prediction time windows, we did not test both in the same participants. Additionally, there is potential for self-selection bias among participants in ASL courses. It is possible that students who choose to enroll in university ASL courses possess inherent advantages in temporal visual processing; alternatively, some may be drawn to ASL for reasons beyond inherent ability (e.g., learning differences such as auditory processing disorder or dyslexia, which can make spoken language learning more challenging or deter students from choosing languages with a written component to satisfy a foreign language requirement). This self-selection could influence their performance on tasks measuring the visual temporal resolution of attention, as compared to the rest of the population (Tyler, 1989a). We have therefore described our findings as correlations rather than causal effects.
The finding that the perceptual resolution of visual processing increases and then decreases in the course of sign language learning points to several intriguing avenues for further research. First, multiple studies indicate that dynamic stimuli facilitate recognition and comprehension in signers (Bavelier et al., 2006; Klima et al., 1999; Neville & Lawson, 1987b; Schotter et al., 2020). This might mean that sign language experience allows the individual to use motion processing strategies that do not rely solely on temporal resolution but rather trace the envelope of the motion signal (as manifested, for example, by motion derivatives: acceleration, jerk, etc.). In combination with studies of predictive processing for dynamic signs, this suggests that rhythmic characteristics of movement are important to sign language fluency in both production and perception (Gurbuz et al., 2020). A better understanding of spatiotemporal strategies in sign language processing at both the lexical and suprasegmental (i.e., prosodic) levels might yield further insights into the perceptual learning that accompanies sign language learning and acquisition by children.
Conclusions
The present study provides evidence that ASL proficiency significantly affects the temporal resolution of visual attention. The observed inverted U-shaped curve in accuracy, with the highest performance at intermediate proficiency levels, suggests a tradeoff relationship between sign language experience and perceptual adaptation. These findings align with the predictive processing framework (Friston, 2018; Malaia et al., 2023), indicating that learners initially rely heavily on visual information but, as proficiency increases, develop more efficient predictive mechanisms to handle high-temporal resolution stimuli. 
The implications of this study extend to understanding how learning a high-entropy visual language like ASL shapes cognitive processes. The enhanced ability of proficient signers to process rapid visual stimuli supports the idea that extensive experience with dynamic visual input leads to specialized perceptual adaptations. This adaptation likely involves both improved temporal resolution and predictive processing capabilities, allowing signers to efficiently navigate the inherent uncertainty in visual communication. 
Acknowledgments
Supported by grant 1734938 from the U.S. National Science Foundation to Ronnie B. Wilbur and Evie Malaia, grant 1932547 from the U.S. National Science Foundation to Evie Malaia, and grant R01#108306 from the National Institutes of Health to Ronnie B. Wilbur. 
Commercial relationships: none. 
Corresponding author: Evie A. Malaia. 
Address: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL 35404, USA. 
References
Bavelier, D., Dye, M. W., & Hauser, P. C. (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518.
Bélanger, N. N., Lee, M., & Schotter, E. R. (2018). Young skilled deaf readers have an enhanced perceptual span in reading. Quarterly Journal of Experimental Psychology, 71(1), 291–301, https://doi.org/10.1080/17470218.2017.1324498.
Bélanger, N. N., Slattery, T. J., Mayberry, R. I., & Rayner, K. (2012). Skilled deaf readers have an enhanced perceptual span in reading. Psychological Science, 23(7), 816–823, https://doi.org/10.1177/0956797611435130.
Blumenthal-Dramé, A., & Malaia, E. (2019). Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdisciplinary Reviews: Cognitive Science, 10(2), e1484.
Borneman, J. D., Malaia, E. A., & Wilbur, R. B. (2018). Motion characterization using optical flow and fractal complexity. Journal of Electronic Imaging, 27(5), 1, https://doi.org/10.1117/1.JEI.27.5.051229.
Bosworth, R. G., Bartlett, M. S., & Dobkins, K. R. (2006). Image statistics of American Sign Language: Comparison with faces and natural scenes. Journal of the Optical Society of America A, 23(9), 2085–2096.
Bosworth, R. G., & Dobkins, K. R. (2002a). The effects of spatial attention on motion processing in deaf signers, hearing signers, and hearing nonsigners. Brain and Cognition, 49(1), 152–169, https://doi.org/10.1006/brcg.2001.1497.
Bosworth, R. G., & Dobkins, K. R. (2002b). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49(1), 170–181, https://doi.org/10.1006/brcg.2001.1498.
Bosworth, R. G., Wright, C. E., & Dobkins, K. R. (2019). Analysis of the visual spatiotemporal properties of American Sign Language. Vision Research, 164, 34–43, https://doi.org/10.1016/j.visres.2019.08.008.
Bottari, D., Caclin, A., Giard, M.-H., & Pavani, F. (2011). Changes in early cortical visual processing predict enhanced reactivity in deaf individuals. PLoS ONE, 6(9), e25607, https://doi.org/10.1371/journal.pone.0025607.
Dye, M. W. G., & Bavelier, D. (2013). Visual attention in deaf humans: A neuroplasticity perspective. In Kral, A., Popper, A. N., & Fay, R. R. (Eds.), Deafness (Vol. 47, pp. 237–263). New York: Springer, https://doi.org/10.1007/2506_2013_9.
Ehri, L. C. (2022). What teachers need to know and do to teach letter–sounds, phonemic awareness, word reading, and phonics. The Reading Teacher, 76(1), 53–61, https://doi.org/10.1002/trtr.2095.
Farzin, F., Rivera, S. M., & Whitney, D. (2011). Time crawls: The temporal resolution of infants' visual attention. Psychological Science, 22(8), 1004–1010.
Ford, L. K. W., Borneman, J., Krebs, J., Malaia, E., & Ames, B. (2021). Classification of visual comprehension based on EEG data using sparse optimal scoring. Journal of Neural Engineering, 18(2), 026025, https://doi.org/10.1088/1741-2552/abdb3b.
Friston, K. (2018). Does predictive coding have a future? Nature Neuroscience, 21(8), 1019–1021.
Gurbuz, S. Z., Gurbuz, A. C., Malaia, E. A., Griffin, D. J., Crawford, C., Kurtoglu, E., … Mdrafi, R. (2020). ASL recognition based on kinematics derived from a multi-frequency RF sensor network. 2020 IEEE Sensors, 1–4.
Healy, A. F. (1981). The effects of visual similarity on proofreading for misspellings. Memory & Cognition, 9(5), 453–460, https://doi.org/10.3758/BF03202339.
JASP Team. (2024). JASP (Version 0.19.0) [Computer software]. https://jasp-stats.org/faq/.
Klima, E. S., Tzeng, O. J., Fok, Y. Y. A., Bellugi, U., Corina, D., & Bettger, J. G. (1999). From sign to script: Effects of linguistic experience on perceptual categorization. Journal of Chinese Linguistics Monograph Series, 96–129.
Krebs, J., Malaia, E., Wilbur, R. B., & Roehm, D. (2022). EEG analysis based on dynamic visual stimuli: Best practices in analysis of sign language data. Hrvatska Revija Za Rehabilitacijska Istraživanja, 58(Special Issue), 245–266.
Leannah, C., Willis, A. S., & Quandt, L. C. (2022). Perceiving fingerspelling via point-light displays: The stimulus and the perceiver both matter. PLoS ONE, 17(8), e0272838, https://doi.org/10.1371/journal.pone.0272838.
Lupker, S. J. (1979). On the nature of perceptual information during letter perception. Perception & Psychophysics, 25(4), 303–312, https://doi.org/10.3758/BF03198809.
Malaia, E. A., Borneman, J. D., Kurtoglu, E., Gurbuz, S. Z., Griffin, D., Crawford, C., ... Gurbuz, A. C. (2022). Complexity in sign languages. Linguistics Vanguard, 9(s1), 121–131.
Malaia, E. A., Borneman, S. C., Borneman, J. D., Krebs, J., & Wilbur, R. B. (2023). Prediction underlying comprehension of human motion: An analysis of Deaf signer and non-signer EEG in response to visual stimuli. Frontiers in Neuroscience, 17, 1218510, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10602904/.
Malaia, E. A., Borneman, S. C., Krebs, J., & Wilbur, R. B. (2021). Low-frequency entrainment to visual motion underlies sign language comprehension. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 2456–2463.
Malaia, E. A., Krebs, J., Roehm, D., & Wilbur, R. B. (2020). Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain and Language, 200, 104708.
Malaia, E., Borneman, J. D., & Wilbur, R. B. (2016). Assessment of information content in visual signal: Analysis of optical flow fractal complexity. Visual Cognition, 24(3), 246–251.
Malaia, E., & Wilbur, R. B. (2012). Kinematic signatures of telic and atelic events in ASL predicates. Language and Speech, 55(3), 407–421.
Malaia, E., Wilbur, R. B., & Milkovic, M. (2013). Kinematic parameters of signed verbs. Journal of Speech, Language, and Hearing Research, 56(5), 1677–1688, https://doi.org/10.1044/1092-4388(2013/12-0257).
Mitchell, R. E., & Young, T. A. (2022). How many people use sign language? A national health survey-based estimate. Journal of Deaf Studies and Deaf Education, 28(1), 1–6, https://doi.org/10.1093/deafed/enac031.
Nagler, T., Korinth, S. P., Linkersdörfer, J., Lonnemann, J., Rump, B., Hasselhorn, M., ... Lindberg, S. (2015). Text-fading based training leads to transfer effects on children's sentence reading fluency. Frontiers in Psychology, 6, 119, https://doi.org/10.3389/fpsyg.2015.00119.
Neville, H. J., & Lawson, D. (1987a). Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. I. Normal hearing adults. Brain Research, 405(2), 253–267.
Neville, H. J., & Lawson, D. (1987b). Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. II. Congenitally deaf adults. Brain Research, 405(2), 268–283.
Neville, H. J., & Lawson, D. (1987c). Attention to central and peripheral visual space in a movement detection task. III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Research, 405(2), 284–294.
O'Leary, R., & Ehri, L. C. (2020). Orthography facilitates memory for proper names in emergent readers. Reading Research Quarterly, 55(1), 75–93, https://doi.org/10.1002/rrq.255.
Proksch, J., & Bavelier, D. (2002). Changes in the spatial distribution of visual attention after early deafness. Journal of Cognitive Neuroscience, 14(5), 687–701.
Quandt, L. C., Kubicek, E., Willis, A., & Lamberton, J. (2021). Enhanced biological motion perception in deaf native signers. Neuropsychologia, 161, 107996, https://doi.org/10.1016/j.neuropsychologia.2021.107996.
Radošević, T., Malaia, E. A., & Milković, M. (2022). Predictive processing in sign languages: A systematic review. Frontiers in Psychology, 13, 1334.
Schotter, E. R., Johnson, E., & Lieberman, A. M. (2020). The sign superiority effect: Lexical status facilitates peripheral handshape identification for deaf signers. Journal of Experimental Psychology: Human Perception and Performance, 46(11), 1397–1410, https://doi.org/10.1037/xhp0000862.
Siple, P., Hatfield, N., & Caccamise, F. (1978). The role of visual perceptual abilities in the acquisition and comprehension of sign language. American Annals of the Deaf, 852–856.
Stein, J. (2001). The sensory basis of reading problems. Developmental Neuropsychology, 20(2), 509–534, https://doi.org/10.1207/S15326942DN2002_4.
Stockman, A., Williams, M., & Smithson, H. (2004). Flicker-clicker: Cross modality matching experiments. Journal of Vision, 4(11), 86.
Stoll, C., & Dye, M. W. G. (2019). Sign language experience redistributes attentional resources to the inferior visual field. Cognition, 191, 103957, https://doi.org/10.1016/j.cognition.2019.04.026.
Swisher, M. V., Christie, K., & Miller, S. L. (1989). The reception of signs in peripheral vision by deaf persons. Sign Language Studies, 63(1), 99–125.
Tyler, C. W. (1989a). The full range of human temporal resolution. Human Vision, Visual Processing, and Digital Display, 1077, 93–107, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/1077/0000/The-Full-Range-of-Human-Temporal-Resolution/10.1117/12.952707.short.
Tyler, C. W. (1989b). Two processes control variations in flicker sensitivity over the life span. Journal of the Optical Society of America A, 6(4), 481–490, https://doi.org/10.1364/JOSAA.6.000481.
Table 1A.
 
10 Hz model summary.
Table 1B.
 
10 Hz fit statistic.
Table 2A.
 
20 Hz model summary.
Table 2B.
 
20 Hz fit statistic.
Table 3A.
 
30 Hz model summary.
Table 3B.
 
30 Hz fit statistic.
Table 4A.
 
60 Hz model summary.
Table 4B.
 
60 Hz fit statistic.