Open Access
Article | March 2017
Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds
Author Affiliations
  • Vivian M. Ciaramitaro
    Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA
    vivian.ciaramitaro@umb.edu
  • Hiu Mei Chow
    Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA
    dorischm@gmail.com
  • Luke G. Eglington
    Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA
    Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
    Luke.G.Eglington.GR@dartmouth.edu
Journal of Vision March 2017, Vol.17, 20. doi:10.1167/17.3.20
Abstract

We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation—two consecutive intervals of streams of visual letters—and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower—that is, auditory sensitivity was improved—for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

Introduction
Our sensory epithelium is bombarded by a plethora of information from our different senses, each specialized to subserve the processing of unique features of information. Through mechanisms of attention we select and prioritize a subset of this overwhelming amount of information for more detailed processing. Studies of attention in the domain of vision have yielded interesting insights by borrowing from methodologies and concepts used to understand basic perceptual mechanisms, such as the representation of visual contrast. In general, contrast control is an important mechanism by which our sensory systems optimize input to maintain maximal sensitivity in changing contexts, such as moving from indoor to outdoor lighting or from an enclosed noisy room to an open quiet area. It is via mechanisms of contrast control that our sensory systems maintain exquisite sensitivity in processing information across a broad range of conditions. 
The study of how contrast is encoded and how attention may usurp basic mechanisms of contrast control to alter visual processing has been examined in psychophysical studies of behavior (e.g., Cameron, Tai, & Carrasco, 2002; Carrasco, 2006; Carrasco, Eckstein, Verghese, Boynton, & Treue, 2009; Carrasco, Ling, & Read, 2004; Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010; Pestilli & Carrasco, 2005); in neurophysiological studies at the level of single-unit physiology (e.g., Martínez-Trujillo & Treue, 2002; Reynolds, Pasternak, & Desimone, 2000; Williford & Maunsell, 2006); at the level of populations of neurons (e.g., Buracas & Boynton, 2007; Di Russo, Spinelli, & Morrone, 2001; Morrone, Denti, & Spinelli, 2002; Pestilli, Carrasco, Heeger, & Gardner, 2011); and in computational models (for a model reconciling differences across studies, see Reynolds & Heeger, 2009). In audition, although studies have investigated general mechanisms of auditory contrast control (e.g., Rabinowitz, Willmore, Schnupp, & King, 2011; Shechter & Depireux, 2012; Willmore, Cooke, & King, 2014), little is known regarding the role of attention in modulating auditory contrast. 
Whereas visual contrast can be conceptualized as the difference in luminance across the spatial extent of an image that makes it distinguishable from the background image, auditory contrast can be conceptualized as the difference in spectro-temporal properties of a sound that make it distinguishable from the background noise. Two important spectro-temporal properties are amplitude modulation (AM) and frequency modulation (FM). Previous studies have provided inconclusive evidence as to whether or not AM and FM sounds share common neuronal substrates and underlying mechanisms. While some studies suggest that AM and FM sounds may rely on distinct auditory-processing pathways (behavioral studies: e.g., Kay, 1982; Regan & Tansley, 1979; neuroimaging studies: e.g., Lu, Liang, & Wang, 2001; Mäkelä, Hari, & Linnankivi, 1987), other studies suggest the opposite (behavioral studies: e.g., Wakefield & Viemeister, 1984; Zwicker, 1962; neuroimaging studies: e.g., Hart, Palmer, & Hall, 2003; Luo, Wang, Poeppel, & Simon, 2006; for a review, see Altmann & Gaese, 2014). Thus, it is unclear if the influence of attention on auditory contrast will be similar irrespective of whether auditory stimuli are modulated by amplitude or frequency. 
Our goal was to investigate if attention influenced the processing of auditory contrast (given its known influence on the processing of visual contrast), toward determining whether basic mechanisms of attention may act, irrespective of sensory modality, on the processing of select stimulus dimensions—namely, contrast. We used visual load to control the allocation of attention, and we compared and contrasted the effect of such cross-modal attention on auditory detection thresholds for AM and FM sounds while holding other parameters and task procedures constant. Previous studies have shown that manipulating visual load can alter auditory detection, yielding inattentional deafness, where high visual load can worsen the detection of an auditory tone relative to low visual load (Macdonald & Lavie, 2011; Raveh & Lavie, 2015). In our paradigm, we hypothesize that a more demanding, high-load visual task should increase auditory contrast thresholds (worsen auditory contrast sensitivity) compared to a less demanding, low-load visual task, which should decrease auditory contrast thresholds (improve auditory contrast sensitivity). Given inconsistencies from previous findings, as described earlier, we had no clear predictions as to whether manipulating visual load to redirect cross-modal attention would have similar effects on auditory contrast irrespective of whether sounds were modulated by AM or FM. 
As predicted, we found that auditory contrast detection thresholds were increased under high relative to low visual load, suggesting that cross-modal attention may affect auditory contrast in ways that mirror the effects of visual attention on visual contrast. Interestingly, we found that cross-modal attention had similar effects on AM (Experiment 1) and FM (Experiment 2) auditory contrast. 
Experiment 1: AM sound
Materials and methods
Participants
Seven adults participated in Experiment 1 (five women, two men; mean age = 24.6 years; SEM = 1.9). Participants were undergraduate or graduate students recruited from the University of Massachusetts Boston community. Experiments were reviewed and approved by the university's institutional review board, and written informed consent was obtained from all participants. No participant had a known psychiatric or neurological history; all had normal or corrected-to-normal vision and reported no hearing problems. One additional participant was excluded because of extensive musical training (over 10 years), since such experience can alter brain plasticity and general auditory processing (e.g., Herholz & Zatorre, 2012) as well as the effects of visual-load manipulations (Ro, Friggel, & Lavie, 2009). 
Apparatus and stimuli
Visual and auditory stimuli were generated with a Mac PowerBook G4 (OS 9.2.2), using the Psychophysics Toolbox (Brainard, 1997; Kleiner, Brainard, & Pelli, 2007; Pelli, 1997), Video Toolbox, and the MATLAB (version 5.2) programming language. Visual stimuli were displayed on a 21-in. CRT monitor placed 57 cm from the observer, who was positioned on a chin and forehead rest to maintain stable head position. All auditory stimuli were displayed binaurally via high-fidelity stereo headphones (Jabra C820s; Jabra, Ballerup, Denmark). 
Visual stimuli were letters subtending 1.43° of visual angle, presented 0.56° above central fixation. A total of 10 letters out of a possible 26 were presented in a rapid serial visual presentation on a given trial, five letters in each of two streams, for a total duration of 500 ms in each interval, giving a presentation rate of 10 Hz. 
Auditory stimuli were presented concurrently with visual stimuli during each 500-ms interval. An auditory standard was presented in one interval and an auditory test stimulus in the other interval, with the interval containing the test stimulus selected at random. The auditory standard (Audstandard) was white noise (sampling rate = 44.1 kHz, duration = 500 ms). The auditory test stimulus (Audtest) was white noise sinusoidally amplitude-modulated at a modulation frequency (MF) randomly selected from one of the following values: 0.5, 1, 5, and 10 Hz. These parameters were informed by the work of Zwicker (1962). The modulation index (MI)—the depth of amplitude modulation—was randomly selected, following the method of constant stimuli, from one of five values based on pilot work: 0.075, 0.1313, 0.2297, 0.402, 0.7034. These are increments of 1.75 × MI, starting from the lowest MI value. The resultant AM sound (Audtest; see Equations 1 and 2) was perceived as a change in sound loudness at one of the four possible frequencies.     
Of note, all auditory-stimulus parameters in our paradigm (MF and MI for AM sounds) were selected at random, not blocked; trials were blocked only by visual-task demands. 
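The AM stimulus construction just described can be sketched as follows. This is a minimal illustration, not the authors' code: the function and parameter names are ours, and the formula assumes the standard sinusoidal-AM form, since Equations 1 and 2 are not reproduced in this excerpt.

```python
import numpy as np

def am_white_noise(mf, mi, duration=0.5, fs=44100, seed=0):
    """Sinusoidally amplitude-modulated white noise (assumed standard
    AM form: noise(t) * (1 + mi * sin(2*pi*mf*t)))."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(round(duration * fs))) / fs
    carrier = rng.standard_normal(t.size)               # white-noise carrier
    envelope = 1.0 + mi * np.sin(2.0 * np.pi * mf * t)  # loudness change at mf
    return carrier * envelope

# The five modulation indices form a geometric series (each 1.75x the last):
mi_levels = [0.075 * 1.75 ** k for k in range(5)]
# approximately 0.075, 0.1313, 0.2297, 0.4020, 0.7034
```

Recomputing the increments confirms the reported values: each MI is 1.75 times the previous one, starting from 0.075.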
Psychophysical methods
Visual and auditory stimuli were presented concurrently for each of two 500-ms intervals in a two-interval forced-choice dual task. For the auditory task, observers had to judge which interval contained the target sound, the modulated test stimulus already described. For the visual task, there were two possible tasks: a color task and a number task. In the color task, one interval contained black letters and the other white letters, with the interval containing white letters counterbalanced across trials; observers had to judge which interval contained the white letters. In the number task, the number of As presented in each interval varied between one and three, with the interval containing more As counterbalanced across trials; observers had to judge which interval contained more of the target letter A. 
Procedure
Participants were first trained on each task separately, completing 50 trials for each of the following three tasks: the auditory task (judging which interval had the AM sound), the visual color task (judging which interval had the white letters), and the visual number task (judging which interval had more As). Then all participants practiced the dual task, where visual and auditory stimuli were presented concurrently and observers had to indicate the interval containing the visual and the auditory target at the end of each trial. Participants completed 150 trials of the dual task with the easier color task and 150 trials with the more difficult number task. 
After training, participants completed a total of ∼3,000 psychophysical trials, over 10 blocks of 150 trials each, for each dual task, with order of dual-task presentation counterbalanced across blocks. Data were collected over several days, with a given session lasting no more than 1.5 hr. Observers typically completed four blocks per session. To reduce fatigue, participants were provided with a forced break every 50 trials and had to press a key to resume the experiment when ready. 
At the beginning of each trial, a fixation cross was displayed for 200 ms. Visual and auditory stimuli were presented in a first interval for 500 ms, followed by a 200-ms delay, and then visual and auditory stimuli were presented in a second 500-ms interval (see Figure 1). At the end of the second interval, participants had 1000 ms to judge the visual stimulus and received feedback on their judgment (correct: central fixation turned green; incorrect: central fixation turned red; too fast or slow: central fixation turned blue). They then had another 1000 ms to judge the auditory stimulus, for which they received the same feedback as for the visual task. 
Figure 1
 
Stimuli and psychophysical procedure. Trials began with a fixation stimulus, followed by the simultaneous presentation of a visual stream of letters (rapid serial visual presentation), just above central fixation, and an auditory stimulus for 500 ms (first interval). Following a 200-ms delay, another visual stream of letters and another auditory stimulus were presented for 500 ms (second interval). At the end of each trial, participants had a limited time (1000 ms) to report via button press which interval contained the visual target—white letters in the visual color task (low-load condition) or more of the letter A in the visual number task (high-load condition)—and then to report which interval contained the auditory target, which was an amplitude-modulated sound (Experiment 1) or a frequency-modulated sound (Experiment 2). Feedback was provided after each visual and each auditory response. Participants completed blocks of trials for a given condition (high or low load) across several days, in either Experiment 1 or Experiment 2.
Analysis
Data were analyzed using MATLAB (R2012a) with the Psignifit 4.0 toolbox (Schütt, Harmeling, Macke, & Wichmann, 2016), SPSS (version 17.0), and R (2012 release) with the lme4 package (Bates, Mächler, Bolker, & Walker, 2014). Trials on which observers failed to respond within the allotted time, to either visual or auditory stimuli, were discarded from subsequent analysis. 
For each observer we calculated percent correct for the visual and auditory tasks, for each visual task (color and number), each auditory contrast level, and each frequency. To determine auditory contrast threshold, estimated slope (the rate of change of performance as a function of stimulus intensity) at threshold, and lapse rate (the rate at which participants responded incorrectly regardless of stimulus intensity), we used Psignifit to fit a Weibull function to describe the relationship between percent correct performance on the auditory task and AM or FM modulation depth. Auditory contrast thresholds were defined as the AM/FM MI, or contrast, supporting 75% correct performance (default option of Psignifit). Freeing lapse rates and allowing them to be estimated from the function gives rise to better fits of psychometric functions (e.g., Wichmann & Hill, 2001). All auditory threshold data were log10 transformed to obtain data well described by a normal distribution, enabling analysis using parametric statistics.1 All analyses presented later were performed on log10-transformed auditory threshold data. 
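The fitting step can be illustrated with a simple least-squares stand-in. Psignifit performs Bayesian inference, so this sketch only mirrors the parameterization: a Weibull with a 0.5 guess rate (two-interval forced choice), a free lapse rate, and the threshold defined as the contrast supporting 75% correct. Function and parameter names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta, lam):
    """Weibull psychometric function for a two-interval forced choice:
    0.5 guess rate, scale alpha, slope beta, lapse rate lam."""
    return 0.5 + (0.5 - lam) * (1.0 - np.exp(-(np.asarray(x) / alpha) ** beta))

def threshold_at(alpha, beta, lam, pc=0.75):
    """Invert the function above: contrast supporting `pc` correct."""
    return alpha * (-np.log(1.0 - (pc - 0.5) / (0.5 - lam))) ** (1.0 / beta)

def fit_weibull(contrasts, pcorrect):
    """Least-squares fit with a free lapse rate (cf. Wichmann & Hill, 2001)."""
    popt, _ = curve_fit(
        weibull_2afc, contrasts, pcorrect,
        p0=[0.2, 2.0, 0.02],
        bounds=([1e-3, 0.1, 0.0], [1.0, 10.0, 0.2]),
    )
    return popt  # alpha, beta, lam
```

Thresholds returned by `threshold_at` would then be log10 transformed before the parametric analyses, as described above.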
For our repeated-measures ANOVA, which consisted of factors with more than two levels, we tested our data for violations of the assumption of sphericity using Mauchly's test, which revealed no significant effects: The assumption of sphericity was not violated and no correction of statistical results was needed. 
We also quantified how visual and auditory performance changed over time on task and whether this relationship was influenced by properties of the visual task or auditory frequency. To examine such learning effects, we segmented data for a given visual task into blocks of 500 trials, three blocks for each visual task, and recomputed auditory threshold and mean visual percent correct for each block, each visual task (color and number), and each frequency. To understand how performance changed as a function of blocks, we performed a linear mixed-effects analysis with auditory threshold as the dependent variable. We increased the number of predictors (e.g., block, task) one at a time, looking for the simplest model (i.e., fewest predictors) to account for the most variance in the data. Evaluation of models was based on data likelihood—how likely it was that we would observe the data, given the model. Common ways to report data likelihood are log likelihood (no adjustment for the number of predictors), Akaike information criterion (AIC, adjustment for the number of predictors), and Bayesian information criterion (BIC, adjustment for the number of predictors and number of observations). While in our study log likelihood was used to test if there was significant change in data likelihood across models through a chi-square test (thus whether a predictor is significant in improving model fit), AIC and BIC are reported as well for reference. 
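The model-comparison machinery described above is standard and can be made concrete in a few lines: the chi-square statistic for nested models is twice the gain in log likelihood, while AIC and BIC add penalties for the number of parameters k and (for BIC) the number of observations n. In the usage check below, k = 3 for the empty model (fixed intercept, random-intercept variance, residual variance) and n = 126 (7 subjects × 2 tasks × 3 blocks × 3 frequencies) are our inferences, not values stated in the text.

```python
import math
from scipy.stats import chi2

def lr_test(ll_reduced, ll_full, df_diff):
    """Likelihood-ratio test between nested models: the chi-square
    statistic is twice the gain in log likelihood."""
    stat = 2.0 * (ll_full - ll_reduced)
    return stat, chi2.sf(stat, df_diff)

def aic(ll, k):
    """Akaike information criterion (lower is better)."""
    return 2.0 * k - 2.0 * ll

def bic(ll, k, n):
    """Bayesian information criterion; also penalizes sample size n."""
    return k * math.log(n) - 2.0 * ll
```

Plugging in the log likelihoods reported below for the empty model (51.93) and the model with blocks added (61.98) recovers χ²(1) ≈ 20.1, and, under the assumptions above, aic(51.93, 3) ≈ −97.9 and bic(51.93, 3, 126) ≈ −89.4 match the reported values.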
Results
All observers were well practiced on the task before data collection. For the data presented in the following, participants missed an average of 7.8% of trials because they failed to respond to either a visual or an auditory stimulus in the allotted time. Given that such responses are ambiguous, we did not feel we could classify such trials as incorrect, and we thus excluded these trials from subsequent analysis. 
Figure 2 plots percent correct performance on the auditory task as a function of auditory contrast for AM sound, at each frequency. Percent correct performance was fitted with a Weibull function to determine auditory contrast AM threshold, slope at threshold, and lapse rate for each observer and each condition. Auditory thresholds for each participant were computed for each of three frequencies (1, 5, and 10 Hz2) and for each visual task (dashed line: low-load visual task; solid line: high-load visual task). A sample observer's psychometric functions, from which we derived threshold measures, are plotted in Figure 2A, while the summary of auditory thresholds for each visual task is plotted in Figure 2B, showing the mean ± standard error of the mean across observers, plus each individual's data. 
Figure 2
 
Auditory performance in amplitude-modulation (AM) experiment. (A) Data from a sample observer plotting percent correct detection of the AM sound as a function of AM index, our measure of auditory contrast, for white-noise sounds presented at 1, 5, and 10 Hz. Thresholds were computed as the auditory AM contrast supporting 75% correct performance. (B) Individual and average auditory AM thresholds across observers (± standard error of the mean across observers) for each visual task and auditory frequency. Overall, auditory AM thresholds increased when the concurrent visual task was the number versus the color task. This was significant in six of seven observers and across observers for each modulating frequency (n = 7 observers; total number of trials = 21,010; mean number of trials per task per modulating frequency = 2,625). (C) Individual and average auditory AM slopes across observers (± standard error of the mean across observers) for each visual task and auditory frequency. (D) Individual and average auditory AM lapse rates across observers (± standard error of the mean across observers) for each visual task and frequency.
We performed a 2 (task: color, number) × 3 (frequency: 1, 5, 10 Hz) repeated-measures ANOVA to compare auditory contrast AM thresholds across conditions. The auditory threshold for the low-load (visual color) task (M = −0.480) was significantly smaller than for the high-load (visual number) task (M = −0.374), F(1, 6) = 17.845, p = 0.006, ηp² = 0.748. This suggests that changing visual-task demands influences auditory contrast threshold created via AM; as visual-task demands increase, auditory AM thresholds worsen. There was a significant main effect of frequency, F(2, 12) = 21.154, p < 0.001, ηp² = 0.779. Auditory thresholds at 1 Hz (M = −0.344) were significantly larger than at 5 Hz (M = −0.476, Bonferroni-adjusted p = 0.001) and at 10 Hz (M = −0.461, Bonferroni-adjusted p = 0.023). There were no significant differences in thresholds between 5 and 10 Hz (Bonferroni-adjusted p = 1.000). Furthermore, we found no significant interaction effect between task and MF, F(2, 12) = 1.511, p = 0.260, ηp² = 0.201.  
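The Bonferroni adjustment used for these pairwise comparisons multiplies each raw p value by the number of comparisons and caps the result at 1, which is why a non-significant comparison can report an adjusted p of exactly 1.000. A minimal sketch:

```python
def bonferroni(pvals):
    """Bonferroni adjustment: scale each p value by the number of
    comparisons, capping the result at 1.0."""
    m = len(pvals)
    return [min(p * m, 1.0) for p in pvals]
```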
A similar ANOVA was performed to compare slope measures at threshold and lapse rates between tasks and frequencies. We found no effect of task on slope measures, F(1, 6) = 0.546, p = 0.488, ηp² = 0.083, or on lapse rates, F(1, 6) = 0.281, p = 0.615, ηp² = 0.045. There was no effect of frequency on lapse rates, F(2, 12) = 2.420, p = 0.131, ηp² = 0.287, but there was a significant main effect of frequency on slope measures, F(2, 12) = 4.958, p = 0.027, ηp² = 0.452; the pairwise comparisons, however, did not survive Bonferroni correction, ps > 0.066. There was no interaction between task and frequency for slopes, F(2, 12) = 0.742, p = 0.497, ηp² = 0.110, or lapse rates, F(2, 12) = 2.305, p = 0.142, ηp² = 0.278. This suggests that the effect of visual-task demands on AM contrast detection is driven mainly by threshold shifts, not by changes in slope or lapse rate.  
Figure 3 plots percent correct performance on the visual task as a function of auditory contrast for an AM sound, at each of three frequencies (1, 5, and 10 Hz). Performance on the visual task is shown for the same sample observer in Figure 3A, plotting percent correct performance across auditory contrast for each visual task and each frequency. A summary of individual observers' data and mean visual percent correct performance, collapsed across auditory contrast, is plotted for each visual task in Figure 3B (± standard error of the mean across observers). 
Figure 3
 
Visual performance in the amplitude-modulation (AM) experiment. (A) Data from a sample observer plotting percent correct detection of the visual target as a function of the amplitude modulation for the concurrent auditory task at 1, 5, and 10 Hz. Performance on the visual task was stable across auditory contrast. (B) Average visual percent correct for each visual task (± standard error of the mean across observers), collapsed across auditory contrast, for each frequency. Overall, the visual number task showed worse performance (was more difficult) than the visual color task. This was significant in each observer and across observers, for each modulating frequency (n = 7 observers; total number of trials = 21,010; mean number of trials per task per modulating frequency = 2,625).
We performed a 2 (task: color, number) × 3 (frequency: 1, 5, 10 Hz) × 5 (auditory contrast: 1–5) repeated-measures ANOVA to compare percent correct performance on the visual task across conditions. As expected, participants performed significantly better overall on the color task (M = 96.7%) than on the number task (M = 78.4%), F(1, 6) = 36.666, p = 0.001, ηp² = 0.764. In addition, visual performance did not vary significantly as a function of auditory contrast, F(4, 24) = 0.535, p = 0.711, ηp² = 0.001, or auditory frequency, F(2, 12) = 0.276, p = 0.763, ηp² < 0.001. Thus, properties of the auditory stimuli presented for the concurrent auditory task did not alter visual performance. Furthermore, there were no significant interaction effects (ps > 0.285), except for the three-way interaction among the three factors, F(8, 48) = 2.143, p = 0.049, ηp² = 0.02.  
Given previous studies suggesting that observers can learn to split, instead of share, attentional resources across sensory modalities with practice (e.g., see Ruthruff, Johnston, Van Selst, Whitsell, & Remington, 2003), we also examined performance as a function of time on task for our dual task. We recomputed auditory threshold and mean visual accuracy for the first, second, and third blocks of 500 trials, for each visual task (color and number), and for each frequency (1, 5, and 10 Hz). The effects of time on task, or practice, were quantified by estimating the slope of the linear fit for performance across blocks. Figure 4A plots slope estimates for visual performance to depict visual practice effects, and Figure 4B plots slope estimates for auditory AM thresholds to depict auditory practice effects, for each visual task and each frequency. 
Figure 4
 
Practice effects on visual and auditory performance in the amplitude-modulation (AM) experiment. (A) Slope estimates for the visual task are plotted for each frequency and each visual task. Data were binned into first, second, and third blocks of ∼500 trials and fitted with a line to estimate slope. (B) Slope estimates for auditory AM thresholds are plotted for each frequency and each visual task, using the same binning convention as for the visual task. While there were no significant effects of practice on either visual task, there was a significant effect of improving performance (decreasing AM thresholds) as a function of time on task.
We performed a linear mixed-effects analysis to examine how auditory threshold changed across blocks of trials. We started with an empty model with subject as a random effect in intercept to predict auditory AM thresholds (AIC = −97.85; BIC = −89.35; log likelihood = 51.93). Next, we added number of blocks as a predictor of auditory threshold. The subsequent model had a substantially improved model fit (AIC = −116; BIC = −104.6; log likelihood = 61.98), χ2(1) = 20.113, p < 0.001. Then we added frequency of the auditory stimulus as a fixed effect, which also improved the model fit (AIC = −140.94; BIC = −123.92; log likelihood = 76.470), χ2(2) = 28.9723, p < 0.001. The interaction effect between frequency and number of blocks did not improve the model fit (AIC = −138.38; BIC = −115.69; log likelihood = 77.190), χ2(2) = 1.4415, p = 0.486. Finally, we added visual task as a fixed effect, which improved the model fit (AIC = −162.49; BIC = −142.64; log likelihood = 88.245), χ2(1) = 23.552, p < 0.001. The interaction effect between task and number of blocks was significant (AIC = −164.85; BIC = −142.16; log likelihood = 90.427), χ2(1) = 4.3631, p = 0.03673. 
In summary, we found the following results for practice effects in the final model: (i) a significant fixed effect of number of blocks (β̂ = −0.046, SE = 0.0168, t = −2.713), indicating that auditory AM threshold decreased with the number of blocks, or time on task, reflecting a practice effect; (ii) a significant fixed effect of frequency (β̂MF=5Hz = −0.132, SE = 0.024, t = −5.538; β̂MF=10Hz = −0.134, SE = 0.024, t = −5.616), indicating that auditory AM thresholds decreased as frequency increased from 1 to 5 or 10 Hz, consistent with our previous analysis; (iii) a significant fixed effect of visual task (β̂ = 0.201, SE = 0.051, t = 3.916), indicating that auditory AM threshold increased for the concurrent high-load versus low-load task and confirming the effect of changing visual-task demand on auditory AM threshold; and (iv) interestingly, an interaction between task and number of blocks (β̂ = −0.050, SE = 0.024, t = −2.108), suggesting that auditory AM thresholds across blocks of trials improved more quickly when participants performed the number task compared to the color task. 
We examined two other possible influences on performance. First, memory limitations could have affected overall performance. We hypothesized that if memory limitations influenced performance, then performance would be more negatively affected when visual and auditory targets for the dual task occurred in the same interval compared to different intervals. Second, stimulus presentation rate could have affected overall performance. We hypothesized that if presentation rate influenced performance, then performance would be more negatively affected when visual and auditory targets for the dual task were presented at the same rate in a given interval. In our study, when the AM white noise was presented at 10 Hz, the presentation rate for the auditory stimulus in a given interval was the same as the presentation rate of the letters in the rapid serial visual presentation for that interval. Thus, if presentation rate contributed to our results, we expected the worst performance in the 10-Hz condition. 
To determine whether these factors influenced performance measures, we segmented our data based on whether visual and auditory targets on a given trial were presented in the same or different intervals, and then recalculated auditory AM threshold and average visual-task accuracy for each frequency, each observer, and each condition (visual and auditory targets in same vs. different interval). Figure 5A plots visual performance and Figure 5B plots auditory AM thresholds comparing same and different intervals, for each visual task and each frequency. 
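The segmentation step can be sketched in a few lines; the column names and trial values below are hypothetical, purely to illustrate the same-versus-different-interval split:

```python
import pandas as pd

# Hypothetical trial-level data; column names are illustrative, not the authors'.
trials = pd.DataFrame({
    "observer":          [1, 1, 1, 1],
    "frequency_hz":      [1, 1, 5, 5],
    "visual_interval":   [1, 2, 1, 2],  # interval containing the visual target
    "auditory_interval": [1, 1, 2, 2],  # interval containing the auditory target
    "visual_correct":    [1, 0, 1, 1],
})

# Label each trial by whether the two targets shared an interval
trials["condition"] = (trials["visual_interval"] == trials["auditory_interval"]) \
    .map({True: "same", False: "different"})

# Recompute visual accuracy per observer, frequency, and condition;
# auditory thresholds would be refitted per group in the same way
visual_acc = (trials
              .groupby(["observer", "frequency_hz", "condition"])["visual_correct"]
              .mean())
```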
Figure 5
 
Performance when visual and auditory targets are in the same versus different intervals in the amplitude-modulation (AM) experiment. (A) Percent correct on the visual task and auditory AM (B) thresholds, (C) slopes, and (D) lapse rates when visual and auditory targets were in the same versus different intervals, for each frequency and each visual task. There were no significant differences between the two conditions.
The dependent variables were separately submitted to a 2 (condition: same, different) × 2 (task: color, number) × 3 (frequency: 1, 5, 10 Hz) repeated-measures ANOVA. If memory limitations interfere with processing when both visual and auditory targets occur in the same interval, we should see a main effect of condition or an interaction with condition. If the congruency between auditory and visual presentation rate matters, we should see an interaction between condition and frequency, with a significant effect at 10 Hz but not other frequencies. Statistical analyses failed to reveal any such effects in our data. The effect of condition (same vs. different interval) on auditory AM threshold was not significant, F(1, 6) = 0.141, p = 0.720, ηp² = 0.023; neither were the effects of condition on slope, F(1, 6) = 0.014, p = 0.910, ηp² = 0.002, or lapse rate, F(1, 6) = 0.073, p = 0.796, ηp² = 0.012. Furthermore, there was no significant effect of the interaction between condition and frequency on auditory AM threshold, F(2, 12) = 1.73, p = 0.315, ηp² = 0.175; slope, F(2, 12) = 1.13, p = 0.353, ηp² = 0.159; or lapse rate, F(2, 12) = 0.032, p = 0.968, ηp² = 0.005. The effect of condition on visual accuracy was also not significant, F(1, 6) = 6.012, p = 0.050, ηp² = 0.005, nor was there a significant interaction between condition and frequency, F(2, 12) = 0.407, p = 0.674, ηp² < 0.001. 
Experiment 2: FM sound
Participants
Seven additional adults participated in Experiment 2 (five women, two men; mean age = 21.7 years, SEM = 0.52). As in Experiment 1, participants were undergraduate or graduate students recruited from the University of Massachusetts Boston, had no known psychiatric or neurological history, had normal or corrected-to-normal vision, and reported no hearing problems. Data from one additional participant were not included in the analysis because overall accuracy on the auditory task was below 65% correct, and data from another additional participant were not included due to extensive musical experience (over 10 years of musical training). 
Apparatus, stimuli, and procedure
The setup, procedure, and visual stimuli for Experiment 2 were identical to those of Experiment 1. The auditory stimulus was a sinusoidally frequency-modulated pure tone (FM sound) instead of the AM sound used in Experiment 1. The auditory standard (Audstandard) was a pure tone (center frequency [CF] = 1000 Hz; sampling rate = 44.1 kHz; duration = 500 ms; see Equation 3). The auditory test stimulus (Audtest) was generated by modulating the CF with a sinusoid (see Equations 4 and 5). The frequency of the modulating sinusoid (MF) was randomly selected from one of four possible values (0.5, 1, 5, and 10 Hz), and the MI, presented using the method of constant stimuli, was randomly selected from one of five values based on pilot work: 0.2, 0.4, 0.8, 1.6, and 3.2, each value double the preceding one. The resultant FM sound was a tone modulated in frequency at one of five levels, based on the MI, which gave rise to the percept of a sound changing in pitch at one of the four possible frequencies. All FM auditory stimuli were corrected to eliminate concomitant changes in loudness associated with changes in frequency by multiplying the auditory standard and test sound waves by a fixer term (see Equation 6). 
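As an illustration, a stimulus of this kind can be generated with the textbook expression for a sinusoidally frequency-modulated tone. Note that this sketch follows the standard FM form rather than the authors' Equations 3 through 5, and it omits the loudness-correction ("fixer") term of Equation 6, so it is an approximation of the stimuli described above:

```python
import numpy as np

FS = 44_100    # sampling rate (Hz), as in the experiment
DUR = 0.5      # duration (s)
CF = 1000.0    # carrier (center) frequency (Hz)

def fm_tone(mf, mi, cf=CF, fs=FS, dur=DUR):
    """Sinusoidally frequency-modulated tone (textbook FM form):
    instantaneous phase is the carrier phase plus a sinusoidal
    modulation term scaled by the modulation index mi."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * cf * t + mi * np.sin(2 * np.pi * mf * t))

aud_test = fm_tone(mf=5.0, mi=0.8)       # one of the four MFs and five MIs used
aud_standard = fm_tone(mf=5.0, mi=0.0)   # mi = 0 reduces to the 1000-Hz pure tone
```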
Analysis
Data from Experiment 2 were analyzed as described for Experiment 1. Auditory thresholds were determined from a Weibull fit of our percent correct data as a function of the FM MI, or auditory FM contrast, and all data were log10 transformed.3 
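As a sketch of this threshold-estimation procedure, a Weibull psychometric function can be fitted to percent-correct data and solved for the 75%-correct contrast. The parameterization below (guess rate 0.5 for the two-interval task, free slope and lapse rate) is a common convention and may differ in detail from the authors' implementation; the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(c, alpha, beta, lam):
    """Weibull psychometric function for a two-interval task (guess rate 0.5).
    alpha: scale, beta: slope, lam: lapse rate."""
    return 0.5 + (0.5 - lam) * (1 - np.exp(-(c / alpha) ** beta))

# Synthetic proportion-correct data at five contrast levels (illustrative only)
contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
observed = weibull(contrasts, alpha=0.2, beta=2.0, lam=0.02)

params, _ = curve_fit(weibull, contrasts, observed, p0=[0.2, 2.0, 0.0],
                      bounds=([1e-3, 0.1, 0.0], [2.0, 10.0, 0.2]))
alpha_hat, beta_hat, lam_hat = params

# Threshold: the contrast supporting 75% correct, solved from the fitted curve
frac = (0.75 - 0.5) / (0.5 - lam_hat)
threshold = alpha_hat * (-np.log(1 - frac)) ** (1 / beta_hat)
```

Thresholds obtained this way would then be log10 transformed, as described above.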
Results
All observers were well practiced on the task before data collection commenced. For the data reported later, participants missed an average of 6.4% of trials, failing to make either a visual or an auditory judgment in the allotted time. As before, these ambiguous trials were excluded from subsequent analysis. 
Figure 6 plots percent correct performance on the auditory task as a function of auditory contrast for an FM sound, at each frequency. Percent correct performance on the auditory task was fitted with a Weibull function to determine FM auditory contrast thresholds for each observer, defined as the auditory contrast level of an FM sound supporting 75% correct performance. Auditory thresholds for each participant were computed for each of two modulation frequencies (5 and 10 Hz4) and for each visual task (dashed line: low-load visual task; solid line: high-load visual task). A sample observer's psychometric functions are plotted in Figure 6A, while the summary of auditory FM thresholds for each visual task is plotted in Figure 6B, showing the mean (± standard error of the mean across observers) in addition to each participant's data. 
Figure 6
 
Auditory performance in the frequency-modulation (FM) experiment. (A) Data from a sample observer plotting percent correct detection of the FM sound as a function of the FM index, our measure of auditory contrast, for the 5- and 10-Hz sounds. (B) Individual and average FM threshold across observers (± standard error of the mean across observers) for each visual task and auditory frequency. Overall, auditory thresholds increased when the concurrent visual task was the number versus the color task. This was significant in each observer and across observers, for each frequency (n = 7 observers; total number of trials = 20,418; mean number of trials per task per modulating frequency = 2,588). (C) Individual and average auditory FM slopes across observers (± standard error of the mean across observers) for each visual task and auditory frequency. (D) Individual and average auditory FM lapse rates across observers (± standard error of the mean across observers) for each visual task and auditory frequency.
A 2 (task: color, number) × 2 (MF: 5, 10 Hz) repeated-measures ANOVA was performed to compare auditory contrast FM thresholds across conditions. The auditory threshold for the low-load, visual color task (M = −0.524) was significantly smaller than for the high-load, visual number task (M = −0.347), F(1, 6) = 17.796, p = 0.006, ηp² = 0.748. This suggests that changing visual-task demand influences auditory contrast sensitivity for an auditory contrast measure created via FM; as visual-task demands increase, auditory FM thresholds worsen. There was also a significant main effect of frequency: Auditory FM thresholds were significantly higher at 5 Hz (M = −0.292) than 10 Hz (M = −0.580), F(1, 6) = 95.956, p < 0.001, ηp² = 0.941, with a significant interaction between task and frequency, F(1, 6) = 19.531, p = 0.04, ηp² = 0.765. Differences in FM thresholds between color and number tasks were larger at 5 Hz (M = 0.227) than 10 Hz (M = 0.125), t(6) = 4.419, p = 0.004. 
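A repeated-measures ANOVA of this form can be sketched with statsmodels; the data below are synthetic, with effect sizes loosely patterned on the reported means, not the authors' data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
# Synthetic log thresholds for 7 observers x 2 tasks x 2 modulation
# frequencies; shifts and noise are illustrative, not the authors' data.
for subj in range(1, 8):
    for task, task_shift in [("color", 0.0), ("number", 0.18)]:
        for mf, mf_shift in [(5, 0.0), (10, -0.29)]:
            rows.append({"subject": subj, "task": task, "mf": mf,
                         "threshold": -0.5 + task_shift + mf_shift
                                      + rng.normal(0, 0.05)})
data = pd.DataFrame(rows)

# Balanced design required: one observation per subject x task x mf cell
res = AnovaRM(data, depvar="threshold", subject="subject",
              within=["task", "mf"]).fit()
table = res.anova_table  # F, dfs, and p for task, mf, and task:mf
```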
Analyses were also performed on estimates of slope and lapse rate at FM threshold. FM psychometric function slopes were steeper for the color task (M = 1.353) than the number task (M = 0.788), F(1, 6) = 6.323, p = 0.046, ηp² = 0.513, and steeper for 10 Hz (M = 1.327) than 5 Hz (M = 0.814), F(1, 6) = 8.065, p = 0.030, ηp² = 0.573. There was no significant main effect of either factor on lapse rates (ps > 0.05), nor any significant effect of the interaction between task and frequency on slope estimates, F(1, 6) = 0.034, p = 0.860, ηp² = 0.006, or lapse rates, F(1, 6) = 0.273, p = 0.620, ηp² = 0.044. These results suggest that the effect of visual-task demand on FM contrast detection is driven by changes in threshold and slope but not lapse rate. 
Figure 7 plots percent correct performance on the visual task as a function of auditory contrast for the FM sound, at each of two frequencies (5 and 10 Hz). Performance on the visual task is shown for the same sample observer in Figure 7A, plotting percent correct performance across auditory contrast for each visual task and each frequency. A summary of individual observers' data and mean visual percent correct performance, collapsed across auditory contrast, is plotted for each visual task in Figure 7B (± standard error of the mean across observers). 
Figure 7
 
Visual performance in the frequency-modulation (FM) experiment. (A) Data from a sample observer plotting percent correct detection of the visual target as a function of the FM index for the concurrent auditory task at 5 and 10 Hz. Performance on the visual task was stable across auditory contrast. (B) Average percent correct for each visual task (± standard error of the mean across observers), collapsed across auditory contrast, for each frequency. Overall, the visual number task showed worse performance (was more difficult) than the visual color task. This was significant in each observer and across observers, for each frequency (n = 7 observers; total number of trials = 20,418; mean number of trials per task per modulating frequency = 2,588).
A 2 (task: color, number) × 2 (MF: 5, 10 Hz) × 5 (auditory contrast: 1–5) repeated-measures ANOVA was performed to compare percent correct on the visual task across conditions. As expected, participants performed better on the color task (M = 97.3%) than on the number task (M = 76.8%), F(1, 6) = 24.849, p = 0.002, ηp² = 0.756. Furthermore, visual performance was not influenced by auditory FM contrast, F(4, 24) = 0.799, p = 0.538, ηp² = 0.002, or auditory frequency, F(1, 6) = 0.016, p = 0.902, ηp² < 0.001. In other words, properties of the auditory stimuli presented during the concurrent auditory task did not alter visual performance. Furthermore, there were no significant interaction effects (ps > 0.349). 
As before, we examined how practice and other factors influenced performance. Figure 8 plots visual and auditory performance as a function of time on task, for the first, second, and third blocks of 500 trials, for each visual task, and for each frequency. Figure 8A plots slope estimates for visual performance to depict visual practice effects, and Figure 8B plots slope estimates for auditory FM thresholds to depict auditory practice effects. 
Figure 8
 
Practice effects on visual and auditory performance in the frequency-modulation (FM) experiment. (A) Slope estimates for the visual task are plotted for each frequency and each visual task. Data were binned into first, second, and third blocks of ∼500 trials and fitted with a line to estimate slope. (B) Slope estimates for FM thresholds are plotted for each frequency and each visual task, using the same binning convention as for the visual task. While there were no significant effects of practice on either visual task, there was a significant effect of improving auditory performance (decreasing FM threshold) as a function of time on task.
We performed a linear mixed-effects analysis to examine auditory thresholds across blocks of trials, or time on task. We started with an empty model, with a random intercept for subject, to predict auditory FM thresholds (AIC = 20.66; BIC = 27.96; log likelihood = −7.331). Adding number of blocks as a predictor of auditory threshold improved the model fit (AIC = 17.73; BIC = 27.45; log likelihood = −4.863), χ2(1) = 4.938, p = 0.026. Adding frequency of the auditory stimulus as a fixed effect also improved the fit (AIC = −26.480; BIC = −14.326; log likelihood = 18.240), χ2(1) = 46.206, p < 0.001. The interaction between frequency and number of blocks did not improve the fit (AIC = −27.152; BIC = −12.567; log likelihood = 19.576), χ2(1) = 2.671, p = 0.102. Finally, adding visual task as a fixed effect improved the fit (AIC = −49.761; BIC = −35.176; log likelihood = 30.881), χ2(1) = 25.281, p < 0.001. The interaction between task and number of blocks only marginally improved the fit (AIC = −51.173; BIC = −34.158; log likelihood = 32.587), χ2(1) = 3.412, p = 0.065. 
In summary, in the final model we found the following results for the effects of practice: (i) a fixed effect of number of blocks (β̂ = −0.0357, SE = 0.028, t = −1.276), with auditory FM thresholds decreasing with number of blocks, or time on task, consistent with a practice effect; (ii) a significant fixed effect of modulating frequency (β̂ = −0.310, SE = 0.032, t = −9.587), indicating that auditory FM thresholds decreased when the modulating frequency was 10 Hz compared to 5 Hz, consistent with our earlier analyses; (iii) a significant fixed effect of visual task (β̂ = 0.328, SE = 0.085, t = 3.843), indicating that auditory FM thresholds increased for the concurrent high-load versus low-load task, confirming the effect of changing visual-task demand on auditory FM threshold; and (iv) a marginally significant interaction between task and number of blocks (β̂ = −0.074, SE = 0.040, t = −1.868), suggesting that auditory FM thresholds across blocks of trials tended to improve more quickly when participants performed the number task versus the color task. 
Similar to analyses performed for our AM condition, we also examined whether memory limitations or stimulus presentation rate influenced performance in our FM condition. We recalculated auditory FM thresholds and average visual accuracy for each frequency, each observer, and each of two conditions (visual and auditory targets in same vs. different interval). Figure 9A plots visual performance, and Figure 9B plots auditory FM thresholds comparing same and different intervals, for each visual task and each frequency. 
Figure 9
 
Performance when visual and auditory targets are in the same versus different intervals in the frequency-modulation (FM) experiment. (A) Percent correct on the visual task and auditory FM (B) thresholds, (C) slopes, and (D) lapse rates when the visual and auditory targets were in the same versus different intervals, for each frequency and each visual task. There were no significant differences between conditions.
The dependent variables were separately submitted to a 2 (condition: same, different) × 2 (task: color, number) × 2 (frequency: 5, 10 Hz) repeated-measures ANOVA. If memory limitations interfere with processing when both visual and auditory targets occur in the same versus different intervals, we should see a main effect of condition or an interaction with condition. If the congruency between auditory and visual presentation rate matters, we should see an interaction between condition and frequency, with a significant effect at 10 Hz but not other frequencies. Statistical analyses failed to reveal any such effects in our data. There were no significant effects of condition (same or different interval) on auditory FM threshold, F(1, 6) = 0.164, p = 0.699, ηp² = 0.027; slopes, F(1, 6) = 0.478, p = 0.515, ηp² = 0.074; or lapse rates, F(1, 6) = 1.261, p = 0.304, ηp² = 0.174. Furthermore, there were no significant interactions between condition and task on any dependent variable tested (ps > 0.115), and no significant interactions between condition and frequency on threshold or slope (ps > 0.220). There was, however, an effect of the interaction between condition and frequency on lapse rate, F(1, 6) = 6.223, p = 0.047, ηp² = 0.509: Lapse rate was higher in the same condition at 5 Hz (mean difference between conditions = −0.0223) and higher in the different condition at 10 Hz (mean difference between conditions = 0.0123), t(6) = −2.495, p = 0.047. Finally, the effect of condition on visual accuracy was not significant, F(1, 6) = 6.012, p = 0.050, ηp² = 0.005, and there was no significant interaction between condition and frequency, F(1, 6) = 0.178, p = 0.687, ηp² < 0.001. 
Experiment 3: Order effects for AM sounds
Participants
An additional eight adults participated in Experiment 3 (all women; mean age = 23.8 years, SEM = 1.35). As in Experiments 1 and 2, participants were undergraduate or graduate students recruited from the University of Massachusetts Boston, with no known psychiatric or neurological history, normal or corrected-to-normal vision, and no known hearing problems. 
Apparatus, stimuli, and procedure
The setup, basic procedure, and visual and auditory stimuli for Experiment 3 were identical to those for Experiment 1, with three exceptions: Only one sound frequency (5 Hz) was presented; participants completed two blocks of auditory-only practice (300 trials total), allowing us to quantify performance on auditory-only trials; and the order in which the two modalities were judged in the dual task varied, such that four participants completed a block of trials judging the visual target first and then a block judging the auditory target first, whereas the other four completed the blocks in the reverse order. 
Analysis
Data from Experiment 3 were analyzed as described for Experiment 1. 
Results
To test whether the influence of visual task on auditory performance might be due to memory-load differences, since the auditory judgment was always made after the visual judgment in Experiments 1 and 2, Experiment 3 varied the order of visual and auditory judgments. We conducted a 2 (visual task: color, number) × 2 (order: visual first, auditory first) repeated-measures ANOVA on auditory thresholds, estimated slope, and lapse rates on data from Experiment 3 (Figure 10). Auditory thresholds were significantly lower when the concurrent visual task was the color task (mean log threshold = −0.796) versus the number task (mean log threshold = −0.697), F(1, 7) = 7.108, p = 0.032, ηp² = 0.504. Auditory thresholds were also significantly lower when the auditory task was reported first (mean log threshold = −0.820) versus second (mean log threshold = −0.673), F(1, 7) = 75.369, p < 0.001, ηp² = 0.915. Furthermore, slope was significantly steeper when the auditory target was reported first (M = 3.355) versus second (M = 2.223), F(1, 7) = 23.129, p = 0.002, ηp² = 0.768. Finally, we found no significant effects of either task or order on lapse rates (ps > 0.496), and no significant effect of the interaction between task and order on any of the dependent variables (ps > 0.095). This suggests that the effect of the load manipulation is independent of memory limitations from reporting the auditory target second. 
Figure 10
 
Auditory performance in the amplitude-modulation (AM) experiment for a 5-Hz sound in a single auditory task (leftmost column) and a dual visual and auditory task where the visual stimulus was judged first (middle column) or second (rightmost column). (A) Data from a sample observer plotting percent correct detection of the AM sound as a function of AM index, our measure of auditory contrast. Thresholds were computed as the auditory AM contrast supporting 75% correct performance. (B) Individual and average auditory AM thresholds across observers (± standard error of the mean across observers) for each visual task. Overall, auditory AM thresholds were higher when the concurrent visual task was the number versus the color task, whether or not the auditory task was first or second. (C) Individual and average auditory AM slopes across observers (± standard error of the mean across observers) for each visual task. (D) Individual and average auditory AM lapse rates across observers (± standard error of the mean across observers) for each visual task.
Given that overall auditory thresholds in our dual task were high compared to estimates from related work using a single auditory task in a simpler task design (e.g., Viemeister, 1979), we also estimated performance for the single auditory-only task compared to the dual task in Experiment 3. We found no significant difference between thresholds on the single auditory-only task and the dual color auditory-first task (mean threshold difference = 0.018, Bonferroni-corrected p = 1.000). Thus, in an experimental condition where our task was simplified (presenting only the 5-Hz frequency and allowing auditory judgments to be made first), we found comparable performance, suggesting that the higher thresholds in Experiments 1 and 2 likely arose from the added difficulty of randomly interleaving four auditory frequencies and requiring auditory stimuli to be judged after visual stimuli. 
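Bonferroni correction of this kind follows the usual recipe of multiplying each raw p-value by the number of comparisons and capping the result at 1. A minimal sketch with hypothetical paired log thresholds (the values and the number of comparisons are illustrative, not the authors' data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired log thresholds: single auditory-only task vs.
# dual color auditory-first task, for eight observers (illustrative values)
single = np.array([-0.82, -0.79, -0.85, -0.80, -0.78, -0.83, -0.81, -0.84])
dual = single + 0.018 + np.random.default_rng(1).normal(0, 0.05, size=8)

t_stat, p_raw = ttest_rel(single, dual)

n_comparisons = 3                         # illustrative number of pairwise tests
p_bonf = min(p_raw * n_comparisons, 1.0)  # Bonferroni-corrected p, capped at 1
```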
General discussion
We examined whether and how cross-modal attention might alter auditory contrast thresholds. To manipulate cross-modal attention, we had observers engage in a visual task that was more demanding (a high-load, visual number task) or less demanding (a low-load, visual color task) while concurrently engaging in an auditory task. Auditory contrast was defined as the MI of AM (Experiment 1) or FM sounds (Experiment 2). Across both AM and FM experiments, auditory contrast thresholds decreased—that is, observers were better able to detect amplitude- and frequency-modulated sounds—when the concurrent visual task was low-load compared to high-load. Our results suggest that a less demanding visual task allowed for the allocation of more attentional resources to a concurrent auditory task, improving auditory contrast detection—that is, decreasing auditory thresholds—for both AM and FM sounds. Our findings confirm previous work on load theory showing that manipulating visual load can alter the availability of attentional resources for processing information in another sensory modality, such as inducing inattentional deafness for detecting auditory tones (Macdonald & Lavie, 2011; Molloy, Griffiths, Chait, & Lavie, 2015; Raveh & Lavie, 2015). Our findings extend such evidence that attention is a limited resource across sensory modalities to the domain of auditory contrast. 
Furthermore, and more importantly, our findings in the auditory domain complement results in the visual domain, providing evidence that the effects of attention on auditory contrast predominantly yield lateral shifts in the psychometric function (contrast gain) rather than changes in asymptote (response gain), with additional changes in slope found primarily for FM but not AM sounds. Future work will need to tease out the unique influences of attention on mechanisms of auditory contrast gain arising from modulations of auditory amplitude (AM) versus frequency (FM), as well as from different attentional manipulations such as cross-modal versus unimodal or exogenous versus endogenous. 
Our results are unlikely to be due to changes in task demands with time on task, loss of behavioral control, or practice effects. Despite auditory discriminations being more difficult at low auditory contrasts, performance on the visual task remained constant across changes in auditory contrast. Visual performance also did not vary with time on task. These results suggest that we had control over behavior and that altering visual load was an effective manipulation for redirecting attention to the visual stimuli. Our experimental design, with low- versus high-load conditions presented across different blocks, may have allowed performance on the visual task to remain stable over time, since the difficulty of the visual discrimination within a block of trials was predictable. In addition, visual performance may have been stable because observers always made visual judgments first, imposing fewer memory limitations on the visual task. This result is especially relevant for the visual number task, where overall performance was not limited by ceiling effects, as may have been the case for the visual color task. Interestingly, although task order can influence performance for dual-task judgments (Töllner, Strobach, Schubert, & Müller, 2012), our results from Experiment 3 suggest that task order could not explain differences between our low- and high-load conditions, confirming previous studies showing no order effects on a related visual/auditory dual task measuring auditory-tone detectability (Raveh & Lavie, 2015). 
Unlike visual performance, auditory performance did improve with time on task: Auditory AM and FM thresholds improved across blocks on the dual task. Such auditory practice effects may have resulted from improvements in auditory perceptual sensitivity rather than the reallocation of attention from the visual to auditory task during the dual task, given the absence of improvements or decrements in visual performance. 
In terms of the influence of time on task, we should also note that we never had a participant complete both the AM and FM conditions. Our pilot studies suggested that differences in auditory thresholds between the low- and high-load tasks were diminished if the participant had already completed the complementary experiment. In fact, the practice effects we did find suggest that auditory thresholds decrease more quickly with practice in the number compared to the color task, such that overall threshold differences converge with time. 
Our results are also unlikely to arise from memory limitations. In general, worse performance in a dual task could arise from interference effects in working memory (Dalton, Santangelo, & Spence, 2009). While this would be a valid concern if we were comparing performance directly between a single and a dual task, here we are comparing performance between a high- and a low-load dual task, both of which require observers to remember two responses for a given trial. Furthermore, to minimize memory interference caused by uncertainty in task procedure, we fixed the order of target report—visual targets were always reported first—and we presented high- and low-load conditions in separate blocks of trials. Thus, our findings of significant differences in auditory performance between the high- and low-load visual tasks are unlikely to be the result of memory interference. Additional analyses of possible memory-limitation effects considered the influence of presenting visual and auditory targets in the same versus different intervals and revealed no significant differences. More importantly, in Experiment 3 we explicitly altered memory limitations for the auditory task: We minimized working-memory demands by having participants judge auditory stimuli first, before visual stimuli (unlike in Experiments 1 and 2, where visual stimuli were always judged before auditory stimuli). Even when memory limitations were controlled for in this way, we found worse auditory contrast thresholds under conditions of high versus low visual load in Experiment 3. 
Given the dynamic and temporal nature of our visual and auditory stimuli, we also explored the potential interaction and interference effect between the rate of rapid sequential visual presentation of letters and the rate of amplitude or frequency modulation of auditory stimuli. Previous work has suggested that certain visual spatial frequencies may correspond to certain auditory AM rates (Guzman-Martinez, Ortega, Grabowecky, Mossbridge, & Suzuki, 2012; Orchard-Mills, Van der Burg, & Alais, 2013; Sherman, Grabowecky, & Suzuki, 2013). However, little is known regarding such cross-modal correspondences between visual temporal frequencies and auditory modulation rates. Our analysis of results at 10 Hz, where the modulating frequency of the auditory stimuli matched the rate of the rapid serial visual presentation of the visual letters, failed to find evidence for cross-modal interactions; we found no differences when visual and auditory targets were presented in the same versus different intervals. 
Visual load can manipulate cross-modal attention
We manipulated visual-task difficulty or load to redirect attention. Load theory (Lavie, 1995, 2005; Lavie & Tsal, 1994) is a prominent model of selective attention that argues that attention, a limited resource, is deployed automatically to process all stimuli (relevant as well as irrelevant to the task) until capacity is depleted. Thus, a low-load task should not engage attention fully and should leave more attentional resources for processing other task-relevant or task-irrelevant stimuli, whereas a high-load task should engage attention more fully and should leave fewer attentional resources for processing other stimuli. 
Within the same modality, changing visual load (Lavie, 1995; Lavie & Cox, 1997; Lavie, Ro, & Russell, 2003) or auditory load (referred to in Spence & Santangelo, 2010) has been shown to modulate the processing of task-irrelevant stimuli. Across modalities, Rees, Frith, and Lavie (2001) found no effect of auditory-task load on task-irrelevant visual-motion processing, as measured by a visual-motion aftereffect and positron-emission tomography activation in motion-related visual areas. Similarly, Parks, Hilimire, and Corballis (2011) found no effect of increasing visual load on steady-state evoked potentials to unattended auditory stimuli (but see Jacoby, Hall, & Mattingley, 2012). These results suggest limited attentional resources within, but not between, modalities. However, evidence also suggests that attentional resources may be limited between modalities. Klemen, Büchel, and Rose (2009) found reduced blood-oxygen-level-dependent responses to unattended visual stimuli in the lateral occipital complex for high compared to low auditory load, and Berman and Colby (2002) found reduced blood-oxygen-level-dependent responses to unattended visual-motion information in the medial temporal area (MT+) for high-load visual or auditory tasks (see also Houghton, Macken, & Jones, 2003). Furthermore, more recent work using a dual task, where both visual and auditory stimuli were relevant, has found diminished detectability of an auditory tone under high versus low visual load (inattentional deafness; Macdonald & Lavie, 2011; Raveh & Lavie, 2015) and suppression of auditory evoked responses (Molloy et al., 2015). 
Our manipulation of visual load to investigate cross-modal attention also involves a dual task where both visual and auditory stimuli were relevant. Our results confirm and extend previous work suggesting that visual-task demands can influence the processing of auditory information, here auditory contrast, on an unrelated concurrent task. This provides further support that manipulations of visual load can be effective in redirecting cross-modal attention and suggests that attentional resources between vision and audition may be shared. 
Long-standing questions in studies of cross-modal attention are whether and under what conditions attention may be a resource shared across different sensory modalities, and to what extent it is modality specific and independent across modalities. Traditionally, studies have compared performance when two tasks in different modalities are performed individually versus concurrently to examine whether attention is a shared resource, with the underlying assumption that if two tasks require the same underlying resource, doing both tasks concurrently should degrade performance relative to performing either task alone. Some studies suggest that visual or auditory performance can deteriorate when observers perform a dual audiovisual task (e.g., Arnell & Jolicoeur, 1999; Bonnel & Hafter, 1998; Jolicoeur, 1999; but see also Arnell & Jenkins, 2004; Soto-Faraco et al., 2002) as opposed to a single task, with related studies suggesting shared underlying neuronal mechanisms (e.g., Ciaramitaro, Buračas, & Boynton, 2007; Störmer, McDonald, & Hillyard, 2009). Furthermore, increasing visual load on a dual task has been shown to worsen performance on auditory processing, again suggesting shared mechanisms of attention (Macdonald & Lavie, 2011; Molloy et al., 2015; Raveh & Lavie, 2015). However, other studies have suggested the opposite, with performance on an audiovisual dual task no worse than on a single task or a within-modality dual task (e.g., see Alais, Morrone, & Burr, 2006; Duncan, Martens, & Ward, 1997; Talsma, Doty, Strowd, & Woldorff, 2006), with related studies suggesting distinct, modality-specific underlying neuronal mechanisms (e.g., Alais et al., 2006; Rees et al., 2001; Woodruff et al., 1996). 
Furthermore, the sharing of resources can vary, such that dual tasks initially providing evidence for shared resources can become independent with practice (Hazeltine, Teague, & Ivry, 2002; Ruthruff et al., 2003; Ruthruff, Johnston, & Van Selst, 2001; Ruthruff, Van Selst, Johnston, & Remington, 2004). Interestingly, while we find that visual-task difficulty can influence auditory contrast thresholds, we also find that performance on the visual task (our primary task and the first judgment reported) does not depend on auditory-task difficulty (our secondary task and the second judgment reported). Thus, timing within a dual task may also be an important consideration. 
Limitations
Our study using a dual-task design and manipulating visual attentional load has two main limitations. First, although observers were instructed to perform both auditory and visual tasks to ensure that both stimuli were attended, we cannot know with certainty how they distributed their attention across the two tasks. We speculate that visual performance may have been prioritized as the more important task when visual judgments were made before auditory judgments, as in Experiments 1 and 2, and that auditory performance may have been prioritized when auditory judgments were made before visual judgments, as in Experiment 3. 
Second, while the visual color and number tasks clearly differ in load, as demonstrated by better visual performance in the color task, it is not known which of multiple factors contributes to the difference in task difficulty. For instance, our visual tasks differed in the type of attention required: Judging the color of the letters requires attention to global features of the letters, which are likely to pop out, given our use of very different colors (black versus white), whereas judging the number of occurrences of a given letter requires attention to the local features of each letter. The tasks also differed in the number of distractors present within an interval, with no distractors in the color task but many in the number task, where observers had to rule out letters that were not A. The tasks differed further in how attention might be allocated over time during the two-interval forced choice: Observers could deduce which interval contained the white letters in the color task even if they missed seeing one of the two intervals, but could not similarly deduce the correct response for the number task. Furthermore, these factors are not mutually exclusive, and a combination of them could be driving performance differences in our experiment. While this does not necessarily change our result that visual-task load can influence the allocation of attention for auditory contrast, future research would be required to tease apart these differences and decipher specific mechanisms. 
A common mechanism of attention for visual and auditory contrast
Attention often does not act as a general-purpose mechanism. Many studies highlight important differences in attending to different stimulus dimensions, such as spatial location, objectness, or individual features, within a single sensory modality or across sensory modalities. In fact, it is common practice to compare and contrast the mechanisms implicated in spatial attention, object-based attention, feature-based attention, cross-modal attention, and attending in time (e.g., Donohue, Roberts, Grent-'t-Jong, & Woldorff, 2011; Egner et al., 2008; Kimura, Katayama, & Murohashi, 2008; Schenkluhn, Ruff, Heinen, & Chambers, 2008). Even within an attentional dimension, studies have found interesting mechanistic differences depending on sensory modality. A case in point is visual versus auditory spatial attention, where evidence suggests that spatial attention relies on distinct neuronal coding strategies, not common multimodal cortical maps, for processing auditory versus visual stimuli (Kong et al., 2014). 
Yet in select situations, a more parsimonious account may hold, in which a unitary mechanism of attention may operate within or across modalities. It has previously been suggested that feature-based and space-based attention may be “simply different sides of the same coin” (Maunsell & Treue, 2006, p. 319), and physiologically motivated computational models (Reynolds & Heeger, 2009) have highlighted a common mechanism, formulating important parameters that, when accounted for, can reconcile seemingly conflicting results across experimental conditions. The behavioral data we present show a consistent effect of cross-modal attention on auditory contrast, improving auditory thresholds for both AM and FM sounds. Our results provide an important first step toward investigating whether a common mechanism might explain how attention may alter not only visual but also auditory contrast, which we define as the modulation index for AM and FM sounds. Interestingly, psychometric functions fitted to our data (see Figures 2 and 7) suggest a lateral shift in the contrast response, rather than a shift in asymptote, which is suggestive of attention acting via a mechanism of contrast gain as opposed to response gain or activity gain (Huang & Dobkins, 2005; Ling & Carrasco, 2006; Pestilli & Carrasco, 2005; Pestilli, Viera, & Carrasco, 2007). Of note, there may be additional changes in slope due to attention, which may differ for AM and FM sounds. 
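The distinction between contrast gain and response gain can be illustrated with a simple sketch. The code below uses a Weibull psychometric function for a two-interval forced choice with hypothetical parameter values; it is not our actual fitting procedure, only an illustration of why a pure lateral shift on the contrast axis rescales the 75%-correct threshold while leaving the asymptote unchanged:

```python
import numpy as np

def weibull_2afc(c, alpha, beta, lapse=0.0):
    """Percent correct in a two-interval forced choice as a function of
    auditory contrast c (the AM/FM modulation index); guess rate is 0.5."""
    return 0.5 + (0.5 - lapse) * (1.0 - np.exp(-(c / alpha) ** beta))

def threshold(alpha, beta, lapse=0.0, p=0.75):
    """Contrast supporting criterion performance p (75% correct here),
    obtained by inverting the Weibull function above."""
    return alpha * (-np.log(1.0 - (p - 0.5) / (0.5 - lapse))) ** (1.0 / beta)

# Contrast gain: attention rescales effective contrast, shifting the curve
# laterally on a log-contrast axis (alpha changes; the asymptote does not).
low_load_alpha, high_load_alpha = 0.10, 0.20  # hypothetical values
beta = 2.0

t_low = threshold(low_load_alpha, beta)
t_high = threshold(high_load_alpha, beta)
print(t_high / t_low)  # 2.0: thresholds scale by the same factor as alpha
```

Under response gain, by contrast, the lapse (asymptote) parameter would change between conditions while alpha stayed fixed, altering maximum performance rather than shifting the curve along the contrast axis.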
Future work will require a more quantitative examination of how attention acts to alter contrast-tuning functions for auditory stimuli and how such attentional effects may quantitatively and mechanistically differ for visual and auditory contrast and depending on the size of the attentional field. Furthermore, it remains to be seen how the conceptualization of auditory contrast presented here—amplitude and frequency modulation—may extend to other conceptualizations of auditory contrast (e.g., see Barbour & Wang, 2003). 
Acknowledgments
The authors thank Daniel Jentzen for help with an earlier version of this experiment and Adam Jacobs for helpful discussions regarding the generation of auditory stimuli and for comments and edits on previous versions of this manuscript. We also thank the anonymous reviewers for helpful comments on this manuscript. This work was supported by a Healey grant to VMC from the University of Massachusetts Boston. 
Commercial relationships: none. 
Corresponding author: Vivian M. Ciaramitaro. 
Address: Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA. 
References
Alais, D., Morrone, C., & Burr, D. (2006). Separate attentional resources for vision and audition. Proceedings of the Royal Society B: Biological Sciences, 273 (1592), 1339– 1345.
Altmann, C. F., & Gaese, B. H. (2014). Representation of frequency-modulated sounds in the human brain. Hearing Research, 307, 74– 85.
Arnell, K. M., & Jenkins, R. (2004). Revisiting within-modality and cross-modality attentional blinks: Effects of target–distractor similarity. Perception & Psychophysics, 66 (7), 1147– 1161.
Arnell, K. M., & Jolicoeur, P. (1999). The attentional blink across stimulus modalities: Evidence for central processing limitations. Journal of Experimental Psychology: Human Perception and Performance, 25 (3), 630– 648.
Barbour, D. L., & Wang, X. (2003). Auditory cortical responses elicited in awake primates by random spectrum stimuli. The Journal of Neuroscience, 23 (18), 7194– 7206.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. Retrieved from http://arxiv.org/abs/1406.5823
Berman, R. A., & Colby, C. L. (2002). Auditory and visual attention modulate motion processing in area MT+. Cognitive Brain Research, 14 (1), 64– 74.
Bonnel, A.-M., & Hafter, E. R. (1998). Divided attention between simultaneous auditory and visual signals. Perception & Psychophysics, 60 (2), 179– 190.
Brainard, D. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433– 436.
Buracas, G. T., & Boynton, G. M. (2007). The effect of spatial attention on contrast response functions in human visual cortex. The Journal of Neuroscience, 27 (1), 93– 97.
Cameron, E. L., Tai, J. C., & Carrasco, M. (2002). Covert attention affects the psychometric function of contrast sensitivity. Vision Research, 42 (8), 949– 967.
Carrasco, M. (2006). Covert attention increases contrast sensitivity: Psychophysical, neurophysiological and neuroimaging studies. Progress in Brain Research, 154, 33– 70.
Carrasco, M., Eckstein, M., Verghese, P., Boynton, G., & Treue, S. (2009). Visual attention: Neurophysiology, psychophysics and cognitive neuroscience. Vision Research, 49 (10), 1033– 1036.
Carrasco, M., Ling, S., & Read, S. (2004). Attention alters appearance. Nature Neuroscience, 7 (3), 308– 313.
Ciaramitaro, V. M., Buračas, G. T., & Boynton, G. M. (2007). Spatial and cross-modal attention alter responses to unattended sensory information in early visual and auditory human cortex. Journal of Neurophysiology, 98 (4), 2399– 2413.
Dalton, P., Santangelo, V., & Spence, C. (2009). The role of working memory in auditory selective attention. The Quarterly Journal of Experimental Psychology, 62 (11), 2126– 2132.
Di Russo, F., Spinelli, D., & Morrone, M. C. (2001). Automatic gain control contrast mechanisms are modulated by attention in humans: Evidence from visual evoked potentials. Vision Research, 41, 2435– 2447.
Donohue, S. E., Roberts, K. C., Grent-'t-Jong, T., & Woldorff, M. G. (2011). The cross-modal spread of attention reveals differential constraints for the temporal and spatial linking of visual and auditory stimulus events. The Journal of Neuroscience, 31 (22), 7982– 7990.
Duncan, J., Martens, S., & Ward, R. (1997). Restricted attentional capacity within but not between sensory modalities. Nature, 387 (6635), 808– 810.
Egner, T., Monti, J. M. P., Trittschuh, E. H., Wieneke, C. A., Hirsch, J., & Mesulam, M.-M. (2008). Neural integration of top-down spatial and feature-based information in visual search. The Journal of Neuroscience, 28 (24), 6141– 6151.
Guzman-Martinez, E., Ortega, L., Grabowecky, M., Mossbridge, J., & Suzuki, S. (2012). Interactive coding of visual spatial frequency and auditory amplitude-modulation rate. Current Biology, 22 (5), 383– 388.
Hart, H. C., Palmer, A. R., & Hall, D. A. (2003). Amplitude and frequency-modulated stimuli activate common regions of human auditory cortex. Cerebral Cortex, 13 (7), 773– 781.
Hazeltine, E., Teague, D., & Ivry, R. B. (2002). Simultaneous dual-task performance reveals parallel response selection after practice. Journal of Experimental Psychology: Human Perception and Performance, 28 (3), 527– 545.
Herholz, S. C., & Zatorre, R. J. (2012). Musical training as a framework for brain plasticity: Behavior, function, and structure. Neuron, 76 (3), 486– 502.
Herrmann, K., Montaser-Kouhsari, L., Carrasco, M., & Heeger, D. J. (2010). When size matters: Attention affects performance by contrast or response gain. Nature Neuroscience, 13 (12), 1554– 1559.
Houghton, R. J., Macken, W. J., & Jones, D. M. (2003). Attentional modulation of the visual motion aftereffect has a central cognitive locus: Evidence of interference by the postcategorical on the precategorical. Journal of Experimental Psychology: Human Perception and Performance, 29 (4), 731– 740.
Huang, L., & Dobkins, K. R. (2005). Attentional effects on contrast discrimination in humans: Evidence for both contrast gain and response gain. Vision Research, 45 (9), 1201– 1212.
Jacoby, O., Hall, S. E., & Mattingley, J. B. (2012). A crossmodal crossover: Opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli. NeuroImage, 61 (4), 1050– 1058.
Jolicoeur, P. (1999). Restricted attentional capacity between sensory modalities. Psychonomic Bulletin & Review, 6 (1), 87– 92.
Kay, R. H. (1982). Hearing of modulation in sounds. Physiological Reviews, 62 (3), 894– 975.
Kimura, M., Katayama, J., & Murohashi, H. (2008). Effects of feature and spatial attention on visual change detection. NeuroReport, 19 (3), 389– 392.
Kleiner, M., Brainard, D., & Pelli, D. (2007). What's new in Psychtoolbox-3? Perception, 36 (14), 1– 16.
Klemen, J., Büchel, C., & Rose, M. (2009). Perceptual load interacts with stimulus processing across sensory modalities. European Journal of Neuroscience, 29 (12), 2426– 2434.
Kong, L., Michalka, S. W., Rosen, M. L., Sheremata, S. L., Swisher, J. D., Shinn-Cunningham, B. G., & Somers, D. C. (2014). Auditory spatial attention representations in the human cerebral cortex. Cerebral Cortex, 24 (3), 773– 784.
Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21 (3), 451– 468.
Lavie, N. (2005). Distracted and confused? Selective attention under load. Trends in Cognitive Sciences, 9 (2), 75– 82.
Lavie, N., & Cox, S. (1997). On the efficiency of visual selective attention: Efficient visual search leads to inefficient distractor rejection. Psychological Science, 8 (5), 395– 396.
Lavie, N., Ro, T., & Russell, C. (2003). The role of perceptual load in processing distractor faces. Psychological Science, 14 (5), 510– 515.
Lavie, N., & Tsal, Y. (1994). Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56 (2), 183– 197.
Ling, S., & Carrasco, M. (2006). Sustained and transient covert attention enhance the signal via different contrast response functions. Vision Research, 46 (8–9), 1210– 1220.
Lu, T., Liang, L., & Wang, X. (2001). Temporal and rate representations of time-varying signals in the auditory cortex of awake primates. Nature Neuroscience, 4 (11), 1131– 1138.
Luo, H., Wang, Y., Poeppel, D., & Simon, J. Z. (2006). Concurrent encoding of frequency and amplitude modulation in human auditory cortex: MEG evidence. Journal of Neurophysiology, 96 (5), 2712– 2723.
Macdonald, J. S. P., & Lavie, N. (2011) Visual perceptual load induces inattentional deafness. Attention, Perception, & Psychophysics, 73, 1780– 1789.
Mäkelä, J. P., Hari, R., & Linnankivi, A. (1987). Different analysis of frequency and amplitude modulations of a continuous tone in the human auditory cortex: A neuromagnetic study. Hearing Research, 27 (3), 257– 264.
Martínez-Trujillo, J., & Treue, S. (2002). Attentional modulation strength in cortical area MT depends on stimulus contrast. Neuron, 35 (2), 365– 370.
Maunsell, J. H. R., & Treue, S. (2006). Feature-based attention in visual cortex. Trends in Neurosciences, 29 (6), 317– 322.
Molloy, K., Griffiths, T. D., Chait, M., & Lavie, N. (2015). Inattentional deafness: Visual load leads to time-specific suppression of auditory evoked responses. The Journal of Neuroscience, 35 (49), 16046– 16054.
Morrone, M. C., Denti, V., & Spinelli, D. (2002). Color and luminance contrasts attract independent attention. Current Biology, 12 (13), 1134– 1137.
Orchard-Mills, E., Van der Burg, E., & Alais, D. (2013). Amplitude-modulated auditory stimuli influence selection of visual spatial frequencies. Journal of Vision, 13 (3): 6, 1– 17, doi:10.1167/13.3.6. [PubMed] [Article]
Parks, N. A., Hilimire, M. R., & Corballis, P. M. (2011). Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition. Journal of Cognitive Neuroscience, 23 (5), 1113– 1124.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437– 442.
Pestilli, F., & Carrasco, M. (2005). Attention enhances contrast sensitivity at cued and impairs it at uncued locations. Vision Research, 45 (14), 1867– 1875.
Pestilli, F., Carrasco, M., Heeger, D. J., & Gardner, J. L. (2011). Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron, 72 (5), 832– 846.
Pestilli, F., Viera, G., & Carrasco, M. (2007). How do attention and adaptation affect contrast sensitivity? Journal of Vision, 7 (7): 9, 1– 12, doi:10.1167/7.7.9. [PubMed] [Article]
Rabinowitz, N. C., Willmore, B. D. B., Schnupp, J. W. H., & King, A. J. (2011). Contrast gain control in auditory cortex. Neuron, 70 (6), 1178– 1191.
Raveh, D., & Lavie, N. (2015). Load-induced inattentional deafness. Attention, Perception, & Psychophysics, 77, 483– 492.
Rees, G., Frith, C., & Lavie, N. (2001). Processing of irrelevant visual motion during performance of an auditory attention task. Neuropsychologia, 39 (9), 937– 949.
Regan, D., & Tansley, B. W. (1979). Selective adaptation to frequency-modulated tones: Evidence for an information-processing channel selectively sensitive to frequency changes. The Journal of the Acoustical Society of America, 65 (5), 1249– 1257.
Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61 (2), 168– 185.
Reynolds, J. H., Pasternak, T., & Desimone, R. (2000). Attention increases sensitivity of V4 neurons. Neuron, 26 (3), 703– 714.
Ro, T., Friggel, A., & Lavie, N. (2009). Musical expertise modulates the effects of visual perceptual load. Attention, Perception, & Psychophysics, 71 (4), 671– 674.
Ruthruff, E., Johnston, J. C., & Van Selst, M. (2001). Why practice reduces dual-task interference. Journal of Experimental Psychology: Human Perception and Performance, 27 (1), 3– 21.
Ruthruff, E., Johnston, J. C., Van Selst, M., Whitsell, S., & Remington, R. (2003). Vanishing dual-task interference after practice: Has the bottleneck been eliminated or is it merely latent? Journal of Experimental Psychology: Human Perception and Performance, 29 (2), 280– 289.
Ruthruff, E., Selst, M. V., Johnston, J. C., & Remington, R. (2004). How does practice reduce dual-task interference: Integration, automatization, or just stage-shortening? Psychological Research, 70 (2), 125– 142.
Schenkluhn, B., Ruff, C. C., Heinen, K., & Chambers, C. D. (2008). Parietal stimulation decouples spatial and feature-based attention. The Journal of Neuroscience, 28 (44), 11106– 11110.
Schütt, H. H., Harmeling, S., Macke, J. H., & Wichmann, F. A. (2016). Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data. Vision Research, 122, 105– 123.
Shechter, B., & Depireux, D. A. (2012). Contrast tuned responses in primary auditory cortex of the awake ferret. European Journal of Neuroscience, 35 (4), 550– 561.
Sherman, A., Grabowecky, M., & Suzuki, S. (2013). Auditory rhythms are systemically associated with spatial-frequency and density information in visual scenes. Psychonomic Bulletin & Review, 20 (4), 740– 746.
Soto-Faraco, S., Spence, C., Fairbank, K., Kingstone, A., Hillstrom, A. P., & Shapiro, K. (2002). A crossmodal attentional blink between vision and touch. Psychonomic Bulletin & Review, 9 (4), 731– 738.
Spence, C., & Santangelo, V. (2010). Auditory attention. In Plack C. J. (Ed.), Oxford handbook of auditory science ( pp. 249– 270). New York: Oxford University Press.
Störmer, V. S., McDonald, J. J., & Hillyard, S. A. (2009). Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. Proceedings of the National Academy of Sciences, USA, 106, 22456– 22461.
Talsma, D., Doty, T. J., Strowd, R., & Woldorff, M. G. (2006). Attentional capacity for processing concurrent stimuli is larger across sensory modalities than within a modality. Psychophysiology, 43 (6), 541– 549.
Töllner, T., Strobach, T., Schubert, T., & Müller, H. J. (2012). The effect of task order predictability in audio-visual dual task performance: Just a central capacity limitation? Frontiers in Integrative Neuroscience, 6, 75.
Viemeister, N. F. (1979). Temporal modulation transfer functions based upon modulation thresholds. The Journal of the Acoustical Society of America, 66 (5), 1364– 1380.
Wakefield, G. H., & Viemeister, N. F. (1984). Selective adaptation to linear frequency-modulated sweeps: Evidence for direction-specific FM channels? The Journal of the Acoustical Society of America, 75 (5), 1588– 1592.
Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63 (8), 1293– 1313.
Williford, T., & Maunsell, J. H. R. (2006). Effects of spatial attention on contrast response functions in macaque area V4. Journal of Neurophysiology, 96 (1), 40– 54.
Willmore, B. D. B., Cooke, J. E., & King, A. J. (2014). Hearing in noisy environments: Noise invariance and contrast gain control. The Journal of Physiology, 592 (16), 3371– 3381.
Woodruff, P. W., Benson, R. R., Bandettini, P. A., Kwong, K. K., Howard, R. J., Talavage, T., & Rosen, B. R. (1996). Modulation of auditory and visual cortex by selective attention is modality-dependent. NeuroReport, 7 (12), 1909– 1913.
Zwicker, E. (1962). Direct comparisons between the sensations produced by frequency modulation and amplitude modulation. The Journal of the Acoustical Society of America, 34 (9B), 1425– 1430.
Footnotes
1  To maintain uniformity, we transformed all auditory AM data by taking the log10 of threshold measures. Such a transformation tends to produce data that are more normally distributed. Effects in our data remain similar if we do not transform the data or if we use nonparametric statistics, which tend to provide a more conservative measure. We also find similar trends if we transform each data set optimally, such as using inverse log or the reciprocal of the square root of threshold.
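As a minimal sketch of this transformation (the threshold values below are hypothetical, chosen only to illustrate the arithmetic):

```python
import numpy as np

# Hypothetical AM thresholds (modulation indices); real thresholds came
# from the psychometric fits described in the Methods.
thresholds = np.array([0.04, 0.09, 0.12, 0.30, 0.51])

# log10 compresses the long right tail typical of threshold data, so the
# transformed values are closer to normally distributed for parametric tests.
log_thresholds = np.log10(thresholds)

# A mean computed in log units back-transforms to a geometric mean
# in the original threshold units.
geometric_mean = 10 ** log_thresholds.mean()
```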
2  Data collected for the AM condition at a frequency of 0.5 Hz are not reported, because we could not estimate a threshold value reliably. No participant could obtain 75% correct on even the easiest auditory contrast, for either visual task. The 500-ms auditory-stimulus duration, which spans only a quarter of a 0.5-Hz modulation cycle, was likely too short for participants to detect the modulation.
3  To maintain uniformity, we transformed all auditory FM data by taking the log10 of threshold measures. Such a transformation tends to produce more normally distributed data. Our effects remain similar if we do not transform the data, if we use nonparametric statistics, or if we transform each data set optimally, as described previously.
4  Data collected for the FM sound at 0.5 and 1 Hz are not reported because we could not estimate a threshold value reliably. No participant could obtain 75% correct on even the easiest auditory contrast, for either visual task.
Figure 1
 
Stimuli and psychophysical procedure. Trials began with a fixation stimulus, followed by the simultaneous presentation of a visual stream of letters (rapid serial visual presentation), just above central fixation, and an auditory stimulus for 500 ms (first interval). Following a 200-ms delay, another visual stream of letters and another auditory stimulus were presented for 500 ms (second interval). At the end of each trial, participants had a limited time (1000 ms) to report via button press which interval contained the visual target—white letters in the visual color task (low-load condition) or more of the letter A in the visual number task (high-load condition)—and then to report which interval contained the auditory target, which was an amplitude-modulated sound (Experiment 1) or a frequency-modulated sound (Experiment 2). Feedback was provided after each visual and each auditory response. Participants completed blocks of trials for a given condition (high or low load) across several days, in either Experiment 1 or Experiment 2.
Figure 2
 
Auditory performance in amplitude-modulation (AM) experiment. (A) Data from a sample observer plotting percent correct detection of the AM sound as a function of AM index, our measure of auditory contrast, for white-noise sounds presented at 1, 5, and 10 Hz. Thresholds were computed as the auditory AM contrast supporting 75% correct performance. (B) Individual and average auditory AM thresholds across observers (± standard error of the mean across observers) for each visual task and auditory frequency. Overall, auditory AM thresholds increased when the concurrent visual task was the number versus the color task. This was significant in six of seven observers and across observers for each modulating frequency (n = 7 observers; total number of trials = 21,010; mean number of trials per task per modulating frequency = 2,625). (C) Individual and average auditory AM slopes across observers (± standard error of the mean across observers) for each visual task and auditory frequency. (D) Individual and average auditory AM lapse rates across observers (± standard error of the mean across observers) for each visual task and frequency.
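To make the AM index concrete, an amplitude-modulated white-noise stimulus of the kind described in the caption can be sketched as follows (the sample rate and parameter values are assumptions for illustration, not the experiment's exact settings):

```python
import numpy as np

fs = 44100    # sample rate in Hz (assumed for illustration)
dur = 0.5     # one 500-ms stimulus interval
f_mod = 5.0   # modulating frequency in Hz (1, 5, or 10 Hz in Experiment 1)
m = 0.2       # AM index, the "auditory contrast": 0 means no modulation

t = np.arange(int(fs * dur)) / fs
carrier = np.random.randn(t.size)                   # white-noise carrier
envelope = 1.0 + m * np.sin(2 * np.pi * f_mod * t)  # sinusoidal envelope
am_sound = carrier * envelope                       # amplitude-modulated noise
```

The envelope swings between 1 − m and 1 + m, so a larger index produces deeper, more detectable modulation; threshold estimation then asks how small m can be while still supporting 75% correct interval identification.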
Figure 3
 
Visual performance in the amplitude-modulation (AM) experiment. (A) Data from a sample observer plotting percent correct detection of the visual target as a function of the amplitude modulation for the concurrent auditory task at 1, 5, and 10 Hz. Performance on the visual task was stable across auditory contrast. (B) Average visual percent correct for each visual task (± standard error of the mean across observers), collapsed across auditory contrast, for each frequency. Overall, the visual number task showed worse performance (was more difficult) than the visual color task. This was significant in each observer and across observers, for each modulating frequency (n = 7 observers; total number of trials = 21,010; mean number of trials per task per modulating frequency = 2,625).
Figure 4
 
Practice effects on visual and auditory performance in the amplitude-modulation (AM) experiment. (A) Slope estimates for the visual task are plotted for each frequency and each visual task. Data were binned into first, second, and third blocks of ∼500 trials and fitted with a line to estimate slope. (B) Slope estimates for auditory AM thresholds are plotted for each frequency and each visual task, using the same binning convention as for the visual task. While there were no significant effects of practice on either visual task, there was a significant improvement in auditory performance (decreasing AM thresholds) as a function of time on task.
Figure 5
 
Performance when visual and auditory targets are in the same versus different intervals in the amplitude-modulation (AM) experiment. (A) Percent correct on the visual task and auditory AM (B) thresholds, (C) slopes, and (D) lapse rates when visual and auditory targets were in the same versus different intervals, for each frequency and each visual task. There were no significant differences between the two conditions.
Figure 6
 
Auditory performance in the frequency-modulation (FM) experiment. (A) Data from a sample observer plotting percent correct detection of the FM sound as a function of the FM index, our measure of auditory contrast, for the 5- and 10-Hz sounds. (B) Individual and average FM thresholds across observers (± standard error of the mean across observers) for each visual task and auditory frequency. Overall, auditory thresholds increased when the concurrent visual task was the number versus the color task. This was significant in each observer and across observers, for each frequency (n = 7 observers; total number of trials = 20,418; mean number of trials per task per modulating frequency = 2,588). (C) Individual and average auditory FM slopes across observers (± standard error of the mean across observers) for each visual task and auditory frequency. (D) Individual and average auditory FM lapse rates across observers (± standard error of the mean across observers) for each visual task and auditory frequency.
Figure 7
 
Visual performance in the frequency-modulation (FM) experiment. (A) Data from a sample observer plotting percent correct detection of the visual target as a function of the FM index for the concurrent auditory task at 5 and 10 Hz. Performance on the visual task was stable across auditory contrast. (B) Average percent correct for each visual task (± standard error of the mean across observers), collapsed across auditory contrast, for each frequency. Overall, the visual number task showed worse performance (was more difficult) than the visual color task. This was significant in each observer and across observers, for each frequency (n = 7 observers; total number of trials = 20,418; mean number of trials per task per modulating frequency = 2,588).
Figure 8
 
Practice effects on visual and auditory performance in the frequency-modulation (FM) experiment. (A) Slope estimates for the visual task are plotted for each frequency and each visual task. Data were binned into first, second, and third blocks of ∼500 trials and fitted with a line to estimate slope. (B) Slope estimates for FM thresholds are plotted for each frequency and each visual task, using the same binning convention as for the visual task. While there were no significant effects of practice on either visual task, there was a significant effect of improving auditory performance (decreasing FM threshold) as a function of time on task.
Figure 9
 
Performance when visual and auditory targets are in the same versus different intervals in the frequency-modulation (FM) experiment. (A) Percent correct on the visual task and auditory FM (B) thresholds, (C) slopes, and (D) lapse rates when the visual and auditory targets were in the same versus different intervals, for each frequency and each visual task. There were no significant differences between conditions.
Figure 10
 
Auditory performance in the amplitude-modulation (AM) experiment for a 5-Hz sound in a single auditory task (leftmost column) and a dual visual and auditory task where the visual stimulus was judged first (middle column) or second (rightmost column). (A) Data from a sample observer plotting percent correct detection of the AM sound as a function of AM index, our measure of auditory contrast. Thresholds were computed as the auditory AM contrast supporting 75% correct performance. (B) Individual and average auditory AM thresholds across observers (± standard error of the mean across observers) for each visual task. Overall, auditory AM thresholds were higher when the concurrent visual task was the number versus the color task, regardless of whether the auditory task was performed first or second. (C) Individual and average auditory AM slopes across observers (± standard error of the mean across observers) for each visual task. (D) Individual and average auditory AM lapse rates across observers (± standard error of the mean across observers) for each visual task.
Supplement 1