Research Article  |   September 2007
The temporal dynamics of selective attention of the visual periphery as measured by classification images
Author Affiliations
Journal of Vision September 2007, Vol.7, 10. doi:10.1167/7.12.10
Steven S. Shimozaki, Kelly Y. Chen, Craig K. Abbey, Miguel P. Eckstein; The temporal dynamics of selective attention of the visual periphery as measured by classification images. Journal of Vision 2007;7(12):10. doi: 10.1167/7.12.10.



      © 2016 Association for Research in Vision and Ophthalmology.

Abstract

This study estimates the temporal dynamics of selective attention with classification images, a technique assessing observer information use by tracking how responses are correlated with external noise added to the stimulus. Three observers performed a yes/no discrimination of a Gaussian signal that could appear at one of eight locations (eccentricity = 4.6°). During the stimulus duration (300 ms), a peripheral cue indicated the potential signal location with 100% validity, and stimuli were presented in frames (37.5 ms/frame) of independently sampled Gaussian luminance image noise. Stimuli were presented either with or without a succeeding masking display (100 ms) of high-contrast image noise, with mask presence having little effect. The results from the classification images suggest that observers were able to use information at the cued location selectively (relative to the uncued locations), starting within the first (0–37.5 ms) or second (37.5–75 ms) frame. This suggests a selective attention effect earlier than those found in previous behavioral and event-related potential (ERP) studies, which generally have estimated the latency for selective attention effects to be 75–100 ms. We present a deconvolution method using the known temporal impulse response of early vision that indicates how the classification image results might relate to previous behavioral and ERP results. Applying the model to the classification images suggests that accounting for the known temporal dynamics could explain at least part of the difference in results between classification images and the previous studies.

Introduction
Visual spatial attention might be generally defined as the process that selects information across the visual field (for a review, see Pashler, 1998). Predominantly, directing visual attention to select a particular location (or number of locations) has been achieved through the use of cues signifying those locations (e.g., Palmer, 1995; Posner, 1980; for reviews, see Pashler, 1998; Wright & Ward, 1998). One aspect of interest is the temporal dynamics of visual attention; in other words, how quickly can visual attention select locations for visual processing after receiving information (i.e., a cue) on which location(s) to select (for a review, see Wright & Ward, 1998)? 
In this study, we assessed the temporal dynamics of selective attention with classification images (Ahumada & Lovell, 1971). The classification image technique (Ahumada & Lovell, 1971, see also Eckstein & Ahumada, 2002; Gold, Shiffrin, & Elder, 2006) estimates the spatial information used in a task by assessing how response outcomes depend upon the external or image noise added to the stimulus. For example, a typical classification image analysis of a yes/no detection task assumes that the false alarms (erroneous “yes” responses on no signal trials) are driven at least partly by the external noise. The “false alarm” classification image is the average of the noise fields leading to false alarms and represents the overall information profile the observer used to judge signal presence. It is the behavioral equivalent of the reverse correlation technique that has been used to assess the response characteristics of single neurons (DeAngelis, Ohzawa, & Freeman, 1993a, 1993b; Marmarelis & Marmarelis, 1978; Nykamp & Ringach, 2002; Reid, Victor, & Shapley, 1997; Ringach, Sapiro, & Shapley, 1997). Under linear model assumptions, the classification image is a direct estimate of the template (or filter) an observer uses in a particular visual task (Ahumada, 2002; Murray, Bennett, & Sekuler, 2002). However, an assumption of linearity may not be appropriate in all cases (for discussions of nonlinearity, see Abbey & Eckstein, 2002, 2006; Neri, 2004; Tjan & Nandy, 2006; Victor, 2005). 
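The core of the technique can be sketched numerically. The simulation below is purely illustrative: it assumes a hypothetical linear observer with a known template applied to one-dimensional noise fields (all names and parameter values are invented, not the study's), and shows that the average of the noise fields producing false alarms is proportional to that template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-observer simulation (not the study's stimuli or
# parameters): a unit-norm Gaussian template applied to 1-D noise fields.
n_trials, n_pix = 20000, 64
signal = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 4.0) ** 2)
template = signal / np.linalg.norm(signal)   # the observer's (hidden) template
sigma_noise, criterion = 2.0, 1.0

noise = rng.normal(0.0, sigma_noise, (n_trials, n_pix))  # no-signal trials
decision_var = noise @ template                          # linear decision variable
false_alarm = decision_var > criterion                   # erroneous "yes" responses

# Classification image: average of the noise fields that drove false alarms.
ci = noise[false_alarm].mean(axis=0)

# Under the linear model, the classification image estimates the template.
r = np.corrcoef(ci, template)[0, 1]
```

With enough trials, r approaches 1; in practice it is the number of false alarms, not the total number of trials, that limits the precision of the estimate.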
Recently, this technique has been employed to assess a variety of visual phenomena, including Vernier acuity (Ahumada, 1996; Beard & Ahumada, 1998) and other position discrimination judgments (Levi & Klein, 2002), Kaniza squares and other illusory contours (Gold, Murray, Bennett, & Sekuler, 2000), contrast effects (Dakin & Bex, 2003; Shimozaki, Eckstein, & Abbey, 2005), saccadic decisions (Rajashekar, Bovik, & Cormack, 2006), and visual attention (Eckstein, Pham, & Shimozaki, 2004; Eckstein, Shimozaki, & Abbey, 2002). 
More recently, the classification image technique has been expanded to assess temporal characteristics (time series) of visual phenomena in what have been called “classification movies.” Neri and Heeger (2002) used time series analysis of classification images to study the spatiotemporal properties of detecting and identifying bar stimuli, and Xing and Ahumada (2002) performed another early study of time series of classification images in a discrimination of grating patterns (Gabors). Other studies assessing the temporal dynamics with time series of classification images include the temporal effect of feedback on the detection of pulses (Knoblach, Thomas, & D'Zmura, 1999), orientation discrimination (Mareschal, Dakin, & Bex, 2006), perceptual rivalry (Lankheet, 2006), illusory contours (Gold & Shubel, 2006), biological motion (Lu & Liu, 2006), saccadic decisions (Caspi, Beutter, & Eckstein, 2004; Ludwig, Gilchrist, McSorley & Baddeley, 2005), and attention to motion coherence (Ghose, 2006). 
In this study, we used a time series of classification images to assess the temporal characteristics of selective attention in its use of cued information. Observers performed a yes/no detection task on the presence of a small Gaussian signal appearing on half the trials at one out of eight locations (Figure 1). The signal was presented for a total of 300 ms. The 300-ms stimulus interval with the signal was divided into eight frames (37.5 ms per frame), and different independent samples of external image noise were added to the stimulus in each frame. A square cue indicated the potential signal location with 100% validity; it appeared and disappeared simultaneously with the stimulus. Studies were performed either with or without a high-contrast noise mask to assess the effect of masking on the temporal dynamics. A classification image was calculated for each frame and location, resulting in a time series of classification images for each location. From these time series, we assessed the dynamics of information use at the cued location compared to the uncued locations, with particular interest in the evidence of the first instance of selective information use. 
Figure 1
 
Stimulus sequence. Observers judged the presence of the signal (yes/no) at the cued location. Each stimulus frame (37.5 ms) contained an independent sample of Gaussian-distributed luminance image noise. Stimulus duration = 300 ms (eight frames). Studies were run with a mask (Mask) and without a mask (No Mask).
Attention has generally been described as comprising two attentional systems, exogenous and endogenous (e.g., Posner, 1978; for reviews, see Pashler, 1998; Wright & Ward, 1998); other comparable distinctions for attentional systems include peripheral versus central, voluntary versus involuntary (e.g., Luria, 1973), or goal-driven versus stimulus-driven (Wright & Ward, 1998). The temporal response of the exogenous system has been suggested to be more rapid than that of the endogenous system, with latencies estimated at 50–100 ms (Lyon, 1990; Mackeben & Nakayama, 1993; Müller & Findlay, 1988; Nakayama & Mackeben, 1989; Shepherd & Müller, 1989; Tsal, 1983; for a review, see Wright & Ward, 1998) compared to 100–300 ms (Müller & Findlay, 1988; Remington & Pierce, 1984; Shepherd & Müller, 1989; Shulman, Remington, & McLean, 1979; Weichselgartner & Sperling, 1987; for a review, see Wright & Ward, 1998). The simultaneous onset of the cue with the signal and the peripheral placement of the cue at the signal location suggest that this task would predominantly access the exogenous attentional system; however, the use of the fully valid cue might imply some endogenous influence (Wright & Ward, 1998). Given the presumed predominantly exogenous attentional employment of the task, we might expect evidence of a relatively rapid attentional response, on the order of 50–100 ms. 
Several studies with event-related potentials (ERPs) have measured the time course of selective attention (Clark & Hillyard, 1996; Di Russo, Martínez, & Hillyard, 2003; Doallo et al., 2004; Harter, Miller, Price, LaLonde, & Keyes, 1989; Hillyard, Teder-Sälejärvi, & Münte, 1998; Hopfinger & Mangun, 1998; Luck, Hillyard, Mouloua, Woldorff, Clark, & Hawkins, 1994; Mangun, 1995; Mangun & Hillyard, 1991; Martínez et al., 1999; Nobre, Sebestyen, & Miniussi, 2000). These studies have found the first evidence of an attentional effect, measured as a difference between the ERP waveforms (specifically, P1) for attended and unattended targets, at about 75–100 ms. 
There are some aspects of this study that might make comparisons to the previous studies problematic. One aspect is that previous behavioral studies of the temporal properties of attention typically have measured performance, such as reaction times, accuracy, or threshold, for attended and unattended conditions, and have manipulated the timing between the cue and the stimulus (the cue–stimulus onset asynchrony; SOA). The temporal dynamics of attention were then estimated by assessing differences in attended and unattended performance for each SOA. In this study, the classification images were not direct measures of performance, but were rather estimates of information use for each frame. Thus, this study assessed information use for only a single cue SOA (0, or a simultaneous cue) and continuously for the eight frames throughout the stimulus duration (300 ms). These differences in methodology could lead to differences in the assessment of attentional temporal dynamics as well. 
Another general aspect is that this study assessed attentional temporal dynamics from a behavioral viewpoint, which clearly differs from the ERP studies. Attentional ERP studies reflect direct neural responses across relatively large areas of the brain, typically as the differences in the EEG signals in attended and unattended conditions. This might be stated as the attentional ERP studies operating on an absolute time scale, whereas studies with classification images (and other behavioral methods) operate on an indirect or inferred time scale. The behavioral temporal dynamics may or may not reflect the actual temporal dynamics of the underlying neural mechanisms, depending on the type of temporal transformations within the visual system. For example, a delay in the neural processing would lead to delays in the ERP estimates, but not necessarily in the behavioral estimates. Thus, the unknown relationship between the behavioral (inferred) time frame of this study and the neural (direct) time frame of the ERP studies might lead to differences in the results. 
There might be several aspects to the differences between classification image and ERP estimates of attentional temporal dynamics (further discussed in the Discussion section). This study assessed one aspect: the blurring of the signal by the visual system, due to the temporal properties of basic visual processes. This may be characterized by the response to a single impulse, or the impulse response function, of the visual system (see Figure 2, middle graph; for details, see 1). The particular impulse response function shown in Figure 2 is a model proposed by Watson (1986). It is believed to be a good characterization of the visual system's temporal properties, as it accounts well for several studies on human temporal responses (De Lange, 1958; Robson, 1966; Roufs & Blommaert, 1981). 
Figure 2
 
Responses to a single impulse as a result of an impulse response function and an attentional weighting function. The pulse is convolved with the impulse response function from Watson (1986; τ = time delay constant = 6 ms) and then weighted with an attentional weighting function. The temporal blurring from the impulse response function leads to information use prior to the beginning of the selective weighting function (attentional latency). This is demonstrated in the bottom graph, with the impulse, the impulse response, and the hypothesized selective attention functions presented on the same graph, showing the overlap between the impulse response and the hypothesized selective attention function.
The difficulty that this temporal blurring presents in interpreting the classification image results is illustrated in Figure 2. A brief flash of light (impulse, Figure 2, left) enters the visual system and is transformed, as described by (a convolution with) the impulse response function (Figure 2, middle). A later mechanism selects information from the transformed signal (Figure 2, right); this later selection is proposed to be the effect of selective attention. (We have assumed a rectangular waveform for the temporal profile of attention in this figure only for expositional purposes. A more realistic temporal profile for attention is in Figures 4 and 5.) As shown in the bottom middle graph of Figure 2, there is potential overlap between the impulse response and the attentional window. Thus, although the hypothesized attentional selection of information could begin after cue onset, the information used within that attentional window could include information at the time of the impulse (t = 0) or before selection because of the temporal blurring of the visual information. 
Figures 3 and 4 expand the circumstances in Figure 2 from a single impulse to the situation in this study, in which the stimuli were sequences of intervals at different intensities. Figure 3 shows a typical linear systems approach connecting behavioral responses (judgments of “yes” and “no”) and a classification image time series without the effect of the impulse response (Ahumada, 2002; Murray et al., 2002). In this figure, the 300-ms stimulus interval for a single trial g(t) is denoted as a series of 8 frames (37.5 ms/frame), with each frame represented as a single value for that frame. The single value for each frame corresponds to the summary statistic used in this study, the integral of the stimulus (or classification image) over a given area (within two standard deviations of the center of the Gaussian signal). Because of the added external image noise, these single summary values vary randomly from frame to frame. For the observer judgments (upper part of Figure 3), it is assumed that a linear combination of stimulus weights with the stimulus (dot product) leads to a decision variable (λ), which is then compared to a criterion for a judgment of “yes” (λ > crit) or “no” (λ < crit). For the classification images, the same calculation of the decision variable and the decision rule leads to a sorting of stimuli by response outcome (in this case, no signal trials leading to “yes” responses, or false alarms). The classification image from the false alarms, or the average of the false alarm noise fields, is an estimate of the stimulus weights used to calculate the decision variable (λ). 
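The scheme in Figure 3 can be sketched with scalars standing in for each frame's area integral. The eight frame weights below are hypothetical, not the study's estimates; the point is only that the frame-by-frame classification image (false alarms minus correct rejections) comes out proportional to the weighting function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weights for the 8 stimulus frames (invented for illustration).
weights = np.array([0.0, 0.2, 0.6, 1.0, 1.0, 0.8, 0.5, 0.3])
n_trials, criterion = 50000, 0.0

# Each no-signal trial: 8 scalar noise values (one area integral per frame).
noise = rng.normal(0.0, 1.0, (n_trials, len(weights)))
lam = noise @ weights              # decision variable (dot product with weights)
says_yes = lam > criterion         # false alarms on these no-signal trials

# Per-frame classification image: mean FA noise minus mean CR noise.
ci = noise[says_yes].mean(axis=0) - noise[~says_yes].mean(axis=0)

# The classification time series is proportional to the weighting function.
r = np.corrcoef(ci, weights)[0, 1]
```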
Figure 3
 
Modeling the generation of behavioral responses and classification images with standard classification images. Upper—behavioral responses. The dot product of the stimulus (g(t)) and the stimulus weights leads to a decision variable (λ), which is compared to a criterion for a judgment of signal presence or signal absence. Lower—classification images. For a series of no signal trials, the same decision variable as above (λ) leads to a judgment of signal presence or signal absence. If λ > crit, the observer judges the signal to be present, which is an erroneous false alarm for a no signal trial. The noise fields leading to a false alarm are averaged to create the classification image. The resulting classification image is an estimate of the stimulus weights (through time) leading to the calculation of the decision variable (λ).
Figure 4
 
Modeling the generation of behavioral responses and classification images with the addition of the impulse response function. Upper—behavioral responses. The stimulus (g(t)) is convolved with the impulse response function (h(t)), leading to an internal response. The dot product of the internal response with the cue-selective weighting function (w(t)) leads to a decision variable (λ), which is compared to a criterion for a judgment of signal presence or signal absence (last step not shown in this figure; see Figure 3). Lower—classification images. The calculation of the classification image from the decision variable (λ) is the same as in Figure 3; see the Figure 3 caption for details. As in Figure 3, the resulting classification image is an estimate of the stimulus weights (through time) leading to the calculation of the decision variable (λ). In this case, this is the convolution of the impulse response function (h(t)) and the cue-selective weighting function (w(t)).
Figure 4 shows the same connection between a behavioral response and the classification image time series in Figure 3, except that it includes the effect of a temporal impulse response function upon the visual information. The upper part of Figure 4 shows how a behavioral response is generated within this framework. First, the stimulus (g(t)) is transformed into an internal response by convolving the stimulus with the impulse response function (h(t)). The decision variable (λ) arises from the dot product of the weight vector (w(t)) and the convolved response, which is then compared to a criterion for a judgment of “yes” (λ > crit) or “no” (λ < crit; not shown in Figure 4). In this case, however, the weights represent the effect of a selective attentional mechanism operating at the cued location. (For the uncued locations, it is assumed that this weighting function is zero throughout, if selection is optimal.) We wish to recover this weighting function at the cued location as our estimate of the time course of selective attention, given the constraint of an impulse response function. The calculation of the classification image depicted in the bottom of Figure 4 is the same as in Figure 3. The difference is that the classification image time series now represents the convolution of the impulse response function (h(t)) with the cue-selective weighting function (w(t)). Therefore, to recover w(t), we must deconvolve the classification images with the impulse response function (h(t)), as represented in Figure 5 (for details, see Methods section). 
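The convolution and deconvolution steps of Figures 4 and 5 can be demonstrated with toy functions: a made-up low-pass kernel standing in for h(t) and a rectangular window standing in for w(t), neither taken from the study. In the noise-free case, division in the frequency domain recovers w(t) exactly.

```python
import numpy as np

# Toy stand-ins (illustrative only): h is a gamma-like low-pass kernel;
# w is a rectangular attentional window open from t = 20 to t = 50.
t = np.arange(0.0, 64.0, 1.0)                 # arbitrary time units
h = (t / 10.0) * np.exp(-t / 10.0)
h /= h.sum()                                  # unit-sum kernel
w = np.where((t >= 20) & (t < 50), 1.0, 0.0)

# The measured classification series is modeled as h convolved with w.
ci = np.convolve(h, w)                        # full linear convolution

# Deconvolution by spectral division (exact here because there is no noise
# and the FFT length covers the full linear convolution).
n_fft = 128                                   # >= len(h) + len(w) - 1
w_hat = np.fft.ifft(np.fft.fft(ci, n_fft) / np.fft.fft(h, n_fft)).real[: len(t)]
err = np.max(np.abs(w_hat - w))
```

With real, noisy classification images, raw spectral division amplifies high-frequency noise, so a regularized deconvolution or a parametric fit of w(t) is the practical route.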
Figure 5
 
Recovering an estimate of the cue-selective weighting function (w(t)) from the classification image. The classification image is deconvolved with the impulse response function (h(t)).
We present a method to deconvolve the impulse response of early vision and thus derive the temporal dynamics of attentional information use isolated from these effects of temporal blurring, using the model proposed by Watson (1986) for the impulse response function. This approach does not represent a complete characterization of all the potential issues in comparing the classification image results to other previous results. Instead, we chose this approach to estimate the potential effect of the impulse response function upon the estimate of the attentional temporal profile from the classification images. We felt that accounting for the effect of the impulse response function was advisable, as it represents an accepted inherent property of the visual system and as it has been relatively well characterized. 
The attentional weighting function (w(t)) operates as a scalar upon the information available at the potential signal locations. Conceptually, it is specifically derived from a Bayesian ideal observer for a cueing task (Eckstein et al., 2002; Shimozaki, Eckstein, & Abbey, 2003), in which the optimal decision is determined by the scalar weighting of the likelihoods by the prior probabilities, or the cue validities. More in line with the potential variation in attentional weights in this study, a simple deviation from the ideal observer has been proposed so that the weights are free to vary (the weighted likelihood model; Eckstein et al., 2002; Shimozaki et al., 2003). There are other examples of visual attention modeled as a scalar weighting function. For example, Kinchla (1974, 1977) and Kinchla, Chen, and Evert (1995) proposed a model for cueing effects that is similar to the ideal observer (for a comparison, see Shimozaki et al., 2003), and Sperling and Weichselgartner (1995) proposed an attentional weighting function for various time-related measures, such as reaction times (Shulman et al., 1979), choice reaction times (Tsal, 1983), and threshold time durations (Lyon, 1987). The study by Eckstein et al. (2002) showed that both an ideal observer and a tuning model in which attention tunes a filter or template at the attended location (Spitzer, Desimone, & Moran, 1988; Yeshurun & Carrasco, 1998, 1999) may be behaviorally modeled as an attentional weighting function. Therefore, an attentional weighting function may be viewed as a relatively neutral and general description of selective attention. 
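The weighting idea can be made concrete with a small simulation. The sketch below assumes a yes/no cueing task with eight locations and a signal that only ever appears at the cued location; the ideal weights are 1 at the cued location and 0 elsewhere, while the "leaky" weights (0.3 at uncued locations) are an invented example of suboptimal selection.

```python
import numpy as np

rng = np.random.default_rng(2)

n_loc, d_prime = 8, 1.0
weights_ideal = np.array([1.0] + [0.0] * (n_loc - 1))  # cued location first
weights_leaky = np.array([1.0] + [0.3] * (n_loc - 1))  # assumed leakage

def yes_no_accuracy(weights, n_trials=20000):
    """Proportion correct for an observer taking a weighted sum of location responses."""
    present = rng.random(n_trials) < 0.5
    resp = rng.normal(0.0, 1.0, (n_trials, n_loc))
    resp[:, 0] += d_prime * present           # signal only at the cued location
    said_yes = resp @ weights > d_prime / 2   # unbiased criterion
    return np.mean(said_yes == present)

acc_ideal = yes_no_accuracy(weights_ideal)
acc_leaky = yes_no_accuracy(weights_leaky)   # extra noise from uncued locations
```

Nonzero uncued weights add noise without adding signal, so accuracy drops; this is the behavioral signature of imperfect selection in the weighted likelihood framework.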
Methods
Procedure
Figure 1 depicts a single trial of the study. Observers participated in a yes/no contrast discrimination task of a small blurry disk (a two-dimensional Gaussian) presented for 300 ms. The signal disk appeared on half the trials at one of eight locations 4.6° from a central fixation point, and observers judged the presence of the disk with a keypress. Throughout the stimulus presentation, a square cue (2.5°) at one of the eight locations indicated the possible signal location with 100% validity. That is, if the signal appeared, it would only appear at the cued location. Feedback on correctness of the observers' judgments was given after each trial. Observers were instructed to fixate the center throughout the experiment. 
The Gaussian signal had a spatial standard deviation of 8.2 min and a peak contrast 16.2% greater than the mean luminance (25.1 cd/m2). At each nonsignal location, a low-contrast version of the signal appeared (+11.7% peak contrast). This is often called a “pedestal,” and thus the task may be described as a contrast discrimination (of the pedestal vs. the signal plus pedestal). The stimulus presentation was divided into eight continuous frames (37.5 ms/frame, or 26.7 Hz) of Gaussian-distributed white luminance noise added to the pixels of the eight potential signal locations (SD = 4.90 cd/m2), with a new independent sample of noise added in each frame. Also, separate experiments were run with or without a masking display; when present, the mask was composed of high-contrast Gaussian noise (SD = 7.21 cd/m2) and was presented for 100 ms after the stimulus display. The yes/no judgments were untimed, and observers were prompted to respond after the mask display (in the mask study) or after the last stimulus frame (in the no mask study). 
Coauthor K.C. participated in both studies (mask and no mask), and D.V. and L.L. participated in one study each (D.V. in the mask study and L.L. in the no mask study). There were 9,000 total trials per observer in the mask study and 8,000 total trials for the no mask study. 
Stimuli were presented on a monochrome monitor (32.51 × 24.38 cm, 1,024 × 768 pixels, 53.3 Hz, Image Systems Corp., Minnetonka, MN) at a distance of 50 cm, with each pixel equal to 0.034° at this distance. Luminance calibrations were performed with software and equipment from Dome Imaging Systems, Inc. (Luminance Calibration System, Waltham, MA). 
Data analysis
Behavioral data
Behavioral data (yes/no responses) were analyzed with standard equal-variance Signal Detection Theory methods (Green & Swets, 1966), with accuracy expressed as d′, the common measure of sensitivity. Human performance may be compared to that of an ideal observer that uses all possible information and perfectly integrates across the different noise samples. In Gaussian luminance noise, the image signal-to-noise ratio (SNR, which corresponds to the ideal observer d′ for a single frame) can be calculated as follows (Burgess, Wagner, Jennings, & Barlow, 1981): 
$$\mathrm{SNR} = \frac{\sqrt{E}}{\sigma_{\mathrm{pixel}}},$$
(1)
where $E = \mathbf{s}^{t}\mathbf{s}$ is the signal energy (the sum of the squared signal values) and $\sigma_{\mathrm{pixel}}$ is the standard deviation of the pixel luminance noise.
For f frames (f = 8 in this study), the predicted d′ for the ideal observer (IO) is given by (Green & Swets, 1966)
$$d'_{IO} = \sqrt{f}\,\mathrm{SNR}.$$
(2)
 
The standard measure of comparison of the human observer to the ideal observer is efficiency (Barlow, 1956), defined as 
$$\mathrm{Efficiency} = \left(\frac{d'_{\mathrm{human}}}{d'_{IO}}\right)^{2}.$$
(3)
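A worked example of Equations 1–3 with invented numbers (the study's actual contrast and noise values are given in the Procedure section):

```python
import numpy as np

# Illustrative Gaussian signal on a 49 x 49 pixel grid; the amplitude,
# width, noise level, and human d' below are invented for this example.
x = np.arange(-24, 25)
xx, yy = np.meshgrid(x, x)
s = 0.1 * np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))   # signal s

sigma_pixel = 0.5                    # std of the added pixel noise
E = np.sum(s**2)                     # signal energy, E = s's
snr = np.sqrt(E) / sigma_pixel       # Equation 1: ideal d' for one frame

f = 8                                # number of frames
d_io = np.sqrt(f) * snr              # Equation 2: ideal observer d'

d_human = 0.5 * d_io                 # assumed human d', for illustration only
efficiency = (d_human / d_io) ** 2   # Equation 3
```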
 
Classification image
To calculate a classification image for a single location and a single time frame, we pooled and averaged all the external noise images for that location and time frame within each response outcome. In this study, we combined the false alarms and correct rejections from the “no signal” trials using a combination rule proposed by Ahumada (2002) and Murray et al. (2002). We chose not to combine across the signal trials (hits and misses), as certain nonlinearities (i.e., uncertainty about the exact signal location) can lead to a bias toward the signal shape in the signal present trials (Ahumada, 2002; see Eckstein et al., 2002, 2004; Shimozaki et al., 2005). 
For each time frame i and location l, where $N_{fa,i,j,l}$ and $N_{cr,i,k,l}$ are the noise fields from the $n_{fa}$ false alarm and $n_{cr}$ correct rejection trials,
$$CI_{i,l} = CI_{fa,i,l} - CI_{cr,i,l} = \frac{\sum_{j=1}^{n_{fa}} N_{fa,i,j,l}}{n_{fa}} - \frac{\sum_{k=1}^{n_{cr}} N_{cr,i,k,l}}{n_{cr}}.$$
(4)
 
Note that a separate classification image was created for each location, with one of the locations being the cued location and the other seven being the uncued locations. All uncued classification images were calculated in their positions relative to the cued location, and not their absolute position. Also, a separate classification image was created for each time frame. 
The Gaussian signals were rotationally invariant (circularly symmetric); thus, to reduce the two-dimensional classification images to one-dimensional representations, the classification images were averaged across polar angle for the same radial distance from the signal center (“radial averages”; Abbey & Eckstein, 2002; Eckstein et al., 2002, 2004). Further analyses were restricted to radii within two standard deviations of the Gaussian signal center. 
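The radial-average reduction can be sketched as follows, with random values standing in for a real two-dimensional classification image:

```python
import numpy as np

rng = np.random.default_rng(3)

size, sigma = 33, 4.0                    # illustrative image size and signal sd
ci = rng.normal(0.0, 1.0, (size, size))  # placeholder 2-D classification image
c = size // 2                            # signal centre

yy, xx = np.mgrid[0:size, 0:size]
radius = np.rint(np.sqrt((xx - c) ** 2 + (yy - c) ** 2)).astype(int)

# Average all pixels sharing the same (rounded) radius, keeping radii
# out to two signal standard deviations.
max_r = int(2 * sigma)
radial_avg = np.array([ci[radius == r].mean() for r in range(max_r + 1)])
```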
Statistical tests on the classification images were performed with the two-sample Hotelling T² statistic (Harris, 1985; also Abbey & Eckstein, 2002; Eckstein et al., 2002, 2004; Shimozaki et al., 2005), which is the multivariate equivalent of the t statistic. The radial averaged classification images were compared against a null hypothesis of zero mean values and a variance of the added external noise. The two-sample Hotelling T² statistic is given by the following equation:
$$T^{2} = \frac{n_{1}n_{0}}{n_{1}+n_{0}}\,[\mathbf{v}_{1}-\mathbf{v}_{0}]^{t}\,\mathbf{K}^{-1}\,[\mathbf{v}_{1}-\mathbf{v}_{0}],$$
(5)
 
where
$\mathbf{v}_{0}$ = a vector containing the null hypothesis classification image (all zeros);
$\mathbf{v}_{1}$ = a vector containing the observed radial average classification image;
$n_{0}$, $n_{1}$ = the number of observations for each classification image (in this case, we assume that $n_{0} = n_{1}$);
$\mathbf{K}^{-1}$ = the inverse of the pooled covariance matrix for $\mathbf{v}_{0}$ and $\mathbf{v}_{1}$.
The independent two-sample Hotelling T² may be transformed into an F statistic (Equation 6) with p (the vector length of $\mathbf{v}_{0}$ and $\mathbf{v}_{1}$) degrees of freedom in the numerator and $n_{0} + n_{1} - p - 1$ degrees of freedom in the denominator,
$$F = \frac{n_{0}+n_{1}-p-1}{p(n_{0}+n_{1}-2)}\,T^{2}.$$
(6)
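Equations 5 and 6 can be sketched on simulated data. The profiles below are invented, with the "observed" samples given a deliberately shifted mean so that the test should reject the null; the resulting F would be referred to an F distribution with (p, n0 + n1 − p − 1) degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(4)

p = 8                          # length of the radial-average profile
n0 = n1 = 500
null_samples = rng.normal(0.0, 1.0, (n0, p))   # zero-mean null data
obs_samples = rng.normal(0.3, 1.0, (n1, p))    # shifted mean: a real effect

v0 = null_samples.mean(axis=0)
v1 = obs_samples.mean(axis=0)
diff = v1 - v0

# Pooled covariance matrix K.
K = ((n0 - 1) * np.cov(null_samples.T) +
     (n1 - 1) * np.cov(obs_samples.T)) / (n0 + n1 - 2)

T2 = (n1 * n0) / (n1 + n0) * diff @ np.linalg.inv(K) @ diff   # Equation 5
F = (n0 + n1 - p - 1) / (p * (n0 + n1 - 2)) * T2              # Equation 6
```

For these sample sizes the α = .05 critical value of F(8, 991) is roughly 2, so an F far above that indicates a classification image reliably different from zero.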
 
As a summary statistic for each classification image, we calculated the integral of the raw two-dimensional classification image values over the area within two standard deviations of the center of the Gaussian signal. For frame i and location l, with raw classification image $CI_{i,l}$, σ the standard deviation of the Gaussian signal, and pixel position (x, y) in the raw classification image,
$$\mathrm{AreaIntegral}_{i,l} = \sum_{x}\sum_{y} CI_{i,l,x,y} \quad \text{for}\ \sqrt{(x-x_{center})^{2}+(y-y_{center})^{2}} < 2\sigma.$$
(7)
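Equation 7 amounts to summing the classification image inside a disk of radius 2σ; a minimal sketch with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(5)

size, sigma = 33, 4.0                    # illustrative values
ci = rng.normal(0.0, 1.0, (size, size))  # placeholder classification image
xc = yc = size // 2                      # signal centre

yy, xx = np.mgrid[0:size, 0:size]
inside = np.sqrt((xx - xc) ** 2 + (yy - yc) ** 2) < 2 * sigma

area_integral = ci[inside].sum()         # Equation 7
```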
 
Deconvolution of the classification time series with the impulse response function
Figure 4 illustrates the hypothesized effect of the temporal blurring caused by the impulse response function (h(t)) upon the measured area integrals of the classification image time series. As discussed earlier, the classification time series is assumed to be the convolution of the impulse response function (h(t)) and the cue-selective weighting function (w(t)). As we were interested in the cue-selective weighting function (w(t)) as the effect of selective attention in this task, we wished to separate the hypothesized effect of the impulse response function (h(t)) upon the classification image time series to recover w(t). This is illustrated in Figure 5 as a deconvolution of the measured classification image time series with the impulse response function. 
For the impulse response function, we employed the quantitative model proposed by Watson (1986) based on a formulation of the impulse response as a cascade of leaky integrators (for details, see Appendix A). He found a set of parameters for his model that fit several studies of temporal sensitivity well (De Lange, 1958; Robson, 1966; Roufs & Blommaert, 1981), and we incorporated those parameter values for the current model. One parameter that varied somewhat in Watson's fits was τ, a parameter varying the time delay of the impulse response function, with larger values indicating longer delays and slower responses (see Figure 6). Therefore, in our fits, we chose to vary τ from 4 to 8 ms, covering the range of τ values that Watson used in his model fits (4.3–6.22 ms). Note that the τ values represent the time constants of a single leaky integrator, and that the impulse response functions are composed of cascades of several leaky integrators. Thus, the peak response times (approximately 40–72 ms) are considerably longer than 4–8 ms.
Figure 6
 
Family of impulse response functions from Watson (1986) with time constants (τ) from 4 to 8 ms. See text for impulse response function details and parameters.
Let g(t) be the temporal stimulus integrated over the disk. Because this function will be constant over the duration of a frame, Δt, we can write the stimulus as the sum of basis functions,
\[
g(t) = \sum_{i=1}^{N} g_i\,\phi_i(t),
\]
(8)
where  
\[
\phi_i(t) = \begin{cases} 1 & \text{if } (i-1)\Delta t < t \le i\Delta t \\ 0 & \text{otherwise.} \end{cases}
\]
(9)
 
As a model of neural processing, we will assume that the external stimulus is processed through a temporal response function and then is weighted and integrated to form a decision variable. If h(t) is the temporal response of the visual system, the internal response of the visual system is
\[
r(t) = \int_0^{t} dt'\,g(t')\,h(t - t').
\]
(10)
 
The internal decision variable is computed by weighting this response by the function w(t) and integrating, with stochastic noise (ε):
\[
\lambda = \int_0^{\infty} dt\,w(t)\,r(t) + \varepsilon.
\]
(11)
 
Using Equations 8 and 10, we can rewrite Equation 11 as  
\[
\lambda = \int_0^{\infty} dt\,w(t) \int_0^{t} dt'\,h(t - t') \sum_{i=1}^{N} g_i\,\phi_i(t') + \varepsilon
= \sum_{i=1}^{N} g_i \left[ \int_0^{\infty} dt\,w(t) \int_0^{t} dt'\,h(t - t')\,\phi_i(t') \right] + \varepsilon.
\]
(12)
 
Note that Equation 12 shows how the internal weighting function is expressed as a stimulus weight, which is precisely what is measured in the classification time series. 
If c_i is the classification weight in frame i, then
\[
c_i = \int_0^{\infty} dt\,w(t) \int_0^{t} dt'\,h(t - t')\,\phi_i(t') + n_i,
\]
(13)
where n_i is the error (or noise) in the classification weight estimate. This equation shows a linear relation between the classification time series and the internal weighting, mediated by the temporal response function h. We look at approaches to inverting this linear relationship to solve for the internal weighting function w(t). Because the internal weighting function is specified as a continuously defined function and we have only a finite number of measured classification weights, we will have to make some smoothness assumptions for the problem to be tractable.
For a smooth weighting function, we can discretize the time variable into time points t_m = mΔτ, where Δτ is a time step presumed to be much smaller than Δt in Equation 9. This allows us to rewrite Equation 13 as
\[
c_i = \sum_{m=1}^{M} w_m \sum_{j=1}^{m} h(t_m - t_j)\,\phi_i(t_j) + n_i.
\]
(14)
 
This can in turn be written using matrix–vector notation as  
\[
c = A\,w + n,
\]
(15)
where c is an N-element vector of classification weights, w is an M-element vector of internal weights, and n is an N-element vector of classification weight errors. The nonsquare system matrix A is defined by Equation 14 as
\[
A_{i,m} = \sum_{j=1}^{m} h(t_m - t_j)\,\phi_i(t_j).
\]
(16)
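The matrix A of Equation 16 can be assembled numerically by placing each fine-grid sample t_j = jΔτ into the stimulus frame whose boxcar φ_i covers it (Equation 9). The following is a sketch under our own naming, with the impulse response h passed as a callable:

```python
import numpy as np

def system_matrix(h, N, M, dt, dtau):
    """Build the N x M matrix A of Equation 16:
    A[i, m] = sum_{j=1..m} h(t_m - t_j) * phi_i(t_j),
    with t_j = j * dtau and phi_i(t) = 1 for (i-1)*dt < t <= i*dt."""
    t = np.arange(1, M + 1) * dtau           # fine time grid t_1 .. t_M
    A = np.zeros((N, M))
    for m in range(M):
        for j in range(m + 1):               # 0-based j covers j = 1..m
            i = int(np.ceil(t[j] / dt)) - 1  # frame whose boxcar contains t_j
            if 0 <= i < N:
                A[i, m] += h(t[m] - t[j])
    return A
```

As a sanity check, with h ≡ 1 and Δτ = Δt, each column m accumulates one unit for every frame i ≤ m, so A is an upper-triangular matrix of ones.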
 
There are two problems to be addressed in solving Equation 15: the system is corrupted by measurement noise (encapsulated in n), and it is underdetermined because Δτ is presumed to be much smaller than Δt, leading to more unknowns than equations. We approach both of these problems through regularized least squares (RLS), also known as ridge regression (Golub & Van Loan, 1989; Neumaier, 1998).
RLS produces estimates, ŵ, of the elements of w, by the formula
\[
\hat{w} = (A^t \Sigma A + \beta R)^{-1} A^t c,
\]
(17)
where Σ is the variance–covariance matrix of errors in the classification weights, R is a regularizing matrix that stabilizes the inverse, and β > 0 is a user-defined parameter that adjusts the strength of the regularizing matrix. For this application, we use a minimum norm approach in which R is set to be the identity matrix. The parameter β is set so that the condition number of the inverse in Equation 17 is never greater than 100.
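A sketch of the RLS estimate of Equation 17 with R = I. The text states only that β keeps the condition number of the inverted matrix at or below 100; the geometric search below is our own assumption about how that bound could be enforced, and all names are hypothetical:

```python
import numpy as np

def rls_estimate(A, c, Sigma=None, max_cond=100.0):
    """Regularized least squares (Equation 17):
    w_hat = (A' Sigma A + beta I)^(-1) A' c, with beta grown until the
    matrix being inverted has condition number <= max_cond."""
    n, m = A.shape
    if Sigma is None:
        Sigma = np.eye(n)            # treat weight errors as i.i.d.
    G = A.T @ Sigma @ A
    beta = 1e-12
    while np.linalg.cond(G + beta * np.eye(m)) > max_cond:
        beta *= 10.0                 # geometric search for the bound
    w_hat = np.linalg.solve(G + beta * np.eye(m), A.T @ c)
    return w_hat, beta
```

When A is well conditioned, the condition-number bound is already satisfied at a negligible β and the estimate reduces to ordinary least squares.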
Results
Behavioral and classification image results
Table 1 summarizes the behavioral performance results. Overall, the observers' results were similar to each other, with d′s close to 1.5 and efficiencies of about 0.07–0.10. Figure 7 depicts the raw classification images across the eight frames of the stimulus duration (the classification movies) for the cued location for all observers. On average in the mask study, 1,325 false alarm trials and 3,175 correct rejection trials (average false alarm rate = 0.295) were used to construct the classification movie for each observer. For the no mask study, these average numbers were 1,136 false alarm trials and 2,864 correct rejection trials (average false alarm rate = 0.284). Note that the classification images tend to mimic the signal profile, and the brightness of the center regions appears to increase and then decrease with time.
Table 1
 
Human behavioral performance. Ideal observer d′ = 4.832 (from Equation 2).
Condition    d′      Percent correct    Efficiency    False alarm rate
Mask
  K.C.       1.34    74.8               0.0770        0.281
  L.L.       1.46    75.4               0.0917        0.308
No mask
  D.V.       1.26    73.0               0.0683        0.313
  K.C.       1.51    77.0               0.0980        0.255
Figure 7
 
Raw classification images by frame (classification movies) for the cued location for all observers. At the bottom is a high-contrast version of the signal (contrast higher than experimental conditions).
Figure 8 depicts the radial averages of the same classification images of the cued location across the eight frames in Figure 7, with the abscissas indicating the distance from the center of the stimulus, and the ordinate giving the amplitudes of the classification images. The dashed frames indicate those radially averaged classification images that were significantly different from the null (all zeroes) classification image, and Table 2 summarizes these Hotelling T 2 tests for significance, Bonferroni-corrected for number of frames. The first significant (nonzero) cued location classification image after stimulus onset occurred in the first time frame (0–37.5 ms), except for D.V., who had the first significant nonzero classification image after stimulus onset in the second frame (37.5–75 ms). Also, as suggested by the raw classification images, for all observers the amplitudes of the classification images tended to decrease in the last frames. 
Table 2
 
Hotelling T^2 results for classification images. *p value < .01, Bonferroni-corrected for number of frames. df_num = 9. For mask, df_den = 8,890. For no mask, df_den = 7,990.
Mask
Frame     K.C.: T^2   F       p           L.L.: T^2   F       p
1         57.10       6.34    <.0001*     55.19       6.13    <.0001*
2         170.55      18.93   <.0001*     46.51       5.16    <.0001*
3         140.79      15.63   <.0001*     193.71      21.50   <.0001*
4         92.79       10.30   <.0001*     145.03      16.10   <.0001*
5         28.89       3.21    .0007*      40.77       4.53    <.0001*
6         27.93       3.10    .0010*      44.74       4.97    <.0001*
7         50.87       5.65    <.0001*     19.79       2.20    .0195
8         6.90        0.77    .6487       5.28        0.59    .8097

No mask
Frame     D.V.: T^2   F       p           K.C.: T^2   F       p
1         22.15       2.46    .0085       31.30       3.48    .0003*
2         63.95       7.10    <.0001*     166.19      18.45   <.0001*
3         64.37       7.15    <.0001*     164.09      18.21   <.0001*
4         124.90      13.86   <.0001*     85.33       9.47    <.0001*
5         79.23       8.79    <.0001*     63.77       7.08    <.0001*
6         33.19       3.68    .0001*      13.29       1.48    .1506
7         46.46       5.16    <.0001*     33.28       3.69    .0001*
8         22.56       2.50    .0074       7.68        0.85    .5671
Figure 8a, 8b
 
The radial averages of the classification images in Figure 7. The dashed lines indicate the shape of the Gaussian signal. Error bars represent the standard errors of the mean for each point (radius). Frames in dashed lines indicate significant (nonzero) classification images, p < .01, Bonferroni-corrected for number of frames. For each observer, the upper line represents the first four frames (0–150 ms), and the lower line represents the last four frames (150–300 ms). (A) Mask study. (B) No mask study.
Across all observers and experiments, for the uncued locations there were relatively few classification images significantly different from the null classification image (14.3%), and no discernable pattern could be found across the frames or the locations. This suggests that all observers were generally able to use the cue to ignore the stimuli at the uncued locations, the appropriate strategy given that the signal had no probability of appearing at those locations. Figures 9 and 10 summarize a subset of the radial averages for the uncued classification images. Figure 9 presents the radial averages for the seven uncued locations for the first frame (as a function of the clockwise position relative to the cued location), and Figure 10 presents the radial averages for the two uncued positions adjacent to the cued location across all eight frames. The results of the cued and the uncued classification images suggest that the first use of information within the first 37.5 ms (for K.C. and L.L.) or 75 ms (for D.V.) was cue selective, or in other words specific to the cued location. 
Figure 9
 
The radial averages for the first frame for all the uncued locations (by clockwise positions relative to the cued location).
Figure 10
 
The radial averages for the uncued locations adjacent to the cued location, by frame. Clockwise—adjacent uncued location clockwise from cued location. Counterclockwise—adjacent uncued location counterclockwise from cued location.
Area integrals of classification images as a function of time
Figure 11 presents the time series of the area integrals of the classification images, and Table 3 summarizes the results of single-sample t tests for significant (nonzero) area integrals (Bonferroni adjusted for the number of frames). Figure 11 clearly demonstrates that the magnitudes of the time series rose to a peak (at 75–150 ms), and that diminished magnitudes were found at the end of the stimulus presentation (187.5–225 ms after stimulus onset). For the observers excluding D.V., the area integrals were significantly different from zero in the first frame, as in the radially averaged classification images ( Figure 8 and Table 2). For D.V., the area integrals were not significant for the first three frames, similar to his results for the radial averages that also indicated a longer delay for a cue-selective information use. Both with and without the mask, K.C. tended to have peaks earlier in the stimulus duration (about 75 ms) than L.L. and D.V. (about 150 ms). 
Table 3
 
Area integrals. AI—area integral. *p value < .01, Bonferroni-corrected for number of frames. For mask, df = 4,449. For no mask, df = 3,999.
Mask
Frame     K.C.: AI   SE     t       p           L.L.: AI   SE     t       p
1         41.47      8.52   4.87    <.0001*     47.97      8.50   5.64    <.0001*
2         87.16      8.57   10.17   <.0001*     32.47      8.50   3.82    .0001*
3         89.72      8.50   10.55   <.0001*     78.55      8.51   9.24    <.0001*
4         57.31      8.41   6.81    <.0001*     79.45      8.55   9.30    <.0001*
5         30.77      8.50   3.62    .0003*      31.13      8.45   3.68    .0002*
6         31.74      8.46   3.75    .0002*      51.90      8.46   6.14    <.0001*
7         4.65       8.41   0.55    .5804       33.09      8.48   3.90    .0001*
8         1.59       8.48   0.19    .8516       −1.61      8.53   −0.19   .8504

No mask
Frame     D.V.: AI   SE     t       p           K.C.: AI   SE     t       p
1         20.96      8.49   2.47    .0135       41.88      8.23   5.09    <.0001*
2         20.65      8.52   2.42    .0154       90.45      8.30   10.90   <.0001*
3         20.32      8.50   2.39    .0169       97.12      8.27   11.75   <.0001*
4         72.11      8.52   8.46    <.0001*     53.55      8.20   6.53    <.0001*
5         33.29      8.54   3.80    .0001*      36.24      8.23   4.40    <.0001*
6         23.45      8.49   2.76    .0057*      14.13      8.21   1.72    .0854
7         30.10      8.41   3.58    .0003*      19.40      8.17   2.37    .0176
8         13.63      8.46   1.61    .1072       10.32      8.24   1.25    .2104
Figure 11a, 11b
 
Area integrals (two standard deviations from the center) for the cued location, by frame. (A) Separated by individual. (B) All observers on the same graph. Solid lines—mask study. Dashed lines—no mask study. Circle—K. C., mask and no mask. Square—L. L., mask. Diamond—D. V., no mask. Error bars represent the standard errors of the mean.
As with the radially averaged classification images, relatively few area integrals for the uncued locations significantly differed from zero (9.82% of all the uncued area integrals), and no discernable pattern could be found. Figures 12 and 13 summarize the area integrals for the same subset of uncued classification images as in Figures 9 and 10: all the uncued locations for the first frame (Figures 9 and 12) and the uncued locations adjacent to the cued location (Figures 10 and 13).
Figure 12
 
Area integrals (two standard deviations from the center) for the first frame of all uncued locations (by clockwise positions relative to the cued location). Error bars represent the standard errors of the mean.
Figure 13
 
Area integrals (two standard deviations from the center) for the adjacent uncued locations, by frame. Clockwise (downward-pointing triangle)—adjacent uncued location clockwise from cued location. Counterclockwise (upward-pointing triangle)—adjacent uncued location counterclockwise from cued location. Error bars represent the standard errors of the mean.
Results from separating attentional selective weighting from the temporal impulse response
The evidence from the time series of the area integrals of the classification images indicates that information from the first 37.5 ms can be selectively used at the cued location. To assess the effect of temporal blurring by an impulse response function, we deconvolved the time series for each observer with the impulse response functions described in the methods (as illustrated in Figure 5). Figure 14 gives the results of the estimations for the cue-selective weighting function ( w( t)) from the deconvolution of the time series of the area integrals of the classification images for the observers. 
Figure 14a, 14b
 
Results of model fits to area integrals from the cued locations (Figure 11). (A) Time delay constants (τ) from 4 to 8 ms. (B) Time delay constants (τ) of 5 and 8 ms only, with error bars representing the standard errors of the mean.
Panel A of Figure 14 gives these estimates for each value of τ tested from 4 to 8 ms. As expected, the larger values of τ (or longer delays and slower responses) lead to larger shifts in time of the weighting functions from the area integrals. As seen in Panel A, the fits for τ = 4 ms led to systematic harmonic deviations from the classification images (“ringing”). This suggests that τ = 4 ms was an inappropriate delay to represent the current classification image data, possibly because τ = 4 ms was too short, or because of the noise associated with these particular data. 
In Figure 14B, we present only the extreme values of τ with the estimated errors; because of the ringing for τ = 4 ms, we chose to present τ = 5 ms instead of τ = 4 ms. It can be seen that, after an initial delay, the weighting function for τ = 5 ms initially rises more quickly than the classification image area integrals, whereas the weighting function for τ = 8 ms rises at the same rate as the area integrals. From the standard errors of the area integrals and the estimated standard errors of the weighting function, two-sample independent t tests were calculated for significant differences of the weighting functions from zero, Bonferroni-adjusted for the number of points in the fitted weighting function. This analysis found that the weighting functions significantly differed from zero from 9 to 17 ms for τ = 5 ms and from 13 to 28 ms for τ = 8 ms. Thus, it appears that the model with the impulse response function could account for at least part, but not all, of the differences between the results from the classification images and the results from previous studies.
A final aspect of these functions is that a substantial negative component after the stimulus duration was found to account for the decrease in the classification amplitudes in the latter part of the stimulus duration; this suggests an active inhibition of information use at the end of the stimulus duration. 
Discussion
In this study, we assessed the temporal dynamics of the use of information specific to a cue during a contrast discrimination task of a Gaussian signal. The cue was presented simultaneously with the stimuli and indicated with 100% validity which of eight possible locations was relevant to the task. The classification image analysis suggests that the observers generally used the information specific to the cued location as early as the first (0–37.5 ms) frame of the 300-ms stimulus presentation; the exception was observer D.V., who was shown to reliably use information from the second (37.5–75 ms) frame. This estimate of the first effect of selective attention is somewhat earlier than estimates of 50–100 ms from previous behavioral studies of exogenous attention (Lyon, 1990; Mackeben & Nakayama, 1993; Müller & Findlay, 1988; Nakayama & Mackeben, 1989; Shepherd & Müller, 1989; Tsal, 1983; for a review, see Wright & Ward, 1998) and estimates of 75–100 ms from previous ERP studies (Clark & Hillyard, 1996; Di Russo et al., 2003; Doallo et al., 2004; Harter et al., 1989; Hillyard et al., 1998; Hopfinger & Mangun, 1998; Luck et al., 1994; Mangun, 1995; Mangun & Hillyard, 1991; Martínez et al., 1999; Nobre et al., 2000).
Observer D.V. appeared to have a slower temporal response than the other two observers; thus, in theory, D.V. had a smaller window of information available to perform the task. This decrease in information seems to be reflected in D.V.'s lower performance (in terms of d′ and efficiency; see Table 1) compared to the other observers. However, the classification images do not seem indicative of the differences in performance between L.L. and K.C. (in her two studies).
As described in the Introduction, the first cue-specific use of information, as assessed by the classification images, may not have directly reflected the actual selection of information by an attentional mechanism. Because of the inherent temporal blurring of the visual system, we would expect that the blurring of the stimulus input might lead to the use of information prior to the latency of a later selective mechanism (operating on the visual system's blurred response to the stimuli). From the results of the model fits employing a generally accepted impulse response function (Watson, 1986), it appears that such temporal blurring might explain at least part of the differences between the results from classification images and those from previous studies; however, the temporal blurring did not appear to be a full account of these differences.
One likely crucial issue not addressed in this study is the response to the cue itself, which would have a temporal response function separate from that of the stimulus. Logically, this response must occur before the response to the stimulus at the cued location, so that selective attention may be directed to the cued information. Also not addressed is any latency after detection/recognition of the cue to begin processing of the cued stimulus information. These two aspects will be the topics of further study. 
One aspect that has varied across studies of attentional temporal dynamics is the temporal sampling of effects at shorter intervals. For example, in Weichselgartner and Sperling's (1987) classic behavioral study of attentional switching, they examined attentional switching dynamics using a rapid serial visual presentation (RSVP) task with a frame rate of 10–12.5 Hz. In other studies, the shortest cue SOA measured was 50 ms or more (Müller & Findlay, 1988; Shepherd & Müller, 1989; Tsal, 1983). Also, tasks have varied in the number of potential locations. For example, in a study by Nakayama and Mackeben (1989; see also Mackeben & Nakayama, 1993), observers judged the presence of a signal defined by a conjunction of orientation and color (white and black). A single location out of 64 was cued as the relevant location for the judgment at varying intervals before the stimulus display. Such parametric differences clearly could have led to different results.
For the ERP studies of visual attention (Clark & Hillyard, 1996; Di Russo et al., 2003; Doallo et al., 2004; Harter et al., 1989; Hillyard et al., 1998; Hopfinger & Mangun, 1998; Luck et al., 1994; Mangun, 1995; Mangun & Hillyard, 1991; Martínez et al., 1999; Nobre et al., 2000), the two earliest recognizable components expected from a visual signal would be C1 (40–60 ms), which is assumed to reflect lower visual cortical (striate) activity, and P1 (75–125 ms), which is assumed to reflect extrastriate visual areas. Evidence of attentional modulation has generally been found in the P1 component 70–100 ms after signal onset, but not in C1 (e.g., Martínez et al., 1999; although see Wu, Chen, & Han, 2005). One possible limitation of these studies is that evidence of cue-specific signal processing was likely constrained to be found only in the C1 and P1 components. Theoretically, an effect could be found in neural structures that are active during early processing but do not contribute to the observed C1 or P1. For example, either the organization of the particular brain area or the orientation of the pyramidal neurons relative to the measurement electrodes may not allow for the generation of a sufficient dipole at the scalp (Nunez, 1981). One straightforward approach to address these issues would be to integrate both approaches and perform a classification image and ERP study simultaneously, allowing for the estimation of the delay from the first selective information use (as estimated by classification images) to the differential ERP signal.
With respect to the neural time frame of the ERP studies, another aspect not addressed in this study is the transmission time of the neural signal, such as that from the retina to higher cortical areas such as inferotemporal cortex. This transmission time would appear to be difficult to estimate with great precision, as it would likely depend on the type of stimuli and the conditions in which the stimuli are presented (although a reasonable estimate might be about 50–100 ms; e.g., Ballard, Hayhoe, Pook, & Rao, 1997). Also, a simple addition of this transmission time into the model seems to be problematic: the impulse response function of Watson (1986) was designed to describe a general visual temporal response, and the neural locus of the processes that are assumed to determine the impulse response function is unspecified.
There have been two prominent paradigms for studying covert visual attention. The first is the visual search task, in which an observer must detect or give the location of a target in a field of potential targets (e.g., Treisman & Gelade, 1980). The other is the cueing task, which this study employs, in which the observer must detect a target that could appear in two or more locations, with a cue indicating the probable location of the target (e.g., Posner, 1980). In many cueing studies, the cue is predictive above chance but less than 100%, whereas in this study, the cue was 100% predictive. The standard result is the cueing effect, in which valid cues (cues that indicate the correct target location) lead to better performance than invalid cues (cues that indicate the incorrect target location). In this study, the lack of classification images at the uncued locations suggests that observers could use the cue effectively in terms of ignoring the irrelevant information at the uncued locations (see also Eckstein et al., 2004), and thus is analogous to the standard behavioral finding of a cueing effect in a cueing study.
For the present task, the optimal observer would make a decision by weighting all the frames at the cued location equally. However, the cued information use for human observers was clearly not uniform across the series of frames. After the initial latency, information use increased to a peak at 75–150 ms, then approached zero before the end of the trial. This also led to model fits of the weighting functions with a substantial negative component after the stimulus to account for the decrease at the end of the stimulus duration. The classification time series functions strongly resemble those found for attentional effects as a function of cue SOAs in exogenous cueing conditions. Thus, these results might reflect the reflexive and transient nature of the exogenous attentional system (for a review, see Wright & Ward, 1998). However, a study of an orientation discrimination task of grating patterns (Gabors) also using classification movies (Mareshal et al., 2006) found a similar decrease in amplitudes with stimulus duration. That study was not designed as an attentional study, and the stimuli appeared at a single location (central fixation) without spatial cues. Therefore, the results of the present study may not have been due wholly to the transient properties of the exogenous attentional system.
The decrease of information use did not depend upon the presence or the absence of a masking stimulus at the end of the trial, eliminating a masking effect. An argument for an absolute temporal limit on attention's ability to integrate information (an integration "window") does not seem appropriate, as the stimulus duration was well within previous estimates of the extent of human temporal integration in noise (Eckstein, Whiting, & Thomas, 1996). An effect due to saccadic preparation or suppression seems unlikely, given the results (unreported) from a study with the same design except for a longer duration (675 ms), in which eye position was monitored to assure central fixation throughout the stimulus duration. In that study, a decrease was also found at the end of the stimulus duration (450–525 ms). One might posit an effect of inhibition of return (Klein, 2000; Posner & Cohen, 1984; Posner, Rafal, Choate, & Vaughan, 1985), in which covert attention is believed to orient toward novel relevant locations over recently inspected locations as early as 200–300 ms. However, inhibition of return seems an unlikely explanation in this case, as the 100% valid cue should not have directed attention away from the cued location, and we did not find evidence of such attentional switching to the uncued locations. Finally, one possibility is that the decrease was related to the end of the trial itself, possibly reflecting response preparation overlapping with the stimulus presentation.
Appendix A: Impulse response function
Watson (1986) proposed a linear systems model for a temporal impulse response function as the difference of two stages, with both stages constructed as a cascade, or series, of leaky integrators (h_1 and h_2). Using Watson's notation,
\[
h_1(t) = u(t)\,[\tau\,(n_1 - 1)!]^{-1}\,(t/\tau)^{n_1 - 1}\,e^{-t/\tau}.
\]
(A1)
 
u(t) = unit step function;
n_1 = the number of individual leaky integrators for Stage 1;
τ = time delay constant.
The second stage is identical to the first, except for the number of individual leaky integrators and a longer time delay constant.  
\[
h_2(t) = u(t)\,[\kappa\tau\,(n_2 - 1)!]^{-1}\,(t/\kappa\tau)^{n_2 - 1}\,e^{-t/\kappa\tau}.
\]
(A2)
 
n_2 = the number of individual leaky integrators for Stage 2;
κτ = time delay constant, with κ > 1.
The impulse response function is the difference of h 1 and h 2, with ζ being equal to the strength of h 2 relative to h 1, and ξ as a scalar for the overall amplitude of the impulse response function.  
\[
h(t) = \xi\,[h_1(t) - \zeta\,h_2(t)].
\]
(A3)
 
The specific parameter values were n_1 = 9, n_2 = 10, κ = 1.33, and ζ = 0.9, and τ was varied from 4 to 8 ms in 1-ms steps. These values were shown to fit a number of empirical studies of temporal sensitivity well (De Lange, 1958; Robson, 1966; Roufs & Blommaert, 1981).
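Equations A1–A3 with these parameters can be evaluated directly. The sketch below (our naming; the amplitude scale ξ = 1 is an arbitrary choice) reproduces the biphasic shape, with a peak a few tens of milliseconds after onset and a later negative lobe:

```python
import numpy as np
from math import factorial

def watson_irf(t, tau=6.0, n1=9, n2=10, kappa=1.33, zeta=0.9, xi=1.0):
    """Watson's (1986) impulse response (Equations A1-A3): the
    difference of two cascades of leaky integrators. t is in ms."""
    t = np.asarray(t, dtype=float)
    u = (t >= 0).astype(float)           # unit step u(t)
    def stage(tc, n):                    # Equations A1/A2 with time constant tc
        tt = np.maximum(t, 0.0)          # avoid overflow for t < 0
        return (tt / tc) ** (n - 1) * np.exp(-tt / tc) / (tc * factorial(n - 1))
    return xi * u * (stage(tau, n1) - zeta * stage(kappa * tau, n2))
```

For τ = 6 ms, the response peaks in the low 40s of milliseconds, consistent with the 40–72 ms peak times noted in the text, and turns negative at longer delays.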
Acknowledgments
The authors would like to thank Barry Giesbrecht for his advice. Supported by awards to M.E. from NSF-0135118 and NIH 5R01EY15925-2. 
Commercial relationships: none. 
Corresponding author: Steven S. Shimozaki. 
Email: ss373@le.ac.uk. 
Address: School of Psychology, University of Leicester, Lancaster Road, Leicester, LE1 9HN, UK. 
References
Abbey, C. K., & Eckstein, M. P. (2002). Classification image analysis: Estimation and statistical inference for two-alternative forced-choice experiments. Journal of Vision, 2(1):5, 66–78, http://journalofvision.org/2/1/5/, doi:10.1167/2.1.5. [PubMed] [Article] [CrossRef]
Abbey, C. K., & Eckstein, M. P. (2006). Classification images for detection, contrast discrimination, and identification tasks with a common ideal observer. Journal of Vision, 6(4):4, 335–355, http://journalofvision.org/6/4/4/, doi:10.1167/6.4.4. [PubMed] [Article] [CrossRef]
Ahumada, A. J. (1996). Perceptual classification images from Vernier acuity masked by noise. Perception, 26, 18.
Ahumada, A. J., Jr. (2002). Classification image weights and internal noise level estimation. Journal of Vision, 2(1):8, 121–131, http://journalofvision.org/2/1/8/, doi:10.1167/2.1.8. [PubMed] [Article] [CrossRef]
Ahumada, A. J., & Lovell, J. (1971). Stimulus features in signal detection. Journal of the Acoustical Society of America, 49, 1751–1756. [CrossRef]
Ballard, D. H., Hayhoe, M. M., Pook, P. K., & Rao, R. P. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20, 743–767. [PubMed]
Barlow, H. B. (1956). Retinal noise and absolute threshold. Journal of the Optical Society of America, 46, 634–639. [PubMed] [CrossRef]
Ahumada, A. (1998).Technique to extract relevant image features for visual tasks.Proceedings of SPIE,3299,79–85.
Burgess, A. E. Wagner, R. F. Jennings, R. J. Barlow, H. B. (1981).Efficiency of human visual signal discrimination.Science,214,93–94. [PubMed] [CrossRef] [PubMed]
Caspi, A. Beutter, B. R. Eckstein, M. P. (2004).The time course of visual information accrual guiding eye movement decisions.Proceedings of the National Academy of Sciences of the United States of America,101,13086–13090. [PubMed] [Article] [CrossRef] [PubMed]
Clark, V. P. Hillyard, S. A. (1996).Spatial selective attention affects early extrastriate but not striate components of the visual evoked potential.Journal of Cognitive Neuroscience,8,387–402. [CrossRef] [PubMed]
Dakin, S. C. Bex, P. J. (2003).Natural image statistics mediate brightness ‘filling in’.Proceedings of the Royal Society B: Biological Sciences,270,2341–2348. [PubMed] [Article] [CrossRef]
De Lange, H. (1958).Research into the dynamic nature of the human fovea-cortex systems with intermittent and modulated light: I Attenuation characteristics with white and colored light.Journal of the Optical Society of America,48,777–784. [CrossRef] [PubMed]
DeAngelis, G. C. Ohzawa, I. Freeman, R. D. (1993a).Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex I General characteristics and postnatal development.Journal of Neurophysiology,69,1091–1117. [PubMed]
DeAngelis, G. C. Ohzawa, I. Freeman, R. D. (1993b).Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex II Linearity of temporal and spatial summation.Journal of Neurophysiology,69,1118–1135. [PubMed]
Di Russo, F. Martínez, A. Hillyard, S. A. (2003).Source analysis of event-related cortical activity during visuo-spatial attention.Cerebral Cortex,13,486–499. [PubMed] [Article] [CrossRef] [PubMed]
Doallo, S. Lorenzo-López, L. Vizoso, C. Rodríguez Holguín, S. Amenedo, E. Bará, S. (2004).The time course of the effects of central and peripheral cues on visual processing: An event-related potentials study.Clinical Neurophysiology,115,199–210. [PubMed] [CrossRef] [PubMed]
Eckstein, M. P. Ahumada, A. J.Jr. (2002).Classification images: A tool to analyze visual strategies.Journal of Vision,2, (1):,
Eckstein, M. P. Pham, B. T. Shimozaki, S. S. (2004).The footprints of visual attention during search with 100% valid and 100% invalid cues.Vision Research,44,1193–1207. [PubMed] [CrossRef] [PubMed]
Eckstein, M. P. Shimozaki, S. S. Abbey, C. K. (2002).The footprints of visual attention in the Posner cueing paradigm revealed by classification images.Journal of Vision,2, (1):3,25–45, http://journalofvision.org/2/1/3/, doi:10.1167/2.1.3. [PubMed] [Article] [CrossRef]
Eckstein, M. P. Whiting, J. S. Thomas, J. P. (1996).Role of knowledge in human visual temporal integration in spatiotemporal noise.Journal of the Optical Society of America A, Optics, image science, and vision,13,1960–1968. [PubMed] [CrossRef] [PubMed]
Ghose, G. M. (2006).Strategies optimize the detection of motion transients.Journal of Vision,6, (4):10,429–440, http://journalofvision.org/6/4/10/, doi:10.1167/6.4.10. [PubMed] [Article] [CrossRef]
Gold, J. M. Murray, R. F. Bennett, P. J. Sekuler, A. B. (2000).Deriving behavioral receptive fields for visually completed contours.Current Biology,10,663–666. [PubMed] [Article] [CrossRef] [PubMed]
Gold, J. M. Shiffrin, R. Elder, J. (2006).Finding visual features: Using stochastic stimuli to discover internal representations.Journal of Vision,6, (4):,
Gold, J. M. Shubel, E. (2006).The spatiotemporal properties of visual completion measured by response classification.Journal of Vision,6, (4):5,356–365, http://journalofvision.org/6/4/5/, doi:10.1167/6.4.5. [PubMed] [Article] [CrossRef]
Golub, G. H. Van Loan, C. F. (1989).Matrix computations.Baltimore, MD:Johns Hopkins University Press.
Green, D. M. Swets, J. A. (1966).Signal detection theory and psychophysics.New York:Wiley.
Harris, R. J. (1985).A primer of multivariate statistics. (pp.99–118).Orlando, FL:Academic Press.
Harter, M. R. Miller, S. L. Price, N. J. LaLonde, M. E. Keyes, A. L. (1989).Neural processes involved in directing attention.Journal of Cognitive Neuroscience,1,223–237. [CrossRef] [PubMed]
Hillyard, S. A. Teder-Sälejärvi, W. A. Münte, T. F. (1998).Temporal dynamics of early perceptual processing.Current Opinion in Neurobiology,8,202–210. [PubMed] [CrossRef] [PubMed]
Hopfinger, J. Mangun, G. R. (1998).Reflexive attention modulates processing of visual stimuli in human extrastriate cortex.Psychological Science,9,441–447. [CrossRef]
Kinchla, R. A. (1974).Detecting target elements in multi-element arrays: A confusability model.Perception & Psychophysics,15,149–158. [CrossRef]
Kinchla, R. A. (1977).The role of structural redundancy in the perception of visual targets.Perception & Psychophysics,22,19–30. [CrossRef]
Kinchla, R. A. Chen, Z. Evert, D. (1995).Precue effects in visual search: Data or resource limited?Perception & Psychophysics,57,441–450. [PubMed] [CrossRef] [PubMed]
Klein, R. M. (2000).Inhibition of return.Trends in Cognitive Science,4,138–147. [PubMed] [CrossRef]
Knoblauch, K. Thomas, J. P. D'Zmura, M. (1999).Feedback temporal frequency and stimulus classification.Investigative Ophthalmology & Visual Science,40,4171.
Lankheet, M. J. (2006).Unraveling adaptation and mutual inhibition in perceptual rivalry.Journal of Vision,6, (4):1,304–310, http://journalofvision.org/6/4/1/, doi:10.1167/6.4.1. [PubMed] [Article] [CrossRef] [PubMed]
Levi, D. M. Klein, S. A. (2002).Classification images for detection and position discrimination in the fovea and parafovea.Journal of Vision,2, (1):4,46–65, http://journalofvision.org/2/1/4/, doi:10.1167/2.1.4. [PubMed] [Article] [CrossRef]
Lu, H. Liu, Z. (2006).Computing dynamic classification images from correlation maps.Journal of Vision,6, (4):12,475–483, http://journalofvision.org/6/4/12/, doi:10.1167/6.4.12. [PubMed] [Article] [CrossRef]
Luck, S. J. Hillyard, S. A. Mouloua, M. Woldorff, M. G. Clark, V. P. Hawkins, H. L. (1994).Effects of spatial cuing on luminance detectability: Psychophysical and electrophysiological evidence for early selection.Journal of Experimental Psychology: Human Perception and Performance,20,887–904. [PubMed] [CrossRef] [PubMed]
Ludwig, C. J. Gilchrist, I. D. McSorley, E. Baddeley, R. J. (2005).The temporal impulse response underlying saccadic decisions.Journal of Neuroscience,25,9907–9912. [PubMed] [Article] [CrossRef]
Luria, A. R. (1973).The working brain.New York:Penguin.
Lyon, D. R. (1987).How quickly can attention affect form perception?Brooks Air Force Base, TX:Air Force Human Resources Laboratory.
Lyon, D. R. (1990).Large and rapid improvement in form discrimination accuracy following a location precue.Acta Psychologica,73,69–82. [PubMed] [CrossRef] [PubMed]
Mackeben, M. Nakayama, K. (1993).Express attentional shifts.Vision Research,33,85–90. [PubMed] [CrossRef] [PubMed]
Mangun, G. R. (1995).Neural mechanisms of visual selective attention.Psychophysiology,32,4–18. [PubMed] [CrossRef] [PubMed]
Mangun, G. R. Hillyard, S. A. (1991).Modulation of sensory-evoked brain potentials indicate changes in perceptual processing during visual-spatial priming.Journal of Experimental Psychology: Human Perception and Performance,17,1057–1074. [PubMed] [CrossRef] [PubMed]
Mareschal, I. Dakin, S. C. Bex, P. J. (2006).Dynamic properties of orientation discrimination assessed by using classification images.Proceedings of the National Academy of Sciences of the United States of America,103,5131–5136. [PubMed] [Article] [CrossRef] [PubMed]
Marmarelis, P. N. Marmarelis, V. Z. (1978).Analysis of physiological systems: The white noise approach.New York:Plenum.
Martínez, A. Anllo-Vento, L. Sereno, M. I. Frank, L. R. Buxton, R. B. Dubowitz, D. J. (1999).Involvement of striate and extrastriate visual cortical areas in spatial attention.Nature Neuroscience,4,364–369. [PubMed] [Article]
Müller, H. J. Findlay, J. M. (1988).The effect of visual attention on peripheral discrimination thresholds in single and multiple element displays.Acta Psychologica,69,129–155. [PubMed] [CrossRef] [PubMed]
Murray, R. F. Bennett, P. J. Sekuler, A. B. (2002).Optimal methods for calculating classification images: Weighted sums.Journal of Vision,2, (1):6,79–104, http://journalofvision.org/2/1/6/, doi:10.1167/2.1.6. [PubMed] [Article] [CrossRef]
Nakayama, K. Mackeben, M. (1989).Sustained and transient components of focal visual attention.Vision Research,29,1631–1647. [PubMed] [CrossRef] [PubMed]
Neri, P. (2004).Estimation of nonlinear psychophysical kernels.Journal of Vision,4, (2):2,82–91, http://journalofvision.org/4/2/2/, doi:10.1167/4.2.2. [PubMed] [Article] [CrossRef]
Neri, P. Heeger, D. J. (2002).Spatiotemporal mechanisms for detecting and identifying image features in human vision.Nature Neuroscience,5,812–816. [PubMed] [Article] [PubMed]
Neumaier, A. (1998).Solving ill-conditioned and singular linear systems: A tutorial on regularization.SIAM Review,40,636–666. [CrossRef]
Nobre, A. C. Sebestyen, G. N. Miniussi, C. (2000).The dynamics of shifting visuospatial attention revealed by event-related potentials.Neuropsychologia,38,964–974. [PubMed] [CrossRef] [PubMed]
Nunez, P. L. (1981).Electric fields of the brain.Oxford:Oxford University Press.
Nykamp, D. Q. Ringach, D. L. (2002).Full identification of a linear–nonlinear system via cross-correlation analysis.Journal of Vision,2, (1):1,1–11, http://journalofvision.org/2/1/1/, doi:10.1167/2.1.1. [PubMed] [Article] [CrossRef] [PubMed]
Palmer, J. (1995).Attention in visual search: Distinguishing four causes of set-size effects.Current Directions in Psychological Science,4,118–123. [CrossRef]
Pashler, H. E. (1998).The psychology of attention.Cambridge, MA:MIT Press.
Posner, M. I. (1978).Chronometric explorations of the mind.Hillsdale, NJ:Erlbaum.
Posner, M. I. (1980).Orienting of attention.Quarterly Journal of Experimental Psychology,32,3–25. [PubMed] [CrossRef] [PubMed]
Posner, M. I. Cohen, Y. (1984).Components of visual orienting. In H. Bouma & D. G. Bouwhuis (Eds.),Attention and performance X. (pp.531–556).Hillsdale, NJ:Erlbaum.
Posner, M. I. Rafal, R. D. Choate, L. S. Vaughan, J. (1985).Inhibition of return: Neural basis and function.Cognitive Neuropsychology,2,211–228. [CrossRef]
Rajashekar, U. Bovik, A. C. Cormack, L. K. (2006).Visual search in noise: Revealing the influence of structural cues by gaze-contingent classification image analysis.Journal of Vision,6, (4):7,379–386, http://journalofvision.org/6/4/7/, doi:10.1167/6.4.7. [PubMed] [Article] [CrossRef]
Reid, R. C. Victor, J. D. Shapley, R. M. (1997).The use of m-sequences in the analysis of visual neurons: Linear receptive field properties.Visual Neuroscience,14,1015–1027. [PubMed] [CrossRef] [PubMed]
Remington, R. Pierce, L. (1984).Moving attention: Evidence for time-invariant shifts of visual selective attention.Perception & Psychophysics,35,393–399. [PubMed] [CrossRef] [PubMed]
Ringach, D. L. Sapiro, G. Shapley, R. (1997).A subspace reverse-correlation technique for the study of visual neurons.Vision Research,37,2455–2464. [PubMed] [CrossRef] [PubMed]
Robson, J. G. (1966).Spatial and temporal contrast sensitivity functions of the visual system.Journal of the Optical Society of America,56,1141–1142. [CrossRef]
Roufs, J. A. Blommaert, F. J. (1981).Temporal impulse and step responses of the human eye obtained psychophysically by means of a drift-correcting perturbation technique.Vision Research,21,1203–1221. [PubMed] [CrossRef] [PubMed]
Shepherd, M. Müller, H. J. (1989).Movement versus focusing of visual attention.Perception & Psychophysics,46,146–154. [PubMed] [CrossRef] [PubMed]
Shimozaki, S. S. Eckstein, M. P. Abbey, C. K. (2003).Comparison of two weighted integration models for the cueing paradigm: Linear and likelihood.Journal of Vision,3, (3):5,209–229, http://journalofvision.org//3/3/3/, doi:10.1167/3.3.3. [PubMed] [Article] [CrossRef]
Shimozaki, S. S. Eckstein, M. P. Abbey, C. K. (2005).Spatial profiles of local and nonlocal effects upon contrast detection/discrimination from classification images.Journal of Vision,5, (1):5,45–57, http://journalofvision.org/5/1/5/, doi:10.1167/5.1.5. [PubMed] [Article] [CrossRef]
Shulman, G. L. Remington, R. W. McLean, J. P. (1979).Moving attention through visual space.Journal of Experimental Psychology: Human Perception and Performance,5,522–526. [PubMed] [CrossRef] [PubMed]
Sperling, G. Weichselgartner, E. (1995).Episodic theory of the dynamics of spatial attention.Psychological Review,102,503–522. [CrossRef]
Spitzer, H. Desimone, R. Moran, J. (1988).Increased attention enhances both behavioral and neuronal performance.Science,240,338–340. [PubMed] [CrossRef] [PubMed]
Taylor, J. R. (1982).An introduction to error analysis: The study of uncertainty in physical measurements.Mill Valley, CA:University Science Books.
Tjan, B. S. Nandy, A. S. (2006).Classification images with uncertainty.Journal of Vision,6, (4):8,387–413, http://journalofvision.org/6/4/8/, doi:10.1167/6.4.8. [PubMed] [Article] [CrossRef] [PubMed]
Treisman, A. M. Gelade, G. (1980).A feature integration theory of attention.Cognitive Psychology,12,97–136. [PubMed] [CrossRef] [PubMed]
Tsal, Y. (1983).Movements of attention across the visual field.Journal of Experimental Psychology: Human Perception and Performance,9,523–530. [PubMed] [CrossRef] [PubMed]
Victor, J. D. (2005).Analyzing receptive fields, classification images and function images: Challenges with opportunities for synergy.Nature Neuroscience,8,1651–1656. [PubMed] [Article] [CrossRef] [PubMed]
Watson, A. B. (1986).Temporal sensitivity. In K. Boff, L. Kaufman, & J. P. Thomas (Eds.),Handbook of perception and human performance.New York:Wiley.
Weichselgartner, E. Sperling, G. (1987).Dynamics of automatic and controlled visual attention.Science,238,778–780. [PubMed] [CrossRef] [PubMed]
Wright, R. D. Ward, L. M. (1998).The control of visual attention. In R. D. Wright (Ed.),Visual attention. (pp.132–186).New York:Oxford University Press.
Wu, Y. Chen, J. Han, S. (2005).Neural mechanisms of attentional modulation of perceptual grouping by collinearity.Neuroreport,16,567–570. [PubMed] [CrossRef] [PubMed]
Xing, J. Ahumada, A. J. (2002).Estimation of human-observer templates in temporal varying noise [Abstract].Journal of Vision,2, (7):343, [CrossRef]
Yeshurun, Y. Carrasco, M. (1998).Attention improves or impairs visual performance by enhancing spatial resolution.Nature,396,72–76. [PubMed] [CrossRef] [PubMed]
Yeshurun, Y. Carrasco, M. (1999).Spatial attention improves performance in spatial resolution tasks.Vision Research,39,293–306. [PubMed] [CrossRef] [PubMed]
Figure 1
 
Stimulus sequence. Observers judged the presence of the signal (yes/no) at the cued location. Each stimulus frame (37.5 ms) contained an independent sample of Gaussian-distributed luminance image noise. Stimulus duration = 300 ms (eight frames). Studies were run with the mask (Mask) and without a mask (No Mask).
Figure 2
 
Responses to a single impulse as a result of an impulse response function and an attentional weighting function. The pulse is convolved with the impulse response function from Watson (1986; τ = time delay constant = 6 ms) and then weighted with an attentional weighting function. The temporal blurring from the impulse response function leads to information use prior to the beginning of the selective weighting function (attentional latency). The bottom graph demonstrates this by plotting the impulse, the impulse response, and the hypothesized selective attention function together, showing the overlap between the impulse response and the hypothesized selective attention function.
Figure 3
 
Modeling the generation of behavioral responses and classification images with standard classification images. Upper—behavioral responses. The dot product of the stimulus (g(t)) and the stimulus weights leads to a decision variable (λ), which is compared to a criterion for a judgment of signal presence or signal absence. Lower—classification images. For a series of no-signal trials, the same decision variable as above (λ) leads to a judgment of signal presence or signal absence. If λ > crit, the observer judges the signal to be present, an erroneous false alarm on a no-signal trial. The noise fields leading to a false alarm are averaged to create the classification image. The resulting classification image is an estimate of the stimulus weights (through time) leading to the calculation of the decision variable (λ).
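The standard classification-image computation described in Figure 3 can be sketched as a small simulation. This is a minimal illustration, not the authors' code: the temporal weighting function w_true, the criterion, and the trial counts are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 20000, 8

# Hypothetical temporal weighting function (illustrative only).
w_true = np.exp(-((np.arange(n_frames) - 1.5) ** 2) / 4.0)

# Signal-absent trials: each frame is an independent Gaussian noise sample.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_frames))
lam = noise @ w_true        # decision variable: dot product of stimulus and weights
crit = 1.0                  # response criterion (assumed)
false_alarms = lam > crit   # "signal present" responses on no-signal trials

# Classification image: average of the noise fields that produced false alarms.
ci = noise[false_alarms].mean(axis=0)
```

Up to sampling noise, `ci` is proportional to `w_true`, which is why the averaged false-alarm noise recovers the observer's weighting of the stimulus through time.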
Figure 4
 
Modeling the generation of behavioral responses and classification images with the addition of the impulse response function. Upper—behavioral responses. The stimulus (g(t)) is convolved with the impulse response function (h(t)), leading to an internal response. The dot product of the internal response with the cue-selective weighting function (w(t)) leads to a decision variable (λ), which is compared to a criterion for a judgment of signal presence or signal absence (last step not shown in this figure; see Figure 3). Lower—classification images. The calculation of the classification image from the decision variable (λ) is the same as in Figure 3; see the Figure 3 caption for details. As in Figure 3, the resulting classification image is an estimate of the stimulus weights (through time) leading to the calculation of the decision variable (λ). In this case, this is the convolution of the impulse response function (h(t)) and the cue-selective weighting function (w(t)).
Figure 5
 
Recovering an estimate of the cue-selective weighting function (w(t)) from the classification image. The classification image is deconvolved with the impulse response function (h(t)).
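The deconvolution step of Figure 5 can be sketched in the frequency domain. The paper's actual method involves regularization of an ill-conditioned system (the reference list cites Neumaier, 1998); the version below is a simple Wiener-style regularized division, a stand-in illustration rather than the authors' implementation, with the toy `h` and `w` chosen only for the self-check.

```python
import numpy as np

def deconvolve(ci, h, eps=1e-6):
    """Recover w(t) from a classification image ci ≈ (h * w)(t) by
    regularized division in the frequency domain; eps sets the
    regularization strength (Wiener-style, flat-noise assumption)."""
    n = len(ci)
    H = np.fft.rfft(h, n)
    C = np.fft.rfft(ci, n)
    W = C * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(W, n)

# Self-check: convolve a known weighting function with h, then recover it.
n = 256
t = np.arange(n)
h = np.exp(-t / 5.0)                       # toy impulse response (assumed form)
w = ((t >= 50) & (t < 150)).astype(float)  # toy weighting function
ci = np.convolve(w, h)[:n]                 # classification image ≈ h * w
w_rec = deconvolve(ci, h)
```

Larger `eps` trades fidelity for noise suppression; with real, noisy classification images a much heavier regularization than in this noiseless check would be needed.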
Figure 6
 
Family of impulse response functions from Watson (1986), with time delay constants (τ) from 4 to 8 ms. See text for impulse response function details and parameters.
Figure 7
 
Raw classification images by frame (classification movies) for the cued location for all observers. At the bottom is a high-contrast version of the signal (contrast higher than experimental conditions).
Figure 8a, 8b
 
The radial averages of the classification images in Figure 7. The dashed lines indicate the shape of the Gaussian signal. Error bars represent the standard errors of the mean for each point (radius). Frames outlined in dashed lines indicate significant (nonzero) classification images, p < .01, Bonferroni-corrected for number of frames. For each observer, the upper row represents the first four frames (0–150 ms), and the lower row represents the last four frames (150–300 ms). (A) Mask study. (B) No mask study.
Figure 9
 
The radial averages for the first frame for all the uncued locations (by clockwise positions relative to the cued location).
Figure 10
 
The radial averages for the uncued locations adjacent to the cued location, by frame. Clockwise—adjacent uncued location clockwise from the cued location. Counterclockwise—adjacent uncued location counterclockwise from the cued location.
Figure 11a, 11b
 
Area integrals (two standard deviations from the center) for the cued location, by frame. (A) Separated by individual. (B) All observers on the same graph. Solid lines—mask study. Dashed lines—no mask study. Circle—K. C., mask and no mask. Square—L. L., mask. Diamond—D. V., no mask. Error bars represent the standard errors of the mean.
Figure 12
 
Area integrals (two standard deviations from the center) for the first frame of all uncued locations (by clockwise positions relative to the cued location). Error bars represent the standard errors of the mean.
Figure 13
 
Area integrals (two standard deviations from the center) for the adjacent uncued locations, by frame. Clockwise (downward-pointing triangle)—adjacent uncued location clockwise from the cued location. Counterclockwise (upward-pointing triangle)—adjacent uncued location counterclockwise from the cued location. Error bars represent the standard errors of the mean.
Figure 14a, 14b
 
Results of model fits to area integrals from the cued locations (Figure 11). (A) Time delay constants (τ) from 4 to 8 ms. (B) Time delay constants (τ) of 5 and 8 ms only, with error bars representing the standard errors of the mean.
v_0 = a vector containing the null-hypothesis classification image (all zeros);
v_1 = a vector containing the observed radial-average classification image;
n_0, n_1 = the number of observations for each classification image (in this case, we assume that n_0 = n_1);
K^(−1) = the inverse of the pooled covariance matrix for v_0 and v_1;
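Given these definitions, the two-sample Hotelling T² statistic and its standard F conversion can be sketched as below. This is the generic textbook formula; the exact degrees-of-freedom bookkeeping used in the paper (df_num = 9, large denominator df) may differ, and the small example values are illustrative only.

```python
import numpy as np

def hotelling_t2(v0, v1, n0, n1, K_pooled):
    """Two-sample Hotelling T^2 for mean vectors v0, v1 with n0, n1
    observations and pooled covariance K_pooled, plus the standard
    conversion to an F statistic with (p, n0 + n1 - p - 1) df."""
    d = np.asarray(v1, dtype=float) - np.asarray(v0, dtype=float)
    p = d.size
    # T^2 = (n0*n1/(n0+n1)) * d' K^-1 d  (solve instead of explicit inverse)
    T2 = (n0 * n1) / (n0 + n1) * (d @ np.linalg.solve(K_pooled, d))
    F = (n0 + n1 - p - 1) / (p * (n0 + n1 - 2)) * T2
    return T2, F

# Tiny illustrative example (p = 3, identity covariance):
T2_ex, F_ex = hotelling_t2(np.zeros(3), np.array([1.0, 0.0, 0.0]), 10, 10, np.eye(3))
```

The p values in Table 2 would then come from the upper tail of the F distribution with (p, n0 + n1 − p − 1) degrees of freedom.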
Table 1
 
Human behavioral performance. Ideal observer d′ = 4.832 (from Equation 2).
Condition    d′     Percent correct   Efficiency   False alarm rate
Mask
  K.C.       1.34   74.8              0.0770       0.281
  L.L.       1.46   75.4              0.0917       0.308
No mask
  D.V.       1.26   73.0              0.0683       0.313
  K.C.       1.51   77.0              0.0980       0.255
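The efficiency column is consistent with the standard definition of statistical efficiency as the squared ratio of human to ideal d′. A quick check against Table 1, assuming that definition (small discrepancies reflect the two-decimal rounding of the tabulated d′ values):

```python
# Ideal observer d' from Equation 2 (Table 1 caption).
D_IDEAL = 4.832

def efficiency(d_human, d_ideal=D_IDEAL):
    """Statistical efficiency: (d'_human / d'_ideal) ** 2."""
    return (d_human / d_ideal) ** 2

eff_kc_mask = efficiency(1.34)  # tabulated value: 0.0770
```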
Table 2
 
Hotelling T² results for classification images. *p < .01, Bonferroni-corrected for number of frames. df_num = 9. For mask, df_den = 8,890. For no mask, df_den = 7,990.
Mask
Frame      K.C.                        L.L.
           T²      F       p           T²      F       p
1          57.10   6.34    <.0001*     55.19   6.13    <.0001*
2          170.55  18.93   <.0001*     46.51   5.16    <.0001*
3          140.79  15.63   <.0001*     193.71  21.50   <.0001*
4          92.79   10.30   <.0001*     145.03  16.10   <.0001*
5          28.89   3.21    .0007*      40.77   4.53    <.0001*
6          27.93   3.10    .0010*      44.74   4.97    <.0001*
7          50.87   5.65    <.0001*     19.79   2.20    .0195
8          6.90    0.77    .6487       5.28    0.59    .8097

No mask
Frame      D.V.                        K.C.
           T²      F       p           T²      F       p
1          22.15   2.46    .0085       31.30   3.48    .0003*
2          63.95   7.10    <.0001*     166.19  18.45   <.0001*
3          64.37   7.15    <.0001*     164.09  18.21   <.0001*
4          124.90  13.86   <.0001*     85.33   9.47    <.0001*
5          79.23   8.79    <.0001*     63.77   7.08    <.0001*
6          33.19   3.68    .0001*      13.29   1.48    .1506
7          46.46   5.16    <.0001*     33.28   3.69    .0001*
8          22.56   2.50    .0074       7.68    0.85    .5671
Table 3
 
Area integrals. AI—area integral. *p < .01, Bonferroni-corrected for number of frames. For mask, df = 4,449. For no mask, df = 3,999.
Mask
Frame    K.C.                                L.L.
         AI      SE     t       p            AI      SE     t       p
1        41.47   8.52   4.87    <.0001*      47.97   8.50   5.64    <.0001*
2        87.16   8.57   10.17   <.0001*      32.47   8.50   3.82    .0001*
3        89.72   8.50   10.55   <.0001*      78.55   8.51   9.24    <.0001*
4        57.31   8.41   6.81    <.0001*      79.45   8.55   9.30    <.0001*
5        30.77   8.50   3.62    .0003*       31.13   8.45   3.68    .0002*
6        31.74   8.46   3.75    .0002*       51.90   8.46   6.14    <.0001*
7        4.65    8.41   0.55    .5804        33.09   8.48   3.90    .0001*
8        1.59    8.48   0.19    .8516        −1.61   8.53   −0.19   .8504

No mask
Frame    D.V.                                K.C.
         AI      SE     t       p            AI      SE     t       p
1        20.96   8.49   2.47    .0135        41.88   8.23   5.09    <.0001*
2        20.65   8.52   2.42    .0154        90.45   8.30   10.90   <.0001*
3        20.32   8.50   2.39    .0169        97.12   8.27   11.75   <.0001*
4        72.11   8.52   8.46    <.0001*      53.55   8.20   6.53    <.0001*
5        33.29   8.54   3.80    .0001*       36.24   8.23   4.40    <.0001*
6        23.45   8.49   2.76    .0057*       14.13   8.21   1.72    .0854
7        30.10   8.41   3.58    .0003*       19.40   8.17   2.37    .0176
8        13.63   8.46   1.61    .1072        10.32   8.24   1.25    .2104