Variability in visual working memory ability limits the efficiency of perceptual decision making
Author Affiliations
  • Edward F. Ester
    Department of Psychology, University of California, San Diego, La Jolla, CA, USA
    eester@ucsd.edu
  • Tiffany C. Ho
    School of Medicine, University of California, San Francisco, San Francisco, CA, USA
    tiffnie@gmail.com
  • Scott D. Brown
    Department of Psychology, University of Newcastle, Callaghan, NSW, Australia
    scott.brown@newcastle.edu.au
  • John T. Serences
    Department of Psychology, University of California, San Diego, La Jolla, CA, USA
    Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, USA
    jserences@ucsd.edu
Journal of Vision April 2014, Vol.14, 2. doi:https://doi.org/10.1167/14.4.2
Abstract
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.

Introduction
Humans must frequently make rapid and accurate decisions based on limited sensory information. Theoretical models of decision making—informed by psychophysical (e.g., Brown & Heathcote, 2008; Link & Heath, 1975; Ratcliff & McKoon, 2008; Usher & McClelland, 2001) and neurophysiological (e.g., Purcell et al., 2010; Shadlen & Newsome, 2001) experiments—treat decision making as a problem of inference: successive samples of (noisy) sensory evidence are used to construct and update a “decision variable” that represents the likely state of the world (Gold & Shadlen, 2007). This process continues until an internal response criterion is met, at which point sampling is terminated and a response is issued. This sampling-to-criterion process can be conceived of rather literally as the accumulation of physical evidence (e.g., filling a “bucket” with evidence), or as a process of Bayesian updating, where current sensory evidence is used to compute a posterior probability, and that posterior then serves as a prior for interpreting subsequent sensory evidence (Wald, 1947). In either case, some kind of storage buffer or workspace is needed to represent and update the decision variable based on incoming sensory information. 
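To make the sampling-to-criterion idea concrete, the following is a minimal MATLAB sketch of sequential Bayesian updating between two hypotheses; all parameter values and variable names are illustrative assumptions, not part of the studies reported here.

```matlab
% Minimal sketch of sampling to criterion as sequential Bayesian updating.
% Two hypotheses (e.g., "rightward" vs. "leftward" motion) generate noisy
% evidence samples; the posterior after each sample becomes the prior for
% the next, and sampling stops once a response criterion is reached.
gauss = @(x, mu, sd) exp(-(x - mu).^2 ./ (2*sd^2)) ./ (sd*sqrt(2*pi));

pH1 = 0.5;                         % prior probability of hypothesis 1
muH1 = 0.2; muH2 = -0.2; sd = 1;   % assumed evidence distributions (H1 is true here)
criterion = 0.95;                  % stop when the posterior exceeds this value
nSamples = 0;

while pH1 < criterion && pH1 > (1 - criterion)
    x = muH1 + sd*randn;                                  % draw one noisy evidence sample
    likeH1 = gauss(x, muH1, sd);
    likeH2 = gauss(x, muH2, sd);
    pH1 = pH1*likeH1 / (pH1*likeH1 + (1 - pH1)*likeH2);   % posterior becomes the new prior
    nSamples = nSamples + 1;
end
choice = 1 + (pH1 < criterion);    % 1 = hypothesis 1 chosen, 2 = hypothesis 2
```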
One candidate system is working memory (WM). This system enables the storage of information in a durable and readily accessible state for short periods (on the order of seconds). Neurophysiological (e.g., Romo, Brody, Hernandez, & Lemus, 1999; Shadlen & Newsome, 2001) and functional neuroimaging (Heekeren, Marrett, Bandettini, & Ungerleider, 2004) studies suggest a great deal of overlap between cortical regions engaged in WM storage and decision making, particularly in regions of posterior parietal and frontal cortex. For instance, Shadlen and Newsome (2001) recorded from neurons in the lateral intraparietal area (LIP) while monkeys discriminated the global direction (e.g., upward vs. downward) of a moving dot kinetogram by making a saccade to one of two peripheral targets. Firing rates in neurons with receptive fields at the chosen target increased gradually over the course of each trial and reached a maximum immediately before the saccade. Moreover, the rate at which firing rates increased was monotonically related to the proportion of upward or downward moving dots (i.e., the “strength” of the sensory evidence). These findings (see Purcell et al., 2010, for related findings in macaque frontal eye fields) suggest that these LIP neurons represent the evolution of a decision variable over time. However, many LIP (and frontal eye field) neurons also demonstrate sustained increases in firing during delayed saccade tasks—a hallmark of WM storage. In fact, many researchers interested in decision making select neurons based on this property. 
Although WM is critical for many forms of “online” cognitive processing, most researchers acknowledge that this system is subject to a relatively small capacity limit (see Luck & Vogel, 2013, for a recent review). However, WM ability varies substantially across individuals. If decision making depends in part on WM, then one would predict that interindividual variability in WM ability should limit the efficiency of decision making. Here, we provide a test of this claim. To anticipate our results, we find that individual differences in WM capacity are correlated with the efficiency of decision making in simple speeded discrimination tasks (Experiments 1 and 2). Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM and decision-making performance share a common resource (Experiment 3). 
Experiment 1
Methods
Participants
Fifty-three undergraduate students from the University of California, San Diego (UCSD) participated in a single 1.5-hr testing session in exchange for course credit. All participants self-reported normal or corrected-to-normal visual acuity, and all gave both written and oral informed consent in accordance with the Institutional Review Board at UCSD. Data from seven participants were discarded due to chance-level performance on our decision-making task. The data reported here reflect the remaining 46 participants. 
Stimuli and equipment
Stimuli were generated in MATLAB (MathWorks, Natick, MA) and rendered on an 18-in. CRT monitor (with a refresh rate of 60 Hz) via Psychophysics Toolbox software (version 3; Brainard, 1997; Pelli, 1997). Participants were seated approximately 60 cm from the display (head position was unconstrained) and made responses via button presses or mouse clicks (see Procedure). 
Procedure
Each participant completed three tasks. Task order was randomized across participants. 
Color change detection:
A representative trial is shown in Figure 1A. Each trial began with the presentation of four or eight colored squares for 100 ms (the “sample” array). Each square (subtending 1.44°) was assigned a random position within a 12 × 9° rectangle centered at fixation, with the constraints that (a) items were equally distributed across the two visual hemifields, and (b) items were separated by a minimum of 2.08°. Stimulus colors were randomly chosen with replacement from a set including green, yellow, blue, red, white, and black, with the constraint that no color was used more than twice. The sample array was followed by a 1000-ms blank interval and a test display containing a single square. Participants were asked to report whether the color of this “probe” square matched the color of the sample item at the same location via a keyboard response (z = “yes” and / = “no”). Participants were instructed to prioritize accuracy, and no response deadline was imposed. Trials were separated by a 1000-ms blank interval. Each participant completed three blocks of 48 trials. Data were used to derive an estimate of WM “capacity” (K) using an analytical approach described by Cowan (2000):

$$K = N \times (HR_N - FA_N),$$

where N is the number of sample items (i.e., “set size”), and HR_N and FA_N are the hit and false alarm rates for displays containing N elements.
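As a worked illustration of this formula, here is a minimal MATLAB sketch; the hit and false alarm values are hypothetical placeholders rather than data from the experiment.

```matlab
% Cowan's (2000) capacity estimate for single-probe change detection:
% K = N * (HR - FA), where N is the set size.
N  = 8;      % number of sample items
HR = 0.65;   % hit rate for set size N (placeholder value)
FA = 0.25;   % false alarm rate for set size N (placeholder value)

K = N * (HR - FA);    % = 8 * 0.40 = 3.2 items
fprintf('Estimated WM capacity: %.2f items\n', K);
```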
Figure 1. Behavioral tasks. Top row: color change detection. Bottom row: motion recall. Displays have been enlarged and rescaled for exposition (see Methods).
Motion recall:
A representative trial is shown in Figure 1B. Each trial began with the presentation of four small (radius = 1.9°) kinetograms centered 7.07° to the upper and lower left and right of fixation. Each kinetogram contained 100 colored dots (each subtending 0.125°) that moved along a randomly chosen axis (range 0–359°) with 100% coherence for 1000 ms. The color of the dots in each aperture was randomly chosen without replacement from a set including green, red, blue, yellow, purple, and black. This “sample” display was followed by a 2000-ms blank interval. Participants were then cued to recall the direction of a single kinetogram by clicking on the circumference of a probe presented at one of the sample locations. The outer ring of the probe aperture always matched the color of the dots in the kinetogram originally rendered at the probe location; this was done to minimize the occurrence of “transposition errors,” where participants occasionally report the direction of a nonprobed item as the direction of the probed item (see, e.g., Bays, Catalao, & Husain, 2009). Participants were instructed to prioritize accuracy, and no response deadline was imposed. Trials were separated by a 500-ms interval, and each participant completed three blocks of 50 trials. 
To estimate WM capacity, we relied on an analytical strategy developed by Zhang and Luck (2008). Each trial of this task yields a single-point estimate of report error (i.e., the circular distance between the reported and correct directions). We assume that on some trials, the participant successfully remembers the direction of the probed stimulus (albeit imperfectly). On these trials, his or her report errors should be normally distributed around 0°, with few high magnitude errors. This profile can be captured by a von Mises distribution (the circular analogue of a standard Gaussian distribution) with mean μ and concentration (or bandwidth) parameter k. On other trials, however, the participant will fail to remember the direction of the probed stimulus (e.g., due to decay or capacity limits) and will be forced to guess. Across many trials, these guesses will manifest as a uniform distribution over the range –π to π, occurring with probability nr. Critically, these two kinds of trials will be mixed together in the data, so the empirically observed distribution of response errors will resemble a mixture distribution of the form:

$$p(\hat{\theta}) = (1 - n_r)\,\frac{e^{k\cos(\hat{\theta} - \mu)}}{2\pi I_0(k)} + \frac{n_r}{2\pi},$$

where \hat{\theta} is the response error on a given trial and I_0 is the modified Bessel function of the first kind of order 0.
Maximum likelihood estimation (MLE) was used to obtain estimates of μ, k, and nr based on each participant's empirical distribution of response errors. As described above, nr reflects the relative proportion of trials where the participant failed to remember the probed stimulus. Thus, one minus this value yields the proportion of trials where the participant successfully remembered the probed stimulus, and multiplying this value by the number of sample items (in this case, 4) yields an estimate of WM capacity.
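A minimal MATLAB sketch of this mixture-model fit follows, using only base MATLAB (fminsearch and besseli). The response-error vector and starting values are hypothetical placeholders, and a complete implementation would constrain nr to [0, 1] and k to positive values.

```matlab
% Fit the mixture model described above to a vector of response errors
% (radians, in the range -pi..pi) by maximum likelihood.
vonmises = @(x, mu, k) exp(k .* cos(x - mu)) ./ (2*pi*besseli(0, k));
negLL    = @(p, errs) -sum(log( p(3)/(2*pi) + ...
                           (1 - p(3)) .* vonmises(errs, p(1), p(2)) ));

% Placeholder data: ~2/3 "remembered" trials (small errors) + 1/3 guesses.
errs = [0.3*randn(100,1); pi*(2*rand(50,1) - 1)];

p0   = [0, 5, 0.3];                           % starting values for [mu, k, nr]
pHat = fminsearch(@(p) negLL(p, errs), p0);   % unconstrained fit (sketch only)

capacity = (1 - pHat(3)) * 4;   % (1 - guess rate) x number of sample items
fprintf('mu = %.2f, k = %.2f, nr = %.2f, K = %.2f\n', pHat, capacity);
```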
Motion discrimination task:
On each trial, participants were shown a dynamic kinetogram containing 800 small (0.1°) black dots. The kinetogram was presented in a circular aperture centered at fixation (with inner and outer radii of 0.75° and 8°, respectively). Each dot moved at a fixed speed of 6° per second and had a lifetime of 83.33 ms (i.e., each dot, regardless of location or trajectory, was randomly replotted every 83.33 ms in order to discourage participants from foveating or tracking just one or two dots). On each trial, a randomly selected 0%, 4%, 16%, or 32% of dots moved leftward or rightward (i.e., 90° or 270°) while the remaining dots were assigned random trajectories (from 0° to 359°). Following earlier work, we refer to the proportion of leftward or rightward moving dots as stimulus “coherence.” Participants were required to report the direction of coherent motion (i.e., rightward or leftward) by pressing the appropriate arrow key on a standard keyboard. Participants were instructed to respond as quickly and accurately as possible, and the trial terminated as soon as a response was made. Coherence levels were mixed within blocks and presented in an unpredictable order. Trials were separated by a 500–1000 ms blank interval (the exact interval was randomly chosen on each trial). Each participant completed eight blocks of 64 trials. 
Data from the motion discrimination task were analyzed using a linear ballistic accumulator model (LBA; Brown & Heathcote, 2008). We chose this model because of its simplicity and computational tractability, but all results generalized when we instead fit the data with a drift diffusion model (see below). A schematic of the LBA is shown in Figure 2. Briefly, this model conceptualizes the decision-making process as a race between N independent accumulators (one per response alternative; here, leftward and rightward) towards a response threshold. The first accumulator to reach threshold determines the participant's response, and the time taken to reach threshold (plus an extra constant time for sensory and motor processes) determines the response latency. On each trial, each accumulator is assigned a starting value on a uniform interval [0,A]. During the trial, activity in each accumulator increases linearly, and a response is made as soon as one accumulator crosses a response boundary (b). The time taken to reach this threshold—plus a short nondecision time (for sensory and motor responses) denoted t0—determines the response latency on that trial. The rate at which each accumulator approaches the response threshold is called that accumulator's “drift rate” (v). Importantly, drift rates are determined by the relative quality of sensory information present in the display. For example, if a display contains 80% coherent rightward motion, then the “rightward” and “leftward” accumulators shown in Figure 2 will be assigned large and small values, respectively. Drift rates are drawn (on a trial-by-trial basis) from independent normal distributions with means v1, v2, … vn and standard deviation s (here fixed at a constant value of 1). The drift rate parameter estimated by the model is the mean drift rate for a given accumulator and difficulty condition across all trials.
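To make these mechanics concrete, the following is a minimal MATLAB sketch that simulates LBA trials under the parameterization just described. The parameter values are illustrative assumptions, not fitted estimates from the experiment.

```matlab
% Simulate two-choice LBA trials: uniform start points, linear accumulation
% at normally distributed drift rates, first accumulator to reach the
% threshold b wins, plus a constant nondecision time t0.
A = 0.5; b = 1.0; t0 = 0.25;     % start-point range, threshold, nondecision time
v = [2.0, 0.5]; s = 1;           % mean drift rates [correct, error], drift SD
nTrials = 1000;

choice = zeros(nTrials, 1); rt = zeros(nTrials, 1);
for t = 1:nTrials
    start  = A * rand(1, 2);            % start points drawn from [0, A]
    drift  = v + s * randn(1, 2);       % trial-specific drift rates
    drift(drift <= 0) = NaN;            % non-positive drifts never reach threshold
    finish = (b - start) ./ drift;      % time for each accumulator to reach b
    [decisionTime, choice(t)] = min(finish);   % first to threshold determines response
    rt(t) = decisionTime + t0;          % NaN if neither accumulator finishes (rare here)
end
accuracy = mean(choice(~isnan(rt)) == 1);   % accumulator 1 = correct response
meanRT   = mean(rt(~isnan(rt)));            % mean response latency (s)
```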
Figure 2. Schematic representation of the LBA. Each evidence accumulator (one per response alternative) is assigned a starting point on a uniform interval [0,A]. Activity in each accumulator increases until one crosses a response threshold (b). The “winning” accumulator determines the behavioral response, and response latencies are simply the time it takes the winning accumulator to reach threshold (plus an extra constant nondecision time, denoted t0). The rate at which each accumulator approaches threshold is termed that accumulator's “drift rate” (v).
Figure 3. Drift rates are robustly correlated with estimates of WM capacity. Motion coherence indicated by bold number in top right of each panel.
Our implementation of the LBA contains a total of four parameters: A (the range of starting points for each accumulator), b (the response threshold), t0 (nondecision time), and v (drift rate). Because motion coherence levels were randomly and unpredictably mixed across trials, we had little a priori reason to suspect that LBA parameters other than drift rate would vary with stimulus strength. Thus, we fixed these values across conditions. In addition, a model that allowed all four parameters to vary was found to be less parsimonious than a model that allowed only drift rate to vary. Specifically, the Bayesian information criterion (BIC; Schwarz, 1978) for the reduced model was 31.37 ± 12.82 units smaller than the criterion for the full model (given a finite set of models, the alternative with the smallest BIC is preferred; see Schwarz, 1978). Thus, we used the simpler model for all analyses. 
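For reference, a minimal MATLAB sketch of this BIC comparison is shown below; the log-likelihoods and parameter counts are placeholder values, not the fitted quantities from the experiment.

```matlab
% Compare a full LBA (all parameters free across coherence conditions) with
% a reduced LBA (only drift rates free) via the Bayesian information
% criterion, BIC = k*log(n) - 2*logL (Schwarz, 1978).
n = 512;                     % trials per participant (8 blocks x 64 trials)
logL_full    = -610.2;       % maximized log-likelihood, full model (placeholder)
logL_reduced = -615.4;       % maximized log-likelihood, reduced model (placeholder)
k_full = 16; k_reduced = 10; % number of free parameters in each model (placeholder)

BIC_full    = k_full    * log(n) - 2 * logL_full;
BIC_reduced = k_reduced * log(n) - 2 * logL_reduced;
% The model with the smaller BIC is preferred; here the reduced model wins
% whenever its likelihood penalty is outweighed by its smaller parameter count.
```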
We also analyzed participants' motion discrimination performance using a variant of Ratcliff's (1978) “drift diffusion model” (DDM; specifically, the Diffusion Model Analysis Toolbox [DMAT] implementation developed by Vandekerckhove & Tuerlinckx, 2008; available for download at: http://ppw.kuleuven.be/okp/software/dmat/). In this model, sensory evidence is represented by a single accumulator that drifts towards an upper or lower response boundary (where each boundary corresponds to a unique response alternative). Unlike the LBA, the DDM permits stochastic changes in the step size and direction of the accumulator per unit time. However, this flexibility can also be a disadvantage: fitting times are typically much longer than for the LBA. Nevertheless, the DDM captures many benchmark phenomena in speeded two-alternative, forced choice (2AFC) tasks, and thus we expected a correlation between memory capacity and drift rates estimated using this model as well.
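As an illustration of the diffusion process just described (not of the DMAT fitting routines themselves), here is a minimal MATLAB random-walk simulation; all parameter values are assumed for the example.

```matlab
% Simulate a two-boundary drift diffusion process: a single accumulator
% starts between the boundaries and takes noisy steps until it crosses
% the upper (choice 1) or lower (choice 2) boundary.
v = 0.8; a = 1.2; z = a/2; t0 = 0.3;   % drift rate, boundary separation, start point, nondecision time
s = 1; dt = 0.001; nTrials = 1000;     % within-trial noise SD, time step (s)

choice = zeros(nTrials, 1); rt = zeros(nTrials, 1);
for t = 1:nTrials
    x = z; elapsed = 0;
    while x > 0 && x < a
        x = x + v*dt + s*sqrt(dt)*randn;   % deterministic drift + Gaussian diffusion noise
        elapsed = elapsed + dt;
    end
    choice(t) = 1 + (x <= 0);              % 1 = upper boundary, 2 = lower boundary
    rt(t) = elapsed + t0;
end
accuracy = mean(choice == 1);              % upper boundary treated as the correct response
```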
Results and discussion
Data from the color change detection and motion recall tasks were used to obtain estimates of WM capacity using standard approaches (see Methods). Capacity estimates for the color change detection (set size 8) and motion recall tasks were strongly correlated (r = 0.59, p < 0.001; correlations of similar magnitude have also been reported elsewhere; see e.g., Zhang & Luck, 2008), so estimates were pooled and averaged across tasks. However, all results reported here generalized when we considered these tasks separately (see below). 
Mean composite memory capacity was 2.61 (±1 SEM = 0.14), with a range of 0.58 to 4.30 items. Mean response times and accuracy from the motion discrimination task are shown in Table 1. As expected, accuracy increased (and response latency decreased) monotonically with increases in stimulus coherence. Next, the LBA was used to estimate drift rates for accumulators matching and mismatching the direction of motion presented on each trial (hereafter referred to as the “correct” and “error” accumulators, with corresponding drift rates denoted vc and ve). These values are listed in Table 2. In the absence of sensory evidence (i.e., 0% coherence), estimates of vc and ve were statistically indistinguishable, t(45) = 1.27, p = 0.21. However, estimates of vc and ve increased and decreased (respectively) monotonically with increases in motion coherence. We then computed a measure of decision “efficiency” for each participant by subtracting estimates of ve from vc (separately for each coherence level). These values are plotted as a function of memory capacity in Figure 3. Robust positive correlations were observed during 4% coherence (r = 0.42, p < 0.01, 95% CI = 0.15–0.63), 16% coherence (r = 0.47, p < 0.001, 95% CI = 0.21–0.67), and 32% coherence (r = 0.48, p < 0.001, 95% CI = 0.22–0.68) trials, but not during 0% coherence trials (where there is no “evidence” to accumulate; r = 0.06, p = 0.69, 95% CI = −0.23 to 0.34). Qualitatively similar results were obtained when memory capacity was defined using only change detection performance (averaged across set sizes 4 and 8; r = 0.22, 0.39, 0.40, and 0.41 for the 0%, 4%, 16%, and 32% coherence conditions, respectively; p = 0.14 for the 0% condition, p < 0.05 for the remaining conditions) or motion recall performance (r = −0.04, 0.39, 0.50, and 0.55 for the 0%, 4%, 16%, and 32% coherence conditions, respectively; p > 0.4 for the 0% condition, p < 0.05 for the remaining conditions). Qualitatively identical findings were also obtained when we plotted estimates of vc as a function of memory capacity, indicating that these findings are not idiosyncratic to our ad hoc “efficiency” measure.
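For concreteness, a minimal MATLAB sketch of this efficiency-by-capacity correlation analysis is given below; the data-generating lines are arbitrary placeholders so that the sketch runs, and the variable names are not from the original analysis code.

```matlab
% Correlate decision "efficiency" (vc - ve) with WM capacity across
% participants, separately for each coherence level.
nSubs = 46;
K  = 2.6 + 0.9*randn(nSubs, 1);                                   % placeholder capacity estimates
vc = repmat([0.8 1.2 2.3 2.9], nSubs, 1) + 0.5*randn(nSubs, 4);   % placeholder correct drift rates
ve = repmat([0.9 0.5 -0.6 -1.2], nSubs, 1) + 0.5*randn(nSubs, 4); % placeholder error drift rates

efficiency = vc - ve;                        % one column per coherence level
coherence  = [0 4 16 32];
for c = 1:4
    [R, P] = corrcoef(K, efficiency(:, c));  % Pearson correlation (base MATLAB)
    fprintf('%d%% coherence: r = %.2f, p = %.3f\n', coherence(c), R(1,2), P(1,2));
end
```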
Table 1. Mean (±1 SEM) accuracy and response latency in the motion discrimination task of Experiment 1.

Coherence      0%            4%            16%           32%
Accuracy       0.49 (0.01)   0.65 (0.01)   0.88 (0.02)   0.93 (0.02)
Latency (s)    1.59 (0.07)   1.44 (0.07)   1.07 (0.07)   0.91 (0.06)
Table 2. Mean (±1 SEM) estimates of vc and ve returned by the LBA model.

Coherence              0%             4%             16%            32%
vc                     0.82 (0.06)    1.17 (0.07)    2.27 (0.13)    2.87 (0.16)
ve                     0.85 (0.06)    0.50 (0.06)    −0.60 (0.13)   −1.21 (0.16)
Difference (vc − ve)   −0.03 (0.02)   0.67 (0.07)    2.87 (0.24)    4.08 (0.30)
Several studies (e.g., Ackerman, Beier, & Boyle, 2002; Kyllonen & Christal, 1990) have documented moderately strong positive correlations between WM ability and measures of “perceptual” or “processing speed” (e.g., how quickly an observer can encode or respond to a stimulus, or how quickly an observer can compare two stimuli). One possibility is that the correlations shown in Figure 3 reflect this well-known link. Alternately, the correlations could simply reflect the fact that some subjects tried harder on the WM and perceptual decision making (PDM) tasks, and that differences in this type of “general effort” explain the correlations. By either account, one might also expect individual differences in WM capacity to correlate with mean response latencies. However, no correlations between capacity and mean response times (RTs) were found (r < 0.15 for all four coherence levels, all ps > 0.32). Thus, it seems unlikely that the correlations shown in Figure 3 can be explained solely by assuming that high-capacity individuals were simply faster at responding or simply tried harder. Instead, high-capacity subjects appear to be more efficient at accumulating sensory evidence, as indexed by the LBA, which estimates accumulation rates from the combination of RT and accuracy in the PDM task. However, the issue of the relationship between “general effort” accounts and correlations of performance on different tasks is complex and nuanced, and we return to this issue in the General discussion.
On a related point, we emphasize that generic response latency measures reflect the total time from stimulus onset to response execution, and that this total time comprises many cognitive operations. The purpose of the LBA (and related models of speeded 2AFC performance) is to estimate a set of latent variables thought to determine overall response latency (i.e., the amount of evidence needed to reach a decision criterion, the rate at which evidence is “accumulated,” etc.). There are many reasons why a given participant might respond more slowly (or quickly) in the PDM task relative to others. For example, two participants might sample evidence at an equivalent rate, but one might have a conservative response threshold (i.e., he or she might require a lot of sensory evidence before committing to a decision). Alternately, these participants might have comparable thresholds, but vary in how quickly they are able to sample or accumulate information (i.e., drift rates). By decomposing overall response latency (and accuracy) measures into latent variables, one can evaluate the selective relationship(s) between each latent parameter and WM ability. Here, we found that evidence accumulation rates—but not raw response latencies or other LBA parameters—correlated with WM ability, suggesting that these processes draw upon a common resource.
Finally, we also examined whether correlations between drift rate and WM capacity were idiosyncratic to the LBA. To do so, we reanalyzed data from the motion discrimination task using a drift diffusion model (specifically, the DMAT implementation developed by Vandekerckhove & Tuerlinckx, 2008; see Methods). Estimates of drift rate returned by this model were also positively correlated with WM capacity for the 16% and 32% coherence levels (r = −0.16, 0.13, 0.30, and 0.37 for 0%, 4%, 16%, and 32% coherence trials, respectively; p < 0.05 for 16%–32% trials). 
Experiment 2
In Experiment 2, we examined whether putative correlations between memory capacity and drift rate would generalize to a different decision-making task (speeded letter discrimination). 
Methods
Participants
Sixteen undergraduate students from the University of California, San Diego, completed a single one-hour testing session in exchange for course credit. All participants self-reported normal or corrected-to-normal visual acuity, and all gave both written and oral informed consent in accordance with the Institutional Review Board at UCSD. 
Stimuli and apparatus
As in Experiment 1, stimuli were generated in MATLAB (MathWorks) and rendered on an 18-in. CRT monitor (with a refresh rate of 60 Hz) via Psychophysics Toolbox software (Brainard, 1997; Pelli, 1997). Participants were seated approximately 60 cm from the display (head position was unconstrained) and made responses via a standard keyboard. 
Procedure
On each trial, two “reference” letters (subtending approximately 0.86° × 1.24° in 36-point Arial font) were rendered 2.5° above and ±4° to the left and right of fixation. Following a blank interval (400–700 ms, randomly chosen from a uniform distribution on each trial), a target letter matching one of the reference letters was displayed 2.5° directly above fixation for 16.67, 33.33, 50, 66.67, 83.33, or 100 ms and then masked with an “X” for 500 ms. A probe “?” was then presented for a maximum of 900 ms. Participants were instructed to indicate which reference letter the target matched as quickly and as accurately as possible, pressing the “Z” key with their left hand to indicate the left letter and the “/” key with their right hand to indicate the right letter. Trials were aborted if no response was issued within 1500 ms, and a warning message encouraged participants to respond more quickly if the response time on a given trial was ≥ 1000 ms. Trials were separated by a 700–1200 ms interval (randomly chosen from a uniform distribution on each trial). Reference letters were drawn from sets including [E,C], [C,P], and [C,F]. The letter pairs, as well as their positions on the screen (i.e., which letter was assigned to the left vs. right side of the screen), were randomly chosen at the start of each block. Each participant completed 18 blocks of 48 trials.
Results and discussion
Mean response latencies and accuracy are listed as a function of letter exposure duration (i.e., the stimulus onset asynchrony, SOA, between target and mask onset) in Table 3. As expected, accuracy increased monotonically with exposure duration, though response latencies were largely unchanged. Accuracy and latency data were then modeled with the LBA. As in Experiment 1, we defined a measure of decision-making efficiency by computing the differences between estimates of vc and ve returned by the model (separately for each exposure duration). These are plotted as a function of memory capacity (obtained using the same color change detection task that was employed in Experiment 1) in Figure 4. Significant positive correlations between WM capacity and decision efficiency were observed for 66.67, 83.33, and 100 ms letter exposure durations (r = 0.53, 0.53, 0.56, respectively, p < 0.05), but not for 16.67, 33.33, and 50 ms exposure durations (r = 0.20, 0.12, and 0.07, respectively, p > 0.40). Precisely why significant correlations manifest only at longer exposure durations is unclear. Nevertheless, the results of this experiment suggest that links between WM capacity and decision efficiency are not idiosyncratic to the motion discrimination task used in Experiment 1.
Figure 4. Results of the letter discrimination task. Panels A–F plot the correlation between memory capacity and drift rates for letter exposure durations of 16.67, 33.33, 50, 66.67, 83.33, and 100 ms, respectively.
Table 3. Mean (±1 SEM) accuracy and response latency in the letter discrimination task of Experiment 2.

Exposure duration   16.67 ms      33.33 ms      50 ms         66.67 ms      83.33 ms      100 ms
Accuracy            0.76 (0.02)   0.82 (0.02)   0.88 (0.02)   0.93 (0.01)   0.94 (0.01)   0.95 (0.02)
Latency (s)         0.58 (0.01)   0.56 (0.01)   0.55 (0.01)   0.54 (0.01)   0.54 (0.02)   0.55 (0.01)
Experiment 3
In Experiment 3, we asked whether there was a causal relationship between WM capacity and drift rates. To evaluate this possibility, a new group of participants completed an experiment in which they performed (in randomized order) a color change detection task, a motion discrimination task, and a “dual task” that required them to make speeded motion direction discriminations while maintaining a concurrent memory load. If decision making draws upon the same mnemonic resources that support WM, then participants' performance on a WM task should suffer when they are required to make a perceptual discrimination during the WM retention interval.
Methods
Participants
Thirty-two undergraduate students from the University of California, San Diego, completed a single 1.5-hr testing session in exchange for course credit. All participants reported normal or corrected-to-normal visual acuity, and all gave both written and oral informed consent in accordance with the Institutional Review Board at UCSD. 
Stimuli and apparatus
As in Experiments 1 and 2, stimuli were generated in MATLAB (MathWorks) and rendered on an 18-in. CRT monitor (refreshing at a rate of 60 Hz) via Psychophysics Toolbox software (Brainard, 1997; Pelli, 1997). Participants were seated approximately 60 cm from the display (head position was unconstrained) and made responses via a standard keyboard. 
Procedure
Color change detection:
On each trial, participants were presented with two or six colored squares (randomly selected without replacement from a set including red, green, blue, black, white, yellow, and violet) for 100 ms. Each square subtended 3° and was presented at one of six possible locations (0° : 300° in 60° increments) on the perimeter of an imaginary circle (radius = 6°) centered at fixation. After a 1000 ms blank interval, a 0% coherent kinetogram (inner and outer radii of 0.75° and 8°, respectively) containing 800 dots (each 0.1°) moving at 4°/s (with a limited lifetime of 100 ms) was presented for 3000 ms. Participants were told that this stimulus was irrelevant and to focus on remembering the colors of the squares presented at the beginning of the trial. The kinetogram was followed by another 1000 ms interval and the presentation of a test array. Here, a single square replaced one of the sample items, and participants reported whether the color of this new square matched the color of the item in the same location at the start of the trial (pressing “Z” to indicate “yes,” and “/” to indicate “no”). Participants were instructed to prioritize accuracy, and no response deadline was imposed. Each participant completed two blocks of 48 trials in this task. 
Motion discrimination:
On each single-task motion discrimination trial, participants were shown two or six white squares for 100 ms. Participants were explicitly told to ignore these stimuli. After a 1000 ms blank interval, a kinetogram was rendered for 3000 ms. On each trial, 8% or 24% of the dots in the kinetogram moved towards the left or right side of the screen; participants were asked to report the direction of coherent motion as quickly and accurately as possible (pressing “Z” for left and “/” for right). The kinetogram was followed by a 1000 ms blank interval and the presentation of a “probe” array containing a single white square. No response to this display was required; participants simply pressed the spacebar when they were ready to begin the next trial.
Dual task:
This procedure combined aspects of the color change detection and motion discrimination tasks. Specifically, participants were required to discriminate the direction of an 8% or 24% coherent moving dot stimulus while remembering the colors of two or six squares. As in the primary tasks, participants pressed the “Z” and “/” keys to indicate leftward and rightward motion during the motion discrimination period, and used the same keys to indicate “same” vs. “different” (respectively) upon presentation of the color square probe. 
Results
A 2 (single vs. dual task) × 2 (motion coherence: 8% vs. 24%) repeated measures ANOVA on drift rates obtained from the motion discrimination tasks revealed a main effect of coherence, F(1, 31) = 149.41, p < 0.001, ηp² = 0.29, but no main effect of task type and no interaction between these factors (both Fs < 1; see Figure 5A). Conversely, a 2 (single vs. dual task) × 2 (set size: 2 or 6 items) repeated measures ANOVA on WM capacity estimates revealed a main effect of task type, F(1, 31) = 55.72, p < 0.001, ηp² = 0.16, a main effect of set size, F(1, 31) = 70.29, p < 0.001, ηp² = 0.20, and a significant interaction between these factors, F(1, 31) = 38.08, p < 0.001, ηp² = 0.07. During set size 2 trials, WM capacity decreased by an average of 0.31 (±0.06) items (M = 1.89 and 1.57 for single- and dual-task trials, respectively). This cost increased to 1.36 (±0.19) items during set size 6 trials (M = 3.37 vs. 2.02; see Figure 5B). Note also that performance on the motion discrimination task was equivalent in the single- and dual-task conditions. This suggests that dual-task costs on memory performance were not driven by “state-level” factors (e.g., changes in alertness or anxiety) that would be expected to impair performance on both tasks. Thus, performing a task that required evidence accumulation interfered with concurrent WM storage.
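The critical interaction can be illustrated with a minimal MATLAB sketch of the dual-task cost analysis (for a 2 × 2 within-subjects design, the interaction F test is equivalent to a paired test on the difference in costs). The capacity vectors generated below are arbitrary placeholders so that the sketch runs; they are not the data reported above.

```matlab
% Dual-task cost on WM capacity at each set size, and a paired test on the
% difference in costs (equivalent to the 2 x 2 within-subjects interaction).
nSubs = 32;
Ksingle2 = 1.9 + 0.4*randn(nSubs, 1);  Kdual2 = Ksingle2 - 0.3 + 0.3*randn(nSubs, 1);
Ksingle6 = 3.4 + 0.8*randn(nSubs, 1);  Kdual6 = Ksingle6 - 1.4 + 0.9*randn(nSubs, 1);

cost2 = Ksingle2 - Kdual2;                  % dual-task cost at set size 2
cost6 = Ksingle6 - Kdual6;                  % dual-task cost at set size 6
[~, pInteraction] = ttest(cost6 - cost2);   % paired t test (Statistics Toolbox)
fprintf('Mean cost: %.2f items (set size 2) vs. %.2f items (set size 6), p = %.3f\n', ...
        mean(cost2), mean(cost6), pInteraction);
```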
Figure 5. Performing a motion discrimination task interferes with WM storage. (A) Drift rates estimated from accuracy and response latency in a motion discrimination task are plotted as a function of stimulus coherence (8% or 24%), and whether the task was performed alone (“single task”) or while maintaining a concurrent memory load (“dual task”). The latter factor had no influence on performance. (B) Estimates of WM capacity obtained in a color change detection task are plotted as a function of set size (2 vs. 6 items) and the task type (single vs. dual). Performing a concurrent motion discrimination task that requires the integration of sensory evidence had a deleterious impact on performance. Error bars reflect ±1 SEM across subjects.
Discussion
Here, we show that an interleaved decision-making task interferes with performance on a concurrent WM task, presumably because decision making draws on the same pool of mnemonic resources that support WM. However, it should be noted that there are many other “resource independent” factors that could account for these effects (see, e.g., Duncan, 1980; Navon, 1984). That said, our data do argue against one uninteresting account of this interference; namely, that subjects may have been more anxious or less motivated during the dual-task relative to single-task conditions (e.g., because of an increase in difficulty). Presumably, any such “state-level” factors would reduce performance on both tasks in the dual-task condition relative to the single-task condition (though the effects might be smaller in the task assigned the highest priority). However, the observation of a selective impairment on the WM task in the dual-task condition speaks against these types of general accounts.
In addition, we did not have a strong a priori reason to predict that dual-task costs would emerge only in the WM task rather than in the interleaved decision task. One possibility is that participants chose to prioritize the decision task because it required a speeded response. However, participants could presumably have been made to place greater priority on the WM task (e.g., via altered instructions or a directed reward scheme), and this may well have pushed the dual-task costs onto the decision-making task. Alternately, it may be that the second (or, more generally, the most recently initiated) task is automatically prioritized. While these possibilities cannot be directly addressed with the present data, our main goal was simply to present direct evidence that the WM and PDM tasks draw upon a common set of “resources,” and this notion is supported by the demonstration of a selective deficit in the WM task in the dual-task condition. Thus, even though the direction of the effect might have gone the other way had we differentially prioritized the tasks, the observation of selective interference in one task (the WM task) in the dual-task condition argues against the notion of independent WM and PDM resources.
General discussion
Here, we present data suggesting that individual variability in WM capacity is correlated with decision-making ability. This relationship generalized across different tasks and different models of decision making and cannot be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and evidence accumulation share a common resource. 
In the current study, we conceptualized WM capacity in terms of a small number of discrete (i.e., independent) “slots,” each capable of storing a single object or “unit” of information (Luck & Vogel, 2013). However, an alternative view proposes that capacity limits are instead determined by a limited resource that can be flexibly allocated to a variable number of items (e.g., Bays et al., 2009; van den Berg, Shin, Chou, George, & Ma, 2012). We take no position on this debate here, but note that one would predict a correlation between WM and decision-making performance regardless of how mnemonic resources are quantized.
On a related point, there is some ambiguity regarding which factor(s) mediate the observed links between WM ability and decision making. One possibility is that “high capacity” individuals have a larger pool of mnemonic “resources” (relative to “low capacity” individuals) that can be deployed in the service of decision making. Alternately, “high” and “low” capacity individuals may differ in how efficiently they deploy these resources. The latter alternative is supported by a growing literature suggesting that individual differences in working memory capacity reflect variability in attentional control rather than the amount of “storage space” an individual possesses (e.g., Kane & Engle, 2003; McNab & Klingberg, 2008; Vogel, McCollough, & Machizawa, 2005). Additional work aimed at determining whether the present correlations are best explained by actual capacity differences or by differences in attentional control will provide key insights into the mechanistic link between WM ability and decision-making efficiency.
The correlations between WM ability and decision-making ability reported here can be explained in at least two different ways. Specifically, one possibility is that both WM and decision making draw upon a general pool of resources that are also recruited by other demanding visual tasks. Alternately, WM and decision making might draw upon a shared pool of resources that are separate from those used to solve other demanding tasks. Unfortunately, the data reported here offer little that discriminates between these alternatives. On the one hand, all tasks may depend on a common pool of resources at some level of processing. For example, two tasks requiring the same motor output should engage a common resource that represents the shared response mappings. However, in less trivial cases there is at least some evidence arguing against entirely domain-general cognitive resources. For example, there is evidence suggesting that separate resource pools mediate the storage of spatial versus object information in WM (e.g., Courtney, Ungerleider, Keil, & Haxby, 1996; Della Sala, Gray, Baddeley, Allamano, & Wilson, 1999; Smith, Jonides, & Koeppe, 1996). That said, the data reported here cannot definitively exclude the possibility that WM and PDM draw upon a domain-general set of cognitive resources shared by other (perhaps all) visual tasks. Future work will need to explore this possibility in greater detail by identifying tasks that selectively interfere/do not interfere with both decision making and WM. 
A recent study by Schmiedek, Oberauer, Wilhelm, Süss, and Wittman (2007) also reported positive correlations between WM ability (measured via complex span tasks) and drift rates. Other work has documented positive correlations between drift rates and intelligence (e.g., van Ravenzwaaij, Brown, & Wagenmakers, 2011), as well as between working memory and intelligence (e.g., Engle, Tuholski, Laughlin, & Conway, 1999). However, clear theoretical explanations for these correlations are lacking. For example, Schmiedek et al. (2007) proposed that links between WM, decision making, and reasoning ability reflect variability in the ability to maintain ad hoc bindings between stimulus–response mappings. Conversely, Kane and colleagues (McVay & Kane, 2012) proposed that these correlations reflect variability in the frequency of attentional lapses. Here, we offer a third alternative: evidence accumulation and WM are linked because both processes draw on the same pool of limited resources. This hypothesis is motivated in part by clear parallels between the neural mechanisms supporting WM and decision making. For example, responses of neurons located in early visual cortex are thought to provide the input to neurons in posterior parietal and frontal cortex that undergo a “ramp-like” increase in activity during decision making (e.g., Gold & Shadlen, 2007), and recent evidence suggests that sustained population-level responses in the same regions of early visual cortex support WM representations (Ester, Anderson, Serences, & Awh, 2013; Harrison & Tong, 2009; Serences, Ester, Vogel, & Awh, 2009). Thus, the quality of sensory representations in these visual areas may set an upper bound on the efficacy of both WM and perceptual decision making. Alternatively, the ramp-like accumulation of sensory evidence in parietal and frontal cortex may be tantamount to the formation of a stable WM representation that can guide behavior. On this account, the common thread between WM and decision making may not be the quality of sensory representations per se, but rather the efficiency with which sensory information is utilized by downstream accumulation mechanisms. We emphasize that these accounts are speculative, and they need not be mutually exclusive. Clearly, further research is needed to delineate putative links between mechanisms of WM and decision making. Nevertheless, our results raise the intriguing possibility that the storage of information in WM and the accumulation of sensory evidence reflect the operation of a common capacity-limited mechanism. 
Acknowledgments
This research was supported by funding from the National Institutes of Health, through grants NIH R01 MH092345 (J. T. S.) and NIH T32 MH020002 (E. F. E.). 
*EFE and JTS conceived and designed the experiments. EFE and TCH collected the data. EFE, TCH, and SDB analyzed the data using software developed by SDB. EFE and JTS wrote the manuscript. 
Commercial relationships: none. 
Corresponding authors: Edward F. Ester; John T. Serences. 
Email: edward.ester01@gmail.com; jserences@ucsd.edu. 
Address: Department of Psychology, University of California, San Diego, La Jolla, CA, USA. 
References
Ackerman P. L. Beier M. E. Boyle M. D. (2002). Individual differences in working memory within a nomological network of cognitive and perceptual speed abilities. Journal of Experimental Psychology: General, 131, 567–589.
Bays P. M. Catalao R. F. G. Husain M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9 (10): 7, 1–11, http://www.journalofvision.org/content/9/10/7, doi:10.1167/9.10.7.
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.
Brown S. D. Heathcote A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153–178.
Courtney S. M. Ungerleider L. G. Keil K. Haxby J. V. (1996). Object and spatial visual working memory activate separate neural systems in human cortex. Cerebral Cortex, 6, 39–49.
Cowan N. (2000). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87–185.
Della Sala S. Gray C. Baddeley A. Allamano N. Wilson L. (1999). Pattern span: A tool for unwelding visuo-spatial memory. Neuropsychologia, 37, 1189–1199.
Duncan J. (1980). The demonstration of capacity limitation. Cognitive Psychology, 12, 75–96.
Engle R. W. Tuholski S. W. Laughlin J. E. Conway A. R. (1999). Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General, 128, 309–331.
Ester E. F. Anderson D. E. Serences J. T. Awh E. (2013). A neural measure of precision in visual working memory. Journal of Cognitive Neuroscience, 25, 754–761.
Gold J. I. Shadlen M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
Harrison S. A. Tong F. (2009). Decoding reveals the contents of visual working memory in early visual areas. Nature, 458, 632–635.
Heekeren H. R. Marrett S. Bandettini P. A. Ungerleider L. G. (2004). A general mechanism for perceptual decision making in the human brain. Nature, 431, 859–862.
Kane M. J. Engle R. W. (2003). Working memory capacity and the control of attention: The contributions of goal neglect, response competition, and task set to Stroop interference. Journal of Experimental Psychology: General, 132, 47–70.
Kyllonen P. C. Christal R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14, 389–433.
Link S. W. Heath R. A. (1975). A sequential theory of psychological discrimination. Psychometrika, 40, 77–105.
Luck S. J. Vogel E. K. (2013). Visual working memory capacity: From psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17, 391–400.
McNab F. Klingberg T. (2008). Prefrontal cortex and basal ganglia control access to working memory. Nature Neuroscience, 11, 103–107.
McVay J. C. Kane M. J. (2012). Why does working memory capacity predict variation in reading comprehension? On the influence of mind wandering and executive attention. Journal of Experimental Psychology: General, 141, 302–320.
Navon D. (1984). Resources – a theoretical soup stone? Psychological Review, 91, 216–234.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Purcell B. A. Heitz R. P. Cohen J. Y. Schall J. D. Logan G. D. Palmeri T. J. (2010). Neurally constrained modeling of perceptual decision making. Psychological Review, 117, 1113–1143.
Ratcliff R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108.
Ratcliff R. McKoon G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20, 873–922.
Romo R. Brody C. D. Hernandez A. Lemus L. (1999). Neuronal correlates of parametric working memory in the prefrontal cortex. Nature, 399, 470–473.
Schmiedek F. Oberauer K. Wilhelm O. Süss H. Wittman W. W. (2007). Individual differences in components of reaction time distributions and their relations to working memory and intelligence. Journal of Experimental Psychology: General, 136, 414–429.
Schwarz G. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461–464.
Serences J. T. Ester E. F. Vogel E. K. Awh E. (2009). Stimulus-specific delay activity in human primary visual cortex. Psychological Science, 20, 207–214.
Shadlen M. N. Newsome W. T. (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of rhesus monkey. Journal of Neurophysiology, 86, 1916–1936.
Smith E. E. Jonides J. Koeppe R. A. (1996). Dissociating verbal and spatial working memory using PET. Cerebral Cortex, 6, 11–20.
Usher M. McClelland J. L. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108, 550–592.
Vandekerckhove J. Tuerlinckx F. (2008). Diffusion model analysis with MATLAB: A DMAT primer. Behavior Research Methods, 40, 61–72.
van den Berg R. Shin H. Chou W. C. George R. Ma W. J. (2012). Variability in encoding precision accounts for visual short term memory limitations. Proceedings of the National Academy of Sciences, USA, 109, 8780–8785.
van Ravenzwaaij D. Brown S. Wagenmakers E. (2011). An integrated perspective on the relation between response speed and intelligence. Cognition, 119, 381–393.
Vogel E. K. McCollough A. W. Machizawa M. G. (2005). Neural measures reveal individual differences in controlling access to working memory. Nature, 438, 500–503.
Wald A. (1947). Sequential analysis. New York: John Wiley and Sons.
Zhang W. Luck S. J. (2008). Discrete fixed-resolution representations in visual working memory. Nature, 453, 233–235.