In our static condition (
Figure 3a), visual objects were held static in front of the participant and were removed from view abruptly. This mimics classic laboratory tasks of visual working memory within our VR environment.
Figure 3b shows the performance accuracy in our static condition. Not surprisingly, we found a robust effect of memory load,
t(24) = 10.660,
p = 1.393 × 10⁻¹⁰,
d = 2.132, with close-to-ceiling performance in load 2 but not in load 4 (
Figure 3b), consistent with a capacity above two but below four objects. Likewise, when calculating sensitivity, we found higher
d′ values for load 2 (
M = 3.468,
SE = 0.101) than load 4 (
M = 2.228,
SE = 0.090). To connect our results with the large literature of prior studies, we also estimated working-memory capacity. We transformed our accuracy scores to capacity scores (denoted
K) by considering the maximum capacity (given the number of visual objects in the memory display) and the number of correct responses that would occur through guessing (given the number of objects to choose from at the probe stage). Like accuracy, capacity (
Figure 3d) was close to ceiling in load 2, with a
K of 1.920 ± 0.014 (
M ±
SE). In the more demanding load 4 condition, in which capacity was further from ceiling, we observed a
K of 3.100 ± 0.083. This observed
K around three reveals that the capacity estimates in our VR task are largely consistent with seminal prior work using comparable colored-shape stimuli in 2D displays (e.g.,
Luck & Vogel, 1997;
Vogel & Machizawa, 2004).
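The exact accuracy-to-capacity transform is not spelled out in formulas above. A minimal sketch of one common guessing-corrected (Cowan-style) transform consistent with the description is given below; the function name `capacity_k` and the assumption that chance level equals one over the number of probe-stage response options are ours, not taken from the original analysis.

```python
def capacity_k(accuracy, n_items, n_options):
    """Guessing-corrected capacity estimate (hypothetical reconstruction).

    accuracy  : proportion of correct responses
    n_items   : number of objects in the memory display (maximum capacity)
    n_options : number of objects to choose from at the probe stage
    """
    g = 1.0 / n_options  # accuracy expected from pure guessing
    # Rescale accuracy so that chance maps to 0 and perfect maps to n_items.
    return n_items * (accuracy - g) / (1.0 - g)

# Example: load 2, six response options, 98% correct responses.
print(capacity_k(0.98, 2, 6))  # -> 1.952
```

Under this transform, performance at chance yields K = 0 and perfect performance yields K equal to the set size, matching the near-ceiling K of about 1.9 at load 2 reported above.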
In addition to memory capacity, we were interested in the incidental use of space for mnemonic selection. To track such space-based mnemonic selection, we capitalized on our recent demonstrations of reliable directional biases in gaze when participants select visual objects within working memory (
Draschkow et al., 2022;
van Ede, Chekroud, & Nobre, 2019;
van Ede et al., 2020;
van Ede et al., 2021). As shown in
Figures 3e to
3g, in our static condition we confirmed clear gaze biases that are comparable to these studies. Following the color change of the central fixation cross at time 0, gaze position gradually became biased toward (i.e., in the direction of) the memorized location of the cued visual memorandum (
Figure 3e). This can also be appreciated in the time courses of horizontal gaze position in
Figure 3f. When the cued memory object occupied a location on the left during encoding (red traces), gaze became biased to the left, whereas when the cued memorandum was on the right during encoding (blue traces), gaze became biased to the right after the selection cue. To facilitate further evaluation of this spatial bias, we reduced it to a single measure of “towardness” (as in
Draschkow et al., 2022;
van Ede, Chekroud, & Nobre, 2019;
van Ede et al., 2020;
van Ede et al., 2021). As shown in
Figure 3g, this confirmed that our gaze index of space-based mnemonic selection was highly robust—both when the memory load was two (top row; cluster
p < 0.001) and when the memory load was four (second row; cluster
p < 0.001).
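The towardness measure is described only verbally above. A minimal sketch consistent with that description is shown below: horizontal gaze traces are sign-flipped on left-cue trials so that positive values always mean gaze moved toward the cued object's encoded location, then averaged across trials. The function name, sign convention, and array shapes are our assumptions for illustration.

```python
import numpy as np

def towardness(gaze_x, cued_side):
    """Collapse horizontal gaze into a single 'towardness' time course.

    gaze_x    : (n_trials, n_timepoints) horizontal gaze position,
                positive = right of fixation (arbitrary units)
    cued_side : (n_trials,) +1 if the cued object was encoded on the
                right, -1 if it was encoded on the left
    Returns a (n_timepoints,) trace; positive values indicate a gaze
    bias toward the memorized location of the cued object.
    """
    gaze_x = np.asarray(gaze_x, dtype=float)
    cued_side = np.asarray(cued_side, dtype=float)
    # Flip left-cue trials so 'toward the cued side' is always positive,
    # then average over trials.
    return (gaze_x * cued_side[:, None]).mean(axis=0)
```

Significance of such a trace against zero is then typically assessed with a cluster-based permutation test, consistent with the cluster p-values reported above.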
These gaze biases occurred even though the object location was never asked about, there was nothing to see at these locations, and nothing was expected at these locations in the interval after the cue (response options always appeared 1500 ms after cue onset and appeared below fixation, with the target object randomly positioned in one of six possible locations). Together, these data show that our VR task captures well-established properties of visual working memory: its estimated capacity and the use of space for selecting objects held within it. Having established this, we next turn to our key flow condition of interest.