Abstract
Purpose: We have recently shown that direction-of-motion information is stored in a high-capacity transient ("sensory") memory system and a low-capacity sustained system (Visual Short-Term Memory, VSTM) through the use of graded resources (Shooner et al., 2010). Vision is purposeful: we selectively attend to a subset of stimuli (targets) while ignoring the rest (distractors). Here we investigated how distractors influence the encoding and memorization of direction-of-motion information.

Methods: Observers (N = 4) viewed one to sixteen 1-deg circular disks in random linear motion (duration = 200 ms). A subset of the disks (1–9) was tagged as "targets" at the beginning of each trial; the remaining disks were distractors. At the end of the trial, observers reported the perceived direction of motion of a cued target. We measured performance as a function of target set-size, distractor set-size, and cue delay, and we assessed the fit of four statistical models (Gaussian, Gaussian+guessing, and two versions of Gaussian+guessing+confusion/misbinding). The parameters of the best-fitting model (Gaussian+guessing) were used to assess the accuracy, precision, capacity, and dynamics of visual encoding and memory.

Results: Encoding and memorization of direction-of-motion information remain accurate over a broad range of target and distractor set-sizes. An increase in target set-size causes a linear decline in performance, indicating a limited capacity for both encoding and memorization. Distractor set-size influences only sensory memory; it has no effect on the encoding and VSTM stages. Sensory memory exhibits a rapid exponential decay. The precision of encoding and of memory both decline gradually as a function of target set-size; for memory, this decline has a lower bound of ~0.03–0.05 deg⁻¹.

Conclusions: Our results support an atomic model of resource allocation in which resources are shared until a minimum usable resource level is reached. These resources are vulnerable to distractors only during the transient period when information is transferred from the encoding stage to VSTM.
Meeting abstract presented at VSS 2012
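To illustrate the Gaussian+guessing model named in the Methods, here is a minimal sketch of how such a mixture might be fit to direction-report errors by maximum likelihood. It assumes errors measured in degrees over a 360° direction space; the function names, starting values, bounds, and synthetic data are illustrative and are not the authors' actual analysis (for circular data, a wrapped Gaussian or von Mises component is often substituted; the plain Gaussian is a reasonable approximation when sigma is small).

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, errors):
    """Negative log-likelihood of a Gaussian+guessing mixture.

    params: (sigma, g) -- Gaussian SD in degrees and guess rate.
    errors: report errors in degrees, assumed in [-180, 180).
    """
    sigma, g = params
    gaussian = np.exp(-0.5 * (errors / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    uniform = 1.0 / 360.0  # guesses assumed uniform over all directions
    likelihood = (1 - g) * gaussian + g * uniform
    # Small epsilon guards against log(0) when g approaches 0
    return -np.sum(np.log(likelihood + 1e-12))

def fit_mixture(errors):
    """Fit sigma and guess rate by maximum likelihood (illustrative bounds)."""
    result = minimize(
        neg_log_likelihood,
        x0=[20.0, 0.1],                    # starting guess: sigma = 20 deg, g = 0.1
        args=(errors,),
        bounds=[(1.0, 120.0), (0.0, 1.0)],
        method="L-BFGS-B",
    )
    sigma, g = result.x
    return sigma, g, 1.0 / sigma           # precision reported in deg^-1

# Synthetic demo data: 70% on-target responses, 30% random guesses
rng = np.random.default_rng(0)
on_target = rng.normal(0.0, 25.0, size=70)
guesses = rng.uniform(-180.0, 180.0, size=30)
errors = np.concatenate([on_target, guesses])

sigma, g, precision = fit_mixture(errors)
print(f"sigma = {sigma:.1f} deg, guess rate = {g:.2f}, precision = {precision:.3f} deg^-1")
```

Under this parameterization, precision is the reciprocal of the Gaussian SD, so the ~0.03–0.05 deg⁻¹ memory floor reported in the Results corresponds to an error SD of roughly 20–33 deg.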