Abstract
The ability to detect coherent motion embedded in random noise has long been studied in perceptual learning, and performance has been shown to improve reliably with practice (e.g., Ball & Sekuler, 1987). Many motion coherence training paradigms focus on very long-term learning (e.g., Shibata et al., 2012, trained participants for 10 days), but recent evidence suggests that motion direction discrimination learning may undergo consolidation in the hours immediately after training (Ashley & Pearson, 2012), especially if sleep is involved (McDevitt et al., VSS 2012). One common element of many perceptual learning tasks is offline learning—improvement in performance that occurs after formal training is complete and the observer is no longer engaged in the task. Such offline learning may be generalized, but is often retinotopically specific. Here, we demonstrate visual-hemifield-specific offline learning of motion detection. Observers were trained to detect near-threshold coherent motion in a single, non-cardinal direction in one visual hemifield. Random, white-noise motion was presented in the untrained hemifield, and fixation was enforced with a central RSVP letter discrimination task. Observers showed only modest improvement in motion discrimination over the course of the first training session. However, in a retest 24 hours after training, they showed marked improvement in detection ability for stimuli in the trained hemifield, but only slight improvement for stimuli in the untrained hemifield. These results suggest that motion coherence learning has an important offline component that may well be sleep-dependent and that, like classic sleep-dependent learning paradigms such as the texture discrimination task, this offline learning may be retinotopically specific.
Meeting abstract presented at VSS 2013