Abstract
Visual training improves performance in both visually intact and visually impaired participants, making it a useful rehabilitation tool. An open question in rehabilitation is whether invoking multisensory integration can increase training efficacy. Four visually intact subjects (three female) completed a 10-day training experiment in which they repeatedly performed a 4-way direction discrimination task. In this task, a gaze-contingent, 5° diameter visual global motion stimulus was presented at 10° azimuth/elevation in virtual reality (VR), using the HTC Vive Pro Eye head-mounted display with integrated eye tracking. The stimulus could move along one of four oblique directions. Two subjects trained with visual stimulation alone (V group). The other two subjects trained with an accompanying pulsed white-noise auditory cue (AV group) that moved in VR along the horizontal component of the visual motion, rendered with the SteamAudio 3D audio spatializer. Visual task difficulty was manipulated with a staircase that varied the range of directions in which dots could move around the principal direction of motion in the visual stimulus. The staircase level was determined solely by correct judgment of the vertical component of motion. Direction range thresholds (DRT) were computed daily. The mean (± standard deviation) slope of DRT for the overall judgment across the 10 days was 2.1 ± 2.3° for the AV group and 1.9 ± 0.9° for the V group (n = 2 each). Visual-only pre- and post-tests showed an average change in DRT of -3.5 ± 0.7° for the AV group and 33.5 ± 19.1° for the V group. This suggests that AV subjects learned to rely on the auditory cue for overall task performance and failed to improve on the purely visual component: once the auditory cues were removed, performance in the AV group dropped to pre-training levels.
Thus, adding even an informative auditory cue to a visual task may impair rather than enhance visual learning under certain conditions.