Abstract
Background: Auditory sensory substitution (SS) devices, such as the vOICe, encode visual information into sound in real time with the goal of aiding the blind. The vOICe translates an image's horizontal dimension into scan time, its vertical dimension into frequency, and pixel brightness into loudness (a minimal code sketch of this mapping follows the abstract). SS training and use engender crossmodal plasticity in the blind and the sighted, such that auditory input from the device generates activation of early visual regions during SS tasks. Nevertheless, unlike vision, vOICe interpretation remains slow and laborious even after training, and is therefore often assumed to be processed top-down.

Method: We investigated whether sighted (N = 10) and blind (N = 4) participants can crossmodally activate visual cortex with the vOICe without attention, following vOICe training. In a distraction fMRI task, participants were distracted by counting backward while vOICe sounds encoded from white-noise images were played. In a control fMRI task, participants detected a flicker in a display of white-noise images.

Results: Early visual regions (i.e., BA 18 and 19) and multisensory regions were activated during the vOICe distraction task in sighted and blind participants (post-vOICe-training – pre-vOICe-training). This automatic visual activation from the vOICe (i.e., during a distraction task) is unlikely to reflect visualization, because the neural activations correlated neither with participants' post-hoc reports nor with Vividness of Visual Imagery Questionnaire scores. Furthermore, visual deactivation during the visual task was found in sighted participants (pre-vOICe-training – post-vOICe-training, to highlight deactivation). Visual deactivation significantly correlated with sighted participants' vOICe training performance.

Discussion: Our results indicate that SS can be processed in visual cortical regions without top-down attention in sighted and blind vOICe users. Visual deactivation following vOICe training may indicate that natural vision is weakened by crossmodal training, via competition for dominance between crossmodal and natural visual inputs to visual cortical regions.
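For readers unfamiliar with the encoding, the following Python sketch illustrates the image-to-sound mapping described in the Background (columns scanned left to right over time, rows mapped to frequency with the top of the image highest, brightness mapped to loudness). The sample rate, scan duration, frequency range, and logarithmic spacing are illustrative assumptions, not the vOICe's actual parameters.

import numpy as np

def voice_encode(image, duration=1.0, sample_rate=44100,
                 f_min=500.0, f_max=5000.0):
    """Toy vOICe-style encoder (illustrative assumptions only).

    image: 2-D array of pixel brightness in [0, 1], row 0 at the top.
    Returns a mono waveform scanning the image left to right, with
    row position mapped to frequency and brightness to loudness.
    """
    n_rows, n_cols = image.shape
    t = np.arange(int(duration * sample_rate)) / sample_rate
    # Which image column is being "scanned" at each audio sample
    # (horizontal dimension -> scan time).
    col = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)
    # Top rows map to the highest frequencies, log-spaced
    # (vertical dimension -> frequency).
    freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)
    waveform = np.zeros_like(t)
    for row in range(n_rows):
        # Pixel brightness modulates each row's sinusoid
        # (brightness -> loudness).
        waveform += image[row, col] * np.sin(2 * np.pi * freqs[row] * t)
    return waveform / (np.abs(waveform).max() + 1e-12)  # normalize to [-1, 1]

# Example: encode a random 32 x 64 "white noise" image into one second of sound.
# wave = voice_encode(np.random.rand(32, 64))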
Meeting abstract presented at VSS 2016