Abstract
Background: Worldwide, 45 million people are blind. Sensory substitution (SS) may restore visual function by encoding visual information into a signal perceived through another modality, such as somatosensation or audition. However, no devices are commercially available, owing to long training durations, top-down attention requirements, and the resulting low functionality. Enhancing the intuitiveness of devices is critical to their wider use. Crossmodal correspondences, or intrinsic (crossmodal) mappings, are relations that hold across modalities (such as between high spatial position and high-frequency sound) and are often used in SS encodings. By initiating SS training with crossmodal correspondence primitives and gradually building complexity, SS may become less attention-intensive and more intuitive.
Method: Texture stimuli were distinguished by naïve (no instruction or training on the device encoding) and trained SS users. Subjects listened to all sounds and viewed all image alternatives in each set, then paired a sound with an image (3AFC). Blind subjects used SS sounds and image reliefs.
Results: Naïve and trained subjects performed above chance on a majority of the simple and complex textures tested (15/16 sets), including a set of images of lines of different thickness, a set of circle patterns of different sizes, and a set of natural textures. Texture interfaces (two textures with different border geometries) were also tested; naïve and trained sighted subjects performed above chance on 7 of 8 image sets. Naïve blind subjects were tested on distinguishing lines of different thickness and circle patterns of different sizes, and performed above chance. The accuracy difference between trained and naïve sighted groups correlated weakly with image complexity (number of brightness levels and an edge-counting metric) and weakly inversely with image repetitiveness.
Discussion: Intuitiveness generated by intrinsic crossmodal mapping may be used to improve and shorten training procedures.
Meeting abstract presented at VSS 2013