Abstract
With over 285 million visually impaired people worldwide, there is growing interest in sensory substitution -- a non-invasive technology that substitutes information from one sensory modality (e.g., vision) with another (e.g., touch). Previous work has focused primarily on how blind or vision-impaired people discriminate between different types of objects using sensory substitution devices (SSDs). A fraction of this work has explored whether, and to what extent, SSDs support precise localisation of objects in space; these studies report target location errors of around 8-14 cm. Here we investigated the object localisation ability of visually impaired participants using a visual-to-auditory (the vOICe) and a visual-to-tactile (custom-built) SSD. In three separate conditions, participants had to point to a white disk presented against a black background on a touchscreen. In the first task the SSD conveyed information only about the location of the disk; in the second task the participant's hand was displayed in addition to the disk; and in the third task a white reference border marking the monitor frame was also added to the display. We found that participants were slightly more accurate overall than in previous studies (< 6 cm error); however, localisation accuracy did not significantly differ across the three conditions. Participants' responses were slower in the "hand" and "reference" conditions, suggesting that the additional information acted as a distractor, rendering the task more difficult. These results suggest that processing of otherwise visual information via the auditory and tactile modalities is severely limited, especially when multiple objects are presented in parallel, and that filtering for relevant information is critical to enhancing the performance of future SSDs.
Meeting abstract presented at VSS 2016