Takahisa Kishino, Roberto Marchisio, Ruggero Micheletto; Cross-modal codification of images with auditory stimuli: a language for the visually impaired? Journal of Vision 2017;17(10):1356. doi: 10.1167/17.10.1356.
What is the perception of an image? Is it something strictly tied to the physical perception of light, or is it a more general process in which object characteristics can be deduced from a wide sensorial spectrum of information and then organized into something that can be understood as shape? Here, we describe a methodology to enable cognition of visual images in this broader sense, through cross-modal stimulation via the auditory channel. An original algorithm for converting two-dimensional images to sounds was established and tested on several subjects. Our results show that subjects were able to discriminate, with 95% precision, different sounds corresponding to different test geometric shapes. Moreover, after brief learning sessions on simple images, subjects were able to recognize, among a group of 16 complex and never-trained images, a single target by hearing its acoustical counterpart. The rate of recognition was found to depend on image characteristics; in 90% of cases, subjects did better than choosing at random. This study contributes to the understanding of cross-modal visual perception of simple images and shapes. It also contributes to the realization of systems that use acoustical signals to help visually impaired persons recognize objects and improve navigation.
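The abstract does not specify the image-to-sound conversion algorithm. As an illustration only, the sketch below shows one common family of such mappings (a column-scan sonification in the style of sensory-substitution systems): image columns map to time, rows to frequency, and pixel brightness to amplitude. The function name, parameters, and mapping are hypothetical and are not taken from the authors' method.

```python
import math

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Sonify a 2D list of brightness values in [0, 1].

    Hypothetical column-scan mapping (NOT the authors' algorithm):
    each column becomes a short time slice; each row contributes a
    sine tone whose amplitude is the pixel brightness.
    """
    rows = len(image)
    cols = len(image[0])
    samples_per_col = int(duration * sample_rate / cols)
    # Top rows get high frequencies, bottom rows low, spectrogram-style.
    freqs = [f_max - (f_max - f_min) * r / max(rows - 1, 1)
             for r in range(rows)]
    wave = []
    for c in range(cols):
        for n in range(samples_per_col):
            t = n / sample_rate
            # Brightness-weighted sum of tones for this column.
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            wave.append(s)
    # Normalize to [-1, 1] so the output is playable as PCM audio.
    peak = max(abs(s) for s in wave) or 1.0
    return [s / peak for s in wave]
```

Under this mapping, a diagonal line produces a frequency sweep while a horizontal line produces a steady tone, which is the kind of shape-to-sound distinction the trained subjects would have learned to discriminate.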
Meeting abstract presented at VSS 2017