September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Cross-modal codification of images with auditory stimuli: a language for the visually impaired?
Author Affiliations
  • Takahisa Kishino
    Graduate School of Nanobioscience, Yokohama City University, Japan
  • Roberto Marchisio
    Hicare Research Srl, Environment Park, via Livorno 60, Torino, Italy
  • Ruggero Micheletto
    Graduate School of Nanobioscience, Yokohama City University, Japan
Journal of Vision August 2017, Vol.17, 1356. doi:10.1167/17.10.1356
Abstract

What is the perception of an image? Is it something strictly tied to the physical perception of light, or is it a more general process in which object characteristics are deduced from a wide spectrum of sensory information and then organized into something that can be understood as shape? Here, we describe a methodology for realizing visual image cognition in this broader sense, through cross-modal stimulation via the auditory channel. An original algorithm for converting two-dimensional images into sounds was established and tested on several subjects. Our results show that subjects were able to discriminate, with a precision of 95%, different sounds corresponding to different test geometric shapes. Moreover, after brief learning sessions on simple images, subjects were able to recognize a single target among a group of 16 complex, never-trained images by hearing its acoustic counterpart. The rate of recognition was found to depend on image characteristics; in 90% of cases, subjects did better than chance. This study contributes to the understanding of cross-modal visual perception of simple images and shapes. It also contributes to the realization of systems that use acoustic signals to help visually impaired persons recognize objects and improve navigation.

Meeting abstract presented at VSS 2017
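
The abstract does not specify the image-to-sound conversion algorithm. As a hedged illustration only, the sketch below implements a generic left-to-right scan sonification (in the spirit of systems such as The vOICe): image columns map to time, rows map to pitch, and each lit pixel contributes a sine tone. The function name, parameter choices, and frequency range are assumptions for illustration, not the authors' method.

```python
import math

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Sonify a 2D binary image by scanning columns left to right.

    NOTE: a hypothetical baseline sonification, NOT the (unspecified)
    algorithm from the abstract. Columns map to time slices, rows to
    frequency (top row = highest pitch). Returns a list of samples
    in [-1, 1].
    """
    rows = len(image)
    cols = len(image[0])
    n_samples = int(duration * sample_rate)
    samples_per_col = n_samples // cols
    signal = []
    for c in range(cols):
        # Frequencies of the active pixels in this column.
        freqs = [f_min + (f_max - f_min) * (rows - 1 - r) / max(rows - 1, 1)
                 for r in range(rows) if image[r][c]]
        for n in range(samples_per_col):
            t = (c * samples_per_col + n) / sample_rate
            s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
            signal.append(s / max(len(freqs), 1))  # keep within [-1, 1]
    return signal

# Example: a main-diagonal line (top-left to bottom-right) yields a
# descending pitch sweep under this mapping.
diag = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
audio = image_to_sound(diag, duration=0.5, sample_rate=4000,
                       f_min=200.0, f_max=1500.0)
```

Under this kind of scheme, distinct geometric shapes produce distinct time-frequency patterns, which is consistent with the abstract's report that subjects could discriminate sounds corresponding to different test shapes.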
