August 2010
Volume 10, Issue 7
Meeting Abstract  |   August 2010
Human Echolocation I
Author Affiliations
  • Lore Thaler
    Department of Psychology, The University of Western Ontario
  • Stephen R. Arnott
    Rotman Research Institute, Baycrest Centre
  • Melvyn A. Goodale
    Department of Psychology, The University of Western Ontario
Journal of Vision August 2010, Vol.10, 1050. doi:10.1167/10.7.1050
Abstract

It is common knowledge that animals such as bats and dolphins use echolocation to navigate the environment and/or to locate prey. It is less well known, however, that humans are capable of using echolocation as well. Here we present behavioral and fMRI data from two blind individuals (aged 27 and 45 years) who produce mouth-clicks and use click-based echolocation to go about their everyday activities, which include walking through crowded streets in unknown environments, mountain biking, and other spatially demanding activities. Behavioral testing under regular conditions (i.e., conditions in which each person actively produced clicks) showed that both individuals could resolve the angular position of an object placed in front of them with high accuracy (∼2° of auditory angle at a distance of 1.5 meters). This extremely high level of performance is remarkable, but not unexpected, given what they are capable of doing in everyday life. To validate the stimuli we planned to use in fMRI conditions, we took in-ear audio recordings from each individual during active echolocation and played those recordings back using MRI-compatible earphones. In these conditions, both individuals were still able to use echolocation to determine with considerable accuracy the angular position, shape (concave vs. flat), motion (stationary vs. moving), and identity (car vs. tree vs. streetlight) of objects. Importantly, during the recordings, none of the objects emitted any sound but simply offered a sound-reflecting surface. We conclude that echolocation, during both active production and passive listening, enables our two participants to perform tasks that are typically considered impossible without vision. To investigate the neural substrates of their echolocation abilities, we employed our passive listening paradigm in combination with fMRI (see Abstract ‘Human Echolocation II’).

Thaler, L., Arnott, S. R., & Goodale, M. A. (2010). Human Echolocation I [Abstract]. Journal of Vision, 10(7):1050, 1050a, http://www.journalofvision.org/content/10/7/1050, doi:10.1167/10.7.1050.
Footnotes
 This research was supported by a grant to MAG from the Canadian Institutes of Health Research.
© 2010 ARVO