Abstract
Echolocation is an active sensing strategy that some blind people use to detect, discriminate, and localize objects in their surroundings. Trained echolocators emit tongue clicks and may vary their clicking pattern dynamically to improve perception under challenging circumstances. However, it is unknown how echoacoustic information is integrated across individual samples (clicks) and how individual echoes are represented neurally. To address these questions, we recorded the brain activity of blind and sighted individuals with EEG while they performed an echoacoustic localization task. On each trial, subjects listened to a train of 2, 5, 8, or 11 synthesized mouth clicks together with spatialized echoes from a reflecting object located at azimuths of ±5° to ±25° relative to the midsagittal plane. The task was to report whether the echo reflector was to the left or right of center. We hypothesized that the number of clicks in each trial, in addition to the echo azimuth, would modulate performance. The blind expert performed at over 93% correct, with lateralization thresholds decreasing linearly from 2- to 8-click trials; sighted controls performed at chance, with no effect of echo eccentricity or click count, although they easily lateralized the echoes when the emitted click was removed. Left vs. right echo location was reliably decoded from the EEG response in both groups after only one click. In the sighted group, perceptual reports were decoded more reliably from the last two clicks of a trial than from the first two, suggesting a cumulative perceptual decision-making process independent of the stimulus representation. In proficient blind observers, successive click-echo samples linearly sharpened echoacoustic representations until saturation; in novice sighted controls, the spatial information in the EEG response was not consciously accessible. These results suggest that echolocation expertise relies on extracting echoes from concurrent masking sounds and integrating them across samples.