Abstract
Recent research has explored the use of active echolocation by blind individuals who, by generating mouth-clicks, elicit echoes and use them to perceive and interact with their surroundings. In prior work we showed that expert practitioners can distinguish the positions of objects separated by as little as ~1.5°, approximately the threshold of visual letter recognition at 35° retinal eccentricity. They can also echolocate household-sized objects and then distinguish them haptically from a distractor with significantly above-chance accuracy (~60%). Here we investigated whether the spatial resolution of crossmodal echo-haptic object discrimination is similar to that measured for localization. We found that blindfolded sighted participants tested on the same crossmodal match-to-sample design performed similarly, but with greater inter-individual variability. Performance was similar for common household objects and for novel (Lego) objects of arbitrary shape. This suggests that some coarse object information a) is available to both expert blind and novice sighted echolocators, b) transfers from the auditory to the haptic modality, and c) does not depend on prior object familiarity, and d) that extracting finer object detail may require a larger angular size than was subtended by our test objects. Thus, we repeated the match-to-sample experiments using stimuli enlarged by 50% along each dimension. Preliminary results do not show improved performance with the larger objects; providing feedback after each trial in future sessions may improve accuracy. Next, we aimed to estimate directly the equivalent visual resolution of echoic object perception. In a pilot experiment, sighted participants examined a target object visually at 35° eccentricity and subsequently identified it haptically. Performance was ~85%, suggesting that haptic recognition is better informed by visual object information at 35° eccentricity than by object echoes at the scales we tested. Manipulating visual blur to equate visual and echoic performance will reveal more precisely the spatial resolution of echo-based object perception.