Abstract
Vision loss poses challenges for people's mobility. Previous reports in the literature have found that echo-acoustics can be useful for obstacle avoidance, but these studies did not dissociate between obstacles at head and ground level, consider the use of other mobility methods such as the long cane in combination with echolocation, or examine the role played by echolocation expertise. Here we tested 7 blind echolocation experts, as well as 14 sighted and 7 blind people new to echolocation. The task was to use mouth-click-based echolocation, a long cane, or both to avoid an obstacle (a 60 × 60 cm polystyrene board) placed at either head or ground level. 3D movement and sound data were acquired simultaneously using a Vicon motion-capture system and head-worn microphones. We found that, for all participants, use of echolocation significantly decreased collisions and impact speed for obstacles at head level, but not at ground level. This may result from acoustic masking of ground obstacles by floor reflections. In contrast, the cane significantly decreased collisions with obstacles at ground level, but not at head level. The combined use of echolocation and cane resulted in the fewest collisions overall. The main difference between echolocation experts and other participants was that experts walked significantly faster in all conditions; in fact, experts' walking speed was not significantly different from that of sighted participants using vision. Further analyses suggest that echolocation experts, compared to people new to echolocation, tended to make more head movements while clicking. These results confirm previous reports showing that echolocation improves mobility for people who are blind. Importantly, they also highlight that care must be taken when instructing potential users, given the low efficacy of echolocation for detecting ground-level obstacles. The results also represent a first attempt at characterizing dynamic human echolocation through synchronous sound and movement data.
Meeting abstract presented at VSS 2017