Abstract
PURPOSE Are verbal descriptions as effective as visual input for learning spatial layouts? We asked whether people can learn building layouts by exploring computer-based virtual displays that use synthetic speech to describe layout geometry. If learning with these verbal displays transfers to efficient navigation in real buildings, such displays could be incorporated into speech-based indoor navigation systems for blind people. We also compared learning performance for spatial layouts conveyed via verbal and visual displays; this comparison addresses the equivalence of the cognitive maps developed from verbal and visual spatial learning.

METHODS 16 subjects in the visual experiment and 19 subjects in the verbal experiment were presented with either a visual or a verbal computer-based virtual building layout, matched for the amount of available geometric information. Subjects used the keyboard to explore the layout; each key press updated the user's location and either visually displayed or verbally described the surrounding layout geometry (see the illustrative sketch following the abstract). The task was to explore the layouts and find 4 target locations. The amount of training was matched between conditions, and learning was evaluated with a transfer test in which blindfolded subjects were taken to the corresponding real floor and asked to walk routes between target locations.

RESULTS Target localization accuracy in the transfer test was significantly better (p<0.0001) for visual learning (M=85%, SE=5.21%) than for verbal learning (M=50.3%, SE=8.08%). Both were significantly above chance (∼2.5%, defined as 1 divided by the number of possible target locations). For correctly localized targets, efficient paths were chosen after both visual and verbal learning: the mean ratios of shortest path length to actual path length were 0.95 (verbal) and 0.97 (visual); these measures are written out below.

CONCLUSIONS Our results show that sighted subjects can learn building layouts from virtual verbal displays, but that this spatial learning is inferior to visual learning of the same environments.
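To make the virtual display mechanics concrete, below is a minimal sketch of a keyboard-driven verbal display, assuming a toy grid-based floor plan and using print() as a stand-in for a synthetic speech engine. The map, key bindings, and description wording are illustrative assumptions, not the display used in the experiments, which conveyed matched geometric information in either modality; only the verbal branch is sketched here.

    # Minimal sketch of a keyboard-driven virtual verbal display.
    # The floor plan, key bindings, and phrasing are hypothetical.

    # 0 = wall, 1 = corridor, 2 = target location (toy 5x5 floor plan)
    FLOOR = [
        [0, 0, 0, 0, 0],
        [0, 1, 1, 2, 0],
        [0, 1, 0, 1, 0],
        [0, 2, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]

    MOVES = {"w": (-1, 0), "s": (1, 0), "a": (0, -1), "d": (0, 1)}
    DIRECTIONS = {"north": (-1, 0), "south": (1, 0),
                  "west": (0, -1), "east": (0, 1)}

    def speak(text):
        # Stand-in for a text-to-speech call to a synthetic speech engine.
        print("TTS:", text)

    def describe(r, c):
        # Verbally describe the layout geometry around the current location.
        open_dirs = [name for name, (dr, dc) in DIRECTIONS.items()
                     if FLOOR[r + dr][c + dc] != 0]
        msg = "Corridors open to the " + ", ".join(open_dirs) + "."
        if FLOOR[r][c] == 2:
            msg += " You are at a target location."
        return msg

    def explore():
        r, c = 1, 1  # starting position inside the layout
        speak(describe(r, c))
        while True:
            key = input("Move (w/a/s/d, q to quit): ").strip().lower()
            if key == "q":
                break
            if key not in MOVES:
                speak("Unknown key.")
                continue
            dr, dc = MOVES[key]
            if FLOOR[r + dr][c + dc] != 0:
                r, c = r + dr, c + dc   # key press updates the location
                speak(describe(r, c))   # and triggers a verbal description
            else:
                speak("You bumped into a wall.")

    if __name__ == "__main__":
        explore()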
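For reference, the two quantitative measures reported under RESULTS can be written out explicitly. The number of possible target locations, N = 40, is inferred from the reported ∼2.5% chance level and is an assumption; the abstract does not state it directly.

    % Chance accuracy: one correct target out of N candidate locations.
    % N = 40 is inferred from the reported ~2.5% chance level (assumption).
    \[
      P_{\text{chance}} = \frac{1}{N} = \frac{1}{40} = 0.025 \approx 2.5\%
    \]
    % Path efficiency: shortest possible path length over path length walked;
    % a value of 1.0 indicates an optimal route.
    \[
      E = \frac{L_{\text{shortest}}}{L_{\text{walked}}}, \qquad
      E_{\text{verbal}} = 0.95, \qquad E_{\text{visual}} = 0.97
    \]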
Acknowledgments: This research was supported by NIDRR grant H133A011903, NIH training grant 5T32 EY07133 and NIH grant EY-02857.