Abstract
PURPOSE
Little is known about how learning time and accuracy compare between visual and haptic exploration of spatial layouts. In this study, we compared the time to learn and perfectly reproduce four small-scale layouts (floor plans) in four conditions: visual (computer-based) and haptic (Lego-based) environments, each presented with a global or reduced view depth. In the global conditions, the complete layout (corridor structure) was available for inspection by vision or touch, whereas in the reduced-view conditions, participants had to explore the layouts section by section.
Understanding how environmental learning differs as a function of modality and view depth will aid our development of virtual models for pre-journey navigation through buildings by blind and low-vision people.
METHODS
Eight normally sighted subjects were trained and tested in all four of the complex environments. Their task was to learn each layout (42–60 corridor segments), including the locations of five targets, and then reproduce it on a Lego grid.
RESULTS
The table below shows mean learning times in minutes, with standard errors in parentheses. There were significant main effects of modality and view depth (p < .001), and a significant interaction: the increase in learning time from global to reduced view was much larger in the visual condition (8.08 min) than in the haptic condition (4.20 min).
Condition       Visual          Haptic
Global view     5.56 (1.28)     16.87 (2.90)
Reduced view    13.64 (1.85)    21.07 (1.71)
CONCLUSIONS
The results show that it is faster to learn an environment visually than by touch and that vision benefits most from global viewing. Interestingly, view depth had a much smaller effect on learning time in the haptic condition, suggesting that touch is slower than vision and less affected by view restriction because of its smaller effective viewing “window”. We will also discuss how this performance compares with low-vision navigation and how well these learned layouts transfer to navigation in real buildings.
(Supported by NIH grant EY02857 and NIH training grant 5T32-EY07133)