Farley Norman, Sydney P Wheeler, Lauren E Pedersen; Haptic-visual crossmodal shape matching. Journal of Vision 2019;19(10):198b. doi: https://doi.org/10.1167/19.10.198b.
A set of two experiments evaluated the crossmodal perception of solid shape. Sixty-six total participants (mean age = 21.2 years) haptically explored a single randomly selected object on each trial and then indicated which of 12 visible objects possessed the same shape. Three different types of objects were used, two of which possessed natural shapes (bell peppers & sweet potatoes: Capsicum annuum and Ipomoea batatas, respectively), while the third object type was a set of sinusoidally-modulated spheres (SIMS; see Norman, Todd, & Phillips, 1995). Each object was haptically explored with both hands for 7 seconds. Even though the particular object shapes within each object type were mathematically distinct and unique, the participants’ crossmodal matching accuracies varied substantially (F(2, 63) = 128.6, p < .000001, partial eta squared = .80) across the object types (78.7, 60.9, & 18.6 percent correct for sweet potatoes, bell peppers, and sinusoidally-modulated spheres, respectively). The naturally-shaped objects (bell peppers & sweet potatoes) were much more identifiable to vision and haptics because their distributions of distinctly shaped surface regions (areas of differing Gaussian curvature; e.g., convex or concave hemispherical regions, saddle-shaped regions, cylindrical regions) were heterogeneous. In contrast, the randomly-shaped SIMS were substantially less identifiable because their distributions of distinctly shaped surface regions were much more homogeneous. The results of the current study document which variations in surface shape produce objects that are highly recognizable to human vision and haptics.
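To make the SIMS stimulus class concrete, the sketch below samples points on a sphere whose radius is perturbed sinusoidally in the two spherical angles. The abstract does not give the exact modulation used by Norman, Todd, and Phillips (1995), so the particular modulation function, amplitude, and frequency here are illustrative assumptions, not the original construction.

```python
import numpy as np

def sims_surface(n_theta=64, n_phi=128, base_radius=1.0,
                 amplitude=0.15, frequency=4):
    """Sample points on a sinusoidally modulated sphere (SIMS-like).

    Hypothetical modulation: the radius is perturbed by a product of
    sinusoids in the polar and azimuthal angles. The real SIMS objects
    may use a different (e.g., randomized multi-component) modulation.
    """
    theta = np.linspace(0.0, np.pi, n_theta)       # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)     # azimuthal angle
    T, P = np.meshgrid(theta, phi, indexing="ij")

    # Radius = base sphere plus a small sinusoidal perturbation.
    r = base_radius * (1.0 + amplitude * np.sin(frequency * T)
                                       * np.sin(frequency * P))

    # Convert spherical coordinates to Cartesian surface points.
    x = r * np.sin(T) * np.cos(P)
    y = r * np.sin(T) * np.sin(P)
    z = r * np.cos(T)
    return x, y, z, r

x, y, z, r = sims_surface()
```

Because the perturbation is a bounded sinusoid, every sampled radius stays within `amplitude` of the base sphere, which is consistent with the abstract's point that such objects have relatively homogeneous distributions of surface curvature compared with natural shapes.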