Abstract
Our visual and haptic perceptual systems are responsible for creating our mental representations of 3-D shapes. However, it has been shown that the two systems do not always work congruently: some results suggest an advantage for the visual system in certain tasks, while others suggest that the haptic system may contribute more useful information. The specific nature of these discrepancies, especially with respect to complex 3-D shape perception, remains unclear. Past studies have used geometrically complex but statistically ambiguous objects as stimuli, while other studies have used well-determined yet geometrically simple objects. This study attempts to bridge these two stimulus categories. Complex, natural-appearing, noisy 3-D stimuli were statistically specified in the Fourier domain and manufactured using a 3-D printer. A series of paired-comparison experiments examined observers' uni-modal (visual-visual and haptic-haptic) and cross-modal (visual-haptic) perceptual abilities. Performance in the two uni-modal conditions was similar, and uni-modal presentation fared better than cross-modal presentation. In addition, the spatial frequency of object features affected performance differentially across the range used in this experiment. When observers grouped the stimuli visually, the statistical nature of these features explained the groupings, yet no such patterns emerged when the stimuli were grouped haptically. The existence of non-universal (i.e., modality-specific) representations would explain the poor cross-modal performance. Our findings suggest that either each system creates a unique representation or the systems utilize a common representation, but each in a different fashion. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ substantially.