Abstract
There are countless candidate 'features' useful for the perceptual discrimination of three-dimensional shape. Human vision and touch use both modality-specific and cross-modal information to accomplish this task. For example, only vision can make diagnostic use of shading, color, and optical texture, while only touch can detect temperature, vibration, and proprioceptive information such as joint angle. Some characteristics, such as an object's physical texture, provide both visual appearance and tactile roughness information. When attempting to determine the 3D shape of an object, its structural geometric information underlies most, if not all, of the features used by vision and touch, individually or in concert. It remains an open question what specific geometric information is essential or useful for discrimination tasks involving vision, touch, or their interaction. This research investigates the use of statistical differential geometric information in detection and discrimination tasks, both within and across perceptual modalities. We use eye- and hand-tracking to determine which parts of an object our subjects explore while making shape discrimination and differentiation decisions, and we correlate these high-exploration regions with the objects' underlying differential geometric structure. We find that object regions with high curvature contrast are useful across both modalities, as they define 'sharp' linear structures. Similarly, areas with high relative curvedness provide useful point landmarks. We further show that some geometric structures are more useful within one modality than another; as a result, the worse-performing modality limits cross-modal use of this information, but simultaneous presentation is facilitative. Finally, the statistical distribution of differential geometric structures serves to define diagnostic 'features' available to either touch or vision. The relative occurrence of features and their magnitude determine their usefulness within and across modalities.
Meeting abstract presented at VSS 2015