Abstract
Recently, it has been shown that humans can efficiently combine multi-modal information to estimate properties of their environment. In most of this research, however, the information to be processed was temporally static. We conducted two experiments to investigate how dynamic multi-modal information is combined. Participants estimated the amount of compressive deformation of a virtual cylinder through haptic cues alone, visual cues alone (uni-modal trials), or both haptic and visual cues (multi-modal trials). In some of the multi-modal trials, the amount or the timing of the visual and haptic deformations was inconsistent. The virtual surface of the cylinder was presented haptically to the index finger using a force-feedback device (PHANToM™). The visual stimulus was a cylinder-shaped white random-patch texture on a black background, without shading or stereo projection. The task was to identify the odd stimulus among three sequentially presented deformations (Experiment 1), or to identify which of two deformations was larger (Experiment 2). The threshold of multi-modal estimation was lower than that of uni-modal estimation when there was no inconsistency between modalities. In addition, in Experiment 1, in multi-modal trials in which participants could use only haptic information to identify the odd stimulus (because, owing to the inconsistency, its visual deformation was identical to that of the other stimuli), the estimation threshold was higher than in uni-modal haptic trials. In contrast, the threshold in multi-modal trials where only visual information could be used was almost the same as in uni-modal visual trials, suggesting a bias toward visual information. Experiment 2 showed that performance fell drastically when the timing of the deformations was inconsistent, even when the difference was only 125 ms. These results indicate that humans can efficiently combine not only static but also dynamic multi-modal information, and that geometric and temporal consistency between modalities is important for efficient combination.
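The notion of "efficient combination" used here is commonly interpreted in terms of the standard maximum-likelihood cue-combination model; the abstract does not state this model explicitly, so the following is an illustrative sketch under that assumption, with \(\hat{d}_V,\hat{d}_H\) denoting the visual and haptic deformation estimates and \(\sigma_V,\sigma_H\) their noise standard deviations.

\[
\hat{d}_{VH} = w_V\,\hat{d}_V + w_H\,\hat{d}_H, \qquad
w_V = \frac{\sigma_H^{2}}{\sigma_V^{2}+\sigma_H^{2}}, \qquad
w_H = \frac{\sigma_V^{2}}{\sigma_V^{2}+\sigma_H^{2}},
\]
\[
\sigma_{VH}^{2} = \frac{\sigma_V^{2}\,\sigma_H^{2}}{\sigma_V^{2}+\sigma_H^{2}} \;\le\; \min\!\left(\sigma_V^{2},\,\sigma_H^{2}\right).
\]

Because the combined variance is never larger than the smaller of the two uni-modal variances, this model predicts the lower multi-modal threshold observed when the visual and haptic deformations were geometrically and temporally consistent.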
This study was supported by PRESTO from JST, a Grant-in-Aid for Scientific Research, Ministry of Education, Science, and Culture of Japan, no. 16200020, and the 21st Century COE Program from MEXT (D-2 to Kyoto University).