September 2005, Volume 5, Issue 8
Vision Sciences Society Annual Meeting Abstract
Combining multi-modal information of a deformation of an object
Author Affiliations
  • Kohske Takahashi
    Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
  • Jun Saiki
    PRESTO, Japan Science and Technology Agency, and Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
Journal of Vision September 2005, Vol.5, 753. doi:10.1167/5.8.753
It has recently been shown that humans can efficiently combine multi-modal information to estimate properties of their environment. In most previous research, however, the information to be processed was temporally static. We conducted two experiments to investigate how dynamic multi-modal information is combined. Participants estimated the amount of compressive deformation of a virtual cylinder through haptic cues alone, visual cues alone (uni-modal trials), or both (multi-modal trials). In some multi-modal trials, the amount or the timing of the visual and haptic deformations was inconsistent. The virtual surface of the cylinder was presented haptically to the index finger using a force-feedback device (PHANToM). The visual stimulus was a cylinder-shaped random-patch white texture on a black background, presented without shading or stereo projection. The task was to identify the odd stimulus among three sequentially presented deformations (Experiment 1) or to identify the larger of two deformations (Experiment 2). When there was no inconsistency between modalities, the threshold of multi-modal estimation was lower than that of uni-modal estimation. In Experiment 1, when inconsistency made the visual deformation of the odd stimulus identical to that of the other stimuli, so that only haptic information could identify it, the estimation threshold in multi-modal trials was higher than in uni-modal haptic trials. In contrast, the threshold in multi-modal trials where only visual information could be used was almost the same as in uni-modal visual trials. This suggests a bias toward visual information. Experiment 2 showed that performance fell drastically when the timing of the deformations was inconsistent, even when the difference was only 125 ms.
In conclusion, humans can efficiently combine not only static but also dynamic multi-modal information, and geometric and temporal consistency between modalities is important for efficient combination.
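The multi-modal threshold advantage described above is the signature of statistically optimal (maximum-likelihood) cue combination, a framework the abstract alludes to but does not spell out. A minimal sketch, assuming independent, unbiased visual and haptic estimates (an assumption added here for context, not stated by the authors):

```latex
% If the visual and haptic deformation estimates are unbiased with
% variances \sigma_V^2 and \sigma_H^2, the maximum-likelihood combined
% estimate weights each cue by its inverse variance and has variance
\sigma_{VH}^2 \;=\; \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2}
% which is smaller than either uni-modal variance, predicting a lower
% discrimination threshold in consistent multi-modal trials.
```

Under this account, introducing geometric or temporal inconsistency between the cues violates the assumption that both estimates refer to the same event, which is one way to interpret the breakdown of the multi-modal advantage reported here.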

Takahashi, K., & Saiki, J. (2005). Combining multi-modal information of a deformation of an object [Abstract]. Journal of Vision, 5(8):753, 753a, doi:10.1167/5.8.753.
 This study was supported by PRESTO from JST, a Grant-in-Aid for Scientific Research, Ministry of Education, Science, and Culture of Japan, no. 16200020, and the 21st Century COE Program from MEXT (D-2 to Kyoto University).
