August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Haptic shape guides visual search
Author Affiliations
  • Alexandra List
    Department of Psychology, Northwestern University
  • Lucica Iordanescu
    Department of Communication Sciences and Disorders, Northwestern University
  • Marcia Grabowecky
    Department of Psychology, Northwestern University
    Interdepartmental Neuroscience Program, Northwestern University
  • Satoru Suzuki
    Department of Psychology, Northwestern University
    Interdepartmental Neuroscience Program, Northwestern University
Journal of Vision August 2012, Vol.12, 1320. doi:
Alexandra List, Lucica Iordanescu, Marcia Grabowecky, Satoru Suzuki; Haptic shape guides visual search. Journal of Vision 2012;12(9):1320.

Crossmodal research has shown that information from one sensory modality can influence perceptual and attentional processes in other modalities. Here, we demonstrate a novel crossmodal interaction between haptics and vision, in which haptic shape information influences visual attention. In our study, participants manually explored (unseen) objects and searched for a visual target while we recorded their eye movements. The shape of the manually held object facilitated search for similarly shaped visual items, whether the visual targets were typically graspable in size (e.g., a cell phone, a badge) or not (e.g., a planet, a high-rise). This facilitation manifested as a reduction in both overall search times and initial saccade latencies when the haptic shape (e.g., a cone) was consistent with the visual target (e.g., a teepee, a party hat), compared to when it was inconsistent (e.g., a hockey puck, an orange). These haptic-visual facilitative effects occurred despite the fact that the manually held shapes were anti-predictive of the visual target's shape, suggesting that the influence was not due to expectation or bias. Additionally, when the haptic shape was consistent with a distracter (rather than the target) in the visual search array, initial saccades toward the target were disrupted. Together, these results suggest that this crossmodal influence is automatic and demonstrate a robust, shape-specific haptic capture of visual attention.

Meeting abstract presented at VSS 2012
