August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Using Eye Movements to Investigate Individual Differences in Linguistically Mediated Visual Search
Author Affiliations
  • Sankalita Mandal
    University of Kaiserslautern, Germany
  • Tandra Ghose
    University of Kaiserslautern, Germany
  • Yannik T. H. Schelske
    University of Kaiserslautern, Germany
  • Eric Chiu
    University of California, Merced, USA
  • Michael J. Spivey
    University of California, Merced, USA
Journal of Vision August 2014, Vol. 14, 1202.

In traditional visual search, search time is strongly affected by the number of distractors when the target is defined by a conjunction of features, but not when it is defined by a single feature. In Linguistically Mediated Visual Search (LMVS) it has been demonstrated that search time in conjunction search can be made less dependent on the number of distractors by incremental spoken delivery of the target features (e.g., "Is there a red vertical?"), but not when the audio cue completely precedes the visual display. The former condition is called Audio/Visual-Concurrent (A/V-Concurrent) and the latter Auditory-Preceding (A-Preceding). However, previous research found that the efficiency of conjunction search improved for the majority of participants, but not for all of them. Here we used the paradigm of Spivey et al. (2001) and additionally recorded eye movements. 30 observers participated in a 96-trial experiment with equal numbers of target-present and target-absent trials and set sizes of 5, 10, 15, and 20. Both the A-Preceding and A/V-Concurrent conditions used speech files with "Is there a..." spliced onto the beginning of each of the four target queries, formed from two descriptive adjectives (color: "red" or "green"; orientation: "vertical" or "horizontal"). 17 participants showed the expected LMVS effect with slope_Preceding > slope_Concurrent (13.2 vs. 11.5, p=0.5). Analysis of eye-movement reaction time (EM-RT), measured as the time of the first fixation close to the target, showed no significant difference between the two conditions for this group, but slope_EM-RT_Concurrent > slope_EM-RT_Preceding (34.2 vs. 12.6, p<.05) for participants not showing the LMVS effect. We demonstrate that individual differences in LMVS effects can be explained by analysis of EM-RT, which indicates that some participants may not integrate audio and visual information online to improve the efficiency of conjunction search.
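The slope comparisons above follow the standard visual-search analysis: regress reaction time on set size within each condition and compare the resulting slopes (ms per item). A minimal sketch of that computation is given below; the reaction-time values are illustrative placeholders, not data from the study.

```python
# Sketch of a search-slope analysis: ordinary least-squares slope of
# reaction time (ms) against display set size, computed per condition.
# RT values below are hypothetical; only the set sizes come from the abstract.

def search_slope(set_sizes, rts):
    """Least-squares slope of RT (ms) against set size (ms per item)."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

set_sizes = [5, 10, 15, 20]             # set sizes used in the experiment
rt_preceding = [900, 1070, 1230, 1400]  # illustrative mean RTs, A-Preceding
rt_concurrent = [950, 1010, 1065, 1120] # illustrative mean RTs, A/V-Concurrent

slope_p = search_slope(set_sizes, rt_preceding)
slope_c = search_slope(set_sizes, rt_concurrent)
# A shallower slope in the A/V-Concurrent condition (slope_c < slope_p)
# would indicate the LMVS effect: search time less dependent on distractors.
```

The same slope computation applies to the EM-RT measure, substituting the time of the first fixation close to the target for the manual reaction time.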

Meeting abstract presented at VSS 2014

