Abstract
In traditional visual search, search time is strongly affected by the number of distractors when the target is defined by a conjunction of features, but not when it is defined by a single feature. In Linguistically-Mediated Visual Search (LMVS), it has been demonstrated that search time in conjunction search can be made less dependent on the number of distractors by incremental spoken delivery of target features (e.g., "Is there a red vertical?"), but not when the audio cue completely precedes the visual display. The former condition is called Audio/Visual-Concurrent (A/V-Concurrent) and the latter Auditory-Preceding (A-Preceding). However, previous research found that the efficiency of conjunction search improved for the majority of participants, but not for all of them. Here we used the paradigm of Spivey et al. (2001) and additionally recorded eye movements. Thirty observers participated in a 96-trial experiment with equal numbers of target-present and target-absent trials and set sizes of 5, 10, 15, and 20. Both the A-Preceding and A/V-Concurrent conditions used speech files with "Is there a..." spliced onto the beginning of each of the four target queries, formed from two descriptive adjectives (color: "red" or "green"; orientation: "vertical" or "horizontal"). Seventeen participants showed the expected LMVS effect, with slope_Preceding > slope_Concurrent (13.2, 11.5, p=0.5). The analysis of eye-movement reaction time (EM-RT), measured as the latency of the first fixation close to the target, showed no significant difference between the two conditions for this group, whereas slope_EM-RT_Concurrent > slope_EM-RT_Preceding (34.2, 12.6, p<.05) for the participants not showing the LMVS effect. We demonstrate that individual differences in LMVS effects can be explained by analysis of EM-RT, which indicates that some participants may not integrate auditory and visual information online to improve the efficiency of conjunction search.
Meeting abstract presented at VSS 2014