Abstract
Recent eye-tracking research has demonstrated that visual perception plays an important role in on-line spoken language comprehension. To test for the inverse — an influence of language on visual processing — we modified the basic visual search task by introducing spoken linguistic input. In classic visual search tasks, targets defined by only one feature appear to “pop-out” regardless of the number of distractors, suggesting a parallel search process. In contrast, when the target is defined by a conjunction of features, search time increases linearly with the number of distractors in the display, suggesting a serial search process. However, we found that when a conjunction target was identified by a spoken query, e.g., “Is there a red vertical?”, delivered concurrently with the visual display, the effect of set size on search time was dramatically reduced (Spivey, Tyler, Eberhard, & Tanenhaus, 2001). It appears that some immediate processing of the target's features may be possible upon hearing the first adjective. Moreover, featural processing triggered by the second adjective may be limited to the set of objects that exhibit the first adjective's feature. This result was compared to a control condition in which the spoken delivery of target identity (using the exact same speech files) entirely preceded display onset, and another control condition in which target identity was delivered visually before each trial. In both control conditions, steep linear search functions were obtained. In our most recent experiments, we have replicated these effects with triple-conjunction displays. For visual search in particular, these results suggest that incremental linguistic processing of the spoken target features may allow visual search to process at least certain portions of a conjunction display in a parallel fashion. For vision in general, the results point to a more fluid interaction between visual and linguistic processes than is typically acknowledged.