Abstract
Behavioral and electrophysiological studies of bistable images have helped to trace the boundary between what perception owes to early visual processing and what cognition adds to it. They demonstrated that passive sensory processing can be actively modulated in a top-down manner. For example, knowledge and intentional control have been shown to alter percepts of the same stimulus. One reason this phenomenon occurs is a change in the allocation of attention within the visual display. How does this affect a task whose successful completion already requires constant re-allocation of attention? Here, we use the multiple-object tracking (MOT) paradigm to study how participants’ knowledge about stimulus behavior influences performance and to demonstrate that top-down processes affect sensory visual processing of dynamic stimuli. In MOT tasks, participants track and identify a specified set of target objects that move among identical-looking distractors. Using pairs of cartoon eyes as stimuli, we integrated a strong exogenous cue that works independently of top-down control. In a series of experiments, participants either tracked without further knowledge or were asked to identify the four conditions they had been informed about beforehand. The gaze cue was implemented by having the objects either constantly “stare” at or randomly “flirt” with a single other object (either a target or a distractor). Results: (1) perception shifts (measured as how often participants marked the gaze-cued object) were observed only when the eyes’ behavior was explicitly explained, (2) even when observers failed to identify the eyes’ behavior, the impact of the gaze cue remained strong, and (3) it is not tracking experience per se that enhances gaze-cue processing. These results not only show that cognitive processing modes determine visual perception in attention-demanding dynamic tasks, but also offer an approach to explaining existing, contradictory results in past research.
Meeting abstract presented at VSS 2015