Abstract
The visual environment is highly structured in terms of how objects appear with respect to each other in time and space. Studies of statistical learning have sought to investigate how such regularities are acquired, but they often employ simplified displays that lack the complexity of natural input. For example, in temporal statistical learning, a sequence containing regularities is typically presented one object at a time, whereas dynamic natural input contains multiple objects from moment to moment, such that regularities could exist in all possible transitions. In such situations, what determines which regularities we learn? The present study tested the hypothesis that an initial, even idiosyncratic bias in eye fixation could tip the scales. During exposure, participants viewed sequences in which a scene (A) was followed by two objects (B and C). From this, they could learn to expect B and/or C after A. We predicted that whichever object they first fixated in the two-object array would be more strongly bound to A. At test in Experiment 1, participants viewed sequences in which an exposure scene (A) was followed by a single object (B or C), and they had to categorize the object as quickly as possible. Faster response times (RTs) were taken as evidence that the scene-object pair had been learned and that the scene set up an expectation of the object. Consistent with the hypothesis, whichever object tended to be fixated first during exposure was categorized more quickly. To examine learning in another way, Experiment 2 used a familiarity test in which participants discriminated between two sequences: a scene-object pair from exposure (e.g., A-B) and a "foil" sequence (e.g., A-D). Discrimination was reliable only for pairs containing the initially fixated object. These findings provide evidence that attention and eye movements constrain which of the many possible regularities in the world we learn.
Meeting abstract presented at VSS 2016