Abstract
Visual statistical learning (VSL) occurs when stimuli consistently co-occur in space or time. Following such exposure, subjects recognize these associations, even in the absence of awareness. The roles of stimulus similarity, categories, and tasks in shaping VSL are poorly understood and understudied. In previous work from our lab (Vickery, Park, Gupta & Berryhill, 2018), we found that subjects learned same-category pairings better than different-category pairings during exposure to temporal VSL streams containing such contingencies. This effect occurred only when the task was to categorize stimuli. It was not clear, however, whether visual similarity or categorization played the predominant role. In the current work, participants saw a stream of either fractal images (Experiment 1) or face and scene images (Experiment 2). The stream was composed of AB pairs of stimuli, such that image A always preceded image B within the stream. Subjects were instructed to learn arbitrary group mappings, with half of the images associated with one group (‘z’ key response) and the other half with the other group (‘m’ key response). Half of the pairs were within-group and the other half were between-group. In Experiment 1, subjects showed much greater recognition for within- than between-group pairings (p < .01), even though similarity was equalized by random assignment of fractal images to pairings and groups. Experiment 2 replicated this effect (p < .001). In addition, Experiment 2 pairs were composed equally of same- and different-natural-category (face or scene) images, equally divided between same and different arbitrary groupings (‘z’ and ‘m’ groups). Natural categories still played a role (p < .001), with better recognition for within-category pairings and no interaction between grouping and category. Our results strongly suggest that both arbitrary groupings and natural categories play important roles in determining the strength of VSL.
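As an illustrative sketch only (not the authors' actual stimulus code), the following Python snippet shows one way a structured stream of this kind could be constructed: images are randomly split between the ‘z’ and ‘m’ groups, AB pairs are formed so that half are within-group and half between-group, and the exposure stream always presents A immediately before B. All specific values (number of images, number of pairs, repetitions) are assumptions for illustration.

```python
# Hypothetical sketch of the AB-pair stream construction described above.
# Counts and parameters are illustrative assumptions, not the study's values.
import random

N_IMAGES = 16    # assumed number of unique images
N_REPEATS = 24   # assumed number of times each pair appears in the stream

# Randomly assign half of the images to the 'z' group and half to 'm',
# so that visual similarity is equalized across groups by chance.
images = list(range(N_IMAGES))
random.shuffle(images)
groups = {img: ('z' if i < N_IMAGES // 2 else 'm') for i, img in enumerate(images)}

z_imgs = [img for img in images if groups[img] == 'z']
m_imgs = [img for img in images if groups[img] == 'm']

# Form AB pairs: half within-group, half between-group.
within_pairs = [(z_imgs[0], z_imgs[1]), (z_imgs[2], z_imgs[3]),
                (m_imgs[0], m_imgs[1]), (m_imgs[2], m_imgs[3])]
between_pairs = [(z_imgs[4], m_imgs[4]), (z_imgs[5], m_imgs[5]),
                 (m_imgs[6], z_imgs[6]), (m_imgs[7], z_imgs[7])]
pairs = within_pairs + between_pairs  # 8 AB pairs, each image used once

# Build the exposure stream: image A always immediately precedes image B.
order = pairs * N_REPEATS
random.shuffle(order)
stream = []
for a, b in order:
    stream.extend([a, b])

print(f"Stream of {len(stream)} images: "
      f"{len(within_pairs)} within-group and {len(between_pairs)} between-group pairs")
```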
Acknowledgement: NSF OIA 1632849 and NSF BCS 1558535.