Abstract
The human mind compresses rich visual experiences into simpler constructs. At the interface between vision and cognition, such compression may engage categorical representations (e.g., RED, AGENT, CHASING) and may tap into logically structured categories (e.g., ALL, EACH). To shed light on the nature and ontogenesis of the mental computations that link vision with abstract logical concepts, we tested adults’ and 10-month-old infants’ capacity to encode visual scenes of exhaustive-collective actions (e.g., All of the wolves chased a sheep together) or exhaustive-individual actions (e.g., Each of the wolves chased a sheep by itself).
In Experiment 1, adults were asked to describe movies in which chevron shapes were seen to “chase” moving balls in a multiple-object tracking (MOT) design (see Fig. 1). Adults spontaneously used the word “All” to describe movies in which the chevrons all pursued a single ball together and “Each” for movies in which each chevron chased its own ball. Crucially, the use of “Each”, but not of “All”, decreased significantly when there were more than three chasers. This suggests that “Each” piggybacked on representations of multiple discrete individual events (i.e., object tracking, within the capacity of working memory), while “All” piggybacked on the representation of a single collective event (i.e., an ensemble).
In Experiment 2, we asked whether these representations are in place early in life, using visual habituation to test 10-month-olds (see Fig. 2). Infants who were habituated to the “All” movies with three chasers successfully dishabituated to the “Each” movies with three chasers, and vice versa. We are currently testing the limits of infants’ representations of “All” and “Each” actions by habituating them to movies with five chevrons.
These findings begin to reveal that the concepts expressed by “All” and “Each” integrate with rich visual scenes through distinct computations, and that these computations might be in place early in life.