October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Who’s chasing who? Adults' and infants' engagement of quantificational concepts (“Each” and “All”) when representing visual chasing events.
Author Affiliations
  • Nicolò Cesana-Arlotti
    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD
  • Tyler Knowlton
    Department of Linguistics, University of Maryland, College Park, MD
  • Jeffrey Lidz
    Department of Linguistics, University of Maryland, College Park, MD
  • Justin Halberda
    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD
Journal of Vision October 2020, Vol.20, 1549. doi:https://doi.org/10.1167/jov.20.11.1549
Nicolò Cesana-Arlotti, Tyler Knowlton, Jeffrey Lidz, Justin Halberda; Who’s chasing who? Adults' and infants' engagement of quantificational concepts (“Each” and “All”) when representing visual chasing events. Journal of Vision 2020;20(11):1549. https://doi.org/10.1167/jov.20.11.1549.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The human mind compresses rich visual experiences into simpler constructs. At the interface between vision and cognition, such compression may engage categorical representations (e.g., RED, AGENT, CHASING) and may tap into logically structured categories (e.g., ALL, EACH). To shed light on the nature and ontogenesis of the mental computations that link vision with abstract logical concepts, we tested adults' and 10-month-old infants' capacity to encode visual scenes of exhaustive-collective actions (e.g., All of the wolves chased a sheep together) or exhaustive-individual actions (e.g., Each of the wolves chased a sheep by itself). In Experiment 1, adults were asked to describe movies in which chevron shapes were seen to "chase" moving balls in a multiple-object tracking (MOT) design (see Fig. 1). Adults spontaneously used the word "All" to describe movies in which the chevrons all pursued a single ball together and "Each" for movies in which each chevron chased its own ball. Crucially, the use of "Each", but not of "All", decreased significantly when there were more than three chasers. This suggests that "Each" piggybacked on the representation of multiple discrete individual events (i.e., MOT, within the capacity of working memory), while "All" piggybacked on the representation of a single collective event (i.e., an ensemble). In Experiment 2, we asked whether these representations are in place early in life, using visual habituation to test 10-month-olds (see Fig. 2). Infants who were habituated to the "All" movies with three chasers successfully dishabituated to the "Each" movies with three chasers, and vice versa. We are currently testing the limits of infants’ representations of “All” and “Each” actions by habituating them to movies with five chevrons. These findings begin to reveal that the concepts expressed by “All” and “Each” integrate with rich visual scenes through distinct computations and that these computations may be in place early in life.
