Vision Sciences Society Annual Meeting Abstract  |   August 2023
Volume 23, Issue 9 | Open Access
‘Visual verbs’: Dynamic event types (such as twisting vs. rotating) are extracted quickly and spontaneously during visual perception
Author Affiliations
  • Huichao Ji
    Yale University
  • Brian Scholl
    Yale University
Journal of Vision August 2023, Vol.23, 5182. doi:https://doi.org/10.1167/jov.23.9.5182
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The underlying units of visual representation often transcend lower-level properties, for example when we see objects in terms of a small number of generic stimulus types (e.g. animals, plants, faces, etc.). There has been much less attention, however, to the possibility that we also represent dynamic information in terms of a small number of primitive *event types* — such as twisting, rotating, bouncing, rolling, etc. (In models that posit a “language of vision”, these would be the foundational visual *verbs*.) We explored the possibility that such ‘event type’ representations are formed quickly and spontaneously during visual perception — even when they are entirely task-irrelevant. We did so by exploiting the phenomenon of *categorical perception* — wherein the differences between two stimuli are more readily noticed when they are represented in terms of different underlying categories. Observers simply viewed pairs of images or animations (presented very briefly, one at a time), and reported for each pair whether they were the same or different in any way. Cross-Type changes involved switches in the underlying event type (e.g. a towel being *twisted* in someone’s hands, replaced by a towel being *rotated* in someone’s hands), while Within-Type changes maintained the same event type (e.g. a towel being more or less twisted in someone’s hands). Critically, this distinction was always task-irrelevant, and Within-Type changes were always objectively greater in magnitude than were Cross-Type changes. Nevertheless, Cross-Type changes were much more readily noticed. And additional controls confirmed that such effects could not be explained by appeal to lower-level stimulus differences (such as the different hand positions involved in twisting vs. rotating). This spontaneous perception of a potentially continuous range of stimuli in terms of a smaller set of primitive “visual verbs” might promote both generalization and prediction about how events are likely to unfold.
