September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Encoding of event roles from visual scenes is rapid, automatic, and interacts with higher-level visual processing
Author Affiliations
  • Alon Hafri
    Department of Psychology, University of Pennsylvania
  • John Trueswell
    Department of Psychology, University of Pennsylvania
  • Brent Strickland
    Département d'Études Cognitives, École Normale Supérieure, PSL Research University; Institut Jean Nicod (ENS, EHESS, CNRS)
Journal of Vision August 2017, Vol.17, 1094. doi: https://doi.org/10.1167/17.10.1094
Citation: Alon Hafri, John Trueswell, Brent Strickland; Encoding of event roles from visual scenes is rapid, automatic, and interacts with higher-level visual processing. Journal of Vision 2017;17(10):1094. https://doi.org/10.1167/17.10.1094.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

To successfully communicate about and navigate a perceptually chaotic world, we must extract not only the identities of people but also the roles they play in events, i.e., who did what to whom: boy-hitting-girl is very different from girl-hitting-boy. We routinely categorize Agents (the actor) and Patients (the one acted upon) from visual input, but do we encode such roles automatically, even when attention is otherwise occupied? To investigate this question, we employed a "switching cost" paradigm. In several experiments, participants observed a continuous sequence of two-person event scenes and had to rapidly identify the side of a target actor in each (the male vs. female, or the red- vs. blue-shirted actor). Critically, although role was orthogonal to gender and shirt color, and was never explicitly mentioned, participants responded more slowly when the target's role switched from trial to trial (e.g., the male went from being Patient to Agent). Despite its small absolute magnitude, this role switch cost was both significant and robust (all p's < 0.001, Cohen's d's > 0.86), with a majority of subjects and items demonstrating the effect. In an additional experiment, we probed the level of representation at which the role switch cost operates: we ran the same paradigm as before but edited the images so that the actors always faced in opposite directions ("mirror-flipped"), preserving each actor's pose while eliminating their interaction. The switch cost here was significantly lower, and additional "active posture" saliency effects emerged, indicating that the role switch cost in our previous experiments cannot be fully explained by mere pose differences associated with Agents and Patients. Taken together, our experiments demonstrate that the human visual system automatically extracts the structure of an event, i.e., who did what to whom, even when attention is directed toward other visual features.

Meeting abstract presented at VSS 2017
