Journal of Vision, September 2016, Volume 16, Issue 12 | Open Access
Vision Sciences Society Annual Meeting Abstract
Format-independent cortical representations of interactive events
Author Affiliations
  • Alon Hafri
    Department of Psychology, University of Pennsylvania
  • John Trueswell
    Department of Psychology, University of Pennsylvania
  • Russell Epstein
    Department of Psychology, University of Pennsylvania
Journal of Vision September 2016, Vol.16, 1185. doi:10.1167/16.12.1185
© 2017 Association for Research in Vision and Ophthalmology.
Abstract

The social world can be understood in terms of interactive events: people do things to one another and outside forces act upon them. These events can be classified into categories (kicking, brushing) that abstract away from inessential particulars such as the setting, the identities of the actors, and perceptual details. What are the neural systems that support visual event categorization? To address this question, we used fMRI to identify brain regions that represent event categories in a format-independent way. We scanned participants while they viewed two-participant interactions (slap, kick, shove, bite, pull, brush, massage, tap) and performed an orthogonal 1-back task. Crucially, we included two stimulus formats, in separate runs: (1) carefully controlled videos of actors performing these events; and (2) visually varied photographs of these events, selected from Google Images to maximize visual dissimilarity among exemplars within each category (as assessed by hue, saturation, and value features, and GIST model features). Thus, we were able to investigate neural representations of event categories both within- and across-format. Within the video format, a searchlight analysis of multivoxel patterns revealed widespread decodability of event category (e.g., kick) across occipital, parietal, and temporal cortex, including regions known to respond to visual features relevant for distinguishing actions, such as the extrastriate body area, hMT+, and biological motion areas in the superior temporal sulcus. Within the image format, event category was decodable in a smaller set of brain loci, including the bilateral supramarginal gyri and right posterior middle temporal gyrus. Notably, cross-format decoding was largely restricted to these same loci. Furthermore, the similarity structure among event category representations in these regions was reliably consistent across subjects.
We propose that these brain regions constitute a link between visual recognition systems and conceptual systems that allow flexible, complex thought about who did what to whom.

Meeting abstract presented at VSS 2016
