Journal of Vision
September 2016, Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
A song of scenes & sentences: signatures of shared cortical resources between visual perception and language revealed by representational similarity analysis
Author Affiliations
  • Peer Herholz
    Laboratory for Multimodal Neuroimaging (LMN), Department of Psychiatry, University of Marburg
  • Verena Schuster
    Laboratory for Multimodal Neuroimaging (LMN), Department of Psychiatry, University of Marburg
  • Melissa Vo
    Scene Grammar Lab, Goethe University Frankfurt
  • Andreas Jansen
    Laboratory for Multimodal Neuroimaging (LMN), Department of Psychiatry, University of Marburg
Journal of Vision September 2016, Vol. 16, 124. https://doi.org/10.1167/16.12.124
Peer Herholz, Verena Schuster, Melissa Vo, Andreas Jansen; A song of scenes & sentences: signatures of shared cortical resources between visual perception and language revealed by representational similarity analysis. Journal of Vision 2016;16(12):124. https://doi.org/10.1167/16.12.124.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Previous imaging studies investigating the domain specificity of cortical networks have indicated some common principles of processing across different cognitive functions, and therefore shared cortical resources, e.g., the processing of hierarchical structures ("syntax") or contextual meaning ("semantics"). Whereas the majority of this research has focused on language and music, recent studies have also emphasized comparable principles in visual perception. However, little is known about the degree to which cortical resources are shared between vision and language. To address this gap, we created a paradigm comprising two modalities, visual (natural scenes) and auditory (sentences) stimuli, equally divided into consistent, semantically inconsistent, and syntactically inconsistent items. Twenty participants either viewed images or listened to sentences while BOLD responses were recorded. We assessed cortical activation patterns for semantic and syntactic language processing by applying the general linear model in each participant's native space, thus creating participant-specific functional ROIs (pfROIs) for semantics and syntax. Subsequently, we conducted a representational similarity analysis (RSA) within those pfROIs, including activation patterns from all conditions and modalities, to investigate the relationship between the activation patterns of language and visual perception more closely. Both language conditions activated the expected left-lateralized networks, comprising IFG, STS/MTG, and IPL (semantics) and IFG as well as STS/STG (syntax). RSA in all pfROIs revealed distinguishable patterns between modalities. Examining the patterns more closely, we found the highest similarities across modalities for both semantic and syntactic processing in their respective pfROIs. In particular, the semantic pfROIs showed the highest similarity between the activation patterns for semantic processing of language and vision, whereas syntactic processing yielded the most similar activation patterns in the syntactic pfROIs. These results underline specific and distinct processing of semantics and syntax, and give a first insight into common principles shared by vision and language, as the activation patterns for either semantic or syntactic processing were most similar across modalities.
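To make the RSA step concrete, the sketch below illustrates the kind of analysis described above in Python. It is a minimal, hypothetical example, not the authors' code: it assumes that one GLM beta pattern per condition and modality has already been extracted from the voxels of a pfROI, and it uses random placeholder data in place of real betas.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

# Hypothetical input: one beta pattern (voxels of a pfROI) per
# condition and modality, e.g. from a first-level GLM in native space.
# Rows: conditions (consistent, semantically inconsistent,
# syntactically inconsistent); columns: 200 placeholder voxels.
rng = np.random.default_rng(0)
patterns = {
    "visual":   rng.standard_normal((3, 200)),  # placeholder betas
    "auditory": rng.standard_normal((3, 200)),
}

def rdm(x):
    """Representational dissimilarity matrix:
    1 - Pearson r between condition patterns."""
    return squareform(pdist(x, metric="correlation"))

# Within-modality RDMs.
rdm_vis = rdm(patterns["visual"])
rdm_aud = rdm(patterns["auditory"])

# Cross-modal pattern similarity: correlate each visual condition
# pattern with each auditory condition pattern.
cross = np.array([
    [np.corrcoef(v, a)[0, 1] for a in patterns["auditory"]]
    for v in patterns["visual"]
])
print(np.round(cross, 2))

# Second-order comparison: how similar are the two modalities'
# representational geometries? (Spearman correlation of the RDMs'
# upper triangles.)
iu = np.triu_indices(3, k=1)
rho, p = spearmanr(rdm_vis[iu], rdm_aud[iu])
print(f"RDM similarity (Spearman rho) = {rho:.2f}")
```

Under this reading of the abstract, the key quantity is the cross-modal comparison: within a semantic pfROI, the visual and auditory patterns for the semantic condition should be most similar, and analogously for syntax in the syntactic pfROIs.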

Meeting abstract presented at VSS 2016
