Vision Sciences Society Annual Meeting Abstract | September 2015
Using voxel-wise encoding models to study occipito-temporal representations of the animacy, semantic and affective content of natural images.
Author Affiliations
  • Samy Abdel-Ghaffar
    Department of Psychology, UC Berkeley
  • Jack Gallant
    Department of Psychology, UC Berkeley; HWNI Neuroscience Institute, UC Berkeley
  • Alex Huth
    HWNI Neuroscience Institute, UC Berkeley
  • Dustin Stansbury
    Program in Vision Science, UC Berkeley
  • Alan Cowen
    Department of Psychology, UC Berkeley
  • Sonia Bishop
    Department of Psychology, UC Berkeley; HWNI Neuroscience Institute, UC Berkeley
Journal of Vision, September 2015, Vol. 15, 508. https://doi.org/10.1167/15.12.508
Abstract

It has been argued that animate emotional stimuli are biologically prepared: as a result of their evolutionary significance, they are processed rapidly, tend to capture attention, and are better recalled. Here, we tested the prediction that such stimuli may also have especially distinct neural representations. We investigated this by performing voxel-wise modeling on functional magnetic resonance imaging (fMRI) data acquired while participants (n=6) viewed natural images of varying semantic and affective content. Thirty fMRI runs of 7.5 min duration (1440 images, each shown twice) per participant were used to estimate voxel-wise models. Twenty fMRI runs of 6 min duration (180 images, each repeated 9 times) were used to validate the models and test prediction accuracy. Models coding for animacy, semantic content, and the interaction of these features with image valence (negative, neutral, positive) were fit to the estimation data using ridge regression, a regularized linear regression procedure. This resulted in sets of weights describing how the features in each model influenced Blood Oxygen Level Dependent (BOLD) activity in each voxel, for each individual participant. Occipito-temporal (OT) cortex was selected as the focus of investigation given its known semantic selectivity and role in object and scene recognition. Our results indicated that the valence of animate, but not inanimate, stimuli is represented in single voxels within OT cortex. This held even when animate and inanimate images were subdivided into specific semantic classes (e.g., insects, mammals, and human faces versus household objects and indoor buildings). Differentiation of the representation of animate stimuli as a function of affective valence within OT cortex might facilitate recognition and subsequent processing (e.g., action selection, encoding into long-term memory) of stimuli of biological relevance.

Meeting abstract presented at VSS 2015
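
To illustrate the voxel-wise encoding approach described in the abstract, here is a minimal sketch in Python on simulated data: ridge regression maps a stimulus feature matrix to each voxel's BOLD response on estimation data, and prediction accuracy is scored as the per-voxel correlation between predicted and observed responses on held-out validation data. All array dimensions, variable names, and the fixed penalty alpha are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of a voxel-wise encoding model with ridge regression.
# Shapes and data are simulated stand-ins for the fMRI estimation and
# validation runs described in the abstract.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Illustrative dimensions (not the study's actual counts of time points,
# features, or voxels).
n_train, n_test, n_features, n_voxels = 1000, 200, 50, 300

# Feature matrices: one row per fMRI time point, one column per model
# feature (e.g., indicators for animacy, semantic class, and their
# interaction with image valence). Random stand-ins here.
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))

# Simulated BOLD responses for every voxel (rows = time points).
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

# Fit one set of regression weights per voxel; sklearn's Ridge fits all
# voxels at once when Y has multiple columns. A fixed alpha keeps the
# sketch short; in practice the penalty is tuned within the estimation
# data (e.g., by cross-validation).
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)

# Prediction accuracy: correlation between predicted and observed
# responses on held-out validation data, computed per voxel.
Y_pred = model.predict(X_test)
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"median validation correlation: {np.median(r):.3f}")
```

The per-voxel validation correlations produced by a procedure like this are what allow claims such as the abstract's: voxels whose responses are predicted reliably better by a model including the animacy-by-valence interaction can be said to represent the valence of animate stimuli.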
