Vision Sciences Society Annual Meeting Abstract  |   September 2011
A computational feed-forward model predicts categorization of masked emotional body language for longer, but not for shorter latencies
Author Affiliations
  • Bernard Stienen
    Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands
  • Konrad Schindler
    Photogrammetry and Remote Sensing, Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland
  • Beatrice de Gelder
    Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands
    Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts, USA
Journal of Vision September 2011, Vol.11, 606. doi:https://doi.org/10.1167/11.11.606

Bernard Stienen, Konrad Schindler, Beatrice de Gelder; A computational feed-forward model predicts categorization of masked emotional body language for longer, but not for shorter latencies. Journal of Vision 2011;11(11):606. https://doi.org/10.1167/11.11.606.

Abstract

Schindler, van Gool, and de Gelder (2008) showed that a computational neural model which exclusively modeled feed-forward processing, and which was engineered to fulfill the computational requirements of recognition, categorized a set of seven emotional bodily expressions in much the same way as human observers did. However, because presentation time was unlimited in the human categorization task, feedback processes were likely triggered. In the present study, the performance of the neural model is compared with human performance when feedback processes are blocked: participants categorized five masked emotional bodily expressions in a parametric backward-masking procedure, and the same expressions were fed into the model. Results show that the longer the SOA latency, the closer human performance was to the values predicted by the model. At short SOA latencies, however, human performance deteriorated, although categorization of the emotional expressions remained above baseline. The model is thus very good at predicting human performance, but different processes appear to be at play when target visibility is low and observers are confronted with emotional information. We conclude that either the feed-forward mechanism does not always have sufficient time to categorize the stimulus efficiently (it may need 100 ms or more to work properly), or another mechanism aids participants in classifying the emotions. The latter could point to a subcortico-cortical mechanism that increases the signal-to-noise ratio when feed-forward processes cannot be used efficiently.
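The abstract does not specify the analysis, but the comparison it describes (human categorization versus model predictions as a function of SOA) could be illustrated roughly as in the Python sketch below. Every element of the sketch is a hypothetical placeholder: the emotion labels, the SOA values, the agreement measure, and all data are assumptions for illustration, not the authors' materials or results.

```python
import numpy as np

# Hypothetical illustration only: labels, SOAs, data, and the agreement
# measure are placeholders, not taken from the study.

emotions = ["anger", "fear", "happiness", "sadness", "neutral"]  # five masked expressions (labels assumed)
soas_ms = [16, 33, 50, 66, 100, 133]                              # example parametric SOA latencies

rng = np.random.default_rng(0)

# Model output: one stimulus-by-response confusion matrix, identical across SOAs
# (a pure feed-forward model sees the unmasked stimulus).
model_cm = np.eye(len(emotions)) * 0.7 + 0.3 / len(emotions)

# Human data: one confusion matrix per SOA (random placeholders here).
human_cms = {soa: rng.dirichlet(np.ones(len(emotions)), size=len(emotions))
             for soa in soas_ms}

def agreement(cm_a, cm_b):
    """Pearson correlation between flattened confusion matrices."""
    return np.corrcoef(cm_a.ravel(), cm_b.ravel())[0, 1]

# The pattern reported in the abstract would show agreement rising with SOA.
for soa in soas_ms:
    r = agreement(model_cm, human_cms[soa])
    print(f"SOA {soa:3d} ms: model-human agreement r = {r:.2f}")
```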
