Abstract
Schindler, van Gool and de Gelder (2008) showed that a computational neural model, which exclusively modeled feed-forward processing and was engineered to fulfill the computational requirements of recognition, was able to categorize a set of seven emotional bodily expressions in much the same way as human observers did. However, since presentation time was unlimited in the human categorization task, feedback processes were likely triggered. In the present study, the performance of the neural model is compared with human performance when feedback processes are blocked: participants were presented with five masked emotional bodily expressions in a parametric backward masking procedure, and the same expressions were fed into the model. Results show that the longer the SOA, the closer human performance was to the values predicted by the model. At short SOAs, however, human performance deteriorated, although categorization of the emotional expressions remained above baseline. This shows that the model predicts human performance very well, but that different processes appear to play a role when the visibility of the target is low and subjects are confronted with emotional information. We conclude that either the feed-forward mechanism does not always have sufficient time to categorize the stimulus efficiently (it may need 100 ms or more to operate properly), or another mechanism aids participants in classifying the emotions. The latter may point to a subcortico-cortical mechanism that increases the signal-to-noise ratio when feed-forward processes cannot be used efficiently.