Kandan Ramakrishnan, H. Steven Scholte, Arnold Smeulders, Sennay Ghebreab; Mapping human visual representations by deep neural networks. Journal of Vision 2016;16(12):373. doi: 10.1167/16.12.373.
© 2017 Association for Research in Vision and Ophthalmology.
A number of recent studies have shown that deep neural networks (DNNs) map onto the human visual hierarchy. However, based on a large number of subjects and accounting for the correlations between DNN layers, we show that there is no one-to-one mapping of DNN layers to the human visual system. This suggests that the depth of a DNN, which is also critical to its impressive performance in object recognition, has to be investigated for its role in explaining brain responses. On the basis of EEG data collected in response to a large set of natural images, we analyzed three DNN architectures, a 7-layer, a 16-layer, and a 22-layer network, using a Weibull distribution to characterize the representations at each layer. We find that the DNN architectures reveal the temporal dynamics of object recognition, with early layers driving responses earlier in time and higher layers driving responses later in time. Surprisingly, layers from the different architectures explain brain responses to a similar degree. However, by combining the representations of the DNN layers, we explain more brain activity in the higher brain areas. This suggests that the higher areas in the brain are composed of multiple non-linearities that are not captured by the individual DNN layers. Overall, while DNNs form a highly promising model of the human visual hierarchy, the representations in the human brain go beyond a simple one-to-one mapping of DNN layers to the human visual hierarchy.
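The abstract describes summarizing each DNN layer's representation with a Weibull distribution before relating it to EEG responses. A minimal sketch of that summarization step is below, assuming the approach amounts to fitting a two-parameter Weibull to the magnitudes of a layer's activations; the `weibull_summary` helper and the random placeholder "layers" are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: per-layer Weibull summaries of DNN activations.
# The random arrays stand in for real network activations, and the
# helper name is an assumption, not from the original study.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def weibull_summary(activations):
    """Fit a two-parameter Weibull to a layer's activation magnitudes.

    Location is fixed at 0 (floc=0), so only the shape and scale
    parameters are estimated; these two numbers summarize the layer.
    """
    magnitudes = np.abs(np.asarray(activations).ravel())
    magnitudes = magnitudes[magnitudes > 0]  # Weibull support is x > 0
    shape, _loc, scale = weibull_min.fit(magnitudes, floc=0)
    return shape, scale

# Placeholder "layers": one stand-in activation array per DNN layer.
layers = [rng.standard_normal((64, 64)) for _ in range(3)]
summaries = [weibull_summary(a) for a in layers]
for i, (shape, scale) in enumerate(summaries):
    print(f"layer {i}: shape={shape:.2f}, scale={scale:.2f}")
```

Each layer is thereby reduced to a low-dimensional parametric description, which could then be regressed against EEG responses per time point to ask which layer explains activity when.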
Meeting abstract presented at VSS 2016