Second, DNNs have shown great promise for modeling human psychophysical tasks, such as image recognition (e.g.,
Geirhos, Rubisch, Michaelis, Bethge, Wichmann, & Brendel, 2018;
Su, Vargas, & Kouichi, 2019; Geirhos, Meding, & Wichmann,
2020; Geirhos, Narayanappa, Mitzkus, Thieringer, Bethge, Wichmann, & Brendel,
2021) or crowding, a breakdown of object recognition in the presence of surrounding objects (
Volokitin, Roig, & Poggio, 2017;
Doerig, Bornet, Choung, & Herzog, 2020a;
Lonnqvist, Clarke, & Chakravarthi, 2020). However, even though DNNs show close to human-like object recognition performance, their processing can differ markedly from that of humans. For example, ImageNet-trained DNNs rely on textural information rather than the shape-based information that humans prioritize (Geirhos et al., 2018). The trial-by-trial performance of DNNs in perceptual tasks also differs consistently from that of humans (
Geirhos et al., 2020; Geirhos et al.,
2021). Likewise, although on a category-by-category basis the response patterns of DNNs may appear similar to those of humans, the specific images that DNNs misclassify often differ from those that humans misclassify (
Geirhos et al., 2020). This suggests systematic differences in categorization ability. Even the specifically brain-inspired recurrent CORnet-S shows response patterns that are similar to those of other DNNs and dissimilar to human response patterns (
Rajalingham, Issa, Bashivan, Kar, Schmidt, & DiCarlo, 2018;
Geirhos et al., 2020). This indicates that the function they compute to solve a task, regardless of architectural specifics, remains largely different from that of humans. Hence, even though performance of DNNs and humans may be similar, the computation underlying the performance may be very different.