Abstract
Previously, we showed that DNNs trained to recognize objects in extreme levels of visual noise can attain performance levels that match those of human observers (Jang & Tong, VSS, 2018). However, do these noise-trained DNNs actually process visual information in a more human-like manner? We evaluated this question in four ways. First, we asked whether the critical signal-to-signal-plus-noise threshold at which individual object images become recognizable is more strongly correlated between noise-trained DNNs and human observers than between standard pretrained DNNs and humans. Second, we asked whether extended training at recognizing objects from particular categories (e.g., animals vs. vehicles) in visual noise would lead to category-specific or generalized benefits of learning for humans and DNNs. Third, we evaluated whether noise-trained DNNs rely on the same diagnostic image regions as human observers when recognizing individual object images in noise. Fourth, we tested whether noise-trained DNNs can recognize objects in real-world noisy viewing conditions, such as rain or snow, given that human observers excel at such challenging tasks. We performed two behavioral learning experiments in which human participants were trained with noisy images drawn from the ImageNet validation set, one with the full set of 16 categories and the other with a subset of those categories (either the 8 animate or the 8 inanimate categories). Noise-trained DNNs showed patterns of signal-to-signal-plus-noise thresholds that were more similar to those of human observers. In the second experiment, both human observers and noise-trained DNNs failed to generalize to untrained categories presented in noise, indicating that the benefits of noise training are category-specific. Furthermore, when recognizing objects in noise, noise-trained DNNs relied on image regions that more closely matched those used by human observers. Lastly, noise-trained DNNs significantly outperformed pretrained DNNs on real-world noisy images. In conclusion, our results support the notion that noise-trained DNNs process noisy object images in a more human-like manner.
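
For illustration only, the sketch below shows one common way a noisy stimulus can be constructed at a given signal-to-signal-plus-noise (SSN) ratio, assuming a linear pixel-wise blend of the form stimulus = w * signal + (1 - w) * noise, where w is the SSN ratio. This is not the authors' implementation; the function make_noisy_stimulus, the Gaussian noise statistics, and the placeholder image are hypothetical choices made for this example.

    # Minimal sketch (assumed blending scheme, not the authors' code)
    import numpy as np

    def make_noisy_stimulus(signal, ssn_ratio, rng=None):
        """Blend a grayscale image (values in [0, 1]) with Gaussian pixel noise.

        ssn_ratio: weight w on the signal; w = 1 gives the clean image, w = 0 pure noise.
        """
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.normal(loc=0.5, scale=0.15, size=signal.shape)  # assumed noise statistics
        blended = ssn_ratio * signal + (1.0 - ssn_ratio) * noise
        return np.clip(blended, 0.0, 1.0)

    # Example: sweep SSN ratios to estimate the threshold at which an image becomes recognizable.
    image = np.random.default_rng(0).random((224, 224))  # placeholder for an ImageNet image
    stimuli = {w: make_noisy_stimulus(image, w) for w in (0.1, 0.2, 0.3, 0.4, 0.5)}

In this framing, the threshold reported for humans or DNNs would be the smallest w at which recognition reaches a criterion level of accuracy.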