Abstract
Previous studies suggest that individuals treated for congenital cataracts early in life are impaired at configural face processing, displaying difficulties in detecting subtle differences in the spatial arrangement of facial features. While most accounts of such deficits have focused on the role of a critical period, we recently proposed a theory suggesting that these deficits may result from the abnormally high initial acuity that these individuals experience upon sight onset. According to this theory, the initial low-acuity period of the normally developing visual system may play a key role in developing the expertise in configural processing required for face individuation later in life, by forcing the visual system to integrate information over large image patches in order to resolve diagnostic information from low-resolution input. To test the computational soundness of this theory, we trained two instances of a convolutional neural network (CNN) to model how the early neural layers of a theoretical visual system may develop when trained on low- vs. high-resolution face images. We found that the CNN instance trained on low-resolution face images developed larger wavelet patches in the first convolutional layer, leading to better generalization performance on test images regardless of their resolution. Our findings support the perhaps counter-intuitive idea that training a visual system on optimal (high-resolution) input may actually be detrimental to the development of face individuation, as it may not force the visual system to integrate information over larger receptive fields, a process that is crucial for configural processing.
Meeting abstract presented at VSS 2017
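The abstract does not report implementation details, so the following is only a minimal sketch of the kind of comparison it describes: two identical CNN instances, one trained on degraded (low-resolution) face images and one on high-resolution images, followed by a comparison of the spatial spread of the learned first-layer filters. The architecture (SmallFaceCNN), the degradation factor, the stand-in data, and the filter_extent measure are illustrative assumptions, not the authors' method.

```python
# Sketch (assumed setup, not the authors' code): train matched CNNs on
# high- vs. low-resolution face images and compare first-layer filter spread.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallFaceCNN(nn.Module):
    """Toy CNN for face individuation (identity classification)."""
    def __init__(self, n_identities: int = 10, kernel_size: int = 11):
        super().__init__()
        # Large first-layer kernels let training decide how much spatial
        # context each filter actually uses.
        self.conv1 = nn.Conv2d(1, 16, kernel_size, padding=kernel_size // 2)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32, n_identities)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.fc(x)

def degrade(images: torch.Tensor, factor: int = 8) -> torch.Tensor:
    """Simulate low acuity: downsample, then upsample back to the input size."""
    small = F.interpolate(images, scale_factor=1 / factor, mode="bilinear")
    return F.interpolate(small, size=images.shape[-2:], mode="bilinear")

def train(model: nn.Module, images, labels, epochs: int = 5) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
    return model

def filter_extent(conv: nn.Conv2d) -> float:
    """Rough proxy for filter size: weighted spatial spread of |weights|."""
    w = conv.weight.detach().abs().mean(dim=(0, 1))  # (k, k) energy map
    k = w.shape[0]
    coords = torch.arange(k, dtype=torch.float)
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    cy = (w * yy).sum() / w.sum()
    cx = (w * xx).sum() / w.sum()
    spread = (w * ((yy - cy) ** 2 + (xx - cx) ** 2)).sum() / w.sum()
    return spread.sqrt().item()

if __name__ == "__main__":
    # Stand-in data; a real experiment would use a face-identity dataset.
    images = torch.rand(64, 1, 64, 64)
    labels = torch.randint(0, 10, (64,))

    high_res_net = train(SmallFaceCNN(), images, labels)
    low_res_net = train(SmallFaceCNN(), degrade(images), labels)

    print("high-res filter spread:", filter_extent(high_res_net.conv1))
    print("low-res  filter spread:", filter_extent(low_res_net.conv1))
```

Under the hypothesis described in the abstract, the instance trained on degraded input would be expected to show a larger effective filter spread in conv1, reflecting integration over a wider spatial extent.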