Abstract
Classification images (CIs) derived from human responses provide strong evidence that humans use the shape of illusory contours in the "fat/thin" illusory-figure discrimination task (Gold, Murray, Bennett, & Sekuler, 2000). Previously, we found that a convolutional neural network (CNN), AlexNet, failed to perceive illusory contours from partial-circle inducing elements, a process readily and automatically performed by the human visual system. One limitation of AlexNet is that it contains only feed-forward connections. We explored a different network architecture, a denoising autoencoder, in which a noisy input image is encoded (compressed) into a more abstract, compact representation and then decoded to reproduce the original image without noise. Although feed-forward, the encoding/decoding process can be taken to resemble a form of feedback, in that the final decoded representation results from information being sent "backward" from a higher-level, abstract representation (the encoded representation). By passing an image repeatedly through the autoencoder, a type of recurrent processing can be approximated. This recursive architecture may more closely resemble human processing of visual information between low-level and mid-level visual areas. The network was trained on noisy fat/thin images with real contours. When given images with illusory contours, it "filled in" the contours in the decoded image, and recurrent processing improved the clarity of the reconstructed contours. The denoised image was then fed into a five-layer convolutional network that made the fat/thin classification decision, and the noise patterns were then used to form CIs. Our simulation showed that the resulting CIs from the autoencoder network recovered the illusory contours, similar to the results obtained from humans. This result suggests that recursive processing simulating both feed-forward and feedback connections may be important in illusory contour perception or in recognition from partial information.
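
The following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: a convolutional denoising autoencoder applied recurrently, a five-layer convolutional classifier for the fat/thin decision, and a standard classification-image computation from the noise fields. PyTorch, the 64-pixel image size, all layer widths, the code dimension, and the exact CI weighting are assumptions chosen for illustration.

# Illustrative sketch of the described pipeline (assumptions noted above).
import torch
import torch.nn as nn

IMG = 64  # assumed image size in pixels; the actual stimuli may differ

class DenoisingAutoencoder(nn.Module):
    """Encode a noisy image to a compact code, then decode a denoised image."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),                     # abstract code
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x, passes=1):
        # Repeated encode/decode approximates recurrent processing:
        # the decoded ("fed back") image is re-encoded on each pass.
        for _ in range(passes):
            x = self.decoder(self.encoder(x))
        return x

class FatThinClassifier(nn.Module):
    """Five-layer convolutional network making the fat/thin decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # layer 1
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 3
            nn.Flatten(),
            nn.Linear(32 * (IMG // 8) ** 2, 64), nn.ReLU(),               # layer 4
            nn.Linear(64, 2),                                             # layer 5: fat vs. thin
        )

    def forward(self, x):
        return self.net(x)

def classification_image(noise_fields, stimulus_labels, responses):
    """Standard CI combination of mean noise fields by stimulus x response cell
    (assumed here): (n_fat,fat + n_thin,fat) - (n_fat,thin + n_thin,thin)."""
    def mean_noise(stim, resp):
        mask = (stimulus_labels == stim) & (responses == resp)
        return noise_fields[mask].mean(dim=0) if mask.any() else torch.zeros(IMG, IMG)
    return (mean_noise(0, 0) + mean_noise(1, 0)) - (mean_noise(0, 1) + mean_noise(1, 1))

if __name__ == "__main__":
    # Illustrative run with random arrays standing in for the fat/thin stimuli.
    autoencoder, classifier = DenoisingAutoencoder(), FatThinClassifier()
    signal = torch.rand(100, 1, IMG, IMG)             # placeholder stimuli
    noise = 0.3 * torch.randn(100, 1, IMG, IMG)       # Gaussian pixel noise
    denoised = autoencoder(signal + noise, passes=3)  # recurrent denoising
    responses = classifier(denoised).argmax(dim=1)    # simulated fat/thin decisions
    labels = torch.randint(0, 2, (100,))              # placeholder stimulus labels
    ci = classification_image(noise.squeeze(1), labels, responses)
    print(ci.shape)                                   # (IMG, IMG) classification image

With trained weights, the denoised outputs would play the role of the "filled-in" images described above, and the accumulated noise fields would yield the simulated CIs.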
Acknowledgement: NSF grant BCS-1655300 to HL