July 2013, Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Development Model of Face and Object Recognition Using Modular Neural Network
Author Affiliations
  • Panqu Wang
    Department of Electrical and Computer Engineering, University of California, San Diego
  • Garrison Cottrell
    Department of Computer Science and Engineering, University of California, San Diego
Journal of Vision July 2013, Vol.13, 174. doi:https://doi.org/10.1167/13.9.174

      Panqu Wang, Garrison Cottrell; Development Model of Face and Object Recognition Using Modular Neural Network. Journal of Vision 2013;13(9):174. doi: https://doi.org/10.1167/13.9.174.

      © ARVO (1962-2015); The Authors (2016-present)

Extensive research effort has been put into building computational models of face and object recognition. However, how best to combine a recognition model with the development of the human visual system remains an open question. Research on contrast sensitivity shows that infants can receive only low spatial frequency information from visual stimuli, and that their ability to receive full-frequency information does not reach adult levels until about 10 years of age. Also, the right hemisphere (RH) learns earlier and faster than the left hemisphere (LH), and the RH is dominant in infants. It is also known that face recognition is low-frequency biased and RH lateralized. Combining these observations, we propose a developmental model of object recognition using a modular neural network based on Dailey and Cottrell (1999). In this model, each visual stimulus is preprocessed through Gabor filter banks followed by PCA at each spatial frequency. The neural network has two modules representing the two hemispheres, with one hidden layer per module. The output of the network is modulated by a gating network, which learns to gate the contribution of each module to the output based on its contribution to performance. To model changes in infant acuity, we low-pass filter the data set and gradually increase fidelity over training. To model the asymmetric developmental pattern, we give the two modules different learning rates over time. The right hemisphere bias for face processing emerges naturally from this process, as the gating node value of the right hemisphere always prevails over that of the left hemisphere for face images. Hence we propose that the RH bias for faces arises from the interaction of these two developmental trends.
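The gated two-module architecture described above is a mixture-of-experts design: each "hemisphere" module produces its own output, and a gating network produces mixing weights that determine each module's contribution. A minimal forward-pass sketch follows; all layer sizes, activations, and weight initializations here are invented for illustration, and the actual model's Gabor/PCA preprocessing, training procedure, and developmental schedules are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Normalize gate activations into mixing weights that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

class Module:
    """One 'hemisphere': a single-hidden-layer network (illustrative sizes)."""
    def __init__(self, n_in, n_hid, n_out, rng):
        self.W1 = rng.normal(0, 0.1, (n_hid, n_in))
        self.W2 = rng.normal(0, 0.1, (n_out, n_hid))

    def forward(self, x):
        h = np.tanh(self.W1 @ x)
        return self.W2 @ h

n_in, n_hid, n_out = 8, 5, 3
lh = Module(n_in, n_hid, n_out, rng)   # left-hemisphere module
rh = Module(n_in, n_hid, n_out, rng)   # right-hemisphere module
Wg = rng.normal(0, 0.1, (2, n_in))     # gating network (linear, for brevity)

x = rng.normal(size=n_in)              # one preprocessed input vector
g = softmax(Wg @ x)                    # gate values: one weight per module
y = g[0] * lh.forward(x) + g[1] * rh.forward(x)  # gated combined output
```

In training, the gate values would be adapted along with the module weights, so the gate learns to favor whichever module contributes more to performance on a given input class; the abstract's RH face bias corresponds to `g` for the RH module dominating on face images.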

Meeting abstract presented at VSS 2013

