Abstract
Object recognition invariant to translation or scale is effortless for humans but computationally very difficult. Many algorithms have been developed, most of which do not work well in real environments because they cannot handle situations that were not explicitly programmed. In contrast to this algorithmic schema, a better way to understand brain and vision is to start from basic self-organization principles and let the system create optimal algorithms itself through adaptation to its environment. This organic computing schema has been demonstrated in a very successful face recognition system based on Dynamic Link Matching (DLM), a recognition method built on the self-organization of a one-to-one mapping between corresponding points in an image and a model. Dynamic links are rapidly switching synapses whose dynamics are controlled by cooperation from neighboring synapses. However, DLM is too slow, as it needs thousands of iterations. Here we extend DLM by allowing the system dynamics, specifically the cooperative strengths, to change as a function of local transformations between image and model that are estimated from the system state. The adapted system converges much faster to a stable state because its cooperation is more specific and longer in range. This change is mediated by control units, groups of synapses that are mutually consistent in their transformation parameters. They represent synaptic arrangements that, once acquired, apply to any object; we also show that they can be learned from experience. A face recognition system based on this extension of DLM is shown to be much faster and to handle scale and in-plane rotation, in addition to shift, at little extra cost. The recognized model is picked after only three iterations, as the one whose points linked in the mapping have the highest similarity. The recognition rate on 110 rotated faces matched against 110 frontal ones is 93%, compared to 66.4% for the original DLM on the same database (Wiskott & von der Malsburg, 1996).
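Because the abstract only summarizes the mechanism, the following is a minimal, hypothetical Python sketch of the core idea: link strengths between image and model points grow with feature similarity and with cooperation from links that imply a consistent local transformation (here, for illustration, a pure translation), followed by competitive normalization toward a one-to-one mapping. All names, the Gaussian consistency weighting, and the discrete update rule are assumptions for illustration, not the paper's actual equations.

# Hypothetical sketch of a DLM-style link update with
# transformation-consistent cooperation; not the paper's equations.
import numpy as np

rng = np.random.default_rng(0)

N = 8                                    # points per layer
image_pts = rng.uniform(0, 1, (N, 2))    # 2-D positions of image points
model_pts = rng.uniform(0, 1, (N, 2))    # 2-D positions of model points
feat_sim = rng.uniform(0, 1, (N, N))     # assumed feature similarities S[i, j]
J = feat_sim.copy()                      # dynamic link strengths J[i, j]

def cooperation(J, image_pts, model_pts, scale=0.2):
    """Support for link (i, j) from other links whose implied
    displacement (model point minus image point) is similar, i.e.
    links consistent with the same local transformation."""
    disp = model_pts[None, :, :] - image_pts[:, None, :]   # (N, N, 2)
    C = np.zeros_like(J)
    for i in range(N):
        for j in range(N):
            # Gaussian weight: consistency of each other link's
            # displacement with that of link (i, j).
            d = np.linalg.norm(disp - disp[i, j], axis=2)
            w = np.exp(-(d / scale) ** 2)
            w[i, j] = 0.0                # no self-support
            C[i, j] = (w * J).sum()
    return C

# One discrete step: links grow with similarity times cooperation,
# then competition normalizes rows and columns toward a 1-1 mapping.
for step in range(3):
    J *= feat_sim * (1.0 + cooperation(J, image_pts, model_pts))
    J /= J.sum(axis=1, keepdims=True)    # competition over model points
    J /= J.sum(axis=0, keepdims=True)    # competition over image points

print("strongest link per image point:", J.argmax(axis=1))

In this toy version the transformation is estimated implicitly through the displacement consistency; in the extension described above, control units would make this estimate explicit and use it to lengthen and sharpen the cooperative interaction.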