Abstract
Neurally-inspired self-organizing maps typically use a symmetric spatial function such as a Gaussian to scale synaptic changes within the neighborhood surrounding a maximally stimulated node (Kohonen, 1984). This type of unsupervised learning scheme can work well to capture the structure of data sets lying in low-dimensional spaces, but is poorly suited to operate in a neural system, such as the neocortex, in which the neurons representing multiple distinct feature maps must be physically intermingled in the same block of tissue. This type of “multi-map” is crucial in the visual system because it allows multiple feature types to simultaneously analyze every point in the visual field. The physical interdigitation of different feature types leads to the problem, however, that neurons cannot “learn together” within neighborhoods defined by a purely spatial criterion, since neighboring neurons often represent very different image features. Co-training must therefore also depend on feature similarity; that is, it should occur among neurons that are not only spatially close but also like-activated. To explore these effects, we have studied SOM learning outcomes using (1) purely spatial, (2) purely featural, and (3) hybrid spatial-featural learning criteria. Preliminary results for a 2-dimensional data set (of L-junctions) embedded in a high-dimensional space of local oriented edge features show that the hybrid approach produces significantly better organized maps than does either the purely spatial or the purely featural learning function, where map quality is quantified in terms of smoothness and coverage of the original data set.
This work is supported by NEI grant EY016093.