For each $i$th neuron, $n_i$, in Layer 4, the bottom-up weight vector of the neuron, $\mathbf{w}_i$, and the bottom-up input to the neuron, $\mathbf{x}$, are normalized and then multiplied. The dot product is used to multiply the two vectors, as it measures the cosine of the angle between them, a measure of the similarity and match between the two vectors:

$$z_i = \frac{\mathbf{w}_i}{\|\mathbf{w}_i\|} \cdot \frac{\mathbf{x}}{\|\mathbf{x}\|}. \qquad (1)$$

We call $z_i$ the initial response, or preresponse, of the $i$th neuron before lateral interactions in the layer.
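For concreteness, Equation 1 can be sketched as follows (an illustrative Python/NumPy sketch under our own conventions; the function name, the row-per-neuron weight matrix, and the eps guard are our assumptions, not the model's published code):

```python
import numpy as np

def preresponse(W, x, eps=1e-12):
    """Preresponse of every Layer-4 neuron: the cosine between its
    bottom-up weight vector (a row of W) and the bottom-up input x.
    W: (n_neurons, n_inputs) weight matrix; x: (n_inputs,) input.
    eps guards against division by zero for all-zero vectors."""
    W_norm = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    x_norm = x / (np.linalg.norm(x) + eps)
    return W_norm @ x_norm  # dot products of normalized vectors (Equation 1)
```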
The lateral interactions, which yield the response of the neuron, consist of lateral inhibition and lateral excitation. In the current version of the model, there are no explicit lateral connections; this makes the algorithms more computationally efficient, because it avoids the oscillations needed to stabilize lateral signals, while achieving essentially the same effects. Lateral inhibition is roughly modeled by the top-$k$ winner rule, i.e., the $k \geq 1$ neurons with the highest preresponses inhibit all the other neurons, preventing those with lower preresponses from firing by setting their response values to zero. This rule simulates the lateral competition process and was proposed by Fukai and Tanaka (1997) and O'Reilly (1998), among others, who used the term $k$-winners-take-all (kWTA). The preresponses of these top-$k$ winners are then multiplied by a linearly declining function of the neuron's rank:

$$z_i \leftarrow \frac{k - r_i}{k}\, z_i, \qquad (2)$$
where $\leftarrow$ denotes assignment of the value, and $0 \leq r_i < k$ is the rank of the neuron with respect to its preresponse value (the neuron with the highest preresponse has a rank of zero, the second most active neuron gets a rank of one, etc.). Each neuron competes with a number of other neurons for its rank within its local neighborhood in the two-dimensional grid of neurons of the layer.
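A minimal sketch of the top-$k$ rule with the rank modulation of Equation 2, ignoring for now the local competition window described next (the function and variable names are our own):

```python
import numpy as np

def kwta_response(z, k):
    """Top-k winner rule with the linear rank modulation of Equation 2.
    z: (n_neurons,) array of preresponses. Returns the response vector:
    the k neurons with the highest preresponses fire, scaled by
    (k - rank)/k (rank 0 = highest preresponse); all others are zero."""
    y = np.zeros_like(z, dtype=float)
    winners = np.argsort(z)[::-1][:k]  # indices of the k highest preresponses
    for rank, i in enumerate(winners):
        y[i] = (k - rank) / k * z[i]   # Equation 2: z_i <- ((k - r_i)/k) z_i
    return y
```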
A parameter called the competition window size, $\omega$, determines the local competitors of the neuron. A competition window of size $\omega = 5$, centered on the neuron, is used for the reported results. The modulation in Equation 2 simulates lateral inhibition among the top-$k$ winners.
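The windowed competition might be sketched as follows (a sketch under our assumptions: a square grid of neurons, windows clipped at the layer borders, and ties broken in favor of firing; the text does not specify these details):

```python
import numpy as np

def windowed_response(z_grid, k, omega=5):
    """Local competition: each neuron is ranked only against the
    neurons inside the omega x omega window centered on it in the
    2-D grid. Neurons with local rank >= k are inhibited (set to 0);
    the rest are modulated by (k - rank)/k as in Equation 2.
    Borders are handled by clipping the window (our assumption)."""
    h, w = z_grid.shape
    half = omega // 2
    y = np.zeros_like(z_grid, dtype=float)
    for r in range(h):
        for c in range(w):
            window = z_grid[max(0, r - half):r + half + 1,
                            max(0, c - half):c + half + 1]
            # local rank = number of competitors with a higher preresponse
            rank = int(np.sum(window > z_grid[r, c]))
            if rank < k:
                y[r, c] = (k - rank) / k * z_grid[r, c]
    return y
```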