Abstract
Neural networks are generally based on the assumption that connections represent strengths of association. Although this assumption has led to major breakthroughs in the study of cognition, it has difficulty accounting for some of the most basic data collected on humans. First, this kind of network has conceptual difficulties generating predictions about reaction times, the most obvious being that empirical reaction time distributions are always asymmetrical. Second, learning is generally slow, even with second-order gradient descent.
We propose a network whose architecture is somewhat similar to that of a standard supervised network. The main difference is that we do not assume that connections represent strengths of association, and thus we do not use the weighted-sum approach. Instead, we assume that all connections are equally strong; what is crucial is the time they take to become activated. Outputs are produced by assessing whether a critical number of connections are activated at any given moment, and the first output to reach criterion is triggered. In essence, this type of network is the purest form of a winner-take-all network; at the same time, it is an accumulator model of the kind often studied in cognitive psychology.
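To make the decision mechanism concrete, the following is a minimal sketch of a single trial, assuming connection latencies are given as a matrix; the function name `race_trial`, the NumPy representation, and the exponential latency distribution in the usage example are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def race_trial(stimulus, latencies, criterion):
    """One trial of a parallel race: every connection is equally strong,
    so only the times at which connections become activated matter.

    stimulus  -- binary vector of active inputs, shape (n_inputs,)
    latencies -- latencies[i, j] = time for the connection from input i
                 to output j to become activated, shape (n_inputs, n_outputs)
    criterion -- number of activated connections an output needs to fire
    """
    # Only connections leaving active inputs enter the race.
    active = latencies[stimulus.astype(bool)]      # (n_active, n_outputs)
    # Each output fires when its criterion-th incoming connection becomes
    # active, i.e. at the criterion-th smallest latency among its inputs.
    arrivals = np.sort(active, axis=0)
    firing_times = arrivals[criterion - 1]         # (n_outputs,)
    winner = int(np.argmin(firing_times))          # first output to reach criterion
    return winner, float(firing_times[winner])     # response and its response time

# Illustrative usage: 4 inputs, 2 outputs, latencies drawn at random.
rng = np.random.default_rng(0)
latencies = rng.exponential(1.0, size=(4, 2))
winner, rt = race_trial(np.array([1, 1, 0, 1]), latencies, criterion=2)
```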
We explore such a network, called a parallel race network, and propose a simple learning rule that can learn arbitrary mappings of inputs to outputs. Among other things, this network can learn the XOR problem without the need for hidden units.
Learning is fast: the XOR problem, for example, can be learned in around forty exposures to the stimulus set. In addition, it is simple to derive the predicted shape of the response time distributions, which turns out to be very similar, if not identical, to the shape of human response time distributions. Finally, trade-offs are naturally implemented in this network by increasing the decision criterion, so ROC curves can also be generated quite easily.
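As a rough illustration of the criterion-based trade-off, one could continue the sketch above and rerun it at several criteria, resampling latencies on every trial to mimic trial-to-trial noise; the exponential noise model and the specific mean latencies below are assumptions made purely for illustration:

```python
# Continues the race_trial sketch above (same imports and function).
# Mean latencies chosen so that the pattern [1, 1, 0, 1] maps to output 0:
# its active connections reach output 0 faster, on average, than output 1.
mean_latencies = np.array([[0.5, 1.5],
                           [0.5, 1.5],
                           [1.0, 1.0],
                           [0.5, 1.5]])
stimulus = np.array([1, 1, 0, 1])
rng = np.random.default_rng(1)

for criterion in (1, 2, 3):
    trials = [race_trial(stimulus, rng.exponential(mean_latencies), criterion)
              for _ in range(1000)]
    accuracy = np.mean([winner == 0 for winner, _ in trials])
    mean_rt = np.mean([rt for _, rt in trials])
    print(f"criterion={criterion}: accuracy={accuracy:.2f}, mean RT={mean_rt:.2f}")
```

Raising the criterion forces more connections to complete before a response is triggered, lengthening response times while reducing the chance that a fast but incorrect output wins; sweeping the criterion therefore traces out the trade-off underlying the ROC curve.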