Abstract
The difficulty of a visual discrimination task depends on stimulus strength (e.g., contrast C) and stimulus value (e.g., Gabor orientation θ relative to vertical in an orientation-discrimination task). Signal detection theory predicts poor performance when the expected likelihood function of the decision variable has high variance (small C) or has a mean close to the decision criterion (small θ). How does the decision variable evolve over time, and what are the consequences for reaction-time (RT) tasks? We consider two neural population-code models. (1) Drift-diffusion model (DDM): Momentary discrimination evidence accumulates until a decision bound is reached (a single-stage decision process). Predicted RT(C,θ) decreases with increasing C and θ, but with an interaction: RT as a function of C is steeper for lower values of θ. (2) Two-stage model (TSM): Stage 1: Neural responses are accumulated until a threshold reliability of the estimated orientation is reached (or until at least one neuron has fired a criterion number of spikes). Stage 2: The estimate is compared to a decision criterion, a process that takes longer for more difficult discriminations. Stage 1's duration is a function of C alone; stage 2's duration is a function of θ alone (i.e., no interaction). We report three discrimination experiments: (1) location discrimination of a Gaussian blob (RT = f(x,C)), (2) orientation discrimination of a Gabor (RT = f(θ,C)), and (3) direction discrimination of a random-dot kinematogram with coherence = C (RT = f(θ,C)). In all three cases, RTs plotted against C for different values of θ are largely parallel, supporting TSM over DDM. TSM, unlike DDM, also predicts that in a cued-response task, RT will decrease with increasing response-cue delay. We measured RT as a function of response-cue delay in the motion-discrimination task; RT decreased with increasing cue delay. Overall, our results favor the two-stage model of discrimination over the drift-diffusion model.
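To make the contrast between the two predictions concrete, the minimal sketch below (Python) compares mean RT as a function of C for two values of θ under each model. It is an illustration under assumed forms, not the authors' fitted models: for the DDM it assumes a drift rate proportional to C·sin(θ) and uses the standard mean first-passage time of a diffusion to symmetric bounds; for the TSM it assumes hypothetical stage durations k1/C and k2/θ. All parameters (k, bound, sigma, k1, k2, t_nd) are illustrative choices, not values from the abstract.

```python
import numpy as np

def ddm_rt(C, theta_deg, k=500.0, bound=1.0, sigma=1.0, t_nd=0.3):
    # Assumed drift rate growing with both contrast and orientation offset.
    v = k * C * np.sin(np.deg2rad(theta_deg))
    # Mean first-passage time of a diffusion starting at 0 with symmetric
    # bounds at +/-bound: (a/v) * tanh(a*v / sigma^2).
    decision_time = (bound / v) * np.tanh(bound * v / sigma**2)
    return decision_time + t_nd

def tsm_rt(C, theta_deg, k1=0.02, k2=0.4, t_nd=0.3):
    # Stage 1 (estimation) depends on contrast alone; stage 2 (comparison
    # with the criterion) depends on orientation offset alone.
    return k1 / C + k2 / theta_deg + t_nd

contrasts = [0.05, 0.1, 0.2, 0.4]
for theta in (2.0, 8.0):  # degrees from vertical
    ddm = [round(ddm_rt(C, theta), 2) for C in contrasts]
    tsm = [round(tsm_rt(C, theta), 2) for C in contrasts]
    print(f"theta={theta:.0f} deg  DDM: {ddm}  TSM: {tsm}")
# DDM: the gap between the two theta rows shrinks as C grows
# (RT vs. C is steeper for the smaller theta), i.e. an interaction.
# TSM: the two rows differ by a constant offset, i.e. parallel RT-vs-C curves.
```

Under these assumptions, the DDM's RT-versus-C curves for different θ are not parallel (their separation changes with C), whereas the TSM's curves are offset by a constant, which is the "parallel curves" signature used above to adjudicate between the models.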
Meeting abstract presented at VSS 2016