Abstract
Understanding how long decisions take is of fundamental importance for uncovering the underlying mechanisms of perceptual decision making. Consequently, there has been enormous interest in the subject and a long tradition of creating models to jointly predict choice and reaction time (RT) data. However, most previous studies only used two conditions to manipulate the speed-accuracy tradeoff (SAT): one with higher and one with lower speed pressure. Here we report on the results of a large study where we created five distinct SAT conditions and were thus able to describe with much higher precision the relationship between speed and accuracy. Subjects (N = 15) came for five separate sessions, completing a total of 5,000 trials each. The task was to indicate whether a Gabor patch presented for 33 ms was tilted clockwise or counterclockwise. We used such a short stimulus presentation in order to precisely describe the speed of information propagation through the system. We found that the fastest median RTs were 230-250 ms with performance at chance level. Performance for higher median RTs increased steeply until reaching a plateau around 500-550 ms. Simulations from a drift diffusion model (DDM) instead suggested that performance should not saturate for at least another second. Further, we observed robust U-shaped curves for the RT difference between correct and incorrect trials as a function of accuracy. However, the DDM could only generate increasing or decreasing curves. Finally, we showed that the curves of accuracy as a function of median RT had the same stereotyped shape across subjects. Based on our findings, we created a new measure of SAT that does not depend on the DDM's parametric assumptions. Overall, our results demonstrate that the DDM does not faithfully capture the dynamics of SAT and highlight the need for large, data-driven investigations of the true relationship between speed and accuracy.
Meeting abstract presented at VSS 2018
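For readers unfamiliar with the drift diffusion model discussed above, a minimal simulation sketch follows. All parameter values (drift rate, boundary separation, non-decision time, noise) are illustrative defaults, not values fitted to the data reported in the abstract; the function and variable names are ours, not from the study.

```python
import random
import statistics

def simulate_ddm_trial(drift=0.8, boundary=1.0, ndt=0.3,
                       dt=0.001, noise=1.0, rng=random):
    """One trial of a basic drift diffusion model (illustrative sketch).

    Evidence starts at 0 and accumulates in steps of size dt with mean
    rate `drift` plus Gaussian noise, until it crosses +boundary
    (correct response) or -boundary (error). Non-decision time `ndt`
    is added to the crossing time to give the RT in seconds.
    """
    x = 0.0                      # accumulated evidence
    t = 0.0                      # elapsed decision time
    step_sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= boundary, t + ndt)  # (correct?, RT in seconds)

# Simulate many trials to estimate accuracy and median RT,
# the two quantities the abstract relates to each other.
rng = random.Random(0)
trials = [simulate_ddm_trial(rng=rng) for _ in range(2000)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
median_rt = statistics.median(rt for _, rt in trials)
```

In this framework, raising `boundary` trades speed for accuracy (longer RTs, fewer errors), which is how the two-condition SAT manipulations criticized above are typically modeled.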