Abstract
Perceptual decision making requires translating sensory evidence into a representational space that identifies action-relevant categories (i.e., decision evidence). Although the neural dynamics of evidence integration have been investigated, it remains unclear how the representational space is organized for sensory versus decision evidence, how it adapts to task demands, and how it contributes to individual differences in the quality of decision making. We decoded the representation of evidence during multisampling categorization tasks (Wyart et al., 2012, Neuron) using multivariate pattern classification analysis of scalp-recorded EEG. Participants made binary judgments about whether a series of Gabor patterns with variable orientations (sensory evidence) was, on average, closer in angular distance (decision evidence) to one of two sets of axes (e.g., cardinal or diagonal axes). We found that the representational space of sensory evidence showed a graded property, suggesting feature-selective responses. In contrast, representations of decision evidence showed the combined effect of a graded coding of evidence strength and a non-graded, binary coding along category boundaries. The neural measure of decision evidence predicted trial-by-trial errors, as well as individuals' overall performance, to a much larger degree than the measure of sensory evidence did. Over the course of evidence presentation, the category distinction in the decision evidence representation was strengthened. Furthermore, when decision rules shifted from block to block, the category boundary was flexibly adjusted, indicating that the translation from sensory to decision evidence is under top-down control. These findings demonstrate a novel approach to characterizing the organization of decision-relevant representations.
Meeting abstract presented at VSS 2016
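The mapping from sensory evidence (sample orientations) to decision evidence (average angular distance to a category's axes) described above can be illustrated with a minimal sketch. This is an illustrative formalization only, not the computation used in the original study; the function names and the eight-sample trial are hypothetical, and orientations are assumed to be in degrees with a 180-degree periodicity.

```python
import numpy as np

def axis_distance(theta_deg, axes_deg):
    # Smallest angular distance from each orientation to the nearest axis
    # in the set; orientations are periodic modulo 180 degrees.
    d = np.abs((np.asarray(theta_deg, float)[:, None]
                - np.asarray(axes_deg, float)[None, :] + 90) % 180 - 90)
    return d.min(axis=1)

def decision_evidence(orientations_deg):
    # Signed decision evidence for one trial: positive values mean the
    # samples are, on average, closer to the cardinal axes (0, 90 deg),
    # negative values mean closer to the diagonal axes (45, 135 deg).
    d_cardinal = axis_distance(orientations_deg, [0, 90])
    d_diagonal = axis_distance(orientations_deg, [45, 135])
    return float((d_diagonal - d_cardinal).mean())

# Hypothetical trial: eight Gabor samples, mostly near-cardinal orientations.
samples = [10, 80, 95, 170, 5, 88, 12, 92]
choice = "cardinal" if decision_evidence(samples) > 0 else "diagonal"
```

Under a block-wise rule shift, as in the abstract, only the axis sets passed to `axis_distance` would change, leaving the sensory representation (the orientations themselves) untouched.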