Threshold durations were estimated separately for each of the five training sessions and each training group. For each threshold duration, the estimation process first required plotting the group's mean orientation sensitivity at each of the seven stimulus durations. At each stimulus duration, signal detection procedures (Green & Swets, 1966) were used to compute orientation sensitivity (d′) as follows: Hits and false alarms were operationally defined as clockwise responses made when the second Gabor patch was, respectively, clockwise or anticlockwise to the first. The standard deviation of the function that converted the proportion of hits and false alarms to z scores was set to 0.5. Consequently, d′ = 1.0 corresponded to 84% correct without response bias.
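
For concreteness, this computation reduces to a difference of quantiles drawn from a normal distribution whose standard deviation is 0.5. The SciPy-based sketch below is illustrative only; the function name and the use of scipy.stats.norm are assumptions rather than the analysis code actually used.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Orientation sensitivity: difference of z scores taken from a normal
    distribution with standard deviation 0.5, so that d' = 1.0 corresponds
    to 84% correct in the absence of response bias."""
    return norm.ppf(hit_rate, scale=0.5) - norm.ppf(false_alarm_rate, scale=0.5)

# Unbiased observer at 84% correct: hit rate .84, false-alarm rate .16
print(d_prime(0.84, 0.16))  # ~1.0
```
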
A least squares procedure identified the best fitting power function that related orientation sensitivity (d′) to stimulus duration. In all cases, the power function provided a statistically significant fit (p < .001) to the data.
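
As a sketch of the curve-fitting step, a routine such as scipy.optimize.curve_fit can estimate a two-parameter power function, d′ = a·t^b, by least squares. The exact parameterization, the placeholder data, and the correlation-based check of fit below are assumptions for illustration, not the reported procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def power_fn(duration, a, b):
    """Power function relating orientation sensitivity (d') to stimulus duration t: a * t**b."""
    return a * duration ** b

# Placeholder group means (illustrative, not the observed data):
# d' at each of the seven stimulus durations (ms)
durations = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0, 640.0])
d_values = np.array([0.35, 0.55, 0.80, 1.05, 1.35, 1.65, 1.95])

# Least squares estimates of the power function's parameters
(a_hat, b_hat), _ = curve_fit(power_fn, durations, d_values, p0=[0.2, 0.5])

# One possible goodness-of-fit check: correlation between observed and predicted d'
r, p = pearsonr(d_values, power_fn(durations, a_hat, b_hat))
```
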
Because each fit was significant, we could fairly interpolate the stimulus duration (the value on the duration axis) corresponding to a given level of orientation sensitivity (d′), which we call the criterion sensitivity. The criterion sensitivity for each group was set at a level that would minimize “floor” and “ceiling” effects. Specifically, for each group, the criterion sensitivity was operationally defined as the average of the d′ values empirically observed at the extremes of our training conditions. These extremes were the first day of training at the briefest stimulus duration (where d′ would likely be lowest) and the final day of training at the longest stimulus duration (where d′ would likely be greatest).
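
A minimal sketch of how the criterion d′ and the corresponding threshold duration could be derived under these definitions; the numerical inputs and the power-function parameters below are placeholders, not observed values.

```python
def criterion_sensitivity(d_first_day_briefest, d_final_day_longest):
    """Criterion d': mean of the d' values observed at the two training extremes
    (first day at the briefest duration; final day at the longest duration)."""
    return (d_first_day_briefest + d_final_day_longest) / 2.0

def threshold_duration(criterion_d, a, b):
    """Invert the fitted power function d' = a * t**b to obtain the stimulus
    duration t at which sensitivity reaches the criterion d'."""
    return (criterion_d / a) ** (1.0 / b)

# Placeholder inputs for illustration only
crit = criterion_sensitivity(0.6, 2.0)               # -> 1.3
t_thresh = threshold_duration(crit, a=0.13, b=0.42)  # threshold duration (ms)
```
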
The resultant criterion sensitivity values were 1.26 for the cardinal group and 1.41 for the oblique group. With those values in hand, we could track daily changes in threshold duration. Significant downward trends would signify a hastening of orientation sensitivity (d′), that is, the attainment of criterion sensitivity at progressively briefer stimulus durations.
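
One way such a downward trend could be tested (the specific trend statistic is an assumption here, not stated above) is an ordinary linear regression of daily threshold duration on session number, with a reliably negative slope indicating hastening.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder threshold durations (ms) for the five training sessions
sessions = np.arange(1, 6)
thresholds = np.array([240.0, 205.0, 180.0, 164.0, 150.0])

trend = linregress(sessions, thresholds)
# A negative trend.slope with a small trend.pvalue would indicate that
# criterion d' is reached at progressively briefer stimulus durations.
```
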