As the measures of sensitivity do not allow for a direct comparison between the visual and tactile tasks, we further analyzed the effects of modality and comparison on confidence efficiency using the CMI (see Methods). For the visual task, the average CMI was 26.03 ± 2.11 in the unimodal condition and 28.90 ± 2.50 in the cross-modal condition. For the tactile task, the average CMI was 28.96 ± 1.57 in the unimodal condition and 31.90 ± 2.57 in the cross-modal condition.
One-sample t-tests confirmed that CMIs were significantly greater than zero in all conditions (all ps < 0.001).
Figure 5 displays average CMIs for both modalities and types of comparison.
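The test of the CMIs against zero amounts to a one-sample t-test per condition. The following is a minimal sketch of that step, assuming SciPy as the analysis library; the per-participant arrays below are hypothetical placeholders for the real CMIs (the sample size of 54 is inferred from the degrees of freedom reported in the ANOVA):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant CMIs (54 observers), one array per condition;
# the real values come from the CMI computation described in the Methods.
rng = np.random.default_rng(0)
cmi = {
    ("visual", "unimodal"):     rng.normal(26.0, 15.5, 54),
    ("visual", "cross-modal"):  rng.normal(28.9, 18.4, 54),
    ("tactile", "unimodal"):    rng.normal(29.0, 11.5, 54),
    ("tactile", "cross-modal"): rng.normal(31.9, 18.9, 54),
}

# One-sample t-test of each condition's CMIs against zero
for (modality, comparison), values in cmi.items():
    t, p = stats.ttest_1samp(values, popmean=0.0)
    print(f"{modality:>7s} {comparison:<11s}: "
          f"t({len(values) - 1}) = {t:.2f}, p = {p:.2g}")
```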
We submitted CMIs to a repeated-measures ANOVA with the within-subject factors modality (visual vs. tactile) and comparison (unimodal vs. cross-modal). There was no significant main effect of modality, F(1, 53) = 2.28, p = 0.137, \(\eta_p^2\) = 0.04, or comparison, F(1, 53) = 2.50, p = 0.120, \(\eta_p^2\) = 0.05, and no interaction between modality and comparison, F(1, 53) < 0.01, p = 0.986, \(\eta_p^2\) < 0.01. Since the absence of these effects is exactly what the hypothesis that confidence is stored in a modality-independent format predicts, we calculated the corresponding Bayes factors (BFs) to quantify the evidence for these null results. The BF analyses provided evidence that neither modality, \(BF_{10}\) = 0.42, nor comparison, \(BF_{10}\) = 0.41, considered in isolation, nor their interaction, \(BF_{10}\) = 0.19, had an effect on metacognitive efficiency.
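A hedged sketch of this 2 × 2 repeated-measures ANOVA is shown below, assuming the pingouin Python package and a hypothetical long-format table `cmi_long` (the original analysis may well have been run in different software; the simulated values only make the sketch runnable):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per participant x modality x
# comparison cell, with that cell's CMI (values simulated here).
rng = np.random.default_rng(0)
rows = [
    {"subject": s, "modality": mod, "comparison": comp,
     "cmi": rng.normal(28.0, 15.0)}
    for s in range(54)
    for mod in ("visual", "tactile")
    for comp in ("unimodal", "cross-modal")
]
cmi_long = pd.DataFrame(rows)

# 2 x 2 repeated-measures ANOVA (modality x comparison) on the CMIs
aov = pg.rm_anova(data=cmi_long, dv="cmi",
                  within=["modality", "comparison"], subject="subject")
print(aov)

# The Bayes factors reported in the text would come from a dedicated
# Bayesian ANOVA tool (e.g., JASP or the BayesFactor package in R);
# pingouin does not compute them for repeated-measures designs.
```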
The previous analysis used the CMI, which is based on the global psychometric function (see again Figure 3) in which each stimulus strength presented in one interval is compared to all the stimulus strengths presented in the other interval. We can also perform a finer analysis by fitting the confidence choice probabilities for each pair of stimulus strengths across the two intervals. Because this analysis requires a large number of trials (Mamassian & de Gardelle, 2021), we pooled the trials across all participants after transforming their perceptual data into standard scores (subtracting the perceptual bias and dividing by the sensory noise). The data were then grouped into six equal-sized bins and submitted to a model of confidence forced-choice to fit the 576 confidence choice probabilities (i.e., \((6_{\rm{visual}} + 6_{\rm{tactile}})^{2_{\rm{intervals}}} \times 4_{{\rm{type-1\;responses}}}\)) using the Matlab code package provided in Mamassian & de Gardelle (2021).
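A minimal sketch of the pooling, standardization, and binning step is given below; the column names and toy trial table are hypothetical, and the confidence forced-choice fit itself (done with the authors' Matlab toolbox) is not reproduced here:

```python
import numpy as np
import pandas as pd

# Hypothetical trial table: one row per presented stimulus, with the
# participant id, modality, and raw stimulus strength of that trial.
rng = np.random.default_rng(1)
trials = pd.DataFrame({
    "subject":  np.repeat(np.arange(4), 200),
    "modality": np.tile(np.repeat(["visual", "tactile"], 100), 4),
    "stimulus": rng.normal(0.0, 1.0, 800),
})

# Hypothetical per-participant perceptual bias and sensory noise,
# taken from each observer's fitted psychometric function.
bias  = {s: rng.normal(0.0, 0.1) for s in range(4)}
noise = {s: rng.uniform(0.8, 1.2) for s in range(4)}

# Standard scores: subtract the perceptual bias, divide by the sensory noise
trials["z_stimulus"] = (
    (trials["stimulus"] - trials["subject"].map(bias))
    / trials["subject"].map(noise)
)

# Pool all participants and split each modality into six equal-sized bins
trials["bin"] = (
    trials.groupby("modality")["z_stimulus"]
          .transform(lambda z: pd.qcut(z, q=6, labels=False))
)

# With 6 visual + 6 tactile bins per interval and 4 combinations of type-1
# responses, the confidence model is fitted to (6 + 6)**2 * 4 = 576
# confidence choice probabilities.
print(trials.groupby(["modality", "bin"]).size())
print((6 + 6) ** 2 * 4)  # 576
```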
We considered two models. In model 1, we only fitted the confidence choice probabilities for the unimodal comparisons (visual-visual and tactile-tactile), but applied this model to all the confidence choice probabilities (
Figure 6A). In model 2, we fitted the confidence choice probabilities for both unimodal and cross-modal comparisons (
Figure 6B). Replicating the previous analysis, we found no significant difference in metacognitive ability between the two tasks: confidence efficiency was 0.376 for the visual task (95% CI = [0.309, 0.463], obtained from 100 bootstrap samples) and 0.365 for the tactile task (95% CI = [0.294, 0.427]). Importantly, there was no difference in goodness of fit between models 1 and 2 as estimated by the Bayesian information criterion (BIC; Figure 6C). A Kolmogorov-Smirnov test indicated that the two models did not differ significantly in the quality of their fits, D(100) = 0.140, p = 0.261. In other words, the cross-modal confidence comparisons could be predicted very well from the unimodal comparisons alone, consistent with the hypothesis that confidence is computed in a modality-independent format.
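The comparison of the two models can be illustrated with a two-sample Kolmogorov-Smirnov test over their bootstrap BIC distributions; the sketch below uses made-up BIC values (the real ones come from the model fits summarized in Figure 6C) and assumes SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical bootstrap distributions of the BIC for the two models
# (100 bootstrap samples each, standing in for the values obtained from
# the actual confidence forced-choice fits).
rng = np.random.default_rng(2)
bic_model1 = rng.normal(1500.0, 20.0, 100)  # fitted on unimodal comparisons only
bic_model2 = rng.normal(1498.0, 20.0, 100)  # fitted on unimodal + cross-modal

# Two-sample Kolmogorov-Smirnov test comparing the two BIC distributions
d, p = stats.ks_2samp(bic_model1, bic_model2)
print(f"D(100) = {d:.3f}, p = {p:.3f}")
```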