In addition to the statistical analyses conducted in
Experiment 1, the larger number of participants and comparison durations in
Experiment 2 allowed us to further explore our data by performing an orthogonal (or Deming) regression (
Deming, 1943;
Hall, 2014;
Kane & Mroch, 2020) between our discrimination and confidence estimates of perceived duration for each speed condition. This analysis can be used to assess the equivalence of two measurement instruments. Unlike ordinary linear regression, orthogonal regression assumes that both the dependent and the independent variables (taken to be linearly related) are measured with error, as is the case in the present study, and it minimizes the distances of the data points in both the
x and
y directions from the fitted line; that is, it minimizes the sum of squared orthogonal deviations. It also produces confidence-interval estimates for the slope and the intercept of the orthogonal fit, which can be used to test whether these parameters differ significantly from 1 and 0, respectively, indicating a deviation of the fit from the identity line (i.e., from a one-to-one correspondence between the two measures). In addition, we determined the Bayes factor, which quantifies the evidence, given the data, for the reduced model (slope fixed to 1 and intercept fixed to 0) over the orthogonal model. To calculate the Bayes factor, we used the large-sample approximation method (
Burnham & Anderson, 2004). A similar application of this method can be found, for example, in
Schütz, Kerzel, and Souto (2014). We first determined the Bayesian information criterion (BIC) (
Schwarz, 1978) for both methods:
\begin{eqnarray*}{\rm{BIC}} = n\ln \left( {\frac{{RSS}}{n}} \right) + k\ln \left( n \right)\end{eqnarray*}
where
n corresponds to the number of participants,
RSS is the residual sum of squares, and
k is the number of free parameters (0 for the reduced model and 2 for the orthogonal model). Then, for each model
i, we determined the posterior probability
p:
\begin{eqnarray*}{p_i} = \;\frac{{{e^{ - 0.5\Delta BI{C_i}}}}}{{\mathop \sum \nolimits_{r = 1}^R {e^{ - 0.5\Delta BI{C_r}}}}}\end{eqnarray*}
where ∆
BIC is, for each model, the difference between the
BIC for that model and the lower of the two
BIC values (so the ∆
BIC for the minimum-
BIC model is 0). Finally, the Bayes factor was calculated as the ratio of the two posterior probabilities:
\begin{eqnarray*}B{F_{10}} = \frac{{{p_{reduced}}}}{{{p_{orthogonal}}}}\end{eqnarray*}
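The pipeline above can be sketched in a few lines of code. The following is a minimal illustration, not the implementation used in the study: it assumes equal error variances in the x and y measurements (the λ = 1 case of Deming regression, which has the closed-form slope used below), takes the residual sum of squares as the sum of squared orthogonal deviations for both models, and uses hypothetical function names.

```python
import math

def deming_fit(x, y):
    # Orthogonal (Deming, lambda = 1) regression: closed-form slope that
    # minimizes the sum of squared perpendicular distances to y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return a, b

def orthogonal_rss(x, y, a, b):
    # Sum of squared perpendicular distances of the points from y = a + b*x.
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (1 + b ** 2)

def bic(rss, n, k):
    # BIC = n * ln(RSS / n) + k * ln(n)
    return n * math.log(rss / n) + k * math.log(n)

def bayes_factor(x, y):
    n = len(x)
    a, b = deming_fit(x, y)
    rss_orth = orthogonal_rss(x, y, a, b)      # orthogonal model: k = 2
    rss_red = orthogonal_rss(x, y, 0.0, 1.0)   # reduced model (y = x): k = 0
    bics = {"orthogonal": bic(rss_orth, n, 2), "reduced": bic(rss_red, n, 0)}
    best = min(bics.values())
    # Posterior probability of each model from its delta-BIC.
    weights = {m: math.exp(-0.5 * (v - best)) for m, v in bics.items()}
    total = sum(weights.values())
    post = {m: w / total for m, w in weights.items()}
    # BF_10 as defined in the text: p_reduced / p_orthogonal.
    return post["reduced"] / post["orthogonal"], a, b
```

For data lying close to the identity line, the fitted slope and intercept come out near 1 and 0, and the Bayes factor exceeds 1, favoring the reduced model: the small gain in residual fit from freeing the two parameters does not offset the BIC penalty of 2 ln(n).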