We examined in another way which model of likelihoods and fusion best accounted for the data. In causal-inference models (Körding et al., 2007; Sato et al., 2007), cue fusion is linked to the probability of inferring one as opposed to two causes for two sensory measurements. We simulated two versions of the causal model: one with Gaussian likelihoods (Körding et al., 2007; Sato et al., 2007) and another with heavy-tailed likelihoods (see Supplement). Körding et al. (2007) used four free parameters: one called pcommon, which equals the prior probability of one cause, p(C = 1), where C is the number of causes, and three others. We simulated three variants of the model: no fusion (pcommon = 0), complete fusion (pcommon = 1), and partial fusion (pcommon as a free parameter). The likelihood parameters (σ²D1, σ²D2, σ²T1, and σ²T2) were set by measurement and analysis in
Experiment 1. The prior over slant was assumed to be uniform; we later verified that this assumption had no impact on our main conclusions. We then measured goodness of fit in the same fashion as we did for Experiment 1. Eight models were tested: psychometric functions (df = 16–48 depending on the observer), coin flipping (df = 0), Gaussian, no fusion (df = 0, pcommon = 0), Gaussian, partial fusion (df = 1), Gaussian, complete fusion (df = 0, pcommon = 1), heavy-tailed, no fusion (df = 0, pcommon = 0), heavy-tailed, partial fusion (df = 1), and heavy-tailed, complete fusion (df = 0, pcommon = 1). The goodness of fit was normalized such that the psychometric and coin-flipping models represented the upper and lower bounds, respectively.
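As a concrete sketch of the class of model being fit (not the authors' code), the Gaussian-likelihood causal-inference observer of Körding et al. (2007) can be written as follows. All names and numerical values are hypothetical, and the Gaussian prior used here is illustrative only (the analysis above assumed a uniform prior over slant):

```python
import math

def causal_inference(x1, x2, sigma1, sigma2, mu_p=0.0, sigma_p=10.0, p_common=0.88):
    """Körding et al. (2007)-style causal-inference observer with Gaussian
    likelihoods. x1, x2 are the two sensory measurements (e.g., disparity-
    and texture-based slant estimates); sigma1, sigma2 are their likelihood
    standard deviations; mu_p, sigma_p define an illustrative Gaussian prior
    over slant; p_common is the prior probability of one cause, p(C = 1).
    Returns (model-averaged slant estimate, posterior p(C = 1 | x1, x2))."""
    v1, v2, vp = sigma1**2, sigma2**2, sigma_p**2

    # Likelihood of both measurements under a single common cause (C = 1):
    # integral over s of N(x1; s, v1) * N(x2; s, v2) * N(s; mu_p, vp)
    denom = v1 * v2 + v1 * vp + v2 * vp
    like_c1 = math.exp(
        -0.5 * ((x1 - x2) ** 2 * vp + (x1 - mu_p) ** 2 * v2 + (x2 - mu_p) ** 2 * v1) / denom
    ) / (2 * math.pi * math.sqrt(denom))

    # Likelihood under two independent causes (C = 2): marginalize each cue separately
    def marginal(x, v):
        return math.exp(-0.5 * (x - mu_p) ** 2 / (v + vp)) / math.sqrt(2 * math.pi * (v + vp))
    like_c2 = marginal(x1, v1) * marginal(x2, v2)

    # Posterior probability of a common cause (Bayes' rule over C)
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

    # Reliability-weighted fused estimate (appropriate when C = 1)
    s_fused = (x1 / v1 + x2 / v2 + mu_p / vp) / (1 / v1 + 1 / v2 + 1 / vp)
    # Single-cue estimate for cue 1 alone (appropriate when C = 2)
    s_alone = (x1 / v1 + mu_p / vp) / (1 / v1 + 1 / vp)

    # Model-averaged estimate: weight the two strategies by the posterior over C
    return post_c1 * s_fused + (1 - post_c1) * s_alone, post_c1
```

Setting p_common = 0 or 1 in this sketch recovers the no-fusion and complete-fusion variants, respectively; nearby measurements drive the posterior toward one cause and hence toward fusion, while discrepant measurements drive it toward segregation.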
Figure 10 shows the results. The goodness of fit for the heavy-tailed likelihood was always better than for the Gaussian likelihood, consistent with Experiment 1. Among the heavy-tailed models, the goodness of fit for the complete-fusion model was also consistently greater than for the no-fusion model and was nearly as great as for the partial-fusion model, which had a free parameter. The fits for the partial-fusion model were quite similar to those for the complete-fusion model, suggesting that adding pcommon as a free parameter, rather than fixing it at 1, is not necessary to account for these data. Indeed, the mean best-fitting pcommon for the heavy-tailed, partial-fusion model was
pcommon = 0.88 (±0.08), which is quite close to 1. Computing the Bayesian Information Criterion (BIC; Burnham & Anderson, 2002), which penalizes the partial-fusion models for their one free parameter, revealed decisive evidence for all observers in favor of the heavy-tailed, complete-fusion model. The results therefore suggest that observers assumed one cause, i.e., p(C = 1) ≈ 1.
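The BIC comparison above can be sketched as follows; the trial count and log-likelihoods here are placeholders for illustration, not the values from this study:

```python
import math

def bic(log_likelihood, n_params, n_trials):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L). Lower is better."""
    return n_params * math.log(n_trials) - 2.0 * log_likelihood

# Hypothetical numbers: the complete-fusion model has no free parameters
# (its likelihood parameters were fixed by Experiment 1), while the
# partial-fusion model fits one extra parameter, pcommon.
n_trials = 1000
bic_complete = bic(log_likelihood=-520.0, n_params=0, n_trials=n_trials)
bic_partial = bic(log_likelihood=-519.0, n_params=1, n_trials=n_trials)

# A positive difference favors the complete-fusion model: the one-parameter
# penalty, k * ln(n), outweighs the small gain in fit from freeing pcommon.
delta = bic_partial - bic_complete
```

By convention (e.g., the Kass–Raftery scale), a BIC difference greater than about 10 is read as very strong ("decisive") evidence for the lower-BIC model.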