Abstract
When combining information to estimate properties such as size, depth and shape, observers are thought to combine cues optimally, weighting each cue in proportion to its relative reliability so as to minimise the variance of the combined-cue percept (Landy, Maloney, Johnston, & Young, 1995). However, experimental tests of optimal cue combination rarely assess performance relative to other candidate models (e.g. 'cue veto', 'go with most reliable cue', or 'probabilistic cue switching'). This is problematic because, for a wide range of relative reliabilities, the predictions of these models are very similar to one another. Here, we present Monte Carlo simulations of end-to-end experiments in which the simulated observers performed in accordance with the predictions of an optimal cue combination model. We varied the relative reliability of the available cues, the number of simulated observers, and parameters of the experimental psychometric procedure, such as the sampling of the psychometric functions, in each case fitting the simulated data with a cumulative Gaussian. By comparing the performance of our simulated optimal observers with the predictions of the alternative candidate models, we calculated the proportion of times these alternative models could be rejected. We find that models are maximally distinguishable when the available cues have equal reliabilities and, as expected, when the number of participants per experiment increases. Models such as 'probabilistic cue switching' are easier to reject than 'go with most reliable cue' and 'cue veto'. We examine a series of published studies that claim to support optimal cue combination and report on their ability to distinguish between the alternative models. This analysis allows us to specify how experiments should be designed if they aim to distinguish between candidate models of reliability-based sensory cue combination.
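For reference, the minimum-variance combination rule that the simulated optimal observers follow is the standard one from Landy et al. (1995): writing \(\hat{S}_1, \hat{S}_2\) for the single-cue estimates with variances \(\sigma_1^2, \sigma_2^2\),

\[
\hat{S} = w_1 \hat{S}_1 + w_2 \hat{S}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma_{\mathrm{opt}}^2 = \frac{\sigma_1^2 \, \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2).
\]

Below is a minimal sketch of one such end-to-end simulation, assuming a two-alternative discrimination design, illustrative stimulus levels and trial counts, and a least-squares cumulative-Gaussian fit via SciPy; the abstract does not specify these details, so every design parameter here is an assumption.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def simulate_optimal_observer(sigma1, sigma2, levels, n_trials):
    """Simulate a 2AFC discrimination experiment for an observer who
    combines two cues with reliability-proportional weights.
    Returns the proportion of 'comparison greater' responses per level.
    (levels and n_trials are illustrative assumptions.)"""
    # Minimum-variance combined estimate has this standard deviation.
    sigma_c = np.sqrt((sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2))
    p_greater = np.empty(len(levels))
    for i, delta in enumerate(levels):
        # Noisy internal estimates of the comparison-minus-standard difference.
        estimates = rng.normal(delta, sigma_c, size=n_trials)
        p_greater[i] = np.mean(estimates > 0.0)
    return p_greater

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative design: cue reliabilities, stimulus levels, trials per level.
sigma1, sigma2 = 1.0, 1.0           # equal reliabilities: maximally diagnostic
levels = np.linspace(-3.0, 3.0, 9)  # comparison minus standard
p_hat = simulate_optimal_observer(sigma1, sigma2, levels, n_trials=40)

# Fit a cumulative Gaussian to the simulated data, as in the abstract.
(mu_fit, sigma_fit), _ = curve_fit(cum_gauss, levels, p_hat, p0=[0.0, 1.0])

# Compare the fitted slope with candidate-model predictions.
sigma_optimal = np.sqrt((sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2))
sigma_best_single = min(sigma1, sigma2)  # 'go with most reliable cue'
print(f"fitted sigma: {sigma_fit:.3f}, optimal: {sigma_optimal:.3f}, "
      f"single best cue: {sigma_best_single:.3f}")
```

Repeating this over many simulated observers, and asking how often the fitted slope lies significantly closer to \(\sigma_{\mathrm{opt}}\) than to an alternative model's prediction (e.g. \(\min(\sigma_1, \sigma_2)\) for 'go with most reliable cue'), yields rejection proportions of the kind the abstract describes.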
Meeting abstract presented at VSS 2018