Abstract
Interpreting the visual world often requires us to judge whether two stimuli are the same or different. The computational mechanisms by which we assess visual similarity and difference, however, remain poorly understood. One puzzle is the well-replicated finding that human observers are faster to judge two multidimensional visual stimuli to be the same than to be different (the 'fast-same' effect). This is curious, because visual similarity can only be confirmed after an exhaustive search over all relevant features or dimensions. Moreover, although participants are slower to judge two stimuli as different when they mismatch on only one dimension rather than on two, this effect reverses when sameness is defined disjunctively rather than conjunctively (the 'criterion effect'). A unified account of perceptual comparison that accommodates both phenomena has remained elusive for many years. Here, we show that an ideal observer model in which stimulus features are processed simultaneously can account for both effects. The model iteratively estimates, using Bayes' rule, the posterior belief about each possible identity of a comparison stimulus that is being compared to a previously viewed standard. The total log posterior odds for 'same' versus 'different' identities are accumulated to a threshold value, at which point a choice is triggered. Comparing simulated cycles-to-bound with human decision latencies, the model accurately predicts the 'fast-same' and 'criterion' effects in visual comparison tasks involving both discrete and continuously varying feature information. The only free parameter in the model reflects the observer's prior belief that standard and comparison will match, and variation in this parameter alone is sufficient to explain the complex patterns of data observed in the conjunctive and disjunctive versions of the task.
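The accumulation scheme described above can be sketched in code. The following is a minimal illustrative simulation only, not the authors' implementation: the binary feature space, the Gaussian sampling-noise model, the function name, and all parameter values (noise, bound, `p_match`) are assumptions chosen for concreteness. Each cycle, every feature of the comparison is sampled simultaneously, the posterior over candidate identities is updated by Bayes' rule, and the log posterior odds of the 'same' identity versus all 'different' identities are checked against a decision bound; the returned cycle count plays the role of cycles-to-bound. The free parameter `p_match` is the prior probability that standard and comparison match.

```python
import math
import random

def simulate_comparison(standard, comparison, p_match=0.5, noise=1.0,
                        bound=3.0, max_cycles=10_000, rng=None):
    """Sketch of a cycles-to-bound ideal observer (illustrative assumptions).

    Candidate identities are all binary feature vectors with the same
    length as the standard. Prior mass p_match goes to the 'same'
    identity; the remainder is split evenly over 'different' identities.
    """
    rng = rng or random.Random()
    n = len(standard)
    # Enumerate every candidate identity (binary features, for simplicity).
    candidates = [tuple((i >> k) & 1 for k in range(n)) for i in range(2 ** n)]
    same = tuple(standard)
    post = {c: (p_match if c == same else (1 - p_match) / (len(candidates) - 1))
            for c in candidates}
    for cycle in range(1, max_cycles + 1):
        # One simultaneous noisy sample of every feature of the comparison.
        sample = [f + rng.gauss(0, noise) for f in comparison]
        # Bayes' rule: weight each identity by the likelihood of the sample.
        for c in candidates:
            lik = math.exp(-sum((s - f) ** 2 for s, f in zip(sample, c))
                           / (2 * noise ** 2))
            post[c] *= lik
        z = sum(post.values())
        for c in post:
            post[c] /= z
        # Log posterior odds of 'same' vs all 'different' identities,
        # clamped to avoid log(0) under floating-point rounding.
        p_same = post[same]
        log_odds = (math.log(max(p_same, 1e-12))
                    - math.log(max(1 - p_same, 1e-12)))
        if abs(log_odds) >= bound:
            return ('same' if log_odds > 0 else 'different', cycle)
    return ('same' if post[same] > 0.5 else 'different', max_cycles)
```

Under this sketch, raising `p_match` shortens cycles-to-bound for matching pairs and lengthens it for mismatching ones, which is how a single prior parameter can trade 'same' against 'different' latencies.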
These findings suggest that perceived similarity reflects the geometry of an internal, multidimensional space of representations, and that judgments of sameness and difference can be accommodated under a single-process, statistically optimal framework.
Meeting abstract presented at VSS 2012