One approach that has proven to be fruitful in illuminating the computational mechanisms underlying perceptual discriminations is the ideal observer framework (Geisler,
2003; Knill & Richards,
1996). This approach characterizes a given perceptual task by specifying an ideal observer, a theoretical decision-making agent described in probabilistic terms, that performs the task optimally given the available information. To determine how human observers use information in the perceptual task, researchers compare their performance with that of the ideal observer across manipulations of the task that systematically change the information available in the stimulus. This approach has been particularly successful at characterizing the ways in which observers integrate information across different perceptual modalities (e.g., Battaglia, Jacobs, & Aslin,
2003; Ernst & Banks,
2002; Gepshtein, Burge, Ernst, & Banks,
2005), different visual modules (e.g., Jacobs,
1999; Knill,
2003; Knill & Saunders,
2003), or both (e.g., Atkins, Fiser, & Jacobs,
2001; Hillis, Ernst, Banks, & Landy,
2002) to make perceptual judgments when multiple cues are available. Briefly, in making quotidian perceptual judgments, observers usually have access to a number of perceptual cues. An observer attempting to determine the curvature of a surface, for example, may have access to cues based on visual texture, binocular disparity, and shading, as well as to haptic cues obtained by manually exploring the surface. To make an optimal judgment based on these cues, the observer must combine the curvature estimates from these different cues. Yuille and Bülthoff (
1996) demonstrated that, given certain mathematical assumptions, the optimal strategy for combining estimates θ̂₁, …, θ̂ₙ from a set of class-conditionally independent cues (i.e., cues c₁, …, cₙ that are conditionally independent given the scene parameter of interest, so that P(c₁, …, cₙ ∣ θ) = ∏ᵢⁿ P(cᵢ ∣ θ)) consists of taking a weighted average of the individual cue estimates, θ̂* = ∑ᵢ ωᵢθ̂ᵢ (where θ̂* represents the optimal estimate based on all available cues), such that the weight for each cue is inversely proportional to the variance of the distribution of the scene parameter given the cue's value (i.e., ωᵢ ∝ 1/σᵢ²). Researchers have found that, across a variety of perceptual tasks, human observers seem to base their perceptual judgments on just such a strategy. While most of these cue integration studies have focused on strategies used by observers in stationary environments, several (Atkins et al.,
2001; Ernst, Banks, & Bülthoff,
2000; Jacobs & Fine,
1999) have investigated how observers change their cue integration strategies after receiving training in virtual environments in which a perceptual cue to a scene variable is artificially manipulated to be less informative with respect to that variable. In one of these studies, Ernst et al. (
2000) manipulated either the texture- or disparity-specified slant of a visually presented surface to indicate a slant value that was uncorrelated with the haptically defined orientation of the surface. The authors found that after receiving training in this environment, subjects' perceptions of slant changed such that, in a qualitatively similar fashion to the ideal observer, they gave less weight to the slant estimate of the now less reliable visual cue.
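The reliability-weighted averaging rule described above (ωᵢ ∝ 1/σᵢ²) can be sketched in a few lines of Python. This is an illustrative implementation, not code from any of the cited studies; the function name and the example cue values are hypothetical.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted average of class-conditionally independent
    cue estimates: each weight is proportional to the cue's precision
    (1/variance), normalized so the weights sum to 1."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    return weights @ estimates

# Hypothetical example: a low-variance visual slant estimate (30 deg,
# variance 1) and a high-variance haptic estimate (40 deg, variance 4).
# The precision weights are 0.8 and 0.2, so the combined estimate
# lies much closer to the more reliable visual cue:
combined = combine_cues([30.0, 40.0], [1.0, 4.0])
# 0.8 * 30 + 0.2 * 40 = 32.0
```

Artificially inflating one cue's variance, as in the perturbed training environments described above, shifts weight toward the remaining cues in exactly this way.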