Images contain many different cues to the three-dimensional (3D) layout of objects in a scene: retinal disparity, motion, texture, figure shape, and shading. The visual system integrates these cues to estimate objects' 3D properties, both for perception and for guiding action. When different cues suggest similar values for a scene parameter (curvature, slant, etc.), one can reasonably approximate cue integration as a linear combination of the estimates suggested by each cue individually. A large body of contemporary research has focused on how the human visual system integrates cues when operating in this linear regime (Alais & Burr, 2004; Jacobs, 2002; Johnston, Cumming, & Landy, 1994; Johnston, Cumming, & Parker, 1993; Landy, Maloney, Johnston, & Young, 1995; Young, Landy, & Maloney, 1993). Thus, for example, research has shown that humans weight cues, both within and across sensory modalities, according to their relative reliabilities. As cue reliability changes across stimulus conditions, so do the weights that subjects give to the cues (Alais & Burr, 2004; Ernst & Banks, 2002; Hillis, Watt, Landy, & Banks, 2004; Knill & Saunders, 2003).
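The linear regime admits a simple normative statement. If each cue i, taken alone, yields an unbiased estimate \(\hat{S}_i\) of the scene parameter with variance \(\sigma_i^2\), the minimum-variance (maximum-likelihood) linear combination weights each estimate by its relative reliability (Landy et al., 1995; Ernst & Banks, 2002); the notation here is illustrative:

\[
\hat{S} \;=\; \sum_i w_i \,\hat{S}_i,
\qquad
w_i \;=\; \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\]

so that less reliable cues (those with larger \(\sigma_i^2\)) receive proportionally smaller weights.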
The fact that cue weights in a local linear model of cue integration change across stimulus conditions reflects one form of global nonlinearity in how the brain integrates cues. Another potential form of nonlinearity can arise when sensory cues suggest very different estimates of a scene parameter, requiring the use of nonlinear, robust strategies for integrating cues (Landy et al., 1995). This article describes a Bayesian approach to modeling cue integration in large-conflict situations and presents two experiments designed to test a Bayesian model for integrating figural shape cues and binocular disparity cues to surface slant. The analysis provides a test of the explanatory power of the Bayesian approach for characterizing nonlinear, robust cue integration behaviors.
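To preview what such robustness looks like computationally, consider a toy observer that entertains two hypotheses: with some prior probability the two cues measure the same slant, and otherwise one cue is treated as uninformative about it. The Python sketch below is a generic illustration of this mixture idea, not the model developed in this article; the parameter values and the uniform "outlier" likelihood are assumptions made purely for demonstration. It shows that a mixture observer mimics linear, reliability-weighted averaging at small cue conflicts but progressively discounts the discrepant cue as the conflict grows.

import numpy as np

def linear_estimate(c1, c2, sigma1, sigma2):
    # Inverse-variance (reliability) weighted average: the standard
    # small-conflict rule described above.
    w1 = sigma2**2 / (sigma1**2 + sigma2**2)
    return w1 * c1 + (1.0 - w1) * c2

def robust_estimate(c1, c2, sigma1, sigma2, p_common=0.9, s_range=60.0):
    # Mixture-model ("robust") observer: with probability p_common both cues
    # measure the same slant; otherwise cue 2 is uninformative about it,
    # modeled here (illustratively) as a uniform likelihood over +/- s_range.
    s = np.linspace(-s_range, s_range, 4001)   # candidate slants (deg)
    like1 = np.exp(-0.5 * ((c1 - s) / sigma1) ** 2) / (np.sqrt(2 * np.pi) * sigma1)
    like2_common = np.exp(-0.5 * ((c2 - s) / sigma2) ** 2) / (np.sqrt(2 * np.pi) * sigma2)
    like2 = p_common * like2_common + (1.0 - p_common) / (2.0 * s_range)
    post = like1 * like2                        # flat prior on slant
    post /= post.sum()
    return float((s * post).sum())              # posterior-mean estimate

# Small conflicts reproduce the linear rule; large conflicts are
# increasingly dominated by cue 1 as the discrepant cue is discounted.
for conflict in (2.0, 8.0, 24.0):
    lin = linear_estimate(0.0, conflict, 2.0, 2.0)
    rob = robust_estimate(0.0, conflict, 2.0, 2.0)
    print(f"conflict = {conflict:4.1f} deg   linear: {lin:5.2f}   robust: {rob:5.2f}")

The nonlinearity arises because the posterior probability of the common-cause hypothesis itself falls as the conflict grows; this smooth down-weighting of a highly discrepant cue is the qualitative signature of robust integration.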