The human brain devotes enormous resources to providing a cyclopean view of the world by combining the separate inputs from the two eyes. The process of binocular combination has been studied in a wide variety of tasks, including luminance change detection (Anstis & Ho, 1998; Baker, Wallis, Georgeson, & Meese, 2012; Cogan, 1987; Cohn & Lasley, 1976), contrast detection (Anderson & Movshon, 1989; Campbell & Green, 1965; Legge, 1984), contrast discrimination (Baker, Meese, & Georgeson, 2007; Ding & Levi, 2016; Georgeson, Wallis, Meese, & Baker, 2016; Legge, 1981, 1984; Meese, Georgeson, & Baker, 2006), contrast matching (Baker et al., 2007; Ding, Klein, & Levi, 2013b; Huang, Zhou, Zhou, & Lu, 2010; Legge & Rubin, 1981), Vernier acuity (Banton & Levi, 1991), orientation discrimination (Bearse & Freeman, 1994), visual direction (Mansfield & Legge, 1996), phase perception (Ding et al., 2013b; Ding & Sperling, 2006, 2007; Huang et al., 2010; Zhou, Georgeson, & Hess, 2014), and orientation perception (Yehezkel, Ding, Sterkin, Polat, & Levi, 2016). However, precisely how the brain combines the two eyes' images remains unclear. Typically, each model has been developed for one specific binocular task and seldom addresses other tasks (Blake & Wilson, 2011). For example, Ding and Sperling (2006) proposed a gain-control model to explain their phase data, while Meese et al. (2006) proposed a two-stage model to explain their contrast-discrimination data. Moreover, most models have been tested only in a zero-dimensional setting, with two numbers as inputs and one number as output, rather than with two-dimensional (2D) images as the model's input and output.
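A toy sketch may make the zero-dimensional point concrete: such models take one scalar contrast per eye and return a single combined value. The divisive gain-control form below is a generic illustration only; the function name, the parameter `eps`, and the exact equation are assumptions for exposition, not the published Ding and Sperling (2006) or Meese et al. (2006) equations.

```python
def combine(c_left, c_right, eps=1.0):
    """Zero-dimensional binocular combination sketch.

    Each eye's contrast is attenuated by divisive gain control
    driven by the other eye's contrast, then the two attenuated
    signals are summed. Inputs and output are plain scalars,
    illustrating the 'two numbers in, one number out' setting.
    """
    left = c_left / (1.0 + eps * c_right)    # left eye, suppressed by right
    right = c_right / (1.0 + eps * c_left)   # right eye, suppressed by left
    return left + right

# A monocular input passes through unchanged, while a matched
# binocular input sums to less than twice the monocular response,
# the qualitative signature of interocular gain control.
mono = combine(0.5, 0.0)   # 0.5
bino = combine(0.5, 0.5)   # less than 1.0
```

Extending such a model to 2D images would require applying the same gain-control interaction at every pixel (or every spatial-frequency channel), which is exactly the step most published models have not taken.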