Spatial summation of luminance contrast signals describes an increase in sensitivity to a stimulus given an increase in its area, and occurs for stimuli presented at and above contrast threshold levels (Baker & Meese, 2011; Campbell & Green, 1965; Graham, 1977; Graham & Robson, 1987; Graham, Robson, & Nachmias, 1978; Graham & Sutter, 1998; Kersten, 1984; Landy & Oruç, 2002; Legge, 1984; Meese, 2004; Meese & Baker, 2011; Meese, Hess, & Williams, 2005; Meese & Summers, 2007; Robson & Graham, 1981; Summers, Baker, & Meese, 2015). Computationally, spatial summation is described as a multistage process that begins with spatial filtering (i.e., a filter narrowly tuned for spatial frequency and orientation), followed by nonlinear transduction, linear summation, and probability summation, where each stage operates over a progressively larger area of the retina (Baker & Meese, 2011; Foley, Varadharajan, Koh, & Farias, 2007; Meese, 2004; Meese & Baker, 2011; Meese & Summers, 2007; Wilson & Gelb, 1984). This summation architecture provides an excellent foundation, accounting for psychophysical and neurophysiological summation effects across eye and space for both phase-congruent and phase-incongruent stimuli, excluding modality-specific terms such as interocular suppression for binocular combination and phase-selective channels (Baker & Meese, 2011; Cunningham, Baker, & Peirce, 2017; Georgeson, Wallis, Meese, & Baker, 2016; Meese, 2004; Meese & Baker, 2011; Meese, Georgeson, & Baker, 2006; Richard, Chadnova, & Baker, 2018). Summation models (including recent implementations; Baker & Meese, 2011; Meese & Baker, 2011) have been developed explicitly with narrowband stimuli (i.e., sinusoidal gratings), and thus can only describe the response of a single detecting channel to a stimulus. Yet, the retinal image formed by real-world environments is broadband: it contains contrast across a broad range of spatial frequencies and orientations. This means that multiple channels with different tuning are simultaneously active, and their outputs are weighted by the interdependent responses of similarly and dissimilarly tuned channels (Cass, Stuit, Bex, & Alais, 2009; Schwartz & Simoncelli, 2001). To understand how these channels operate (e.g., spatially sum) in naturalistic environments, it is important to measure psychophysical effects with stimuli that better represent the typical input received by the visual system (i.e., broadband images). This, in turn, can guide how psychophysical models of vision may be adjusted to describe how vision operates in the real world (Bex, Mareschal, & Dakin, 2007; Hansen et al., 2015; Hansen & Hess, 2012; Haun & Peli, 2013; Legge & Foley, 1980; Meese & Holmes, 2010; Petrov, Carandini, & McKee, 2005; Schwartz & Simoncelli, 2001).
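
To make the multistage, single-channel architecture described above concrete, the following is a minimal sketch in Python, assuming a Gabor spatial filter, a Legge-Foley-style power-law transducer, non-overlapping linear pooling regions, and Minkowski pooling as an approximation to the probability-summation stage. The function names, block sizes, and parameter values (p, q, z, beta) are illustrative assumptions, not the fitted values of any of the cited models.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, cycles, ori, sigma):
    """Even-symmetric Gabor: a filter narrowly tuned for spatial frequency
    (`cycles` per kernel width) and orientation (`ori`, in radians)."""
    y, x = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2, indexing="ij")
    xr = x * np.cos(ori) + y * np.sin(ori)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * cycles * xr / size))

def channel_response(image, kernel, p=2.4, q=2.0, z=0.01, block=8, beta=4.0):
    """Response of one detecting channel, run through the four stages."""
    # 1) Spatial filtering with the tuned filter, full-wave rectified.
    r = np.abs(fftconvolve(image, kernel, mode="same"))

    # 2) Nonlinear contrast transduction (power-law over power-law).
    t = r**p / (z + r**q)

    # 3) Linear summation over small local regions (non-overlapping blocks).
    n = image.shape[0] // block
    local = t[:n * block, :n * block].reshape(n, block, n, block).sum(axis=(1, 3))

    # 4) Probability summation over the remaining, larger area,
    #    approximated here by Minkowski pooling with exponent beta.
    return np.sum(local**beta) ** (1.0 / beta)

# Example: at a fixed contrast, the response grows with stimulus area.
size = 128
y, x = np.meshgrid(np.arange(size) - size / 2,
                   np.arange(size) - size / 2, indexing="ij")
grating = 0.1 * np.cos(2 * np.pi * 16 * x / size)   # 16 cycles/image, 10% contrast
kernel = gabor(size=17, cycles=17 * 16 / size, ori=0.0, sigma=4.0)
for radius in (8, 16, 32, 64):
    aperture = (x**2 + y**2) <= radius**2
    print(radius, round(channel_response(grating * aperture, kernel), 3))
```

The printed response increases monotonically with aperture radius at fixed contrast, which is the qualitative signature of area summation captured by this class of model.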
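The cross-channel interdependence noted above is commonly formalized as divisive normalization (Schwartz & Simoncelli, 2001). The toy sketch below shows only the general form of such a weighting, in which each channel's output is divided by a weighted pool of the responses of similarly and dissimilarly tuned channels; the energies, pool weights, and semisaturation constant are arbitrary illustrative numbers, not values taken from any cited study.

```python
import numpy as np

def normalize_channels(energy, weights, sigma=0.01):
    """Divisive normalization: each channel's energy is divided by a weighted
    sum of the energies of every channel in its pool, plus a small
    semisaturation constant sigma."""
    pool = weights @ energy            # weighted normalization pool
    return energy / (sigma + pool)

# Toy example: three orientation-tuned channels responding to a broadband patch.
energy = np.array([0.8, 0.5, 0.1])    # e.g., channels tuned to 0, 45, and 90 deg
# Hypothetical pool weights: similarly tuned channels weight each other more.
weights = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.5],
                    [0.2, 0.5, 1.0]])
print(normalize_channels(energy, weights))
```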