Open Access
Article | May 2019
The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field
Lester C. Loschky, Sebastien Szaffarczyk, Clement Beugnet, Michael E. Young, Muriel Boucart
Journal of Vision, May 2019, Vol. 19(5), 15. doi: https://doi.org/10.1167/19.5.15
Abstract

We investigated the relative contributions of central versus peripheral vision in scene-gist recognition with panoramic 180° scenes. Experiment 1 used the window/scotoma paradigm of Larson and Loschky (2009). We replicated their findings that peripheral vision was more important for rapid scene categorization, while central vision was more efficient, but those effects were greatly magnified. For example, in comparing our critical radius (which produced equivalent performance with mutually exclusive central and peripheral image regions) to that of Larson and Loschky, our critical radius of 10° had a ratio of central to peripheral image area that was 10 times smaller. Importantly, we found different functional relationships between the radius of centrally versus peripherally presented imagery (or the proportion of centrally versus peripherally presented image area) and scene-categorization sensitivity. For central vision, stimulus discriminability was an inverse function of image radius, while for peripheral vision the relationship was essentially linear. In Experiment 2, we tested the photographic-bias hypothesis that the greater efficiency of central vision for rapid scene categorization was due to more diagnostic information in the center of photographs. We factorially compared the effects of the eccentricity from which imagery was sampled versus the eccentricity at which imagery was presented. The presentation eccentricity effect was roughly 3 times greater than the sampling eccentricity effect, showing that the central-vision efficiency advantage was primarily due to the greater sensitivity of central vision. We discuss our results in terms of the eccentricity-dependent neurophysiology of vision and discuss implications for computationally modeling rapid scene categorization.

Introduction
The visual world across our field of view is full of information, but how much can we process at once? One functional approach to answering this question is to consider people's ability to grasp the gist of a scene. Scene gist is a holistic semantic representation of a scene that we can perceive within a single fixation. Because scene gist is acquired in a single fixation (i.e., 330 ms; Rayner, 1998) when the retinal image is stationary, a retinotopic approach to answering this question is to consider how much of our visual field we can process at once. When describing our visual field, we commonly distinguish between central vision and peripheral vision. We can therefore combine the functional and retinotopic approaches and reframe our question as: What are the relative roles of central versus peripheral vision in recognizing the gist of a scene? 
Central versus peripheral vision
We next must define central and peripheral vision. As illustrated in Figure 1, the human visual field probably extends somewhat beyond 90° to the left and right of the point of fixation (Table 1). Among those who study the neurophysiology of the retina, some define central vision as encompassing the macula, from 0° to approximately 2.6°–3.6°, with everything beyond that being peripheral vision (Quinn et al., 2019). In visual-cognition research, central vision is generally considered to extend from 0° to 5° eccentricity (Hollingworth, Schrock, & Henderson, 2001; Rayner, 1998; Shimozaki, Chen, Abbey, & Eckstein, 2007; van Diepen, Wampers, & d'Ydewalle, 1998), including both the rod-free foveola, from 0° to approximately 0.5°–1° eccentricity, and the parafovea, from approximately 0.5°–1° to 5° eccentricity, with everything beyond 5° being peripheral vision. Others in vision science consider central vision to extend from 0° to 10° eccentricity (Fortenbaugh, Hicks, Hao, & Turano, 2007; Schwartz, 2010), to the outer edge of the perifovea (Table 1), with peripheral vision lying beyond that edge. Most importantly, by any of these definitions the vast majority of the human visual field is in peripheral vision. 
Figure 1. The human visual field with approximate retinal eccentricities for the outer limits of visual-field regions. Approximate eccentricity values are based on Table 1.
Table 1. Estimates of retinal eccentricities for outer limits of visual-field regions, from Curcio and Allen (1990), Curcio, Sloan, Kalina, and Hendrickson (1990), Duke-Elder (1962), Polyak (1941), Roenne (1915), and Frisén (1990). Notes: (a) Polyak, p. 201. (b) Curcio et al., p. 518. (c) Polyak, p. 203. (d) Curcio and Allen, p. 13. (e) Polyak, p. 211. (f) Curcio et al., p. 505. (g) Polyak, p. 214. (h) Duke-Elder, figure 301. (i) Roenne, p. 297. (j) Frisén, p. 60, figure 6.4. Estimates a–h are based on neurophysiology, while estimates i and j are based on perimetry testing. Estimates of eccentricity in degrees of visual angle for a–g were converted from micrometers based on the equation given by Drasdo and Fowler (1974, figure 2); note that at the shortest eccentricities, this produced slightly smaller values than those given in the original articles.
Table 2. Experiment 1: Tests of the null and alternative hypotheses (H0 and HA) for window and scotoma conditions versus the full-image control condition (window = 90° radius, scotoma = 0° radius) as a function of window/scotoma radius (°). Notes: BF = Bayes factor, based on a uniform prior of 0 < d′ difference < 2 and a point null hypothesis; thus this BF indicates whether a difference of zero is more or less likely after collecting the data. Calculations performed per Dienes (2014). Strength of support for HA and H0 based on Wetzels, Matzke, Lee, Rouder, Iverson, and Wagenmakers (2011, table 1).
The distinction between central and peripheral vision begins with the exponential drop-off of cone photoreceptor density with eccentricity, shown in Figure 2. That figure shows that cone density is highest in the foveola (0° to approximately 0.5°–1°), the knee of the density curve is roughly at the boundary between the parafovea and perifovea at 5°, and the drop-off in density reaches asymptote at roughly the outer edge of the perifovea at 10°. This change in cone density with eccentricity has a snowball effect on visual neurophysiology throughout the visual system, which produces strong effects on perception (for reviews, see Loschky et al., 2019; Strasburger, Rentschler, & Juttner, 2011; Whitney & Levi, 2011; Wilson, Levi, Maffei, Rovamo, & DeValois, 1990). These functional differences are numerous and multifaceted, including a rapid drop-off in visual resolution with retinal eccentricity (Loschky, McConkie, Yang, & Miller, 2005; Wilkinson, Anderson, Bradley, & Thibos, 2016); increasing difficulty in recognizing objects with increasing eccentricity, especially when they are flanked by other objects (Ehinger & Rosenholtz, 2016; Herzog, Sayim, Chicherov, & Manassi, 2015; Levi, 2008; Nelson & Loftus, 1980; Whitney & Levi, 2011); and a decrease in color sensitivity with eccentricity (Anderson, Mullen, & Hess, 1991; Hansen, Pracejus, & Gegenfurtner, 2009; Nagy & Wolf, 1993; Rovamo & Iivanainen, 1991). Nevertheless, there is also recent evidence that peripheral vision can be very useful for a number of visual tasks, including (surprisingly) recognition of objects and faces if they are relatively large (Boucart et al., 2016), place localization (Eberhardt, Zetzsche, & Schill, 2016), hazard detection during driving (Huestegge & Böckler, 2016), and scene-gist recognition (Boucart, Moroni, Thibaut, Szaffarczyk, & Greene, 2013; Ehinger & Rosenholtz, 2016; Larson & Loschky, 2009). In sum, research on the roles of central and peripheral vision in natural-scene perception has shown that while peripheral vision is very poor compared to central vision, it is also very useful if the tasks and stimuli are sufficiently well specified. 
Figure 2. Cone density in the retina on the horizontal meridian as a function of eccentricity in degrees of visual angle. Figure drawn from data digitized from Curcio, Sloan, Kalina, and Hendrickson (1990, figures 6a and 6c) and averaged over nasal and temporal directions (with the gap at the blind spot ignored). Cone density is given in cells/°² × 1,000. Degrees of visual angle were translated from micrometers based on data digitized from Drasdo and Fowler (1974, figure 2) using a biexponential 5P fit: a + b × exp(−c × retinal arc) + d × exp(−f × retinal arc), with a = asymptote = 205.810, b = scale 1 = 0.445, c = decay rate = −0.195, d = scale 2 = −206.537, f = decay rate 2 = 0.018, and retinal arcs in micrometers. This fit was used for all eccentricities except the first two, which produced negative estimates; for those values, we used Drasdo and Fowler's 0.274°/μm for 0°. Foveola = 0.875°, parafovea = 5°, and perifovea = 10° eccentricity.
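For readers who want to work with the conversion quoted in the Figure 2 caption, a minimal sketch follows (our Python illustration, not the authors' code). One assumption is labeled explicitly: the caption states retinal arc in micrometers, but the quoted decay rates only yield sensible eccentricities (roughly 3.5°/mm near the fovea) if the arc is expressed in millimeters, which the sketch assumes.

```python
import numpy as np

# Biexponential 5P parameters quoted in the Figure 2 caption.
A, B, C, D, F = 205.810, 0.445, -0.195, -206.537, 0.018

def ecc_deg(arc_mm):
    """Eccentricity (deg) for a given retinal arc length, per the quoted fit.
    Assumes the arc is in millimeters (see the unit caveat above)."""
    return A + B * np.exp(-C * arc_mm) + D * np.exp(-F * arc_mm)

print(np.round(ecc_deg(np.array([0.25, 1.0, 2.0, 3.0])), 2))
# -> [ 0.67  3.5   7.23 10.93], i.e., roughly 3.5 deg/mm near the fovea
```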
Scene-gist recognition
We mentioned that viewers can recognize that they are looking at, say, a beach rather than a forest in less time than the duration of a single eye fixation. Beyond being surprising in its own right, scene-gist recognition is argued by current theories to play several important roles in scene perception (Rayner, Smith, Malcolm, & Henderson, 2009; Wolfe, Võ, Evans, & Greene, 2011). Specifically, after the gist of a scene is acquired very early during the first eye fixation on a scene (Bacon-Mace, Mace, Fabre-Thorpe, & Thorpe, 2005; Fei-Fei, Iyer, Koch, & Perona, 2007; Greene & Oliva, 2009; Loschky & Larson, 2010), it activates relevant information stored in long-term memory. This activated prior knowledge rapidly affects many important processes, such as guiding visual attention during visual search (Eckstein, Drescher, & Shimozaki, 2006; Gordon, 2004; Torralba, Oliva, Castelhano, & Henderson, 2006), facilitating recognition of objects that are consistent with expectations (Bar & Ullman, 1996; Biederman, Mezzanotte, & Rabinowitz, 1982; Boyce & Pollatsek, 1992; Davenport & Potter, 2004; but see Hollingworth & Henderson, 1998), and influencing the contents of long-term memory (with missing objects that are consistent with the scene gist being more likely to be falsely recalled, and present objects that are inconsistent with the scene gist being more accurately recognized; Brewer & Treyens, 1981; Pezdek, Whetstone, Reynolds, Askari, & Dougherty, 1989). Thus, understanding the processes involved in rapidly recognizing the gist of scenes is important for understanding how we perceive, attend to, and later remember the contents of scenes. 
We should also clarify a key distinction between the theoretical construct of scene gist and how that construct is operationally defined in empirical research. In many studies, scene-gist recognition has been operationally defined in terms of rapid scene categorization (Greene & Oliva, 2009; Loschky & Larson, 2010; Rousselet, Joubert, & Fabre-Thorpe, 2005), and we do the same here. Thus, when discussing the theoretical construct, including in our General discussion and Summary/conclusion, we will use the term scene gist. However, when discussing our operational definition of that construct—particularly when discussing our tasks, measures, and empirical results—we will use the term rapid scene categorization. 
Scene-gist recognition from central to peripheral vision
Though we have learned quite a bit over the last two decades about factors that influence people's ability to rapidly categorize scenes, the key question addressed in this study has frequently been overlooked, namely: What roles do central and peripheral vision play in that process? Most theories or models of scene-gist recognition are relatively silent on this issue (Fei-Fei & Perona, 2005; Fei-Fei, VanRullen, Koch, & Perona, 2005; Oliva, 2005; Oliva & Torralba, 2006). Thus, a thorough understanding of scene-gist recognition and computational models of rapid scene categorization must take into account differences in visual processing between central and peripheral vision (Wang & Cottrell, 2017). 
There have been only a small number of studies investigating the roles of central versus peripheral vision in scene-gist recognition. Two studies used briefly flashed scenes presented at varying retinal eccentricities, one with an animal-detection task (Thorpe, Gegenfurtner, Fabre-Thorpe, & Bulthoff, 2001) and the other with a basic-level scene-categorization task (Boucart et al., 2013). Both found that, as expected, performance decreased with increasing retinal eccentricity, but both also found above-chance performance at up to 70° eccentricity. Similarly, Tran, Rambaud, Despretz, and Boucart (2010) have shown that people with central vision loss due to age-related macular degeneration (AMD) could categorize centrally displayed (20° × 20°) scenes as natural/urban or indoor/outdoor with high accuracy (75%–80%), again suggesting that peripheral vision is quite useful for rapidly categorizing scenes. However, consistent with the finding that scene categorization by those with normal vision became worse with increasing eccentricity (Boucart et al., 2013), rapid scene categorization by individuals with AMD was about 20% lower than for age-matched observers with intact central vision. 
These studies leave open important questions. First, at a methodological level, entire scenes were briefly flashed at varying retinal eccentricities. However, in normal real-world scene perception, distinct scenes are generally not seen only at a specific retinal eccentricity, either to the left or right of fixation. Instead, viewers are normally situated within a single scene, which is simultaneously visible in both central and peripheral vision and is symmetrically visible around the point of fixation. Thus, these studies raise the question of what specific roles central and peripheral vision play in rapid scene categorization under such conditions. Larson and Loschky (2009) addressed these concerns by adopting the window/scotoma paradigm (Henderson, McClure, Pierce, & Schrock, 1997; Loschky & McConkie, 2002; Nuthmann, 2014; Rayner et al., 2009; van Diepen & Wampers, 1998), comparing perception of fully visible scenes centered at fixation with scenes viewed through windows (a circular region centered at fixation, with everything outside the circle replaced by neutral gray) or with scotomas (a circular region centered at fixation filled with neutral gray, with the remainder of the original image shown outside the circle). They varied the sizes of the windows and scotomas to determine the relative importance and relative efficiency of central versus peripheral vision for rapid scene categorization. Consistent with other visual-cognition researchers, they defined central vision as 0°–5° eccentricity and peripheral vision as >5° eccentricity. Thus, with windows of 5° radius, imagery would be seen only in central vision, and with scotomas of ≥5° radius, imagery would be presented only in peripheral vision. 
Larson and Loschky drew three main conclusions from their results. First, they concluded that peripheral vision is more important for scene gist than central vision is. This was because, as shown in Figure 3A, they could remove information from both foveal and parafoveal vision using a 5°-radius scotoma without reducing performance relative to the full-image control condition. Conversely, showing only the information in a 5°-radius window produced significantly worse rapid scene categorization. Second, they concluded that central vision is more efficient than peripheral vision for rapid scene categorization. This was because, as shown in Figure 3B, when the same number of original-image pixels were visible inside a window or outside a scotoma, performance was significantly better in the window condition. This conclusion, when combined with the first, suggested that the greater importance of peripheral vision for rapid scene categorization was due to the far greater proportion of the visual field subsumed within peripheral vision, which is generally consistent with spatial summation (Strasburger et al., 2011). 
Figure 3. Major results from Larson and Loschky (2009), experiment 1. To facilitate comparisons with the current study, we converted the Larson and Loschky results from percent accuracy to d′ based on two-alternative forced choice. (A) Rapid-scene-categorization sensitivity measured in d′ as a function of window and scotoma condition and radius (°). Note that the critical radius is the single radius for both window and scotoma conditions that produces equivalent sensitivity, as shown by the crossover point in their respective Radius × Sensitivity functions. (B) Rapid-scene-categorization sensitivity (d′) as a function of window and scotoma condition and proportion of image visible. (C) Example window and scotoma images, with the experimentally manipulated radius of each being the critical radius of 7.4° from Larson and Loschky's experiment 2. (From “The spatiotemporal dynamics of scene gist recognition,” by A. M. Larson, T. E. Freeman, R. V. Ringer, and L. C. Loschky, 2014, Journal of Experimental Psychology: Human Perception and Performance, 40(2), p. 474. Copyright 2014 by the American Psychological Association. Reprinted with permission.) Note that the window and scotoma images are mutually exclusive image regions from the same original image, which together constitute the whole image. Note also that while the outer edge of the scotoma condition shown here was cropped to be circular, in their experiment 1 it was square; this may explain the somewhat smaller critical radius of 7.4°, as opposed to the 8.7° of their experiment 1, shown by the crossover point in (A).
A third conclusion of Larson and Loschky's was that their results were less biased in favor of central vision than would be predicted by V1 cortical-magnification functions, suggesting that a higher order visual area (e.g., the parahippocampal place area [PPA]) with less of a central-vision bias might better explain their results. This conclusion was based on first identifying a critical radius for both window and scotoma conditions that produced equivalent performance in rapid scene categorization. The critical radius therefore divided the scene into two mutually exclusive regions, inside the window and outside the scotoma, each of which had the same radius and each of which produced equivalent performance, which is the crossover point of the Window and Scotoma lines in Figure 3A. An illustration of stimuli produced to match the critical radius is shown in Figure 3C. When Larson and Loschky found the critical radius for their stimuli (roughly 7.4°–8.7° radius), the percentage of total image pixels inside the window was far from 50%—it was actually about 30%–33%. This outcome was consistent with their second conclusion, that central vision was more efficient at rapidly extracting scene-category information. A straightforward explanation for this central-vision bias would be in terms of cortical magnification of the fovea (Rovamo, Virsu, & Naesaenen, 1978; Strasburger et al., 2011). To test that hypothesis, the authors calculated predicted critical radii based on two different published V1 cortical-magnification functions (from Florack, 2007; Van Essen, Newsome, & Maunsell, 1984). They found that compared to the empirical critical radius, the predicted critical radii based on cortical-magnification functions were much smaller, thus more biased to central vision (i.e., predicted percentages of the total pixel area inside the critical radius were 5% or 3%, based on, respectively, Florack, 2007, and Van Essen et al., 1984). Thus, Larson and Loschky argued that their results could not be explained in terms of V1 cortical magnification. Instead, they hypothesized that their less extreme central-vision advantage may have been due to processing in a higher cortical area, such as the PPA, which has less of a central-vision bias (Hasson, Levy, Behrmann, Hendler, & Malach, 2002).1 Interestingly, these three patterns of results from Larson and Loschky were recently replicated using a convolutional deep neural network (Wang & Cottrell, 2017), suggesting that the results are in some way intrinsic to the task of scene categorization and the nature of scene images. 
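To make the logic of this prediction concrete, the following sketch computes a predicted critical radius from a cortical-magnification function. It uses the well-known Horton and Hoyt (1991) V1 approximation as a stand-in (the paper used functions from Florack, 2007, and Van Essen et al., 1984, which we do not reproduce here), applied to Larson and Loschky's 27° × 27° images; the grid resolution and the use of Python are our choices.

```python
import numpy as np

def areal_magnification(ecc_deg):
    """Approximate V1 areal magnification, mm^2/deg^2 (Horton & Hoyt, 1991)."""
    return (17.3 / (ecc_deg + 0.75)) ** 2

# Pixel grid over a 27 x 27 deg image (Larson & Loschky's stimulus size).
half = 13.5
xs = np.linspace(-half, half, 541)
ecc = np.hypot(xs[None, :], xs[:, None])       # eccentricity of each pixel
weight = areal_magnification(ecc)              # cortical area per pixel

# Radius at which the cortical area inside equals the cortical area outside.
radii = np.linspace(0.5, half, 261)
inside = np.array([weight[ecc <= r].sum() for r in radii])
r_crit = radii[np.argmin(np.abs(inside - weight.sum() / 2))]
print(round(r_crit, 2), round((ecc <= r_crit).mean(), 3))
# -> roughly 4 deg, containing well under 10% of the image pixels
```

Even with this stand-in function, the equal-cortical-area radius falls at only a few degrees and contains well under 10% of the image pixels, illustrating why cortical-magnification accounts predict a much smaller critical radius than Larson and Loschky observed.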
Nevertheless, Larson and Loschky's results were produced using images of only 27° × 27° of visual angle. As noted earlier, some researchers consider central vision to extend from 0° to 10°, with anything at ≥10° eccentricity being peripheral vision (Fortenbaugh et al., 2007; Schwartz, 2010). In that case, windows and scotomas of 10° radius would divide central from peripheral vision. Still other researchers consider anything past roughly 50°–65° eccentricity, which is roughly where the binocular field of view ends, to be the far periphery (Palmer & Rosa, 2006; see Figure 1, Table 1). Thus, scotomas of radius ≥50° would involve only far peripheral vision. These considerations raise the question of what results would be produced with a display that more closely mimics the field of view in real-world scene perception—for example, images presented on a 180°-wide screen, which is close to the width of the human visual field. We used such a screen, which was larger in the horizontal dimension than the vertical dimension. Although this made using circular images impossible, it was consistent with the fact that human and other mammalian vision is wider than it is tall, due to our having two eyes separated laterally on the horizontal plane. 
Research questions
Experiment 1 was largely exploratory, with the goal of determining whether we would replicate the primary results of Larson and Loschky (2009) regarding the greater importance of the periphery, the greater efficiency of central vision, and the smaller proportion of image area inside the window compared to outside the scotoma at the critical radius. Alternatively, would the large difference in the proportion of the visual field stimulated by Larson and Loschky's images versus ours produce different patterns of results? To foreshadow our results: Experiment 1 generally replicated Larson and Loschky's primary results, but we also found some interesting differences, likely due to the much larger stimulated visual field in our study. The research question in Experiment 2 was more focused: we tested two explanations for the greater efficiency of central vision in rapid scene categorization, one in terms of the greater sensitivity of central vision, and the other in terms of more informative pictorial content being presented in the centers of images because photographers point the centers of their camera viewfinders at interesting things—known as the photographic bias (Velisavljevic & Elder, 2008). The results of Experiment 2 showed some support for both explanations, but with a larger impact from the greater sensitivity of central vision. 
Experiment 1
Method
Participants
Twelve participants (six women, six men; age: M = 22 years, SD = 1.9) from the University of Lille gave written informed consent and were paid 30 euros for their participation in this and other related studies. The study was approved by the Research Ethics Committee. 
Stimuli
There were 512 full-color panoramic photographic images used in the study, with 64 images each in eight basic-level, real-world scene categories: four natural—beach, field, forest, and mountain—and four constructed—amusement park, city, highway, and stadium. The panoramic images measured 1,700 × 425 pixels and were collected from the Internet. Selection criteria were that the images met our minimum size dimensions, were well focused, contained minimal text, were deemed to be good exemplars of their scene categories, did not appear to have strong geometric distortions due to wide-angle views, and, if they contained people, the people were not the main focus of the image. These criteria (particularly size and lack of geometric distortion) were most often met by outdoor images, and thus all of our categories comprised outdoor scenes. When viewed from a distance of 2.04 m, the images were slightly stretched and measured 180° × 32.78° of visual angle. To avoid occluding peripheral vision with the lateral bars of a chin rest, we instead measured head position with a Polhemus head tracker (Polhemus, Colchester, VT). The images were projected on a large curved screen measuring 6.4 × 2 m (image size: 6.4 × 1.2 m) using three Optoma HD83 projectors (Optoma, Coretronic Corp., New Taipei, Taiwan; resolution: 1,920 × 1,080; luminance: 68 cd/m²) in an otherwise dark room. Experimental software was written in MATLAB (MathWorks, Natick, MA). 
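As a quick plausibility check on these dimensions, the following sketch (ours; the experiment itself was run in MATLAB) recovers the reported visual angles from the physical screen geometry, under the assumption that the viewer sat at the curved screen's center of curvature.

```python
import math

# Physical setup reported above; we assume the 2.04-m viewing distance
# equals the curved screen's radius of curvature.
distance = 2.04                 # viewing distance (m)
width_arc, height = 6.4, 1.2    # projected image size (m)

h_deg = math.degrees(width_arc / distance)                   # arc / radius
v_deg = 2 * math.degrees(math.atan(height / 2 / distance))   # flat-plane case
print(round(h_deg, 2), round(v_deg, 2))
# -> 179.75 32.78, matching the reported 180 x 32.78 deg
```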
Window images showed the central portion of the scene that was visible within a circular region, with everything outside of the circle replaced by neutral gray (a gray level of 127). Scotoma images were the inverse of the window images—the central portion of the scene was blocked from view within a circular region filled with neutral gray, with the remainder of the original image shown outside the circle. As shown in Figure 4, there were eight window and eight scotoma sizes, measured in terms of their radii in degrees of visual angle: window radii = 1°, 2°, 5°, 10°, 15°, 20°, 24°, and 90°; scotoma radii = 0°, 10°, 20°, 30°, 40°, 50°, 60°, and 70°. Note that the 90°-radius window and 0°-radius scotoma both showed the full image, and thus served as identical control conditions. Several of the window radii were selected for specific reasons (see Table 1 for comparisons). The 1° radius roughly corresponds to various estimates of the eccentricity limit of the rod-free foveola. The 2° radius roughly corresponds to an estimate of the outer limit of the macula. The 5° radius roughly corresponds to varying estimates of the outer limit of the parafovea. Although it is commonly argued that eccentricities of >5° are within peripheral vision (Hollingworth et al., 2001; Rayner, 1998; Shimozaki et al., 2007; van Diepen et al., 1998), the 10° radius corresponds to the reported outer eccentricity of what some anatomists call the perifovea (not to be confused with the similarly named parafovea). The remaining window radii were selected to provide further data at even multiples of 5°. The exception to this was 24°, which was chosen to contain an equal number of image pixels to the 70°-radius scotoma; however, this choice was based on a false assumption about the image height. In fact, a window radius of 21.86°, roughly midway between 20° and 24°, would contain an equal number of image pixels to the 70°-radius scotoma. The scotoma radii were selected to cover the widest possible range of eccentricities in eight even multiples of 10°, while still leaving some visible scene information beyond the largest scotoma radius (70°). Note that the 10°- and 20°-radius window and scotoma conditions were matched for radius, and thus would produce the full image if combined (Figure 4). 
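A minimal sketch of how such stimuli can be constructed, and of how the proportion of visible image pixels can be computed, is given below (our Python reconstruction, not the authors' MATLAB code; the function names are ours, and images are assumed to be grayscale arrays for simplicity). Eccentricity is computed per axis in degrees because the 1,700 × 425-pixel images spanned 180° × 32.78°, so the horizontal and vertical pixels-per-degree scales differ.

```python
import numpy as np

W_PX, H_PX = 1700, 425        # image size in pixels
W_DEG, H_DEG = 180.0, 32.78   # image size in degrees of visual angle
GRAY = 127                    # neutral-gray fill value

def eccentricity_deg():
    """Per-pixel eccentricity (deg) from the image center, assuming a
    uniform pixels-per-degree scale on each axis (the two axes differ)."""
    x = (np.arange(W_PX) - (W_PX - 1) / 2) * (W_DEG / W_PX)
    y = (np.arange(H_PX) - (H_PX - 1) / 2) * (H_DEG / H_PX)
    return np.hypot(x[None, :], y[:, None])

def apply_window(img, radius_deg):
    """Keep the central disc; replace everything outside it with gray."""
    out = np.full_like(img, GRAY)
    mask = eccentricity_deg() <= radius_deg
    out[mask] = img[mask]
    return out

def apply_scotoma(img, radius_deg):
    """Replace the central disc with gray; keep the rest of the image."""
    out = img.copy()
    out[eccentricity_deg() <= radius_deg] = GRAY
    return out

# Proportion of original image pixels visible through a given window:
for r in (5, 10, 21.86):
    print(r, round((eccentricity_deg() <= r).mean(), 3))
```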
Figure 4. Example image in the window and scotoma conditions. Window and scotoma radii (° of visual angle) are shown to the left of each column. Important comparisons: The 90° window and 0° scotoma conditions were identical, both showing the entire (180° × 32.78°) original image. The 10° window condition showed exactly what was missing from the 10° scotoma condition. A window radius roughly halfway between the 20° and 24° windows (21.86°) would show an identical number of image pixels to the 70° scotoma condition (both showing 23% of the original). For the 1° window condition, a zoomed inset is shown for the reader, though it was not shown to participants.
Procedure
Participants were seated in front of the panoramic screen and viewed the images while their eyes were tracked using a head-mounted eye tracker (SensoMotoric Instruments, Teltow, Germany; 50 Hz). After a 5-point calibration, participants were instructed to fixate a central fixation cross and press a mouse button to initiate a trial. Fixation was monitored with the eye tracker, and head position was measured with the Polhemus head tracker. If participants did not fixate within 3° of the fixation cross, the trial was aborted and they were instructed to fixate the cross more carefully and press the button again. In this way, central fixation was guaranteed at the beginning of each trial. The fixation cross was then presented for an additional 500 ms, followed by the panoramic image, which was flashed for 33 ms. This image duration was selected because it is too short to allow an eye movement (Becker, 1991), because pilot testing showed that it avoided ceiling and floor effects on performance, and because it is similar to unmasked stimulus durations used in many rapid-scene-categorization studies (Rousselet et al., 2005; VanRullen & Thorpe, 2001). The image was immediately followed by the response screen, which showed the eight category names in a 2 × 4 grid. The location of each category in the response grid was randomized on each trial to avoid any systematic error due to manual response biases toward particular locations (e.g., a preference to select the top-left grid location). Responses were selected with the mouse, with a click used to finalize the selection. 
Participants saw either window or scotoma images in separate blocks of 512 trials. The blocks were presented either after a short break on the same day or on separate days. Each of the 512 images was seen once per block, thus twice across the two blocks. The 512 images were randomly assigned to eccentricities for each participant, and the order of eccentricities within a block was randomized for each participant. The order of blocks was counterbalanced across participants. 
Data analysis
Sensitivity was measured using signal-detection theory. To derive d′ values for each condition, we used a multilevel generalized linear modeling approach with a probit link (DeCarlo, 1998). Because a probit link function was specified, the estimated slope for a predictor coded as +0.5 for the correct response and −0.5 for the incorrect response assesses the d′ for a two-choice task. Because this technique derives d′ values for two-choice tasks and not eight-alternative forced-choice (8-AFC) tasks, the d′ values were converted to probabilities and then converted back to the correct d′ for 8-AFC tasks under the assumption of no bias (DeCarlo, 2012), using the computations that produced the Hacker and Ratcliff (1979) d′ table. Random effects were the predictor slope and intercept within participant and an item random effect of intercept within scene category. Given that subjects always vary in average discriminability and in sensitivity to manipulated variables, it is standard to model these individual differences as random effects. We also tested models in which predictor slopes were allowed to vary across items, but these models were exploratory and contributed nothing novel—changes in parameter estimates were only in the third or fourth significant digit. Thus, only the simpler intercept-only item random-effect models were included. 
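The 8-AFC conversion rests on the standard unbiased m-alternative forced-choice model underlying the Hacker and Ratcliff (1979) tables: a response is correct when the signal alternative yields the maximum of m independent normal draws. A sketch of that computation is shown below (ours, in Python; the paper's full pipeline followed DeCarlo, 2012).

```python
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def pc_mafc(dprime, m=8):
    """P(correct) for an unbiased observer in an m-AFC task: the signal
    alternative must exceed the other m - 1 independent normal draws."""
    integrand = lambda x: norm.pdf(x - dprime) * norm.cdf(x) ** (m - 1)
    return quad(integrand, -10.0, 10.0)[0]

def dprime_mafc(pc, m=8):
    """Invert pc_mafc numerically: the m-AFC d' for a given P(correct)."""
    return brentq(lambda d: pc_mafc(d, m) - pc, -2.0, 10.0)

print(round(pc_mafc(0.0), 3))        # -> 0.125, chance with 8 alternatives
print(round(dprime_mafc(0.50), 2))   # d' yielding 50% correct in 8-AFC
```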
Results and discussion
As Experiment 1 was largely exploratory, we begin by simply describing the basic results. For each condition, two analyses were run. In the first, we treated each predictor as if it were categorical, to derive 8-AFC d′ values at each predictor value with no assumptions about the functional relationship between the predictor values and d′. The results of the categorical fits are shown as dashed lines in Figures 5 and 7. These values are provided as a reference for the model fits in which each predictor was treated as continuous, which are shown as solid lines in Figure 5A and 5B. The goal of the model-fitting process was to produce an approximate description of the functional relationship between the predictor and the 8-AFC d′ scores. 
Figure 5. Results of Experiment 1. (A) Panoramic-image scene-categorization sensitivity measured using eight-alternative forced-choice d′ as a function of window and scotoma condition and radius (°). The dashed lines were produced by treating radius as categorical, whereas the solid lines (±SE) were produced by treating it as continuous. The symbols represent the categorical means, and the error bars on the continuous fits represent their degree of uncertainty. For the continuous fits, an inverse function was the best fit for the window condition, whereas a linear function was the best fit for the scotoma condition (see text for details). (B) Sensitivity (d′) as a function of window and scotoma condition and proportion of image visible. (C) Example image illustrating the empirically derived critical radius (10°), which produced equivalent performance in the window and scotoma conditions. Surprisingly, performance with the critical radius was also equivalent to that in the full-image condition. Note that participants saw only either a window-condition image or a scotoma-condition image on any given trial. The dotted lines connecting the window- and scotoma-condition images serve to indicate that both together constitute the entire image.
Figure 5A shows the d′ across participants as a function of window and scotoma radius for both analyses, replicating the opposing relationships between radius and sensitivity for the window versus scotoma conditions illustrated in Figure 3A (Larson & Loschky, 2009; Wang & Cottrell, 2017). As the window radius increased, sensitivity increased, whereas as the scotoma radius increased, sensitivity decreased. Multilevel probit regressions for each condition revealed that the relationship between radius and sensitivity was different for the two conditions. For simplicity, we considered only linear, logarithmic, and inverse relationships for each condition. The window data were well approximated by an inverse function: 2-AFC d′ = 1.90 − 4.73/(radius + 1); SEint = 0.16, SEslope = 0.13, Akaike information criterion (AIC) = 4,086. Conversely, the scotoma data were well approximated by a linear function: 2-AFC d′ = 2.28 − 0.040 × radius; SEint = 0.21, SEslope = 0.002, AIC = 5,005. 
As shown in Figure 5B, we also replotted the d′ values as a function of the percentage of image pixels shown. When we used the proportion of the image shown as the predictor, the linear relation for the scotoma condition was unchanged, AIC = 5,004 versus 5,005, although its slope was in the opposite direction, as expected. The scotoma model was 2-AFC d′ = −1.33 + 3.46 × proportion; SEint = 0.15, SEslope = 0.14. When we tried to predict performance in the window condition from the proportion of the image shown, the best fit obtained was again an inverse function, and it was better than the first fit using the inverse of the radius, AIC = 4,076 versus 4,086, Bayes factor (BF) = 221. The window model was 2-AFC d′ = 1.94 − 0.022/(proportion + 0.01); SEint = 0.17, SEslope = 0.00068. 
This difference in which type of function best fitted the data in the two conditions arises from the rapidity with which performance changed as a function of condition. In the window condition, performance varied nonlinearly and extremely rapidly as a function of the proportion of image shown. Conversely, in the scotoma condition, the increase in performance was far more steady and gradual, and closer to a 1:1 ratio between increase in proportion shown and sensitivity as measured using d′. 
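To see the shape difference concretely, the quoted continuous fits can be evaluated directly (a sketch using the coefficients reported above; these are model predictions on the 2-AFC d′ scale, not the categorical 8-AFC estimates).

```python
import numpy as np

def window_dprime(radius_deg):
    """Inverse-function fit reported above (2-AFC d' scale)."""
    return 1.90 - 4.73 / (radius_deg + 1)

def scotoma_dprime(radius_deg):
    """Linear fit reported above (2-AFC d' scale)."""
    return 2.28 - 0.040 * radius_deg

print(np.round(window_dprime(np.array([1, 2, 5, 10, 15, 20, 24])), 2))
# rises steeply at small radii, then flattens toward the 1.90 asymptote
print(np.round(scotoma_dprime(np.array([0, 10, 20, 30, 40])), 2))
# declines steadily and gradually with scotoma radius
```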
Having generally described the results, let us now evaluate them in terms of our research questions, namely the degree to which our results replicated those of Larson and Loschky (2009) concerning the greater importance of the periphery and the greater efficiency of central vision for rapid scene categorization, versus the degree to which stimulating a far larger portion of the visual field in the current study produced different results from Larson and Loschky's. The key data are shown in Figure 5A and 5B, and the t tests and Bayes factors are reported in Table 2. Bayes factors estimate the relative posterior likelihood of two competing hypotheses; in this case, one of the hypotheses is a point null (Dienes, 2014). Note that by calculating Bayes factors we can evaluate support for not only the alternative hypothesis but also for the null hypothesis (Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wetzels et al., 2011). As a general statement, the results show important similarities to those of Larson and Loschky.2 
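For reference, the Bayes-factor calculation described in the Table 2 notes can be sketched as follows (our implementation of the Dienes, 2014, approach; the example numbers are hypothetical, not values from Table 2). H1 places a uniform prior on d′ differences between 0 and 2, H0 is a point null at zero, and the observed difference is modeled as normal with its standard error.

```python
from scipy.integrate import quad
from scipy.stats import norm

def bf10_uniform_vs_null(obs_diff, se, lo=0.0, hi=2.0):
    """BF for H1 (uniform prior on the d' difference, lo..hi) versus a
    point null at zero, given a normal likelihood for the observed diff."""
    like_h1 = quad(lambda d: norm.pdf(obs_diff, loc=d, scale=se), lo, hi)[0] / (hi - lo)
    like_h0 = norm.pdf(obs_diff, loc=0.0, scale=se)
    return like_h1 / like_h0

# Hypothetical numbers for illustration (not values from Table 2):
print(round(bf10_uniform_vs_null(obs_diff=0.25, se=0.15), 2))
```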
The importance of the periphery
We first consider the importance of the periphery for rapid scene categorization. As discussed earlier, and shown in Figure 3A, Larson and Loschky (2009) found that the 5° window condition produced meaningfully and significantly worse performance than the full-image condition, but the 5° scotoma produced equivalent performance to the full-image condition, suggesting that peripheral vision (i.e., >5° eccentricity) was more important than central vision (i.e., ≤5° eccentricity) for rapid scene categorization. Similarly, in the current study, as shown in Figure 5A and Table 2, rapid scene recognition in the 5°-radius window condition was significantly and meaningfully lower than in the 90°-radius full-image control condition. This replicated Larson and Loschky's results, and indicates that foveal and parafoveal vision, as defined by eccentricities ≤5°, is not sufficient to achieve asymptotic performance. Thus, information from beyond 5°, which is commonly defined as peripheral vision, was necessary to achieve asymptotic performance. Let us compare these results from the window condition to the scotoma condition. We did not include a 5°-radius scotoma condition, but we can be confident that it would likely have produced performance no different from the asymptotic performance in the 0°-radius scotoma full-image control condition. We can infer this because, as shown in Table 2 and Figure 5A, the 10°-radius scotoma condition—which removed even more image content from the center of vision than a 5° radius would have—also produced performance no different from the 0°-radius scotoma full-image control condition due to a ceiling effect. This indicates that not only was the central 5° radius of vision unnecessary for asymptotic rapid scene categorization, but even the central 10° radius of vision was unnecessary. Thus, taking the results from the 5°-radius window condition together with those of the 10° scotoma condition, we replicated and extended Larson and Loschky's result, as shown in Figure 3A, that peripheral vision was more important than central vision for rapid scene categorization. Nevertheless, these results raise another question: What explains the fact that the largest scotoma to produce performance no different from seeing the entire image increased from 5° radius in Larson and Loschky (2009), in Figure 3A, to 10° radius in the current study, in Figure 5A? A possible simple answer is that the current study stimulated a much larger field of view, thereby increasing the relative utility of peripheral vision for rapid scene categorization. 
Other aspects of the current results also dramatically showed the value of information in peripheral vision for rapid scene categorization. Specifically, even after all imagery was removed in the scotoma condition out to 30° eccentricity (twice the typical maximum eccentricity of 15° in most previous studies of rapid scene categorization), rapid scene categorization was still quite sensitive, with d′ = 2.63. Likewise, even when viewers had information only from beyond the 70° scotoma, they were still above chance, with d′ = 0.73. 
Given that the central 5° of central vision (i.e., foveal and parafoveal vision) is insufficient to achieve asymptotic rapid scene categorization, how large must the central region be to achieve asymptotic performance? As shown in Figure 5A and Table 2, the 15°-radius window is clearly no different from the 90°-radius window full-image condition. An interesting question is whether the same can be said of the 10°-radius window. However, the statistical evidence is ambiguous. While Table 2 shows that the 10°-radius window was not significantly different from the 90°-radius window full-image condition, we cannot say that performance in the two conditions was the same. Specifically, to provide evidence in favor of the null, we cannot use p values but must instead use the Bayes factor, which in this case provides weak (“anecdotal”) evidence for the two conditions being different, as shown in Table 2. Thus, in the current study, while performance in the 15°-radius window condition was clearly no different from the 90°-radius full-image control condition, further research will be needed concerning the 10°-radius window condition. 
The central efficiency advantage
We next consider the efficiency advantage for central vision. As discussed earlier and shown in Figure 3B, Larson and Loschky (2009) found that when the same proportion of the original image was shown in the window and scotoma conditions, the window condition always produced superior performance to the scotoma condition. Our results are consistent with those but support this conclusion far more compellingly. This can be shown in several ways. First, as shown in Figure 5B and noted in the caption to Figure 4, a window size of 21.86°, roughly halfway between the 20° and 24° window radii, would show an equal amount of image area to that shown in the 70°-radius scotoma condition—namely 23% of the original image. However, Figure 5B shows that this virtual window size produced radically greater sensitivity than the 70°-radius scotoma condition (roughly, d′ = 3.41 for window vs. 0.72 for scotoma). This difference in sensitivity is far greater than the largest difference found by Larson and Loschky, shown in Figure 3B, for the roughly 20% visible comparison (roughly, d′ = 1.70 for window vs. 1.05 for scotoma). Figure 5B shows even more compelling evidence for this conclusion in terms of the functions relating sensitivity to proportion of image pixels shown in window and scotoma conditions. The portion of the window condition representing central vision produced an extremely steep function for the sensitivity to proportion of image shown, which was roughly the inverse of eccentricity, while the scotoma condition produced a far shallower linear function. Furthermore, the steepest portion of the window condition's slope was between the two shortest eccentricities (1° vs. 2° radius). Thus, although information in the foveal and parafoveal regions was insufficient to rapidly categorize a scene at a high level of sensitivity, each increment in information shown there made a bigger contribution to successful scene categorization than anywhere else in the visual field. Conversely, the slope for the scotoma condition is very close to unity. This last point is strongly consistent with Larson and Loschky's conclusion that the advantage of peripheral vision over central vision is because the area in peripheral vision is so much greater, which they dubbed the “more is better” hypothesis. 
The latter observations suggest that a meaningful comparison can be made between the cone-density-by-eccentricity function shown in Figure 2 (based on Curcio, Sloan, Kalina, & Hendrickson, 1990) and the experimental results shown in Figures 5A and 5B. Specifically, Figure 2 shows that the exponential drop-off in cone density is greatest between 0° and 5°–10° eccentricity. This exponential change is consistent with the observed inverse relationship between window radius and sensitivity that is shown in Figure 5B for the window conditions with 0° and 5°–10° radii. This comparison suggests that the highly nonlinear cone-density function in central vision explains the high efficiency of central vision in rapid scene categorization. Conversely, the very flat cone-density function at >10° eccentricity in Figure 2 seems to map well onto the linear function relating proportion of the original image shown to sensitivity in the scotoma condition in Figure 5B. This suggests that in the periphery (e.g., beyond 10°), because cone density is essentially constant, each additional unit of visual information (i.e., each additional percentage of original image area shown) produces a roughly equal increment in rapid-scene-categorization performance. An interesting direction for future work would be to model the influence of cone density on results such as ours using, for example, the Image Systems Engineering Toolbox for Biology (ISETBio). Nevertheless, Experiment 1 points to the conclusion that central and peripheral vision may make relatively independent contributions to rapid scene categorization, based on having quantitatively very different functions operating in each. 
The critical radius
We last consider results regarding the critical radius, namely the single radius in window and scotoma conditions that produces equivalent results. In the current study, this was roughly 10°. (To formalize this comparison, an analysis of only the common 10°-radius conditions revealed substantial evidence for H0: z = −1.11, p = 0.267, difference in 8-AFC d′ = 0.14, BF = 0.21.) Furthermore, whereas the critical radius of roughly 7.4°–8.7° in Larson and Loschky (2009) produced significantly lower sensitivity than the full-image control condition, the critical radius of 10° in the current study was nearly equivalent to the full-image control condition (though see earlier for the ambiguity of this equivalence). The nearly equivalent performance of both the 10° window and 10° scotoma conditions to the full-image condition indicates that, in the current study, information presented in both the central and peripheral visual fields provided redundant information for the task of rapid scene categorization. An example image illustrating the 10° critical radius is shown in Figure 5C. 
The results regarding the critical radius showed both similarities to and important differences from those of Larson and Loschky's experiment 1. In that study, the critical radius for the window and scotoma conditions, which produced equivalent performance, was estimated to be between 8.7° (using scotoma images with square outer boundaries) and 7.4° (using scotoma images with circular outer boundaries). In the current experiment, as noted earlier, the critical radius was roughly 10°, which is relatively close to Larson and Loschky's estimates. Nevertheless, this relative similarity in the eccentricity of the critical radius belies a large difference between the two studies in the ratio of image area within the window versus outside the scotoma at the critical radius. In Larson and Loschky's study, the ratio of area within the window to area outside the scotoma (which we will call the central efficiency ratio) was 0.33:0.67 = 1:2.07 (using scotoma images with square outer boundaries) or 0.3:0.7 = 1:2.33 (using scotoma images with circular outer boundaries), whereas in the current experiment it was 0.04:0.96 = 1:24. Larson and Loschky asked whether their 1:2.33 central efficiency ratio could be explained in terms of V1 cortical magnification, namely by the proportion of V1 cortical tissue stimulated by the window and scotoma conditions as estimated from areal cortical-magnification functions, and found that cortical-magnification functions overestimated the importance of central vision in creating the critical radius. However, the far smaller central efficiency ratio of 1:24 in the current study shows markedly greater efficiency for central vision than peripheral vision. This, in turn, suggests that cortical magnification may be a more reasonable explanation than it seemed based on Larson and Loschky's larger ratio of 1:2.33, which showed a far less marked central efficiency advantage (see also the recent results of Geuzebroek & van den Berg, 2018, which showed equal central and peripheral scene-gist performance after m-scaling of the scene stimuli). 
Differences due to the larger stimulated visual field
The results of Experiment 1 replicated both the importance of peripheral vision and the central-vision efficiency advantage found by Larson and Loschky (2009). Nevertheless, they showed greater behavioral dissociability of central and peripheral regions than Larson and Loschky's study did. This was shown by the highly dissimilar inverse versus linear functional relationships for central versus peripheral vision shown in Figure 5A and 5B. Likewise, we found more dramatic evidence of the central efficiency advantage when comparing equal proportions of visible image. Lastly, we found a far smaller ratio of the proportion of image inside the window to that outside the scotoma at the critical radius. All of these results are most simply explained by the much greater extent of the visual field stimulated in the current study. While Larson and Loschky's study had a maximum radius of 15°, and thus did not include far peripheral vision, the current experiment, which extended to 90°, did. The relatively near peripheral vision that Larson and Loschky studied seems in retrospect to have more in common with central vision (i.e., foveal and parafoveal vision) than with the middle and far peripheral vision we investigated. 
Experiment 2
Experiment 1 showed a very robust efficiency advantage for information presented centrally (in the window condition) compared to information presented peripherally (in the scotoma condition). However, there are two qualitatively different potential explanations for this efficiency advantage. One is in terms of either cortical magnification in, say, V1 (Rovamo et al., 1978; Strasburger et al., 2011; Van Essen et al., 1984) or a central-vision processing bias in higher ventral-stream areas involved in object processing, such as lateral occipital cortex (Larsson & Heeger, 2006; Sayres & Grill-Spector, 2008)—which, while not essential to rapid scene categorization, plays a role in it (Linsley & MacEvoy, 2014). This explanation for the efficiency advantage in central-vision processing (on a per-pixel basis) in terms of greater visual sensitivity of central vision is consistent with our discussion of the results of Experiment 1. However, an alternative explanation is in terms of the photographic bias, namely that photographers point their cameras at particularly interesting things and highly diagnostic image content is therefore concentrated in the central region of scene images (Velisavljevic & Elder, 2008). According to this explanation, it is the diagnostic content-sampling eccentricity, rather than neurophysiological differences due to presentation eccentricity, that produces the efficiency advantage in rapid scene categorization for central vision. Of course, a third possibility is that both of these factors contribute, with each making some weighted contribution to performance in Experiment 1. Velisavljevic and Elder's experiment 2 investigated this issue by manipulating each factor independently of the other to investigate the effects of both on short-term memory for scenes. They manipulated the content-sampling eccentricity by selectively showing only parts of larger original images, with the parts varying in terms of their distances from the center of the original image. They then manipulated presentation eccentricity by varying the distance of memory-probe items from fixation. Using a factorial design, they found main effects of both factors, but a larger effect of retinal eccentricity than the photographic bias. Nevertheless, that study investigated visual short-term memory rather than rapid scene categorization. Thus, our Experiment 2 used a similar approach to determine whether, or to what extent, the content-sampling eccentricity contributed to the central-vision efficiency advantage in rapid scene categorization found in Experiment 1
Method
Participants
Nine participants (five women, four men; age: M = 24.3 years, SD = 4.3) from the University of Lille gave written informed consent and were paid 30 euros for their participation in this and other related studies. The study was approved by the Research Ethics Committee. 
Design, stimuli, and procedure
Experiment 2 used the same stimuli and procedures as Experiment 1 with the following exceptions (Figure 6 illustrates the stimuli and experimental design): First, we used only circular window images, because we were specifically testing two explanations of the central-vision efficiency advantage found in the window condition in Experiment 1. Second, to simplify our experimental design we used only a single window radius, 5°, which had produced excellent but not asymptotic performance in Experiment 1. Third, to manipulate content-sampling eccentricity we selected three windowed image regions from each original image, centered at 0° eccentricity from the image center (horizontal and vertical), −60° eccentricity (horizontal: to the left), and +60° eccentricity (horizontal: to the right). To manipulate presentation eccentricity, we showed each of the three windowed image regions (which varied in content-sampling eccentricity) at the same three presentation eccentricities: 0°, −60° (to the left), and +60° (to the right). Thus, as shown at the bottom of Figure 6, we had a full factorial 3 (content-sampling eccentricity: −60°, 0°, +60°) × 3 (presentation eccentricity: −60°, 0°, +60°) within-subject experimental design. We used 504 source images (8 categories × 9 cells in the design × 7 instances of each). To show each participant only one window image from each source image while still presenting every source image in all nine cells of the factorial design, we counterbalanced the assignment of the 504 source images to the nine cells across the nine participants, as sketched below. 
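To make the counterbalancing concrete, the following is a minimal sketch of one way to implement it as a Latin-square rotation. The function and variable names are ours for illustration; the article specifies only the constraints, not the assignment code.

```python
from itertools import product

# Hypothetical sketch of the counterbalancing scheme (our own illustration;
# the authors' actual assignment code is not given in the article). For a
# given participant, each of the 504 source images appears in exactly one of
# the nine design cells, and across the nine participants each image cycles
# through all nine cells (a Latin-square rotation).
SAMPLING_ECCS = (-60, 0, 60)       # content-sampling eccentricity (deg)
PRESENTATION_ECCS = (-60, 0, 60)   # presentation eccentricity (deg)
CELLS = list(product(SAMPLING_ECCS, PRESENTATION_ECCS))  # the 9 factorial cells

def cell_for(image_index, participant_index):
    """Return the (sampling, presentation) cell for one image and participant."""
    return CELLS[(image_index + participant_index) % len(CELLS)]

# Sanity check: each participant sees all 504 images, each cell receives
# 504 / 9 = 56 images, and across participants every image visits every cell.
for p in range(9):
    counts = {}
    for i in range(504):
        cell = cell_for(i, p)
        counts[cell] = counts.get(cell, 0) + 1
    assert all(n == 56 for n in counts.values())
```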
Figure 6
 
An illustration of the stimuli and experimental design in Experiment 2. Top row: Example of an original image, illustrating content-sampling eccentricity. The content of three 5°-radius windows was sampled at −60°, 0°, and +60° eccentricity from the image center. Middle row: Magnified versions of the windowed content samples. Bottom row: Illustration of the factorial design of Experiment 2 using the middle-row samples as examples. Content-sampling eccentricity (−60°, 0°, and +60°) varies across the three columns, and presentation eccentricity (−60°, 0°, and +60°) varies across the three rows. Each of the nine images in the 3 × 3 matrix is an example of what would be presented to participants in that condition.
Results and discussion
As in Experiment 1, we used a multilevel probit regression to estimate the 8-AFC d′ for each of the conditions. Each categorical predictor (presentation eccentricity and sampling eccentricity) was effect-coded to orthogonalize the relationship between the main effects and the interaction (effect coding thus produces an analysis that is a probit version of an analysis of variance). The random effects were the intercept and slope main effects for the two predictors across participants and an item random effect of intercept across scenes. As shown in Figure 7, there was, as expected, a strong and significant main effect of the presentation eccentricity of the window, F(2, 16) = 159.00, p < 0.001. Specifically—and unsurprisingly—windows presented at 0° were far more accurately categorized than those presented at −60° and +60° eccentricity. Of much greater interest for the current experiment, the content-sampling eccentricity also had a significant main effect, F(2, 16) = 10.78, p < 0.001, which was, however, substantially smaller (as shown in Figure 7). Specifically, image content sampled from 0° produced higher sensitivity than did content sampled from the −60° and +60° eccentricities, z = 4.38, p < 0.001. This main effect was qualified by a significant Presentation eccentricity × Content-sampling eccentricity interaction, F(4, 32) = 4.39, p < 0.001. As shown in Figure 7 and Table 3, the interaction indicates that the advantage for image content sampled from the middle of the original image (0° eccentricity) versus the edges (−60° or +60° eccentricities) was specific to presenting this content centrally (at the 0° window eccentricity). In simple terms, this means that when image content is presented at a far retinal eccentricity, it does not matter where the image content was sampled from. Presumably, this is due to either the lower resolution or the greater crowding effects of the far visual periphery, which would wash out differences due to content-sampling eccentricity. 
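For readers unfamiliar with d′ in m-alternative designs, the underlying signal-detection mapping between d′ and unbiased m-AFC proportion correct (Hacker & Ratcliff, 1979) can be sketched numerically as follows. This simplified inversion is for intuition only; it is not the multilevel probit estimation procedure used in our analyses.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

def mafc_pc(dprime, m=8):
    """Unbiased m-AFC proportion correct for a given d' (Hacker & Ratcliff,
    1979): the probability that the target alternative produces the largest
    of m normal draws, P(C) = integral of phi(t - d') * Phi(t)**(m - 1) dt."""
    integrand = lambda t: norm.pdf(t - dprime) * norm.cdf(t) ** (m - 1)
    return quad(integrand, -np.inf, np.inf)[0]

def mafc_dprime(pc, m=8):
    """Numerically invert the m-AFC mapping to recover d' from accuracy."""
    return brentq(lambda d: mafc_pc(d, m) - pc, -1.0, 8.0)

print(round(mafc_pc(0.0), 3))       # chance in 8-AFC: 1/8 = 0.125
print(round(mafc_dprime(0.80), 2))  # d' corresponding to 80% correct
```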
Figure 7
 
Windowed scene-categorization sensitivity as a function of presentation eccentricity (−60°, 0°, and +60°) and content-sampling eccentricity (−60°, 0°, and +60°). (The distinction between presentation eccentricity and content-sampling eccentricity is illustrated in Figure 6.) Error bars represent the standard error of the mean.
Table 3
 
Comparisons of the three content-sampling eccentricities at each of three presentation eccentricities. Notes: BF = Bayes factor, based on a uniform prior of 0 < d′ difference < 2 and a point null hypothesis; thus this BF indicates whether a difference of zero is more or less likely after collecting the data. Calculations performed per Dienes (2014).
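The Bayes factors reported in Table 3 (and Table 2) follow Dienes (2014): the likelihood of the observed d′ difference is approximated as normal with its standard error, and the alternative hypothesis places a uniform prior on differences between 0 and 2. Below is a minimal sketch of that calculation; the numbers in the usage line are hypothetical placeholders, not values from our tables.

```python
from scipy.integrate import quad
from scipy.stats import norm

def dienes_bf(obs_diff, se, upper=2.0):
    """Bayes factor per Dienes (2014). H1 puts a uniform prior on the d'
    difference over [0, upper]; H0 is a point null at 0. The likelihood of
    the data is approximated as normal, centered on the observed difference
    with standard error se. BF > 1 favors H1; BF < 1 favors the null."""
    like_h1 = quad(lambda delta: norm.pdf(obs_diff, loc=delta, scale=se),
                   0.0, upper)[0] / upper   # likelihood averaged over the prior
    like_h0 = norm.pdf(obs_diff, loc=0.0, scale=se)
    return like_h1 / like_h0

# Hypothetical values for illustration only (not numbers from our tables):
print(round(dienes_bf(obs_diff=0.5, se=0.15), 2))
```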
In sum, consistent with the results of Velisavljevic and Elder (2008, experiment 2), we found effects for both presentation eccentricity and sampling eccentricity, but a far larger effect of presentation eccentricity. Thus, a portion of the central efficiency advantage in Experiment 1 can be explained by the photographic bias. However, most of that advantage appears to be due to retinal eccentricity, regardless of where in a given real-world scene a photographer happened to be pointing their camera. 
An interesting question is whether our comparison of 0° and 60° skewed our results to show a greater effect of presentation eccentricity than of sampling eccentricity. After all, 60° eccentricity is in far peripheral vision, where vision is generally poor. However, our design factorially combined presentation eccentricity and sampling eccentricity using the same set of eccentricities. Thus, if one assumes that sampling eccentricity (i.e., the eccentricity from which image content is sampled, and thus the basis of the photographic bias) is just as important as presentation eccentricity (i.e., where on the retina images are presented), then both forms of eccentricity should have had equal effects. In contrast to this assumption of equality, our factorial design showed a larger effect of presentation eccentricity than of sampling eccentricity. 
Importantly, the results of Experiment 2 are consistent with those of Velisavljevic and Elder (2008) but were found using a very different dependent measure (rapid scene categorization vs. short-term memory for randomly selected scene patches) and with stimuli presented at a larger range of retinal eccentricities. Further research could investigate the psychophysical relationship between content-sampling eccentricity and presentation eccentricity in greater detail, looking into the possibility of a nonlinear relationship or interactions with other factors known to affect rapid scene categorization differentially as a function of eccentricity, such as spatial-frequency content (Loschky et al., 2005; Schyns & Oliva, 1994) and processing time (Larson, Freeman, Ringer, & Loschky, 2014). 
General discussion
In Experiment 1, we replicated two of Larson and Loschky's (2009, experiment 1) key findings: that peripheral vision is more important for rapid scene categorization than the central 5° of vision (foveal plus parafoveal vision) and that the central 5° of vision is more efficient at rapid scene categorization on a per-pixel basis than peripheral vision. Similar to their results, we found that a critical radius of 10° produced equivalent performance in both window and scotoma conditions (compared to the 7.4°–8.7° critical radii they found). However, in sharp contrast to Larson and Loschky's results, our critical radius showed a central efficiency ratio with 24 times more image area/content in peripheral vision than in central vision, rather than Larson and Loschky's 2 times more. Thus, the true magnitude of the central-vision efficiency advantage was shown to be an order of magnitude greater than their estimates. This result also adds far greater weight to Larson and Loschky's argument that peripheral vision makes up for its inefficiency through its tremendous advantage in sheer area—what they called the “more is better” principle. 
A counterargument to this conclusion is that we inflated our estimate of the central efficiency ratio by using a far larger screen when calculating the proportion of image within the window relative to that outside the scotoma. Specifically, according to this counterargument, if our calculation method were taken to its logical extreme, one would extend the panoramic screen beyond our 90° eccentricity to a full 180° eccentricity (i.e., a wraparound 360° screen), which would further inflate the central efficiency ratio but also violate common sense by putting imagery behind the head. However, the key assumption of this counterargument is that vision in the middle to far periphery cannot meaningfully contribute to rapid scene categorization and thus should not be included in calculating the central efficiency ratio. This assumption is clearly falsified by the results of our Experiment 1. Consider performance based on image content presented only from the middle to far periphery (the 30° scotoma condition) or only in the far periphery (the 60° scotoma condition): performance in those conditions ranged from d′ = 2.7 to 1.1, respectively, showing clear contributions to performance. In sum, such performance based only on image content presented in the middle or far periphery justifies including those regions when calculating the central efficiency ratio. 
The current study also contributes a novel insight by showing that central and peripheral vision have radically different functional relationships between rapid-scene-categorization sensitivity and retinal eccentricity (or proportion of original image shown). The functional relationship for central vision (between 0° and 5°–10° eccentricity) is highly nonlinear, apparently mirroring the steep drop-off in cone density over that range. Conversely, the functional relationship for peripheral vision (>10° eccentricity) is largely linear, apparently paralleling the essentially flat cone density there. Again, for peripheral vision this linear relationship between the area of the original image shown and scene-gist performance lends further support to the “more is better” principle. 
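To illustrate the two functional forms, the sketch below fits an inverse model to window data and a linear model to scotoma data. The data arrays are made-up placeholder values chosen only to make the code runnable; they are not our measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# The radius and d' arrays below are placeholders for illustration only;
# they are NOT the measured values from Experiment 1.
def inverse_model(r, a, b):
    return a - b / r        # window (central) condition: inverse in radius

def linear_model(r, a, b):
    return a - b * r        # scotoma (peripheral) condition: linear in radius

window_radius = np.array([1.0, 5.0, 10.0, 20.0, 45.0, 90.0])
window_dprime = np.array([1.4, 2.6, 3.0, 3.2, 3.3, 3.3])    # placeholder
scotoma_radius = np.array([0.0, 10.0, 30.0, 60.0])
scotoma_dprime = np.array([3.3, 3.0, 2.7, 1.1])             # placeholder

(a_w, b_w), _ = curve_fit(inverse_model, window_radius, window_dprime)
(a_s, b_s), _ = curve_fit(linear_model, scotoma_radius, scotoma_dprime)
print(f"window:  d' = {a_w:.2f} - {b_w:.2f}/radius")
print(f"scotoma: d' = {a_s:.2f} - {b_s:.2f}*radius")
```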
The current study also answers an important question that was left hanging in Larson and Loschky's (2009) study, namely: Is the incredible central-vision efficiency advantage simply due to the photographic bias (Velisavljevic & Elder, 2008) that puts the most interesting (i.e., diagnostic) image content at the center of photographs? Experiment 2 addressed this question by contrasting the effects of content-sampling eccentricity (i.e., where in the original image the content came from) versus presentation eccentricity (i.e., where on the screen the image content was presented). The results showed a relatively small effect of the photographic bias, which produced a difference in sensitivity of roughly d′ = 0.5 between content sampled from the center of the original images versus content sampled from 60° to the left or right. In contrast, there was a difference in sensitivity of roughly d′ = 1.7 between imagery presented at the center of the screen versus imagery presented 60° to the left or right. In sum, the effect of presentation eccentricity on sensitivity was roughly 3 times greater than the effect of sampling eccentricity. Thus, the efficiency advantage for central vision on rapid scene categorization is primarily an effect of retinal eccentricity rather than of the photographic bias. 
An interesting issue raised by the current study is the dichotomization of the visual field into central versus peripheral vision. Some retinal-neurophysiology researchers have defined peripheral vision as anything beyond the macula, namely >2.6°–3.6° (Quinn et al., 2019), while many visual-cognition researchers have defined it as anything beyond 5° eccentricity (Hollingworth et al., 2001; Larson & Loschky, 2009; Rayner, 1998; Shimozaki et al., 2007; van Diepen et al., 1998), and still other vision researchers have defined it as anything beyond 10° eccentricity (Fortenbaugh et al., 2007; Schwartz, 2010). We found that the critical radius producing equivalent performance in the window and scotoma conditions was 10°, or roughly the outer limit of the perifovea (Table 1). Thus, behaviorally, the results of the rapid-scene-categorization task suggest a dichotomy of the visual field between <10° and >10°. This also seems to be very close to the eccentricity at which the drop-off in cone density reaches an asymptote (as shown in Figure 2). Nevertheless, the critical radius in the current study is specific to the task of rapid scene categorization. Other scene-perception studies, using other tasks, have found very different important limiting eccentricities. For example, saccade lengths have been shown to be relatively unaffected by gaze-contingent blur outside of a moving window of 4.6° radius (Loschky & McConkie, 2002), or roughly the outer limit of the parafovea (Table 1). And long-term-memory performance for picture-change detection was found to be no better than chance when the closest fixation to a change was >2° (Nelson & Loftus, 1980, experiment 1), or roughly the outer limit of the macula (Table 1). Whether these results are meaningfully related to these physiological landmarks of the visual field, or are simply coincidences, is a subject that deserves further research (e.g., Ehinger & Rosenholtz, 2016). However, based on the current study, it seems reasonable that when dichotomizing between central and peripheral vision, one should at least be clear about what is included in the definition of central vision, namely only the foveola, the macula, the foveola and parafovea, or the foveola, parafovea, and perifovea. 
From the perspective of our understanding of central versus peripheral vision, there are two opposing approaches, and a middle path, to characterizing them and explaining their operation: the “two visual systems” approach, which argues that central and peripheral vision are qualitatively different and serve different functions, and the “equivalence principle,” which argues that central and peripheral vision are essentially the same except for their spatial resolution, which can be factored out by appropriate contrast and size scaling through the use of cortical-magnification functions (i.e., m-scaling). The middle path acknowledges that, despite the elegant simplicity of the equivalence principle, there are a number of visual tasks that produce worse performance in peripheral vision than central vision even after m-scaling, thus pointing to quantitative rather than qualitative differences between central and peripheral vision (Yu, Chaplin, & Rosa, 2015). Distinguishing among these three possibilities critically depends upon taking account of cortical magnification when examining differential performance in central versus peripheral vision. Thus, a key unresolved question left open by the current study is the extent to which the results of Experiment 1 can be explained in terms of cone density, retinal-ganglion-cell density, or cortical magnification of the fovea in primary visual cortex (V1; cf. Geuzebroek & van den Berg, 2018). 
Research on rapid scene categorization has greatly benefited from computational modeling efforts (Bosch, Muñoz, & Martí, 2007; Oliva & Torralba, 2001; Torralba & Oliva, 2003), including both highly accurate models using deep neural networks (for review, see Zhou, Lapedriza, Khosla, Oliva, & Torralba, 2018) and strongly biologically inspired models (e.g., Grossberg & Huang, 2009). However, to our knowledge, only one such scene-classification model has included the distinction between central and peripheral vision, and it showed the particular importance of information from the image periphery for scene categorization (Wang & Cottrell, 2017). Wang and Cottrell replicated all the major patterns of results from Larson and Loschky (2009) using the same image sizes (i.e., maximum eccentricity of 13.5°), including the greater importance of peripheral information but better efficiency based on central information, with a deep neural network trained on scene images and tested on window and scotoma images. They also showed an effect of the photographic bias, but, as in our Experiment 2, it played a minor role. They then went beyond Larson and Loschky's study by training one model on window images and another on scotoma images, and testing each on whole images. The model trained on scotoma images was more accurate. Furthermore, when they combined the two models at a decision stage that could differentially weight the input from each model, the scotoma-trained model was weighted far more heavily after 10,000 trials of learning, and visualizations of the heavily weighted features showed that the central and peripheral models relied on qualitatively different features. Thus, their results strongly suggest that peripheral vision contains particularly valuable information for scene categorization, not only for humans but also for deep neural networks optimized to categorize scene images. Finally, they found that incorporating V1-like cortical magnification through log-polar transformation of the input images (at both training and test stages) produced far better simulations of Larson and Loschky's human-performance results than using untransformed images. In sum, Wang and Cottrell's results point to the importance, for computational models of scene categorization, of taking into account the contributions of central versus peripheral vision, including cortical magnification (through log-polar transformation) and the value of information from the periphery (despite its being greatly compressed). 
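As a rough illustration of such a log-polar input transformation, the sketch below resamples an image so that central pixels occupy disproportionately many output samples, mimicking V1-like cortical magnification. This is our own minimal implementation under simple assumptions (square grayscale input, nearest-neighbor lookup), not Wang and Cottrell's code.

```python
import numpy as np

def log_polar(image, out_h=128, out_w=128):
    """Minimal log-polar resampling of a square grayscale image (H x W array).
    Rows sample log radius (oversampling the center, compressing the
    periphery); columns sample polar angle. Nearest-neighbor lookup keeps the
    sketch short; Wang and Cottrell's (2017) implementation may differ."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    # Log-spaced radii from ~1 pixel out to the image edge.
    radii = np.exp(np.linspace(0.0, np.log(max_r), out_h))
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]

# Toy usage: on a random 256 x 256 "image," central pixels fill far more of
# the log-polar output than peripheral pixels, mimicking cortical magnification.
out = log_polar(np.random.rand(256, 256))
print(out.shape)  # (128, 128)
```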
Further computational models of scene categorization should also take into account the strong evidence that both scene-selective and object-selective brain regions are involved in rapid scene categorization (Linsley & MacEvoy, 2014), as well as their specific eccentricity biases (Levy, Hasson, Avidan, Hendler, & Malach, 2001). Such modeling could take into account the V1 eccentricity-dependent connectivity strengths of scene-selective regions such as the parahippocampal place area (PPA), the retrosplenial cortex, and the occipital place area, and object-selective areas such as the lateral occipital cortex (Baldassano, Fei-Fei, & Beck, 2016). Specifically, PPA, retrosplenial cortex, and occipital place area all show a large contribution from peripheral vision and a relatively small weighting of input from central vision (Arcaro, McMains, Singer, & Kastner, 2009; Nasr et al., 2011; Silson, Chan, Reynolds, Kravitz, & Baker, 2015; Silson, Groen, Kravitz, & Baker, 2016), with their eccentricity-dependent connection strengths to V1 showing the same pattern (Baldassano et al., 2016). Conversely, lateral occipital cortex shows a strong preference for central vision with far less influence from the periphery (Baldassano et al., 2016; Larsson & Heeger, 2006; Sayres & Grill-Spector, 2008; Silson et al., 2015; Silson et al., 2016). Unfortunately, it is difficult to derive detailed estimates of the degree of peripheral bias across the entire visual field for the scene-selective areas based on even the most detailed studies with both humans and macaques published to date (Arcaro & Livingstone, 2017; Arcaro et al., 2009; Baldassano et al., 2016; Kornblith, Cheng, Ohayon, & Tsao, 2013; Nasr et al., 2011; Silson et al., 2015; Silson et al., 2016). This is because quantification of brain activity in scene-selective areas as a function of retinal eccentricity has not been as precise as the measurement of cortical-magnification functions in, say, V1–V3. In addition, most such studies have not used panoramic stimuli to measure brain activity across the entire visual field. More precise mapping of brain activity in scene-selective areas as a function of retinal eccentricity would allow more precise computational models that could be tested against behavioral data such as those from our Experiment 1. Likewise, more detailed analyses of the eccentricity-dependent contributions of such scene- and object-selective brain areas could combine the behavioral methods of the current study (e.g., Experiment 1) with the computational-modeling methods of Wang and Cottrell (2017). 
Summary/conclusion
In this study, we asked whether we would replicate the general patterns of results obtained by Larson and Loschky (2009) regarding the relative roles of central and peripheral vision in scene-gist recognition of panoramic scene images on a 180°-wide screen. We generally replicated their findings showing the greater importance of peripheral vision for recognizing scene gist and the greater efficiency of central vision. However, our results differed by showing markedly different functional relationships between eccentricity and performance for central versus peripheral vision, with central vision showing a strongly inverse relationship with image radius and peripheral vision showing a largely linear relationship. We also found that the central efficiency advantage was much greater than that shown by Larson and Loschky when central vision was compared with the middle and far periphery rather than the near periphery. Finally, we showed that the central efficiency advantage is largely not explained by the bias of photographers to put interesting (diagnostic) information at the center of images, replicating the earlier weak effect of the photographic bias found by Velisavljevic and Elder (2008). 
Acknowledgments
This research was supported by Grant 10846128 from the United States Office of Naval Research to LCL, and the French National Agency for Scientific Research Grant SHS2 LowVision to MB. We thank Jia Li Tang and Alicia Johnson for helping collect and prepare the panoramic scene images. Research in this article was previously presented at the 2015 Annual Meeting of the Vision Sciences Society, with the abstract published in the Journal of Vision
Commercial relationships: none. 
Corresponding author: Lester C. Loschky. 
Email: loschky@ksu.edu
Address: Psychological Sciences, Kansas State University, Manhattan, KS, USA. 
References
Anderson, S. J., Mullen, K. T., & Hess, R. F. (1991). Human peripheral spatial resolution for achromatic and chromatic stimuli: Limits imposed by optical and retinal factors. The Journal of Physiology, 442, 47–64.
Arcaro, M. J., & Livingstone, M. S. (2017). Retinotopic organization of scene areas in macaque inferior temporal cortex. The Journal of Neuroscience, 37 (31), 7373–7389, https://doi.org/10.1523/JNEUROSCI.0569-17.2017.
Arcaro, M. J., McMains, S. A., Singer, B. D., & Kastner, S. (2009). Retinotopic organization of human ventral visual cortex. The Journal of Neuroscience, 29 (34), 10638–10652, https://doi.org/10.1523/jneurosci.2807-09.2009.
Bacon-Macé, N., Macé, M. J., Fabre-Thorpe, M., & Thorpe, S. J. (2005). The time course of visual processing: Backward masking and natural scene categorisation. Vision Research, 45, 1459–1469.
Baldassano, C., Fei-Fei, L., & Beck, D. M. (2016). Pinpointing the peripheral bias in neural scene-processing networks during natural viewing. Journal of Vision, 16 (2): 9, 1–14, https://doi.org/10.1167/16.2.9. [PubMed] [Article]
Bar, M., & Ullman, S. (1996). Spatial context in recognition. Perception, 25 (3), 343–352.
Becker, W. (1991). Saccades. In Carpenter R. H. S. (Ed.), Eye movements (Vol. 8, pp. 95–137). Boca Raton, FL: CRC Press.
Biederman, I., Mezzanotte, R., & Rabinowitz, J. (1982). Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14, 143–177.
Bosch, A., Muñoz, X., & Martí, R. (2007). Which is the best way to organize/classify images by content? Image and Vision Computing, 25 (6), 778–791, https://doi.org/10.1016/j.imavis.2006.07.015.
Boucart, M., Lenoble, Q., Quettelart, J., Szaffarczyk, S., Despretz, P., & Thorpe, S. J. (2016). Finding faces, animals, and vehicles in far peripheral vision. Journal of Vision, 16 (2): 10, 1–13, https://doi.org/10.1167/16.2.10. [PubMed] [Article]
Boucart, M., Moroni, C., Thibaut, M., Szaffarczyk, S., & Greene, M. (2013). Scene categorization at large visual eccentricities. Vision Research, 86, 35–42, https://doi.org/10.1016/j.visres.2013.04.006.
Boyce, S., & Pollatsek, A. (1992). Identification of objects in scenes: The role of scene background in object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 531–543.
Brewer, W. F., & Treyens, J. C. (1981). Role of schemata in memory for places. Cognitive Psychology, 13 (2), 207–230.
Curcio, C. A., & Allen, K. A. (1990). Topography of ganglion cells in human retina. The Journal of Comparative Neurology, 300, 5–25.
Curcio, C. A., Sloan, K. R., Kalina, R. E., & Hendrickson, A. E. (1990). Human photoreceptor topography. The Journal of Comparative Neurology, 292 (4), 497–523, https://doi.org/10.1002/cne.902920402.
Davenport, J. L., & Potter, M. C. (2004). Scene consistency in object and background perception. Psychological Science, 15 (8), 559–564.
DeCarlo, L. T. (1998). Signal detection theory and generalized linear models. Psychological Methods, 3 (2), 186–205.
DeCarlo, L. T. (2012). On a signal detection approach to m-alternative forced choice with bias, with maximum likelihood and Bayesian approaches to estimation. Journal of Mathematical Psychology, 56 (3), 196–207, https://doi.org/10.1016/j.jmp.2012.02.004.
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5: 781, https://doi.org/10.3389/fpsyg.2014.00781.
Drasdo, N., & Fowler, C. (1974). Non-linear projection of the retinal image in a wide-angle schematic eye. British Journal of Ophthalmology, 58 (8), 709–714.
Duke-Elder, S. (1962). System of ophthalmology: The foundations of ophthalmology (Vol. 7). London, UK: Henry Kimpton.
Eberhardt, S., Zetzsche, C., & Schill, K. (2016). Peripheral pooling is tuned to the localization task. Journal of Vision, 16 (2): 14, 1–13, https://doi.org/10.1167/16.2.14. [PubMed] [Article]
Eckstein, M. P., Drescher, B. A., & Shimozaki, S. S. (2006). Attentional cues in real scenes, saccadic targeting, and Bayesian priors. Psychological Science, 17 (11), 973–980, https://doi.org/10.1111/j.1467-9280.2006.01815.x.
Ehinger, K. A., & Rosenholtz, R. (2016). A general account of peripheral encoding also predicts scene perception performance. Journal of Vision, 16 (2): 13, 1–19, https://doi.org/10.1167/16.2.13. [PubMed] [Article]
Fei-Fei, L., Iyer, A., Koch, C., & Perona, P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision, 7 (1): 10, 1–29, https://doi.org/10.1167/7.1.10. [PubMed] [Article]
Fei-Fei, L., & Perona, P. (2005). A Bayesian hierarchical model for learning natural scene categories. In Schmid, C., Soatto, S., & Tomasi, C. (Eds.), Computer vision and pattern recognition, 2005 (Vol. 2, pp. 524–531). Los Alamitos, CA: IEEE Computer Society.
Fei-Fei, L., VanRullen, R., Koch, C., & Perona, P. (2005). Why does natural scene categorization require little attention? Exploring attentional requirements for natural and synthetic stimuli. Visual Cognition, 12 (6), 893–924.
Florack, L. M. J. (2007). Modeling foveal vision. Paper presented at the First International Conference on Scale Space and Variational Methods in Computer Vision, Berlin, Germany.
Fortenbaugh, F. C., Hicks, J. C., Hao, L., & Turano, K. A. (2007). Losing sight of the bigger picture: Peripheral field loss compresses representations of space. Vision Research, 47 (19), 2506–2520.
Frisén, L. (1990). Clinical tests of vision. New York, NY: Raven Books.
Geuzebroek, A. C., & van den Berg, A. V. (2018). Eccentricity scale independence for scene perception in the first tens of milliseconds. Journal of Vision, 18 (9): 9, 1–14, https://doi.org/10.1167/18.9.9. [PubMed] [Article]
Gordon, R. D. (2004). Attentional allocation during the perception of scenes. Journal of Experimental Psychology: Human Perception and Performance, 30 (4), 760–777, https://doi.org/10.1037/0096-1523.30.4.760.
Greene, M. R., & Oliva, A. (2009). The briefest of glances: The time course of natural scene understanding. Psychological Science, 20 (4), 464–472.
Grossberg, S., & Huang, T.-R. (2009). ARTSCENE: A neural system for natural scene classification. Journal of Vision, 9 (4): 6, 1–19, https://doi.org/10.1167/9.4.6. [PubMed] [Article]
Hacker, M. J., & Ratcliff, R. (1979). A revised table of d′ for M-alternative forced choice. Perception & Psychophysics, 26 (2), 168–170, https://doi.org/10.3758/bf03208311.
Hansen, T., Pracejus, L., & Gegenfurtner, K. R. (2009). Color perception in the intermediate periphery of the visual field. Journal of Vision, 9 (4): 26, 1–12, https://doi.org/10.1167/9.4.26. [PubMed] [Article]
Hasson, U., Levy, I., Behrmann, M., Hendler, T., & Malach, R. (2002). Eccentricity bias as an organizing principle for human high-order object areas. Neuron, 34 (3), 479–490.
Henderson, J. M., McClure, K. K., Pierce, S., & Schrock, G. (1997). Object identification without foveal vision: Evidence from an artificial scotoma paradigm. Perception & Psychophysics, 59 (3), 323–346, https://doi.org/10.3758/bf03211901.
Herzog, M. H., Sayim, B., Chicherov, V., & Manassi, M. (2015). Crowding, grouping, and object recognition: A matter of appearance. Journal of Vision, 15 (6): 5, 1–18, https://doi.org/10.1167/15.6.5. [PubMed] [Article]
Hollingworth, A., & Henderson, J. M. (1998). Does consistent scene context facilitate object perception? Journal of Experimental Psychology: General, 127 (4), 398–415.
Hollingworth, A., Schrock, G., & Henderson, J. M. (2001). Change detection in the flicker paradigm: The role of fixation position within the scene. Memory & Cognition, 29 (2), 296–304, https://doi.org/10.3758/BF03194923.
Huestegge, L., & Böckler, A. (2016). Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes. Journal of Vision, 16 (2): 11, 1–15, https://doi.org/10.1167/16.2.11. [PubMed] [Article]
Kornblith, S., Cheng, X., Ohayon, S., & Tsao, D. Y. (2013). A network for scene processing in the macaque temporal lobe. Neuron, 79 (4), 766–781, https://doi.org/10.1016/j.neuron.2013.06.015.
Larson, A. M., Freeman, T. E., Ringer, R. V., & Loschky, L. C. (2014). The spatiotemporal dynamics of scene gist recognition. Journal of Experimental Psychology: Human Perception and Performance, 40 (2), 471–487, https://doi.org/10.1037/a0034986.
Larson, A. M., & Loschky, L. C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9 (10): 6, 1–16, https://doi.org/10.1167/9.10.6. [PubMed] [Article]
Larsson, J., & Heeger, D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. The Journal of Neuroscience, 26 (51), 13128–13142.
Levi, D. M. (2008). Crowding—An essential bottleneck for object recognition: A mini-review. Vision Research, 48 (5), 635–654, https://doi.org/10.1016/j.visres.2007.12.009.
Levy, I., Hasson, U., Avidan, G., Hendler, T., & Malach, R. (2001). Center-periphery organization of human object areas. Nature Neuroscience, 4, 533–539.
Linsley, D., & MacEvoy, S. P. (2014). Evidence for participation by object-selective visual cortex in scene category judgments. Journal of Vision, 14 (9): 19, 1–17, https://doi.org/10.1167/14.9.19. [PubMed] [Article]
Loschky, L. C., Fortenbaugh, F. C., Rosenholtz, R., Nuthmann, A., Pannasch, S., Calvo, M. G., & Võ, M. L.-H. (2019). Scene perception from central to peripheral vision: A review and synthesis. Journal of Vision. Manuscript in preparation.
Loschky, L. C., & Larson, A. M. (2010). The natural/man-made distinction is made prior to basic-level distinctions in scene gist processing. Visual Cognition, 18 (4), 513–536.
Loschky, L. C., & McConkie, G. W. (2002). Investigating spatial vision and dynamic attentional selection using a gaze-contingent multi-resolutional display. Journal of Experimental Psychology: Applied, 8 (2), 99–117.
Loschky, L. C., McConkie, G. W., Yang, J., & Miller, M. E. (2005). The limits of visual resolution in natural scene viewing. Visual Cognition, 12 (6), 1057–1092, https://doi.org/10.1080/13506280444000652.
Nagy, A. L., & Wolf, S. (1993). Red-green color discrimination in peripheral vision. Vision Research, 33 (2), 235–242.
Nasr, S., Liu, N., Devaney, K. J., Yue, X., Rajimehr, R., Ungerleider, L. G., & Tootell, R. B. H. (2011). Scene-selective cortical regions in human and nonhuman primates. The Journal of Neuroscience, 31 (39), 13771–13785, https://doi.org/10.1523/jneurosci.2792-11.2011.
Nelson, W. W., & Loftus, G. R. (1980). The functional visual field during picture viewing. Journal of Experimental Psychology: Human Learning & Memory, 6 (4), 391–399, https://doi.org/10.1037/0278-7393.6.4.391.
Nuthmann, A. (2014). How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 40 (1), 342–360, https://doi.org/10.1037/a0033854.
Oliva, A. (2005). Gist of a scene. In Itti, L. Rees, G. & Tsotsos J. K. (Eds.), Neurobiology of attention (pp. 251–256). Burlington, MA: Elsevier Academic Press.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42 (3), 145–175.
Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155 (B), 23–36.
Palmer, S. M., & Rosa, M. G. P. (2006). A distinct anatomical network of cortical areas for analysis of motion in far peripheral vision. European Journal of Neuroscience, 24 (8), 2389–2405, https://doi.org/10.1111/j.1460-9568.2006.05113.x.
Pezdek, K., Whetstone, T., Reynolds, K., Askari, N., & Dougherty, T. (1989). Memory for real-world scenes: The role of consistency with schema expectation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15 (4), 587–595.
Polyak, S. L. (1941). The retina: The anatomy and the histology of the retina in man, ape, and monkey, including the consideration of visual functions, the history of physiological optics, and the histological laboratory technique. Chicago, IL: University of Chicago Press.
Quinn, N., Csincsik, L., Flynn, E., Curcio, C. A., Kiss, S., Sadda, S. R.,… Lengyel, I. (2019). The clinical relevance of visualising the peripheral retina. Progress in Retinal and Eye Research, 68, 83–109, https://doi.org/10.1016/j.preteyeres.2018.10.001.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124 (3), 372–422, https://doi.org/10.1037//0033-2909.124.3.372.
Rayner, K., Smith, T. J., Malcolm, G. L., & Henderson, J. M. (2009). Eye movements and visual encoding during scene perception. Psychological Science, 20 (1), 6–10, https://doi.org/10.1111/j.1467-9280.2008.02243.x.
Roenne, H. (1915). Zur Theorie und Technik der Bjerrumschen Gesichtsfelduntersuchung. Archiv für Augenheilkunde, 78 (4), 284–301.
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16 (2), 225–237.
Rousselet, G. A., Joubert, O. R., & Fabre-Thorpe, M. (2005). How long to get to the “gist” of real-world natural scenes? Visual Cognition, 12 (6), 852–877.
Rovamo, J., & Iivanainen, A. (1991). Detection of chromatic deviations from white across the human visual field. Vision Research, 31 (12), 2227–2234, https://doi.org/10.1016/0042-6989(91)90175-5.
Rovamo, J., Virsu, V., & Näsänen, R. (1978, January 5). Cortical magnification factor predicts the photopic contrast sensitivity of peripheral vision. Nature, 271 (5640), 54–56, https://doi.org/10.1038/271054a0.
Sayres, R., & Grill-Spector, K. (2008). Relating retinotopic and object-selective responses in human lateral occipital cortex. Journal of Neurophysiology, 100 (1), 249–267, https://doi.org/10.1152/jn.01383.2007.
Schwartz, S. H. (2010). Visual perception: A clinical orientation (4th ed.). New York, NY: McGraw-Hill.
Schyns, P. G., & Oliva, A. (1994). From blobs to boundary edges: Evidence for time- and spatial-scale-dependent scene recognition. Psychological Science, 5, 195–200.
Shimozaki, S. S., Chen, K. Y., Abbey, C. K., & Eckstein, M. P. (2007). The temporal dynamics of selective attention of the visual periphery as measured by classification images. Journal of Vision, 7 (12): 10, 1–20, https://doi.org/10.1167/7.12.10. [PubMed] [Article]
Silson, E. H., Chan, A. W.-Y., Reynolds, R. C., Kravitz, D. J., & Baker, C. I. (2015). A retinotopic basis for the division of high-level scene processing between lateral and ventral human occipitotemporal cortex. The Journal of Neuroscience, 35 (34), 11921–11935, https://doi.org/10.1523/JNEUROSCI.0137-15.2015.
Silson, E. H., Groen, I. I. A., Kravitz, D. J., & Baker, C. I. (2016). Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex. Journal of Vision, 16 (6): 14, 1–21, https://doi.org/10.1167/16.6.14. [PubMed] [Article]
Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11 (5): 13, 1–82, https://doi.org/10.1167/11.5.13. [PubMed] [Article]
Thorpe, S. J., Gegenfurtner, K. R., Fabre-Thorpe, M., & Bülthoff, H. H. (2001). Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience, 14 (5), 869–876.
Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network, 14 (3), 391–412.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113 (4), 766–786.
Tran, T. H. C., Rambaud, C., Despretz, P., & Boucart, M. (2010). Scene perception in age-related macular degeneration. Investigative Ophthalmology & Visual Science, 51 (12), 6868–6874, https://doi.org/10.1167/iovs.10-5517.
van Diepen, P. M. J., & Wampers, M. (1998). Scene exploration with Fourier-filtered peripheral information. Perception, 27 (10), 1141–1151.
van Diepen, P. M. J., Wampers, M., & d'Ydewalle, G. (1998). Functional division of the visual field: Moving masks and moving windows. In Underwood G. (Ed.), Eye guidance in reading and scene perception (pp. 337–355). Oxford, UK: Elsevier.
Van Essen, D. C., Newsome, W. T., & Maunsell, J. H. R. (1984). The visual field representation in striate cortex of the macaque monkey: Asymmetries, anisotropies, and individual variability. Vision Research, 24 (5), 429–448.
VanRullen, R., & Thorpe, S. J. (2001). The time course of visual processing: From early perception to decision-making. Journal of Cognitive Neuroscience, 13 (4), 454–461.
Velisavljevic, L., & Elder, J. H. (2008). Visual short-term memory for natural scenes: Effects of eccentricity. Journal of Vision, 8 (4): 28, 1–17, https://doi.org/10.1167/8.4.28. [PubMed] [Article]
Wang, P., & Cottrell, G. W. (2017). Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of Vision, 17 (4): 9, 1–22, https://doi.org/10.1167/17.4.9. [PubMed] [Article]
Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E.-J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6 (3), 291–298, https://doi.org/10.1177/1745691611406923.
Whitney, D., & Levi, D. M. (2011). Visual crowding: A fundamental limit on conscious perception and object recognition. Trends in Cognitive Sciences, 15 (4), 160–168, https://doi.org/10.1016/j.tics.2011.02.005.
Wilkinson, M. O., Anderson, R. S., Bradley, A., & Thibos, L. N. (2016). Neural bandwidth of veridical perception across the visual field. Journal of Vision, 16 (2): 1, 1–17, https://doi.org/10.1167/16.2.1. [PubMed] [Article]
Wilson, H. R., Levi, D. M., Maffei, L., Rovamo, J., & DeValois, R. (1990). The perception of form: Retina to striate cortex. In Spillmann L. & Werner J. S. (Eds.), Visual perception: The neurophysiological foundations (pp. 231–272). San Diego, CA: Academic Press.
Wolfe, J. M., Võ, M. L.-H., Evans, K. K., & Greene, M. R. (2011). Visual search in scenes involves selective and nonselective pathways. Trends in Cognitive Sciences, 15 (2), 77–84, https://doi.org/10.1016/j.tics.2010.12.001.
Yu, H. H., Chaplin, T. A., & Rosa, M. G. P. (2015). Representation of central and peripheral vision in the primate cerebral cortex: Insights from studies of the marmoset brain. Neuroscience Research, 93, 47–61, https://doi.org/10.1016/j.neures.2014.09.004.
Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A. (2018). Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40 (6), 1452–1464, https://doi.org/10.1109/TPAMI.2017.2723009.
Footnotes
1  Larson and Loschky (2009) cited only Hasson et al. (2002). In our General discussion we discuss many more recent studies relevant to this topic.
2  One relatively unimportant difference between Larson and Loschky's (2009) results and the current experiment is in the lowest sensitivity in each. In both studies the lowest d′ was for the 1° window condition; for Larson and Loschky, that lowest d′ was 0.33, whereas our lowest d′ was 1.4. It is unclear what caused that difference, since the two studies used different stimulus sets, including different images and scene categories and monochrome versus color images, and were carried out with different participant populations—any of which might explain this difference.