Article | August 2015
Locating the cortical bottleneck for slow reading in peripheral vision
Journal of Vision August 2015, Vol.15, 3. doi:10.1167/15.11.3
      Deyue Yu, Yi Jiang, Gordon E. Legge, Sheng He; Locating the cortical bottleneck for slow reading in peripheral vision. Journal of Vision 2015;15(11):3. doi: 10.1167/15.11.3.

Abstract

Yu, Legge, Park, Gage, and Chung (2010) suggested that the neural bottleneck for slow peripheral reading is located in nonretinotopic areas. We investigated the potential rate-limiting neural site for peripheral reading using fMRI, and contrasted peripheral reading with recognition of peripherally presented line drawings of common objects. We measured the BOLD responses to both text (three-letter words/nonwords) and line-drawing objects presented either in foveal or peripheral vision (10° lower right visual field) at three presentation rates (2, 4, and 8/second). The statistically significant interaction effect of visual field × presentation rate on the BOLD response for text but not for line drawings provides evidence for distinctive processing of peripheral text. This pattern of results was obtained in all five regions of interest (ROIs). At the early retinotopic cortical areas, the BOLD signal slightly increased with increasing presentation rate for foveal text, and remained fairly constant for peripheral text. In the Occipital Word-Responsive Area (OWRA), Visual Word Form Area (VWFA), and object sensitive areas (LO and PHA), the BOLD responses to text decreased with increasing presentation rate for peripheral but not foveal presentation. In contrast, there was no rate-dependent reduction in BOLD response for line-drawing objects in any of the ROIs for either foveal or peripheral presentation. Only peripherally presented text showed a distinctive rate-dependence pattern. Although it is possible that the differentiation starts to emerge at the early retinotopic cortical representation, the neural bottleneck for slower reading of peripherally presented text may be a special property of peripheral text processing in object category selective cortex.

Introduction
Reading speed in normal peripheral vision is slower than foveal reading speed, even when text is scaled to compensate for differences in letter acuity (Chung, Mansfield, & Legge, 1998). Similarly, both word recognition (Lee, Legge, & Ortiz, 2003) and character recognition (Seiple, Holopigian, Shnayder, & Szlyk, 2001; Strasburger, Harvey, & Rentschler, 1991) are slower in peripheral vision. These findings reveal slower temporal processing for text-related information in the periphery compared to the fovea. In contrast, some simpler measures of temporal processing show little difference between central and peripheral vision (e.g., contrast detection; Waugh & Hess, 1994), and some even show a peripheral advantage (e.g., orientation discrimination; Carrasco, McElree, Denisova, & Giordano, 2003; Rovamo & Raninen, 1988; Tyler, 1981, 1985). 
The goal of the current study was to investigate the neural bottleneck for slow peripheral processing of text stimuli. The candidate neural sites include both the early retinotopic visual cortex as well as the word sensitive and selective cortical regions in the lateral occipital and fusiform cortices. 
There are two lines of evidence suggesting that peripheral reading may be limited at stages beyond the early retinotopic cortex. First, early retinotopic regions, responding primarily to physical characteristics of the stimulus (Grill-Spector, Kushnir, Hendler, & Malach, 2000), are not object-category selective (but see Williams et al., 2008 for object-category-specific feedback signals in early visual cortex). If peripheral reading were limited at this level, the slow peripheral processing should generalize to stimulus categories with similar spatial frequency content rather than being specific to written scripts. Although there is no direct evidence that peripheral processing of line-drawing faces or other objects is as fast in peripheral vision as in foveal vision, it has been shown that subjects are able to perform a categorization task in peripheral vision at very short exposure durations (shorter than 100 ms; Li, VanRullen, Koch, & Perona, 2002; Thorpe, Gegenfurtner, Fabre-Thorpe, & Bulthoff, 2001). Additionally, objects such as faces (Crouzet, Kirchner, & Thorpe, 2010) and natural scenes (Kirchner & Thorpe, 2006) were found to be processed much faster (in approximately half the processing time) than words (Chanceaux, Vitu, Bendahman, Thorpe, & Grainger, 2012) in the periphery. Second, increased reading speed in peripheral vision can be achieved following extensive practice on a letter-recognition task, but the improvement is not retinotopically specific; there is substantial transfer to an untrained retinal location (Chung, Legge, & Cheung, 2004; Lee, Kwon, Legge, & Gefroh, 2010; Yu, Legge et al., 2010). This transfer implies the involvement of one or more nonretinotopic sites limiting letter recognition or reading speed. 
Beyond the early retinotopic cortex, a number of cortical sites have been identified as important for object (Grill-Spector, Kourtzi, & Kanwisher, 2001) and word processing (Dehaene & Cohen, 2011; Thesen et al., 2012; Vinckier et al., 2007). The present study focused on two main areas. The first was the lateral occipital area (LO), located laterally just anterior to the retinotopically specific cortex. Even though these areas (LO1 and LO2) have been reported to show some degree of retinotopy (Abdollahi et al., 2014; Amano, Wandell, & Dumoulin, 2009; Kolster, Peeters, & Orban, 2010; Larsson & Heeger, 2006; Wang, Mruczek, Arcaro, & Kastner, 2014), extensive evidence shows that LO is an area where object selectivity starts to emerge. Indeed, there is evidence for object-category-selective responses at this level, such as for facial information (occipital face area; Gauthier et al., 2000) and words or components of words (occipital word-responsive area; Vinckier et al., 2007; Dehaene & Cohen, 2011). The next stage is the Visual Word Form Area (VWFA), located in the left mid-fusiform cortex. The VWFA is a region said to be specialized for processing visual word forms (Cohen et al., 2000; but see also Price & Devlin, 2011; Reich, Szwed, Cohen, & Amedi, 2011). Both neuroimaging and neuropsychological evidence support the view that the VWFA plays a fundamental role in word recognition. For example, Gaillard and colleagues examined a patient who underwent surgery that removed a small portion of occipitotemporal cortex overlapping with the presumed VWFA. The patient had normal reading ability and regional selectivity of the left mid-fusiform area for word processing prior to surgery, but developed a reading deficit and lost word-specific activations after surgery (Gaillard et al., 2006). The left-hemisphere VWFA responds to words presented to either the left or right visual field (McCandliss, Cohen, & Dehaene, 2003). 
Although the site of the VWFA may be invariant to retinal location of the stimuli (McCandliss et al., 2003), the spatial pattern of VWFA responses has been shown to be sensitive to stimulus position (Rauschecker, Bowen, Parvizi, & Wandell, 2012). Being nonretinotopic in cortical location but stimulus-position-sensitive in cortical responses makes VWFA a good candidate for the site of the foveal versus peripheral difference in temporal processing during reading. Although some researchers have questioned the specificity for words of the VWFA (Joseph, Gathers, & Piper, 2003; Joseph, Cerullo, Farley, Steinmetz, & Mier, 2006; Price & Devlin, 2003) and demonstrated its responsiveness beyond the visual modality (Reich et al., 2011), there is no dispute that this region is highly responsive to written words. Thus we have included it in our study because of its importance in word processing. 
It has been shown that the early retinotopic cortex responds most vigorously to flickering stimuli near 6–8 Hz (Ozus et al., 2001; Thomas & Menon, 1998). McKeeff, Remus, and Tong (2007) found that compared with early visual areas, high-level, object-selective regions (e.g., fusiform face area (FFA) and parahippocampal place area) show peak activity at a lower range of temporal frequencies for images presented in the central visual field. Our approach was to measure the BOLD signal for text stimuli at both the early retinotopic cortical areas and object-sensitive areas. The stimuli were presented at three temporal frequencies (2, 4, and 8 Hz), either foveally or peripherally. Over this range, psychophysical measures have shown that reading accuracy is nearly perfect at the fovea and has very little dependence on the presentation rate, but performance decreases dramatically with increasing rate when text is presented at 10° eccentricity. We hypothesized that the neural site responsible for slow peripheral reading would also show a differential temporal rate dependence when the text stimuli were shown in the periphery versus fovea. 
Does visual processing of peripherally presented common objects suffer the same temporal limitation as peripherally presented words? To address this question, we also examined line-drawing objects for comparison with words, as line-drawing objects are similar to words in terms of spatial frequency content and some other pattern features. While word recognition is usually confined to high-acuity foveal vision, object recognition often engages both foveal and peripheral vision (Levy, Hasson, Avidan, Hendler, & Malach, 2001). Interestingly, even in object-selective visual cortex, there is still a degree of central and peripheral visual field bias, forming a lateral-to-medial gradient (Hasson, Levy, Behrmann, Hendler, & Malach, 2002). Hasson et al. (2002) found that brain regions representing words or letter strings were mapped preferentially within the center-biased representation (the more lateral region of the fusiform cortex), while object images preferentially activated the periphery-biased representation (the more medial part of the ventral occipitotemporal cortex) or both foveal and peripheral representations. 
To summarize, our goal was to identify the neural site or sites that impose more severe temporal limitations on the processing of peripherally presented text than of foveally presented text, relative to nonlinguistic visual objects. Specifically, we measured the temporal dependence of cortical responses to words and objects presented in central and peripheral vision in a number of independently defined ROIs. Results from this study will contribute to a more comprehensive understanding of how the human visual system processes text during reading. This is especially important for understanding the reading behavior of special populations (e.g., patients with age-related macular degeneration) who rely on their peripheral vision for reading. 
Methods
Subjects
Seven native English speakers recruited from the University of Minnesota participated in the experiments. They had normal or corrected-to-normal vision and no known neurological or visual disorders. Table 1 shows age, gender, handedness, binocular distance visual acuity measured by the Lighthouse distance visual acuity chart, and log contrast sensitivity measured by the Pelli-Robson contrast sensitivity chart. Subjects S1, S4, S5, S6, and S7 had extensive experience as subjects in psychophysical and fMRI experiments, but none had prior experience with the test stimuli used in the current study. All procedures and protocols were approved by the IRB of the University of Minnesota. Subjects gave written, informed consent before beginning testing. 
Table 1
 
Characteristics of participants.
Stimuli and experimental design
Stimuli were projected using an LCD projector (SANYO, Model PLC-XP41/L) onto the rear of a translucent screen located behind the subject's head inside the scanner bore. Subjects viewed the stimuli through an angled mirror placed on the head coil above their eyes. The viewing distance was 102 cm. Subjects were instructed to keep their head still and to fixate on a cross (0.3° × 0.3°) throughout each scan. All of the test stimuli were high-contrast dark targets presented on a light gray photopic background (about 200 cd/m2) at either 0° (fovea condition, overlaid with the fixation cross) or 10° (periphery condition) from fixation in the lower right visual field (see Figure 1 for an illustration). Each subject completed two fMRI sessions, one for retinotopic mapping and localization of the regions of interest (ROIs) and the other for the main experiment (see below). 
Figure 1
 
Schematic diagram of the experimental paradigm for peripheral presentation of words and nonwords. A block design was used in the experiment to compare activation between three presentation rates (2/s, 4/s, and 8/s).
There were two types of stimuli in the main experiment, text and line-drawing objects, presented in separate blocks. The text stimuli consisted of three-letter strings called trigrams. The pool of trigrams (the same pool used by Yu, Legge, et al., 2010) included the 350 most frequently used three-letter words in English and 350 nonwords. Line-drawing objects were either non-wearable objects (such as spoon, orange, door, etc.; 140 in total) or wearable objects (such as shirt, shoes, pants, etc.; 25 in total) selected from the picture set developed by Snodgrass and Vanderwart (1980). The large difference in the numbers of non-wearable and wearable objects was acceptable because of the infrequent presentation of wearable objects (one per three seconds on average in the stimulus sequence). For text stimuli, a lexical-decision task was used during the fMRI scans: the subject indicated with a button press when the briefly shown letter string was a nonword. For line-drawing objects, subjects were asked to press a button when they saw a wearable object. The average target rate was set at one nonword or wearable object every three seconds for all presentation rates. Small random spatial offsets were applied at each stimulus presentation to avoid potential local adaptation effects. Since we included the line-drawing objects as test stimuli to determine whether visual processing of peripherally presented objects suffers the same temporal limitation as peripherally presented text under similar physical conditions, we matched the physical properties of the stimuli (image size and eccentricity) rather than the difficulty levels of the tasks. The horizontal spans of the images were equated across the line drawings and the trigrams: 1.6° for stimuli presented at the fovea and 12° for stimuli presented in the periphery. 
In the present study, the print sizes used for both foveal and peripheral text were scaled to exceed the critical print size (a threshold value beyond which print size does not limit maximum reading speed; on average 0.1° for normal foveal vision, Yu, Cheung, Legge, & Chung, 2007; 1.2° for the 10° lower visual field, Chung et al., 2004). The corresponding letter size, defined by x-height, was 0.4° at the fovea and 3° in the periphery. By using letter sizes larger than the critical print size, we minimized the difficulty of the reading task at each testing location. Nevertheless, as shown in Figure 8, peripheral reading was still more difficult than the other three tasks (foveal reading, foveal object recognition, and peripheral object recognition). Although task difficulty may influence (enhance or suppress) visual responses (Boudreau, Williford, & Maunsell, 2006; Spitzer, Desimone, & Moran, 1988) and modulate fMRI BOLD signals at multiple stages of the visual pathway (Chen et al., 2008; Ress, Backus, & Heeger, 2000; Rees, Frith, & Lavie, 1997; Schwartz et al., 2005; Spitzer et al., 1988), difficulty is intrinsic to peripheral reading, and the goal of the present study was to understand the neural mechanism of this difficulty. 
In the main experiment, there were four conditions: two retinal testing locations (fovea and periphery) crossed with two stimulus types (text and line-drawing objects). There were 12 functional scans in the main experiment, three for each condition. A key variable in the main experiment was the presentation rate. For each combination of retinal location and stimulus type, stimuli were presented at three presentation rates (2 per second, 4 per second, and 8 per second). As shown in Figure 1, at the 2 stimuli per second rate, a stimulus was on for 400 ms and off for 100 ms. At the 4 stimuli per second rate, the on and off durations were 200 ms and 50 ms, respectively. At the 8 stimuli per second rate, the on and off durations were 100 ms and 25 ms. For each scan, stimuli were presented in a block design consisting of a 14-s sequence of stimuli followed by a 16-s fixation-only period. The 30-s cycle time was long enough to minimize interference between the undershoot at the end of one hemodynamic response and the start of the next. There were nine blocks per scan, with each of the three presentation-rate conditions repeated three times in a pseudo-randomized order. The block sequence was also counterbalanced across scans to minimize sequential effects. To reduce transient magnetic saturation effects, a 16-s fixation-only period was added to the beginning of each scan. Thus the total time for each scan was 286 s. 
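The timing arithmetic above can be verified in a few lines. This sketch uses only the values stated in the text (on/off durations, block lengths, and block count):

```python
# Block-design timing from the text: each on+off cycle should last 1000 ms / rate.
rates_ms = {2: (400, 100), 4: (200, 50), 8: (100, 25)}  # rate -> (on_ms, off_ms)
for rate, (on_ms, off_ms) in rates_ms.items():
    assert on_ms + off_ms == 1000 // rate

# One scan: a 16-s lead-in fixation, then 9 blocks of 14 s stimuli + 16 s fixation.
total_s = 16 + 9 * (14 + 16)
print(total_s)  # 286 s, matching the reported scan duration
```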
In a separate session, each subject was first scanned with a standard retinotopic mapping procedure, viewing four alternating wedges along the vertical and horizontal meridians in two scans (Engel, Glover, & Wandell, 1997). This step allowed us to identify the borders of the early visual cortical areas (i.e., the representations of the upper and lower vertical meridian define the borders between area V1 and V2; the representations of the horizontal meridian define the borders of V2 with V3) for each subject. Additional pairs of blocks with alternating flickering square checkerboards presented at foveal and peripheral locations (the same retinal locations as in the main experiment) were used to identify the corresponding activated regions in the early retinotopic visual cortical areas. ROI localizer scans were conducted independently to identify the brain regions selective for different types of stimuli (e.g., VWFA, FFA, etc.). For the localizer scans, there were four types of stimuli (shown in Figure 2A): faces, line-drawing objects, three-letter words, and texture patterns. Similar to the main experiment, the localizer scans for the foveal and peripheral conditions were run separately, with three scans for the foveal condition and three scans for the peripheral condition. The image size was 2.2° for the foveal stimuli and 12° for the peripheral stimuli. Again, small random spatial offsets were used to avoid potential local adaptation effects. In each scan, stimuli were presented at a rate of 2 per second (on for 300 ms and off for 200 ms). Each category of stimuli (faces, objects, and words) was shown in 14-s blocks alternating with 16 s of texture. A one-back task was used in the ROI localization experiment: the subject indicated with a button press when the same stimulus was presented twice in a row. Similar to the main experiment, the block sequence was pseudo-randomized and counterbalanced across scans. 
An extra 16-s fixation period was added to the beginning of each scan. Each scan took 286 s. Although the localizer scans only used a single temporal frequency, the evidence from Session 2 suggested that the locations of the ROIs are independent of temporal frequency. 
Figure 2
 
Illustrations of stimuli used in ROI localization scans and the defined ROIs (early retinotopic areas for fovea and periphery, VWFA, OWRA, PHA, and LO) on the ventral occipital and temporal cortical surface. (A) Examples of the four categories of stimuli used in the VWFA localizer scans. (B) Two activated early retinotopic areas corresponding to the tested central visual field (colored with yellow and red) and peripheral visual field (colored with green and blue) respectively. (C) VWFA and OWRA (both in red), two regions sensitive to words in the lateral occipital and fusiform cortex, were revealed by a contrast between words and faces. The area in blue is the fusiform face area (FFA). PHA and LO (both in green), two regions with enhanced activation to line-drawing objects were revealed by a contrast between line-drawing objects and textures.
fMRI data acquisition
Functional MRI data were collected on a 3T Siemens Trio scanner with a 12-channel phased-array coil at the Center for Magnetic Resonance Research (CMRR) at the University of Minnesota. BOLD signals were acquired with an EPI sequence with standard parameters (20 axial slices approximately parallel to the base of the temporal lobe; slice thickness, 3.0 mm without gap; field of view, 220 × 220 mm2; matrix, 128 × 128; repetition time, TR, 2,000 ms; echo time, TE, 30 ms; flip angle, 75°). The fMRI slices covered both the occipital and temporal cortices. A T1-weighted anatomical volume (3D MPRAGE; 1 × 1 × 1 mm3 resolution) was collected in each of the two sessions after the functional scans, for localization and visualization of the functional data. 
fMRI data processing and analysis
The functional data were preprocessed and analyzed using BrainVoyager QX (Brain Innovation). The preprocessing included 3D motion correction (trilinear detection and sinc interpolation) and temporal filtering (high-pass filter of three cycles in time course). No spatial smoothing was applied to the data. In each session, the functional images were aligned with the anatomical images, which were then normalized into Talairach space and inflated for each subject using a standard protocol in BrainVoyager. General linear model multiple regression tests were used to find regions of interest (ROIs). Each ROI was identified from a specific contrast (with a strict statistical threshold of p < 0.0001) combined with anatomical constraints. A single continuous blob of voxels was selected for each ROI. The coordinates of each ROI were defined as the center of the selected activation area. Since the BOLD responses to the test stimuli did not show significant differences across the early retinotopic areas, for simplicity, the average data from the early retinotopic cortical areas are reported. The VWFA, often just lateral to the left FFA in the mid-fusiform cortex, was identified based on the contrast between words and faces. Importantly, the VWFA sites identified with foveal and peripheral localizer stimuli were found at exactly the same region in the mid-fusiform cortex for all subjects, consistent with the retinal-location invariance of the VWFA (McCandliss et al., 2003). Similarly, we defined an area in the lateral occipital lobe that was activated more strongly by word than by face stimuli as the occipital word-responsive area (OWRA). Two ROIs for object stimuli, the parahippocampal area (PHA) and the lateral occipital region (LO), were likewise identified based on the contrast between line-drawing objects and texture in the mid-fusiform cortex and the lateral occipital lobe, respectively. 
Using the localization information from Session 1 (the localization session) as a guide, the same ROIs could also be consistently identified in Session 2 (the main experiment). The differences in Talairach coordinates for the defined ROIs between Session 1 and Session 2 were very small (averaging Δx = −2, Δy = 0, Δz = 1 for VWFA; Δx = −1, Δy = −2, Δz = 0 for OWRA; Δx = 1, Δy = 1, Δz = −1 for PHA; Δx = −1, Δy = −1, Δz = 1 for LO). The sizes of the ROIs were very similar across subjects: the standard deviations of the ROI sizes ranged between 3 and 8 mm3, and the relative standard deviations were 1% to 2%. Time-course curves were extracted from these ROIs for the different stimulus conditions (trigrams/objects presented at different rates and at different retinal locations) and imported into MATLAB (MathWorks, Inc.) for further analysis. For each scan, the signal intensity was averaged across trials for each condition at each of 15 time points (from −4 s to 24 s). The resulting time courses were then averaged across scans for each condition and each subject, and baseline corrected by subtracting the mean of the two pre-stimulus time points. Since it usually takes 8 to 10 seconds for the BOLD signal to rise to its full magnitude, the magnitude of activation was calculated as the average signal amplitude between 10 and 16 seconds for each stimulus condition minus the average for the fixation condition. 
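As a sketch of this time-course analysis (not the authors' code; array names, shapes, and the function name are illustrative, with TR = 2 s and 15 time points from −4 s to 24 s as stated above):

```python
import numpy as np

tr = 2.0
times = np.arange(-4, 26, tr)  # -4, -2, ..., 24 s -> 15 time points

def activation_magnitude(trial_courses, fixation_courses):
    """trial_courses: (n_trials, 15) raw signal for one condition;
    fixation_courses: the same for the fixation condition."""
    mean_course = trial_courses.mean(axis=0)
    # Baseline-correct by subtracting the mean of the two pre-stimulus points.
    mean_course = mean_course - mean_course[:2].mean()
    fix_course = fixation_courses.mean(axis=0)
    fix_course = fix_course - fix_course[:2].mean()
    # BOLD takes ~8-10 s to peak, so average the 10-16 s window and
    # subtract the fixation-condition average over the same window.
    window = (times >= 10) & (times <= 16)
    return mean_course[window].mean() - fix_course[window].mean()
```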
The absolute BOLD amplitudes themselves are affected by many factors and cannot be used directly as an index of behavioral performance in the present study. Therefore, we focused on the dependency of the activation magnitude on the presentation rate and used it as a marker to search for the neural bottleneck for slower reading of peripherally presented text. Repeated-measures ANOVAs were used to analyze the data for each ROI and each category of stimuli. We used two-factor repeated-measures ANOVAs to analyze the amplitude of activation for the early retinotopic areas, VWFA, OWRA, PHA, and LO respectively, with the within-subject factors being visual field (fovea and periphery) and presentation rate (2/s, 4/s, and 8/s). Since the focus of the study was the dependence of brain activation on presentation rate rather than the absolute BOLD amplitude for fovea versus periphery, the main effect of presentation rate and the interaction of visual field and presentation rate were of particular interest. Specifically, we adopted the combination of two statistical results (a significant visual field × presentation rate interaction on the BOLD response for words but not for line drawings) as diagnostic of distinctive processing of peripheral words. 
Behavioral experiment
To determine if the magnitude of brain activation was related to the behavioral performance of the subjects, we retested all the subjects in a psychophysical experiment outside the scanner using a procedure similar to that of the fMRI experiment, with the following exceptions. 
Each subject completed eight blocks of trials (two blocks for each of the four combinations of retinal location and stimulus type). In each block, 20 trials were completed at each of the three presentation rates (2/s, 4/s, and 8/s). Each trial included 12 images (lasting 6 seconds at the 2/s rate, 3 seconds at the 4/s rate, and 1.5 seconds at the 8/s rate). Subjects indicated with a button press whether the 12 images in a trial included a target (a nonword or a wearable object). A target appeared in a trial 50% of the time. Subjects were informed which stimulus format to expect at the beginning of each block. The block sequence was pseudo-randomized and counterbalanced across subjects to minimize any sequencing effects. Practice trials were given at the beginning of the session for all conditions and presentation rates and were not included in the data analysis. 
For data analysis, we developed a simple model to convert the yes/no data into a behavioral rate measure, having taken guessing into account. By using the model, we estimated the number of stimuli recognized per second, analogous to words per minute for reading. Details are provided in the Appendix. We used two-factor, repeated measures ANOVAs to analyze the processing rate with the within-subject factors being visual field (fovea and periphery) and presentation rate (2/s, 4/s, and 8/s) for both words and line-drawing objects. 
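The authors' actual model is given in the Appendix. As a rough, generic illustration only, a standard high-threshold guessing correction that converts hit and false-alarm rates into an effective items-per-second rate might look like this (the function name and exact form are our own, not the paper's):

```python
def corrected_rate(hit_rate, false_alarm_rate, presentation_rate):
    """Illustrative only; the paper's model is in its Appendix.
    Returns an effective items-recognized-per-second rate."""
    # Classic high-threshold correction for guessing: p = (H - FA) / (1 - FA).
    p = (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)
    p = max(0.0, min(1.0, p))
    # Proportion recognized times items presented per second gives a rate
    # analogous to words per minute in conventional reading measures.
    return p * presentation_rate
```

For example, perfect performance at the 4/s rate (`corrected_rate(1.0, 0.0, 4)`) yields 4 items/s, while chance performance (hits equal to false alarms) yields 0.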
Results
ROI identification
We identified regions of interest (ROIs) at three levels of visual information processing, from the primary visual cortex to the ventral fusiform cortex. Figure 2 shows the identified ROIs on an inflated left hemisphere of a typical subject. ROIs in early visual cortical areas were identified retinotopically, with foveally presented and peripherally presented stimuli mapped to two different ROIs. Contrasting words with faces localized two main ROIs that are more responsive to words: a posterior region in the lateral occipital cortex, which we term the occipital word-responsive area (OWRA: x = −38, y = −76, z = −10), and a region in the fusiform cortex, presumably the VWFA (x = −44, y = −57, z = −13), often just lateral to the face-selective region. Contrasting line-drawing objects with texture, we defined the lateral occipital area (LO: x = −40, y = −76, z = −8), which has been shown in previous studies to play an important role in processing shape and object information (Grill-Spector et al., 2001). Line-drawing objects also activated the parahippocampal area (PHA: x = −27, y = −56, z = −12) more strongly than the other three categories of stimuli. Because previous studies have consistently found that the VWFA is lateralized to the left hemisphere (Cohen et al., 2000; Cohen & Dehaene, 2004), and objects activated left and right LO and PHA with no significant hemispheric difference, only data from the left hemisphere were extracted and analyzed in the present study. 
Cohen et al. (2002) estimated that the VWFA is approximately centered at x = −43, y = −54, z = −12 in Talairach coordinates. Indeed, the peak of the VWFA can be found within 5 mm of this location in 90% of individual subject scans collected in earlier studies (McCandliss et al., 2003). Consistent with Cohen et al.'s (2002) results, the average VWFA location found in our study was x = −44, y = −57, z = −13 in Talairach space. Our data also confirmed that the location of the VWFA is largely invariant to the retinal location of the stimuli: words presented either in the fovea or in peripheral vision (10° in the lower right quadrant of the visual field from fixation) activated the same region in the left mid-fusiform cortex (the average difference in Talairach coordinates between fovea and periphery was Δx = −1, Δy = −1, Δz = −1), presumably the VWFA (Figure 2). Detailed spatial locations of VWFA, OWRA, PHA, and LO for each subject are listed in Supplementary Table S1. 
Comparing BOLD responses for foveally and peripherally presented text and objects
The rate-dependent BOLD signals were extracted and plotted based on the data from Scan Session 2 (the main experiment). We calculated the amplitude of activation in the predefined ROIs, and examined the dependency of the activation magnitude on presentation rate. Figure 3 shows BOLD response as a function of presentation rate (2, 4, and 8 items per second) for words. There were significant interactions between the stimulus position (fovea vs. periphery) and presentation rate for the early retinotopic areas, F(2, 12) = 12.57, p = 0.001, OWRA, F(2, 12) = 11.73, p = 0.001, and VWFA, F(2, 12) = 14.71, p = 0.001. BOLD response in the early retinotopic areas showed a slight increase as a function of rate for foveal presentation, F(2, 12) = 7.75, p = 0.007, and remained fairly constant for peripheral presentation, F(2, 12) = 0.25, p = 0.782. At the OWRA, with increasing presentation rate, BOLD response increased slightly for foveal presentation, F(2, 12) = 8.02, p = 0.006, but decreased for peripheral presentation, F(2, 12) = 8.33, p = 0.005. At the VWFA, the BOLD response remained constant across the three presentation rates for foveal presentation, but showed significant reduction with increasing rate for peripheral presentation, F(2, 12) = 14.83, p = 0.001. 
Figure 3
 
BOLD response to text stimuli as a function of presentation rate (number of words presented per second). Each panel shows the averaged BOLD signal as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, OWRA, and VWFA). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
To obtain a quantitative measure of the dependency of the BOLD signal on presentation rate, we normalized the BOLD response by dividing each response by the response at the 2/s presentation rate for the same target location and visual area. The 8/s-to-2/s ratio, which is proportional to the slope of a line fitted to the same data set, provides an estimate of the rate-dependent change in BOLD response. As shown in Figure 4, normalized BOLD responses for the seven subjects are plotted as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, OWRA, and VWFA). For peripherally presented text, the BOLD response changed very little in early visual cortex (the ratio of the BOLD response at 8/s to that at 2/s = 0.98 ± 0.03 (SE), i.e., a reduction of 2 ± 3%), but was reduced by 17 ± 7% (ratio = 0.83 ± 0.07) at OWRA and by 29 ± 9% (ratio = 0.71 ± 0.09) at VWFA. In early retinotopic cortical areas, the magnitude of brain activation was largely independent of the rate of peripheral presentation, with the rate dependence growing at OWRA and further at VWFA. This pattern of results suggests that the processing of rapidly presented text in the periphery becomes less efficient at OWRA, with a further reduction at the VWFA.
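The normalization and ratio computation described above can be sketched in a few lines of Python. The response amplitudes used here are hypothetical placeholders (chosen only to reproduce the group-average ratios quoted in the text), not the measured data.

```python
# Sketch of the normalization procedure: each BOLD response is divided by
# the 2/s response for the same ROI and target location, and the 8/s-to-2/s
# ratio summarizes the rate dependence. The response amplitudes below are
# hypothetical, chosen only to reproduce the group-average ratios in the text.

# Hypothetical mean BOLD amplitudes for peripheral text, keyed by ROI and rate.
bold = {
    "early visual": {2: 1.00, 4: 1.01, 8: 0.98},
    "OWRA":         {2: 1.00, 4: 0.92, 8: 0.83},
    "VWFA":         {2: 1.00, 4: 0.85, 8: 0.71},
}

def normalize(responses):
    """Divide each response by the 2/s response of the same series."""
    baseline = responses[2]
    return {rate: amp / baseline for rate, amp in responses.items()}

for roi, responses in bold.items():
    norm = normalize(responses)
    ratio = norm[8]                # the 8/s-to-2/s ratio
    reduction = (1 - ratio) * 100  # percent reduction from 2/s to 8/s
    print(f"{roi}: 8/s-to-2/s ratio = {ratio:.2f}, reduction = {reduction:.0f}%")
```

With these placeholder values the three ROIs yield ratios of 0.98, 0.83, and 0.71, corresponding to the reductions of 2%, 17%, and 29% reported above.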
Figure 4
 
Normalized BOLD response to text stimuli plotted as a function of presentation rate (number of words presented per second) at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, OWRA, and VWFA). Responses were normalized by dividing each response by the response at the 2/s presentation rate. Each panel shows the data from all seven individual subjects. Each line depicts the least-squares straight-line fit to an individual data set.
In Figure 5, the BOLD response is plotted as a function of presentation rate for line-drawing objects at the three ROIs (early retinotopic area, LO, and PHA). The normalized BOLD responses for the seven subjects are shown in Figure 6. In contrast to the BOLD responses to peripherally presented text, there was no evidence of a reduced BOLD response to rapidly presented line-drawing objects in any of the ROIs considered, for either foveal or peripheral presentation. More specifically, the BOLD response to line-drawing objects presented in peripheral vision was not stronger at the low temporal rate. In early retinotopic visual areas, there was a significant main effect of presentation rate, F(2, 12) = 11.67, p = 0.002, but no interaction between stimulus location and presentation rate. Both the foveal BOLD response, F(2, 12) = 8.79, p = 0.004, and the peripheral BOLD response, F(2, 12) = 5.81, p = 0.017, increased with presentation rate. The increase from 2/s to 8/s was 12 ± 3% for foveal presentation and 7 ± 1% for peripheral presentation. At LO, no significant main or interaction effects of stimulus location and presentation rate were found. At the PHA, there was a significant main effect of presentation rate, F(2, 12) = 6.86, p = 0.01, but no interaction between stimulus location and presentation rate. Post hoc pairwise comparisons indicated that the peak BOLD response at the PHA occurred at the fastest temporal rate (an average increase of 8 ± 3% from 2/s to 8/s).
Figure 5
 
BOLD response to line-drawing object stimuli as a function of presentation rate (number of images presented per second). Each panel shows the averaged BOLD signal as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, LO, and PHA). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Error bars indicate ± standard error across subjects.
Figure 6
 
Normalized BOLD response to line-drawing object stimuli plotted as a function of presentation rate (number of images presented per second) at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, LO, and PHA). Responses were normalized by dividing each response by the response at the 2/s presentation rate. Each panel shows the data from all seven individual subjects. Each line depicts the least-squares straight-line fit to an individual data set.
Comparing the data presented in Figure 3 (words) and Figure 5 (line-drawing objects), the most distinctive difference occurred between peripherally presented words and the other three stimulus conditions (i.e., foveally presented words and foveally and peripherally presented line-drawing objects). In the early visual cortical areas, peripherally presented text was the only stimulus condition that did not show a rate-dependent increase in BOLD response. Even though the locations of the high-level word-selective visual areas are retinotopically invariant, text information that is initially processed at the foveal representation of early visual cortex enjoys a higher processing speed than text information that is initially processed at the peripheral representation. For peripheral presentation, the response to words at the higher visual processing stages was greater at the slow temporal rate, while the response to line drawings was greater at the high temporal rate. Additionally, as shown in Figure 7, the BOLD response to text stimuli in PHA had a similar dependence on presentation rate as in VWFA. The temporal dependency was also similar between LO and OWRA. These similarities were observed at both fovea and periphery and were confirmed by statistical analyses (significant interactions between stimulus position and presentation rate for PHA, F(2, 12) = 25.13, p < 0.0005, and for LO, F(2, 12) = 11.26, p = 0.002). The same agreement was found for the BOLD signal to line-drawing object stimuli (no interactions between stimulus position and presentation rate for either OWRA or VWFA). Consistently, the BOLD response to peripherally presented text showed a different temporal dependency (i.e., a rate-dependent reduction at the higher processing stages) from that to line-drawing objects and foveally presented text.
The results provide further evidence that the rate dependencies at the various cortical regions were determined by the stimulus processing carried out in those regions rather than by the intrinsic properties of the regions per se.
Figure 7
 
The averaged BOLD response as a function of presentation rate at different visual cortical areas (LO and PHA for text stimuli; OWRA and VWFA for line-drawing object stimuli). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
Figure 8
 
Processing rate (estimated number of stimuli recognized per second) as a function of presentation rate (number of images presented per second). Each panel shows the averaged value as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different stimulus types (words and line-drawing objects). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
Behavioral experiment
Figure 8 plots the estimated processing rate (number of stimuli recognized per second) as a function of presentation rate (2, 4, and 8 per second) for words and line-drawing objects. For words, there was a significant main effect of stimulus position, F(1, 6) = 96.22, p < 0.0005, consistent with the fact that people can read faster in the fovea than in the periphery. There was also a significant interaction between stimulus position and presentation rate, F(2, 12) = 9.99, p = 0.003, for text stimuli. As shown in Figure 8, the processing rate for foveal presentation increased with increasing presentation rate, F(2, 12) = 12.65, p = 0.001. For peripheral presentation, the number of stimuli recognized per second decreased from 1.16/s to 0.5/s when the presentation rate was increased from 2/s to 8/s, although the effect was not statistically significant, F(2, 12) = 2.64, p = 0.11.
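The exact formula behind the processing-rate estimate is not stated in this section. A natural reconstruction, offered here only as an assumption, is presentation rate multiplied by the proportion of items reported correctly; the accuracy values below are hypothetical, back-computed so that the peripheral-word numbers quoted above (1.16/s at a 2/s rate, 0.5/s at 8/s) are reproduced.

```python
# Hypothetical reconstruction of the processing-rate estimate (an assumption,
# not the paper's stated formula): number of stimuli recognized per second
# = presentation rate x proportion of stimuli reported correctly.

def processing_rate(presentation_rate, proportion_correct):
    """Estimated number of stimuli recognized per second."""
    return presentation_rate * proportion_correct

# Assumed accuracies for peripherally presented words, back-computed from
# the processing rates quoted in the text (1.16/s at 2/s, 0.5/s at 8/s).
assumed_accuracy = {2: 0.58, 8: 0.0625}

for rate, acc in assumed_accuracy.items():
    print(f"{rate}/s presentation, {acc:.0%} correct "
          f"-> {processing_rate(rate, acc):.2f} stimuli recognized/s")
```

Under this assumption, near-ceiling accuracy at all rates yields a processing rate that tracks the presentation rate (the pattern seen for foveal words and for line drawings), whereas collapsing peripheral-word accuracy at 8/s yields the flat or declining curve in Figure 8.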
In contrast to the findings for text, the processing rate for line-drawing objects increased with presentation rate at both the fovea and the periphery, F(2, 12) = 1598.73, p < 0.0005. As shown in Figure 8, subjects' performance was near perfect, i.e., they correctly processed almost all of the stimuli presented. This indicates that information processing was efficient for objects presented at higher temporal rates, even in peripheral vision. Neither a main effect of stimulus position nor an interaction between stimulus position and presentation rate was found.
Consistent with the findings on BOLD responses, we obtained the same combination of statistical results for behavioral performance: a significant interaction (visual field × presentation rate) for words and a non-significant interaction for line drawings. Comparing the data presented in the two panels of Figure 8, the only condition showing no increase in processing rate with increasing presentation rate was the peripheral-words condition.
Discussion and conclusions
The present study has shown that BOLD responses exhibit different patterns of temporal rate dependency for peripherally presented words compared to line-drawing objects and foveally presented words, in both early retinotopic visual areas and object-selective, high-level visual cortical areas. This difference may occur because word recognition usually relies more on higher retinal spatial frequencies (cycles per degree) than object recognition does. Hasson et al. (2002) suggested that there is a central-peripheral visual field bias in object-related cortex based on the resolution (spatial frequency) requirements of the recognition process. The physical properties of text may be responsible for this central-peripheral bias. Legge and Bigelow (2011) showed that the average x-height for contemporary books and newspapers is about 0.24° at a viewing distance of 40 cm. Therefore, letter recognition in fluent reading typically requires higher resolution than recognition of other objects such as tools or buildings, and consequently relies more on foveal than on peripheral vision (Nazir, Heller, & Sussmann, 1992; Rayner, 1998). Anatomically, the center-periphery bias manifests as a gradual periphery-to-fovea shift in response bias, going from medial occipitotemporal cortex to lateral occipitotemporal cortex (Hasson et al., 2002; Ma et al., 2011). With regard to cortical responses to different object categories, objects such as buildings activate the more medial side, while faces or words activate the more lateral side. In other words, the cortical representation is progressively more foveally biased going from the object areas to the face and word areas. Our results suggest that for a center-biased representation (i.e., VWFA) there is a more severe processing-rate limitation for its preferred stimuli (i.e., words) presented in the peripheral visual field.
Other unique inherent spatial characteristics of text such as regular vertical periodic structure (Watt & Dakin, 2010) may also, in some way, contribute to the limitation on information processing of peripherally presented words and letters. 
Our behavioral data are qualitatively consistent with our fMRI results. In the behavioral experiment, we found that for line-drawing objects, subjects could recognize more images per unit time at the higher temporal rates at both fovea and periphery. For word stimuli, the increase in recognition per unit time with higher temporal rate was observed only for foveal presentation; for peripheral presentation, subjects recognized more words per unit time at the lower presentation rate. In separate studies, we measured reading performance in both the central (Yu, Park, Gerold, & Legge, 2010) and peripheral visual fields (Yu, Legge, et al., 2010). During these measurements, words with an average length of four letters were presented sequentially at a fixed location on the display screen at various presentation rates. Using a criterion of 80% of words read correctly, reading speed averaged about 4 words per second for stimuli presented at 10° in the lower visual field (Yu, Legge, et al., 2010) and about 12 words per second for foveal presentation (Yu, Park et al., 2010). Consistent with our fMRI results, the reading data showed that word recognition speed during reading peaks at a much slower rate for peripheral than for foveal presentation. Although not observed in the present study, it is possible that at high enough temporal rates, the foveal-peripheral difference in recognition rate, and the associated difference at various cortical representations, can be observed universally across stimulus types (not just for word recognition).
McKeeff and colleagues (2007) found that the human visual system shows a progressive reduction in temporal processing capacity (declining peak temporal frequency tuning and a reduced range of temporal sensitivities) from early retinotopic areas to high-level object-selective regions. They suggested that temporal limits in object recognition (e.g., of faces and houses) may be due to the limited temporal sensitivity of high-level object-selective areas. Similarly, the findings from the present study and the previous reading studies indicate that temporal limits in peripheral reading may result from the limited temporal sensitivity of word-selective areas, in addition to any limitation introduced in early visual areas. For instance, the similar time dependence of behavior and BOLD responses at the higher stages, but not at the early cortical areas, is suggestive of a temporal limitation occurring at stages beyond early visual cortex. The VWFA, OWRA, and/or the pathways connecting the various cortical stages likely contain at least part of the temporal bottleneck for reading. It is possible that the neural mechanisms underlying the slower peripheral reading speed first emerge in the early retinotopic cortical representation, after which the processing-rate limitation becomes apparent and is aggravated as signals travel down the pipeline through the word-form-sensitive cortical regions. In other words, signals from peripheral word stimuli may deteriorate through a cumulative effect over multiple stages of processing. In contrast, in comparable object-sensitive ROIs, no reduction of the BOLD response to peripherally presented line-drawing objects was observed with increasing presentation rate over the same range.
The candidate sites for the neural bottleneck for slower reading of peripherally presented text were selected prior to conducting the experiment, based on previous empirical findings. Specifically, OWRA and VWFA were identified from the contrast between words and faces; both regions were more responsive to word than to face stimuli. Although our results revealed that the rate-dependent reduction for peripheral reading was not unique to word-form-sensitive cortical regions (see Figure 7), we focused on word-selective areas (besides early visual areas) as the candidate sites in the present study. On one hand, it is likely that the nearby non-word-selective regions also contributed to word processing and to hindering peripheral reading. On the other hand, signals from word stimuli could indeed have activated, nonselectively, the nearby non-word-selective regions (i.e., LO and PHA), which consequently demonstrated similar activation patterns during stimulus processing.
Our results also have implications for the reading rehabilitation of patients with central vision loss. For example, age-related macular degeneration (AMD) is an eye disease with high prevalence and a severe impact on reading. People with AMD often lose their central vision and have to use peripheral vision, in which reading is very slow (Faye, 1984; Fine & Peli, 1995; Fletcher et al., 1999; Legge et al., 1985, 1992). Developing suitable reading rehabilitation using peripheral vision would be exceptionally helpful for these patients (Goodrich et al., 1977; Markowitz, 2006; Nilsson, 1990). Previous psychophysical studies (Chung et al., 2004; Lee et al., 2010; Yu, Legge et al., 2010) have shown that peripheral reading speed can be improved by extensive practice on a letter-recognition task and that the learning effect transfers substantially to an untrained retinal location. The results from the present study provide a reasonable neuronal interpretation of this lack of retinotopic specificity in peripheral reading training, and highlight the areas beyond early retinotopic cortex as the target cortical site. Future work can compare the effects of training on peripheral reading with activation in early retinotopic cortex, OWRA, and VWFA. This comparison will allow us to determine which cortical region is primarily correlated with the perceptual learning effect.
In conclusion, the distinctive rate-dependence pattern shown for peripherally presented words across multiple processing stages suggests that the neural bottleneck for slower reading of peripherally presented text likely spans multiple cortical stages, from the early retinotopic cortical representation to object-category-selective cortex (including the word-form-sensitive and even the object-sensitive cortical regions). The pathways connecting the various processing stages may play a role as well. Further investigation with more sophisticated imaging techniques and experimental designs could provide a more thorough assessment of the neural substrate for slow peripheral reading.
Acknowledgments
This research was supported by NIH Grant R01 EY002934 and NSF Grant BCS-0818588. The 3T scanner at the University of Minnesota is supported by Neuroscience Core Center (NCC) grant P30 NS057091. We thank Fang Fang for assistance in pilot data collection. 
Commercial relationships: none. Corresponding author: Deyue Yu. 
Email: yu.858@osu.edu. 
Address: College of Optometry, The Ohio State University, Columbus, OH, USA. 
References
Abdollahi R. O., Kolster H., Glasser M. F., Robinson E. C., Coalson T. S., Dierker D., Orban G. A. (2014). Correspondences between retinotopic areas and myelin maps in human visual cortex. NeuroImage, 99, 509–524.
Amano K., Wandell B. A., Dumoulin S. O. (2009). Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of Neurophysiology, 102, 2704–2718.
Boudreau C. E., Williford T. H., Maunsell J. H. (2006). Effects of task difficulty and target likelihood in area V4 of macaque monkeys. Journal of Neurophysiology, 96 (5), 2377–2387.
Carrasco M., McElree B., Denisova K., Giordano A. M. (2003). Speed of visual processing increases with eccentricity. Nature Neuroscience, 6, 699–670.
Chanceaux M., Vitu F., Bendahman L., Thorpe S., Grainger J. (2012). Word processing speed in peripheral vision measured with a saccadic choice task. Vision Research, 56, 10–19.
Chen Y., Martinez-Conde S., Macknik S. L., Bereshpolova Y., Swadlow H. A., Alonso J. M. (2008). Task difficulty modulates the activity of specific neuronal populations in primary visual cortex. Nature Neuroscience, 11 (8), 974–982.
Chung S. T. L., Legge G. E., Cheung S. H. (2004). Letter-recognition and reading speed in peripheral vision benefit from perceptual learning. Vision Research, 44, 695–709.
Chung S. T. L., Mansfield J. S., Legge G. E. (1998). Psychophysics of reading—XVIII. The effect of print size on reading speed in normal peripheral vision. Vision Research, 38, 2949–2962.
Cohen L., Dehaene S. (2004). Specialization within the ventral stream: The case for the visual word form area. NeuroImage, 22, 466–476.
Cohen L., Dehaene S., Naccache L., Lehéricy S., Dehaene-Lambertz G., Hénaff M. A., Michel F. (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123, 291–307.
Cohen L., Lehéricy S., Chochon F., Lemer C., Rivaud S., Dehaene S. (2002). Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain, 125 (5), 1054–1069.
Crouzet S. M., Kirchner H., Thorpe S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10 (4): 16, 1–17, doi:10.1167/10.4.16. [PubMed] [Article]
Dehaene S., Cohen L. (2011). The unique role of the visual word form area in reading. Trends in Cognitive Sciences, 15, 254–262.
Engel S. A., Glover G. H., Wandell B. A. (1997). Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cerebral Cortex, 7, 181–192.
Faye E. E. (1984). Clinical low vision (2nd ed.). Boston, MA: Little Brown & Co.
Fine E. M., Peli E. (1995). Scrolled and rapid serial visual presentation texts are read at similar rates by the visually impaired. Journal of the Optical Society of America A: Optics, Image Sciences, and Vision, 12, 2286–2292.
Fletcher D. C., Schuchard R. A., Watson G. (1999). Relative locations of macular scotomas near the PRL: Effect on low vision reading. Journal of Rehabilitation Research and Development, 36, 356–364.
Gaillard R., Naccache L., Pinel P., Clemenceau S., Volle E., Hasboun D., Cohen L. (2006). Direct intracranial, FMRI, and lesion evidence for the causal role of left inferotemporal cortex in reading. Neuron, 50, 191–204.
Gauthier I., Tarr M. J., Moylan J., Skudlarski P., Gore J. C., Anderson A. W. (2000). The fusiform “face area” is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12, 495–504.
Goodrich G. L., Mehr E. B., Quillman R. D., Shaw H. K., Wiley J. K. (1977). Training and practice effects in performance with low-vision aids: A preliminary study. American Journal of Optometry and Physiological Optics, 54 (5), 312–318.
Grill-Spector K., Kourtzi Z., Kanwisher N. (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41, 1409–1422.
Grill-Spector K., Kushnir T., Hendler T., Malach R. (2000). The dynamics of object-selective activation correlate with recognition performance in humans. Nature Neuroscience, 3 (8), 837–843.
Hasson U., Levy I., Behrmann M., Hendler T., Malach R. (2002). Eccentricity bias as an organizing principle for human high-order object areas. Neuron, 34, 479–490.
Joseph J. E., Cerullo M. A., Farley A. B., Steinmetz N. A., Mier C. R. (2006). fMRI correlates of cortical specialization and generalization for letter processing. NeuroImage, 32, 806–820.
Joseph J. E., Gathers A. D., Piper G. A. (2003). Shared and dissociated cortical regions for object and letter processing. Cognitive Brain Research, 17, 56–67.
Kirchner H., Thorpe S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46 (11), 1762–1776.
Kolster H., Peeters R., Orban G. A. (2010). The retinotopic organization of the human middle temporal area MT/V5 and its cortical neighbors. The Journal of Neuroscience, 30 (29), 9801–9820.
Larsson J., Heeger D. J. (2006). Two retinotopic areas in human lateral occipital cortex. Journal of Neuroscience, 26, 13128–13142.
Lee H. W., Kwon M., Legge G. E., Gefroh J. J. (2010). Training improves reading speed in peripheral vision: Is it due to attention? Journal of Vision, 10(6): 18, 1–15, doi:10.1167/10.6.18. [PubMed] [Article]
Lee H. W., Legge G. E., Ortiz A. (2003). Is word recognition different in central and peripheral vision? Vision Research, 43, 2837–2846.
Legge G. E., Bigelow C. A. (2011). Does print size matter for reading? A review of findings from vision science and typography. Journal of Vision, 11(5): 8, 1–22, doi:10.1167/11.5.8. [PubMed] [Article]
Legge G. E., Rubin G. S., Pelli D. G., Schleske M. M. (1985). Psychophysics of reading. II. Low vision. Vision Research, 25, 253–265.
Legge G. E., Ross J. A., Isenberg L. M., LaMay J. M. (1992). Psychophysics of reading. XII. Clinical predictors of low-vision reading speed. Investigative Ophthalmology & Visual Science, 33 (3), 677–687. [PubMed] [Article]
Levy I., Hasson U., Avidan G., Hendler T., Malach R. (2001). Center-periphery organization of human object areas. Nature Neuroscience, 4, 533–539.
Li F. F., VanRullen R., Koch C., Perona P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences, USA, 99, 8378–8383.
Ma L., Jiang Y., Bai J., Gong Q., Liu H., Chen H.-C., Weng X. (2011). Robust and task-independent spatial profile of the visual word form activation in fusiform cortex. PLoS ONE, 6 (10), e26310. doi:10.1371/journal.pone.0026310.
Markowitz S. N. (2006). Principles of modern low vision rehabilitation. Canadian Journal of Ophthalmology, 41, 289–312.
McCandliss B. D., Cohen L., Dehaene S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7, 293–299.
McKeeff T. J., Remus D. A., Tong F. (2007). Temporal limitations in object processing across the human ventral visual pathway. Journal of Neurophysiology, 98, 382–393.
Nazir T. A., Heller D., Sussmann C. (1992). Letter visibility and word recognition—the optimal viewing position in printed words. Perception & Psychophysics, 52, 315–328.
Nilsson U. L. (1990). Visual rehabilitation with and without educational training in the use of optical aids and residual vision: A prospective study of patients with advanced age-related macular degeneration. Clinical Vision Science, 6 (1), 3–10.
Ozus B., Liu H. L., Chen L., Iyer M. B., Fox P. T., Gao J. H. (2001). Rate dependence of human visual cortical response due to brief stimulation: An event-related fMRI study. Magnetic Resonance Imaging, 19, 21–25.
Price C. J., Devlin J. T. (2003). The myth of the visual word form area. NeuroImage, 19, 473–481.
Price C. J., Devlin J. T. (2011). The interactive account of ventral occipitotemporal contributions to reading. Trends in Cognitive Sciences, 15, 246–253.
Rauschecker A. M., Bowen R. F., Parvizi J., Wandell B. A. (2012). Position sensitivity in the visual word form area. Proceedings of the National Academy of Sciences, USA, 109 (24), E1568–E1577.
Rayner K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.
Rees G., Frith C. D., Lavie N. (1997). Modulating irrelevant motion perception by varying attentional load in an unrelated task. Science, 278 (5343), 1616–1619.
Reich L., Szwed M., Cohen L., Amedi A. (2011). A ventral visual stream reading center independent of visual experience. Current Biology, 21, 363–368.
Ress D., Backus B. T., Heeger D. J. (2000). Activity in primary visual cortex predicts performance in a visual detection task. Nature Neuroscience, 3 (9), 940–945.
Rovamo J., Raninen A. (1988). Critical flicker frequency as a function of stimulus area and luminance at various eccentricities in human cone vision: A revision of Granit-Harper and Ferry-Porter laws. Vision Research, 28, 785–790.
Schwartz S., Vuilleumier P., Hutton C., Maravita A., Dolan R. J., Driver J. (2005). Attentional load and sensory competition in human vision: Modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field. Cerebral Cortex, 15 (6), 770–786.
Seiple W., Holopigian K., Shnayder Y., Szlyk J. P. (2001). Duration thresholds for target detection and identification in the peripheral visual field. Optometry & Vision Science, 78, 169–176.
Snodgrass J. G., Vanderwart M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.
Spitzer H., Desimone R., Moran J. (1988). Increased attention enhances both behavioral and neuronal performance. Science, 240 (4850), 338–340.
Strasburger H., Harvey L. O.,Jr., Rentschler I. (1991). Contrast thresholds for identification of numeric characters in direct and eccentric view. Perception & Psychophysics, 49, 495–508.
Thesen T., McDonald C. R., Carlson C., Doyle W., Cash S., Sherfey J., Halgren E. (2012). Sequential then interactive processing of letters and words in the left fusiform gyrus. Nature Communications, 3, 1284. doi:10.1038/ncomms2220.
Thomas C. G., Menon R. S. (1998). Amplitude response and stimulus presentation frequency response of human primary visual cortex using BOLD EPI at 4 T. Magnetic Resonance in Medicine, 40, 203–209.
Thorpe S. J., Gegenfurtner K. R., Fabre-Thorpe M., Bulthoff H. H. (2001). Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience, 14, 869–876.
Tyler C. W. (1981). Specific deficits of flicker sensitivity in glaucoma and ocular hypertension. Investigative Ophthalmology & Visual Science, 20 (2), 204–212. [PubMed] [Article]
Tyler C. W. (1985). Analysis of visual modulation sensitivity. II. Peripheral retina and the role of photoreceptor dimensions. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 2, 393–398.
Vinckier F., Dehaene S., Jobert A., Dubus J. P., Sigman M., Cohen L. (2007). Hierarchical coding of letter strings in the ventral stream: Dissecting the inner organization of the visual word-form system. Neuron, 55, 143–156.
Wang L., Mruczek R. E., Arcaro M. J., Kastner S. (2014). Probabilistic maps of visual topography in human cortex. Cerebral Cortex, doi:10.1093/cercor/bhu277.
Watt R. J., Dakin S. C. (2010). The utility of image descriptions in the initial stages of vision: A case study of printed text. British Journal of Psychology, 101 (1), 1–26.
Waugh S. J., Hess R. F. (1994). Suprathreshold temporal-frequency discrimination in the fovea and the periphery. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 11, 1199–1212.
Williams M. A., Baker C. I., Op de Beeck H. P., Shim W. M., Dang S., Triantafyllou C., Kanwisher N. (2008). Feedback of visual object information to foveal retinotopic cortex. Nature Neuroscience, 11, 1439–1445.
Yu D., Cheung S. H., Legge G. E., Chung S. T. L. (2007). Effect of letter spacing on visual span and reading speed. Journal of Vision, 7(2): 2, 1–10, doi:10.1167/7.2.2. [PubMed] [Article]
Yu D., Legge G. E., Park H., Gage E., Chung S. T. L. (2010). Development of a training protocol to improve reading performance in peripheral vision. Vision Research, 50, 36–45.
Yu D., Park H., Gerold D., Legge G.E. (2010). Comparing horizontal and vertical reading of English text. Journal of Vision, 10(2): 21, 1–17, doi:10.1167/10.2.21. [PubMed] [Article]
Footnotes
1  There are, of course, intrinsic differences between line-drawing objects and words. For example, words exhibit a regular vertical periodic structure (Watt & Dakin, 2010) that objects lack. Because such distinctions are inherent, the physical characteristics of the two types of stimuli cannot be completely equated.
2  A block design, rather than an event-related design, was used in the present study. A possible confound of the block design is that subjects may anticipate the difficulty of trials within a block. However, in an event-related design with randomized durations, uncertainty about the difficulty of the next trial would potentially be an even greater confound. Moreover, we are interested in the “steady-state” difference between presentation rates, and an event-related design would introduce other confounding factors (such as duration differences) that are hard to control.
Appendix
Estimating the processing rates
The model involves estimating two behavioral values, designated X and G below. N stimuli are presented on each trial. On Signal trials, one of the N stimuli is a target; on Catch trials, none of the stimuli is a target. Suppose a subject can recognize only a fraction (X) of the stimuli on each trial. On a given trial, if the target is among the X × N recognized stimuli, the subject says “Yes.” If the subject does not see the target, the subject guesses “Yes” on a fraction (G) of the trials. From the proportion of “Yes” responses on Signal and Catch trials, we can estimate the guessing rate (G) and the recognition rate (X). The guessing rate is simply the false alarm (false positive) rate on the Catch trials. For example, if the subject says “Yes” on 30% of the Catch trials, G = 0.3. On the Signal trials, the subject sees the target on a fraction (X) of the trials and says “Yes,” and does not see it on a fraction (1 − X) but still guesses “Yes” on 30% of these trials.
If the Hit (true positive) rate on the Signal trials is H:

H = X + (1 − X) × G
Solving for X:

X = (H − G) / (1 − G)
If the false alarm rate (G) is larger than the hit rate (H), we simply set the recognition rate (X) to zero to avoid negative values.
Number of stimuli recognized per second = X × N / duration = X × presentation rate
For instance, suppose the subject's hit rate is 60% (H = 0.6) and false alarm rate is 30% (G = 0.3). From the above, X = 0.3/0.7 = 0.43. If the subject is presented 12 stimuli per trial, the number of stimuli recognized on each trial is 0.43 × 12 = 5.2. If the presentation duration for each trial is 3 seconds (i.e., the presentation rate is 4/s), the effective processing rate is 1.73 stimuli per second.
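The correction above can be sketched as a short Python function. This is an illustrative helper (the function name and structure are our own, not from the article); it recovers X from H = X + (1 − X) × G, clips X at zero when G exceeds H, and converts X to an effective processing rate.

```python
def processing_rate(hit_rate, false_alarm_rate, presentation_rate):
    """Estimate stimuli recognized per second from yes/no responses.

    hit_rate (H): proportion of "Yes" responses on Signal trials.
    false_alarm_rate (G): proportion of "Yes" responses on Catch trials.
    presentation_rate: stimuli presented per second (N / duration).
    """
    H, G = hit_rate, false_alarm_rate
    # Solve H = X + (1 - X) * G for X; clip at zero when G >= H.
    X = max(0.0, (H - G) / (1.0 - G))
    # Effective processing rate = X * N / duration = X * presentation rate.
    return X * presentation_rate

# Worked example from the appendix: H = 0.6, G = 0.3, 4 stimuli/s.
print(round(processing_rate(0.6, 0.3, 4), 2))
```

With exact arithmetic this gives X = 0.3/0.7 ≈ 0.429 and a processing rate of ≈ 1.71/s; the appendix's 1.73/s reflects rounding X to 0.43 before multiplying.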
Figure 1
 
Schematic diagram of the experimental paradigm for peripheral presentation of words and nonwords. A block design was used in the experiment to compare activation between three presentation rates (2/s, 4/s, and 8/s).
Figure 2
 
Illustrations of stimuli used in ROI localization scans and the defined ROIs (early retinotopic areas for fovea and periphery, VWFA, OWRA, PHA, and LO) on the ventral occipital and temporal cortical surface. (A) Examples of the four categories of stimuli used in the VWFA localizer scans. (B) Two activated early retinotopic areas corresponding to the tested central visual field (colored with yellow and red) and peripheral visual field (colored with green and blue) respectively. (C) VWFA and OWRA (both in red), two regions sensitive to words in the lateral occipital and fusiform cortex, were revealed by a contrast between words and faces. The area in blue is the fusiform face area (FFA). PHA and LO (both in green), two regions with enhanced activation to line-drawing objects were revealed by a contrast between line-drawing objects and textures.
Figure 3
 
BOLD response to text stimuli as a function of presentation rate (number of words presented per second). Each panel shows the averaged BOLD signal as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, OWRA, and VWFA). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
Figure 4
 
Normalized BOLD response to text stimuli plotted as a function of presentation rate (number of words presented per second) at two target locations—fovea (0°) and periphery (10°)—for different visual cortical areas (early visual area, OWRA, and VWFA). Responses were normalized by dividing each response by the response at the 2/s presentation rate. Each panel shows the data from all seven individual subjects. Each line depicts a least-squares linear fit to an individual data set.
Figure 5
 
BOLD response to line-drawing object stimuli as a function of presentation rate (number of images presented per second). Each panel shows the averaged BOLD signal as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different visual cortical areas (early visual area, LO, and PHA). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Error bars indicate ± standard error across subjects.
Figure 6
 
Normalized BOLD response to line-drawing object stimuli plotted as a function of presentation rate (number of images presented per second) at two target locations—fovea (0°) and periphery (10°)—for different visual cortical areas (early visual area, LO, and PHA). Responses were normalized by dividing each response by the response at the 2/s presentation rate. Each panel shows the data from all seven individual subjects. Each line depicts a least-squares linear fit to an individual data set.
Figure 7
 
The averaged BOLD response as a function of presentation rate at different visual cortical areas (LO and PHA for text stimuli; OWRA and VWFA for line-drawing object stimuli). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
Figure 8
 
Processing rate (estimated number of stimuli recognized per second) as a function of presentation rate (number of images presented per second). Each panel shows the averaged value as a function of presentation rate at two target locations—fovea (0°) and periphery (10°) for different stimulus types (words and line-drawing objects). Black squares represent data for foveal targets. Red circles represent data for peripheral targets. Significant interaction between stimulus position and presentation rate is labeled as “* interaction” in green. Error bars indicate ± standard error across subjects.
Table 1
 
Characteristics of participants.