Open Access
Article  |   August 2018
Feature prediction across eye movements is location specific and based on retinotopic coordinates
Author Affiliations
  • Arvid Herwig
    Department of Psychology, Bielefeld University, Bielefeld, Germany
    Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
    aherwig@uni-bielefeld.de
  • Katharina Weiß
    Department of Psychology, Bielefeld University, Bielefeld, Germany
    Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
  • Werner X. Schneider
    Department of Psychology, Bielefeld University, Bielefeld, Germany
    Cluster of Excellence, Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
Journal of Vision August 2018, Vol. 18, 13. doi: https://doi.org/10.1167/18.8.13
Abstract

With each saccadic eye movement, internal object representations change their retinal position and spatial resolution. Recently, we suggested that the visual system deals with these saccade-induced changes by predicting visual features across saccades based on transsaccadic associations of peripheral and foveal input (Herwig & Schneider, 2014). Here we tested the specificity of feature prediction by asking (a) whether it is spatially restricted to the previous learning location or the saccade target location, and (b) whether it is based on retinotopic (eye-centered) or spatiotopic (world-centered) coordinates. In a preceding acquisition phase, objects systematically changed their spatial frequency during saccades. In the following test phases of two experiments, participants had to judge the frequency of briefly presented peripheral objects. These objects were presented either at the previous learning location or at new locations and were either the target of a saccadic eye movement or not (Experiment 1). Moreover, objects were presented either in the same or different retinotopic and spatiotopic coordinates (Experiment 2). Spatial frequency perception was biased toward previously associated foveal input, indicating transsaccadic learning and feature prediction. Importantly, while this pattern was not bound to the saccade target location, it was seen only at the previous learning location in retinotopic coordinates, suggesting that feature prediction probably affects low- or mid-level perception.

Introduction
Whenever the eyes move, objects in the world change their retinal position and, owing to the visual system's inhomogeneity, their spatial resolution as well, leading to a multitude of interactions between eye movements and perception (e.g., Gegenfurtner, 2016). Nevertheless, objects appear to be stable across saccades, both with respect to their location in space (i.e., they do not move) as well as their visual features (i.e., they are perceived as one and the same object). How are these two forms of stability achieved? Pertaining to location stability, physiological research suggests that retinal displacements are compensated by signals originating from saccade control areas (Duhamel, Colby, & Goldberg, 1992; Sommer & Wurtz, 2006; Wurtz, Joiner, & Berman, 2011). These signals have been shown to affect cells in retinotopically organized brain areas so that they remap their activity prior to saccades. As a consequence, these neurons predictively start responding to stimuli that will land in their receptive field after the eye movement. Such a predictive remapping of receptive fields (Duhamel et al., 1992) or attentional pointers (Cavanagh, Hunt, Afraz, & Rolfs, 2010; Rolfs & Szinte, 2016) can help keep track of where things are in the world. Pertaining to the problem of object stability, it is, however, just as important to keep track of what things are in the world (e.g., Hollingworth, Richard, & Luck, 2008; Schneider, 2013). Here, a related predictive mechanism might be at work that predicts visual features instead of locations across saccades (Herwig & Schneider, 2014; Krauzlis & Nummela, 2011; Melcher, 2007; Schenck, 2013).
One way to investigate transsaccadic feature prediction is to systematically alter feature values during saccadic eye movements and to test accompanying changes in peripheral perception. This strategy has recently been successfully applied to modify the peripheral perception of different visual features like spatial frequency (Herwig & Schneider, 2014), size (Bosco, Lappe, & Fattori, 2015; Valsecchi & Gegenfurtner, 2016), and shape (Herwig, Weiß, & Schneider, 2015; Köller, Poth, & Herwig, 2018; Paeye, Collins, Cavanagh, & Herwig, 2018). For example, in the study by Herwig and Schneider (2014), participants first underwent a 30-min acquisition phase where, unnoticed by participants, one object systematically changed its spatial frequency during the saccade (swapped object), whereas the spatial frequency of a second object remained unchanged (normal object). The goal of this first phase was to establish unfamiliar (swapped object) and familiar (normal object) transsaccadic associations of peripheral and foveal object information. In the following test phase, the frequency of peripheral saccade targets was perceived as higher—in comparison to the normal baseline object—for objects that previously changed from low in the periphery to high in the fovea. Similarly, the frequency of peripheral targets was perceived as lower for objects that previously changed their spatial frequency from high to low. Thus, peripheral perception was biased in the direction of the previously acquired foveal input. Consequently, the presaccadic perception of peripheral saccade targets is not purely based on the actual peripheral object information but also on the predicted postsaccadic foveal input (Herwig, 2015). A recent study showed that the integration of these two information sources is modulated by object discrepancies during learning (Köller et al., 2018).
More specifically, the relative contribution of prediction decreased for large feature changes but did not reach zero, showing that even for profound discrepancies in the shape of an object (i.e., square to circle or vice versa) the prediction was not ignored completely. Remarkably, these bias effects were not affected by reported change detection, as no differences in judgment shifts between participants reporting the change in the postsession debriefing (detectors) and participants not reporting the changes (nondetectors) were found (Köller et al., 2018). 
Visual features are extracted at various levels in the recurrent and highly interactive occipitotemporal network from the primary visual cortex to the anterior inferior temporal cortex (e.g., DiCarlo, Zoccolan, & Rust, 2012; Felleman & Van Essen, 1991; Kravitz, Saleem, Baker, Ungerleider, & Mishkin, 2013), and it has been repeatedly shown that most of these levels can be penetrated and biased by top-down signals (e.g., Bundesen, Habekost, & Kyllingsbaek, 2005; O'Callaghan, Kveraga, Shine, Adams, & Bar, 2017; Summerfield & Egner, 2009). One important goal, therefore, should be to further specify the level at which peripheral perception is affected by transsaccadic feature prediction. While most of the recent learning studies on transsaccadic prediction used low- or mid-level visual features like shape, size, or spatial frequency (Bosco et al., 2015; Cox, Meier, Oertelt, & DiCarlo, 2005; Herwig & Schneider, 2014; Herwig et al., 2015; Köller et al., 2018; Valsecchi & Gegenfurtner, 2016), it is probably premature to infer that predictions directly bias visual processing of these low- and mid-level features. Alternatively, the observed biases in the perception of low- and mid-level features may result from signals at higher levels of visual processing (e.g., high level expectation of what the target would look like). 
Pinpointing the affected visual processing stage more thoroughly requires a strategy different from simply altering different feature levels. One promising strategy that can be adopted from perceptual learning studies is to investigate the location specificity of transsaccadic learning and prediction. During perceptual learning, the repeated exposure to certain visual stimuli typically results in better performance over time in a variety of different tasks, ranging from vernier discrimination through texture segmentation to classification tasks employing Gabor patches (e.g., Fahle, 1994; Fiorentini & Berardi, 1981; Jüttner & Rentschler, 1996; Karni & Sagi, 1991; Rentschler, Jüttner, & Caelli, 1994; Shiu & Pashler, 1992). Importantly, if training is spatially restricted, this acquired improvement remains in most circumstances spatially specific too (Dill & Fahle, 1997; Karni & Sagi, 1991; but see Hung & Seitz, 2014; Xiao et al., 2008). Location specificity in perceptual learning is commonly interpreted as evidence in favor of learning at early stages in the visual processing hierarchy (e.g., Fahle, 2004), which is also supported by the finding that specific learning effects can result in corresponding changes in primary visual cortex (Schoups, Vogels, Qian, & Orban, 2001). A recent study by Rolfs, Murray-Smith, and Carrasco (2018) extended the perceptual learning paradigm, which typically relies on long periods of uninterrupted fixation, to a condition where participants had to execute saccades. They showed that location specificity of orientation discrimination persists even if stimulus presentation is constrained to periods of saccade preparation. However, as noted by Rolfs et al. (2018, p. 2), there might also be critical differences between perceptual learning studies and the associative learning assumed to underlie transsaccadic feature prediction: First, transsaccadic learning biases perception toward the predicted foveal input rather than making it more accurate, and second, it also shows transfer to completely untrained locations (Valsecchi & Gegenfurtner, 2016).
To date, the question of location specificity of transsaccadic learning has been addressed only once. Valsecchi and Gegenfurtner (2016) showed that the repeated exposure to a transsaccadic change in size modified the perceived size of peripheral targets not only at the trained location (e.g., 20° to the left) but also at the mirrored location in the opposite hemifield (e.g., 20° to the right). Consequently, they suggested “that a relatively high-level perceptual mechanism is responsible for the trans-saccadic re-calibration” (p. 60). However, the observed transfer effect might also be specific to the feature used or to the comparison method. More precisely, size might be a special feature because cortical magnification predicts a relatively uniform geometrical distortion of size as a function of eccentricity (Strasburger, Rentschler, & Jüttner, 2011), which might not equally hold true for other visual features like shape and spatial frequency. Given these objections, and given that transsaccadic predictions are probably not affected by deliberate response strategies (Köller et al., 2018), we think that more research is needed to further specify the level at which peripheral perception is affected by transsaccadic feature prediction.
The present study thus aimed at systematically readdressing the question of location specificity of transsaccadic learning. To this end, we investigated a visual feature other than size (i.e., spatial frequency) and placed special emphasis on three aspects of location specificity. (a) Is transsaccadic feature prediction specific to the trained location or does it transfer to other locations with the same eccentricity? (b) Is transsaccadic feature prediction specific to the saccade target object or does it also apply to peripheral stimuli at positions other than the saccade target? and (c) Is transsaccadic feature prediction based on retinotopic (eye-centered) or spatiotopic (world-centered) coordinates? Experiment 1 addresses the first two questions; the last question is addressed in Experiment 2.
Experiment 1
Beyond assessing whether transsaccadic learning is comparable to perceptual learning in being specific to the trained location, Experiment 1 was also designed to test whether feature prediction is specific to the saccade target object. Most of the studies on transsaccadic feature prediction (e.g., Herwig & Schneider, 2014; Herwig et al., 2015; Köller et al., 2018; Paeye et al., 2018; Valsecchi & Gegenfurtner, 2016) have solely tested the prediction with respect to the saccade target object or location. Because the saccade target object is somewhat special in binding visual attention to a large degree shortly before a saccadic eye movement (Deubel & Schneider, 1996), feature prediction might be restricted to the saccade target object. However, two recent studies on location prediction across saccades have shown that other prioritized locations are also remapped (Jonikaitis, Szinte, Rolfs, & Cavanagh, 2013; Szinte, Carrasco, Cavanagh, & Rolfs, 2015). For example, Jonikaitis et al. (2013) flashed a color cue in the periphery shortly before a saccadic eye movement to another location, leading to perceptual benefits not only at the exogenously cued location but also at its future (i.e., predictively remapped) retinal location. Moreover, Szinte et al. (2015) showed in a motion tracking task that this finding extends to endogenously attended peripheral locations. Together both studies indicate that changes of attended nontarget locations are predictively tracked across saccades. While both findings fit well with studies questioning a special role of the saccade target during transsaccadic learning (e.g., Paeye et al., 2018), in the present study we tested whether this also translates to transsaccadic feature prediction.
To address both questions in a single experiment, participants first underwent an acquisition phase where swapped objects systematically changed their spatial frequency during saccades, whereas the spatial frequency of the normal objects remained the same. During learning, swapped and normal objects were always presented at the saccade target location. In the following test phases, participants had to judge the frequency of briefly presented peripheral objects. Importantly, these objects were presented either at the previous learning location or at new locations and were either the target of a saccadic eye movement or not. 
Methods
Participants
Thirty-two participants aged between 18 and 35 years took part in Experiment 1 (19 women and 13 men). Informed written consent was obtained from each participant prior to the experiment. All participants reported normal or corrected-to-normal vision and were naive with respect to the aim of the study. For half the participants (Subgroup 1a), unfamiliar associations were established by changing one object's frequency from low to high. For the other half of participants (Subgroup 1b), one object changed its frequency from high to low.
Apparatus and stimuli
Participants performed the experiment in a dimly lit room and stimuli were presented on a 19-in. display monitor running at 100 Hz at a distance of 71 cm. The screen's resolution was set to 1,024 × 768 pixels, which corresponded to physical dimensions of 36 cm (width) × 27 cm (height). To ensure luminance stability, the monitor was warmed up for at least 30 min before the experiment. This necessary warm-up time was estimated according to Poth and Horstmann (2017). Eye movements were recorded with a video-based tower-mounted eye tracker (EyeLink 1000, SR Research, Ontario, Canada) with a sampling rate of 1,000 Hz. In all participants the right eye was monitored, and the head was stabilized by a forehead and chin rest. The central fixation stimulus was a black “plus” character (0.3° × 0.3°, line width 2 pixels). We used triangular and circular objects (1.5° edge length or diameter, respectively) filled with sinusoidal gratings of different spatial frequency (2.45 or 3.95 cpd) as potential saccade targets in the acquisition phase. The same objects also served as perceptual targets in the test phase where they were presented together with a black plus character (1.1° × 1.1°, line width 4 pixels) as a potential saccade target. In addition, we used triangular and circular objects filled with spatial frequencies of 1.7, 2.45, 3.2, 3.95, and 4.7 cpd as test objects for the judgment task in the test phase. All stimuli were presented on a gray background with a mean luminance of 30 cd/m2. The experiment was controlled by Experiment Builder (SR Research, v1.10.1630).
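The grating stimuli described above can be sketched numerically from the reported setup (71-cm viewing distance, 1,024-pixel, 36-cm-wide screen, 1.5° patches at 2.45 or 3.95 cpd). The following helper functions are ours, not part of the authors' Experiment Builder code, and render only a minimal grayscale approximation:

```python
import numpy as np

def pixels_per_degree(screen_px=1024, screen_cm=36.0, distance_cm=71.0):
    """Approximate pixels per degree of visual angle for the setup
    described above (small-angle approximation near screen center)."""
    cm_per_degree = 2 * distance_cm * np.tan(np.radians(0.5))
    return screen_px / screen_cm * cm_per_degree

def make_grating(size_deg=1.5, cpd=2.45, ppd=None, mean_lum=0.5, contrast=1.0):
    """Render a vertical sinusoidal grating patch of the given spatial
    frequency (cycles per degree) as a 2-D array of values in [0, 1]."""
    ppd = ppd or pixels_per_degree()
    n = int(round(size_deg * ppd))
    x = np.arange(n) / ppd                      # horizontal position in degrees
    grating = np.sin(2 * np.pi * cpd * x)       # one row of the grating
    return mean_lum * (1 + contrast * grating[np.newaxis, :].repeat(n, axis=0))
```

At this geometry one degree spans roughly 35 pixels, so the 1.5° patches are about 53 pixels wide; masking the square array to a triangle or circle would complete the stimulus.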
Procedure and design
The experiment was run in a single session of about 60 min and comprised an acquisition and a test phase (see Figure 1a and b). Prior to each phase, a 9-point grid calibration procedure was applied. Participants underwent the same acquisition phase as in the study by Herwig and Schneider (2014). That is, each trial of the acquisition phase started (following a variable fixation interval of 500–1,000 ms) with the presentation of a triangular and a circular object appearing at 6° to the left and right of the screen's center at random. Participants were instructed to saccade to either the triangular or the circular object, depending on their own choice, but to look at each object about equally often. Feedback regarding the number of saccades to each object was provided every 48 trials. One of the two peripheral objects had a high spatial frequency of 3.95 cpd, whereas the other object had a low spatial frequency of 2.45 cpd. The mapping of shape and peripheral frequency was fixed for each participant but counterbalanced across participants. For Subgroup 1a we consistently replaced the object with the low spatial frequency by an object of similar shape with a high spatial frequency of 3.95 cpd during the saccade, whereas for Subgroup 1b, we replaced the object with the high spatial frequency by an object with a low spatial frequency of 2.45 cpd during the saccade. That is, different spatial frequencies of one saccade target object with a particular shape (swapped object) were presented to the presaccadic peripheral and postsaccadic foveal retina. Thus, for Subgroup 1a, the swapped object always changed its frequency from low to high, whereas for Subgroup 1b, the swapped object always changed its frequency from high to low (see Figure 1b). For both subgroups, saccades to the peripheral object with the other shape (normal object) did not lead to a replacement. 
Thus, for the normal object the same frequency was presented to the presaccadic peripheral and postsaccadic foveal retina. Following the saccade, both objects were presented for 250 ms and then replaced by a blank screen of 1,500 ms duration. With this manipulation, we could ensure that participants always foveated triangular and circular objects filled with the same spatial frequency. The frequency of the swapped and the normal object only differed prior to the saccade in the periphery. The acquisition phase consisted of 240 trials, which were run in five blocks of 48 trials. 
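The gaze-contingent logic of the acquisition phase reduces to a per-frame decision: once a saccade is detected in flight, the swapped object's grating is exchanged so that the postsaccadic foveal frequency differs from the presaccadic peripheral one, while the normal object is left alone. A minimal sketch with illustrative names (not the authors' code), using Subgroup 1a's low-to-high change as the example:

```python
def displayed_frequency(gaze_speed_dps, is_swapped_object, current_cpd,
                        foveal_cpd=3.95, velocity_threshold_dps=30.0):
    """Return the spatial frequency to draw on the next frame.
    During the saccade (eye velocity above the onset criterion),
    the swapped object's grating is replaced by the associated
    foveal frequency; the normal object never changes."""
    saccade_in_flight = gaze_speed_dps > velocity_threshold_dps
    if saccade_in_flight and is_swapped_object:
        return foveal_cpd          # e.g., Subgroup 1a: 2.45 -> 3.95 cpd
    return current_cpd             # normal object, or eyes still fixating
```

Because the swap completed on average ~26 ms after saccade onset while saccades lasted ~44 ms (see Results), the exchange fell within saccadic suppression and went unnoticed.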
Figure 1
 
(a) Trial structure of the acquisition phase. Participants freely decided to saccade to one out of two objects. The normal object did not change its frequency during the saccade, whereas the swapped object changed its frequency. (b) Frequency pairings used in the acquisition phase. (c) Trial structure of the test phase in Experiment 1. Participants were required to saccade to a peripheral saccade target (either the perceptual target or a plus character). Peripheral stimuli disappeared as soon as the eyes started to move. Following the saccade, a test object was presented, and participants had to match the frequency of the test object to the frequency of the presaccadic perceptual target. Note that stimuli in (a) and (b) are not drawn to scale. PT = perceptual target, ST = saccade target.
Each trial of the test phase consisted of two subtasks, a saccade task followed by a frequency judgment task (see Figure 1c). A test trial started with the presentation of two stimuli, one of which was a plus character and the other a triangular or circular object filled with a sinusoidal grating of 2.45 or 3.95 cpd. The latter stimulus served as the perceptual target for the frequency judgment task. Peripheral stimuli were pseudorandomly presented either to the left or right side on an imaginary circle with a radius of 6° surrounding the center of the screen. One of the two objects always appeared at the horizontal meridian, whereas the other object appeared with an angular separation of 30° above or below the horizontal meridian. We manipulated the saccade task by instructing participants in different parts of the test phase to either saccade to the plus character or to the perceptual target object. Half of the participants saccaded to the plus character in the first half of the test phase and to the perceptual target in the second half, whereas this order was reversed for the other half of participants (see Figure 2). The trial was aborted when no saccade was made within 350 ms after target onset or when the first fixation after the saccade was outside a 3° × 3° rectangle centered on the saccade target location. In both cases participants received an error message asking them to execute the eye movement faster or more accurately. To ensure that perceptual targets were only presented to the peripheral retina, both peripheral stimuli disappeared with the next screen refresh after the detection of the saccadic eye movement. In addition, the saccade target object was replaced by a fixation stimulus (0.3° × 0.3°, line width 2 pixels). Five hundred milliseconds after completion of the saccade, a test object was presented at the previous location of the perceptual target.
Participants' second subtask was to adjust the frequency of this test object until it matched the frequency of the presaccadic perceptual target object. The frequency of the test object was chosen at random, but could be incrementally changed in steps of 0.75 cpd by pressing the up or down arrow keys on the keyboard. Participants indicated their final choice by pressing the space bar. The test phase consisted of 256 trials, which were run in eight blocks of 32 trials. Each block was composed of a factorial combination of two target locations (left vs. right), two target shapes (triangular vs. circular), two spatial frequencies (high vs. low), two spatial arrangements (second stimulus above vs. below meridian), and two different centered objects (plus character vs. perceptual target). 
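The adjustment procedure amounts to a key-driven loop over the available frequency range in 0.75-cpd steps, matching the five test-object levels (1.7 to 4.7 cpd). A sketch of that response loop; function and key names are illustrative, not the authors' implementation:

```python
def adjust_frequency(start_cpd, key_presses,
                     levels=(1.7, 2.45, 3.2, 3.95, 4.7), step=0.75):
    """Method-of-adjustment loop: 'up'/'down' move the test object's
    spatial frequency by 0.75 cpd, clamped to the available grating
    levels; 'space' confirms the final setting."""
    cpd = start_cpd
    for key in key_presses:
        if key == 'up':
            cpd = min(cpd + step, max(levels))
        elif key == 'down':
            cpd = max(cpd - step, min(levels))
        elif key == 'space':
            break
    return round(cpd, 2)
```

For example, starting from a randomly chosen 2.45 cpd, two presses of the up arrow followed by the space bar record a judgment of 3.95 cpd.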
Figure 2
 
Experimental conditions in the test phase of Experiment 1. Manipulating the saccade task as well as the stimulus arrangement resulted in four different conditions composed of a factorial combination of PT location (old location vs. new location) and saccade task (ST = PT vs. ST ≠ PT). See text for more details. PT = perceptual target, ST = saccade target.
Data analysis
Saccade onsets were detected using a velocity criterion of 30°/s. We excluded trials in the acquisition and test phase if (a) saccades were anticipatory (latency < 100 ms), (b) gaze deviated by more than 1° during acquisition or 1.5° during test from the display center at the time of saccade onset, or (c) saccadic latency was longer than 1,000 ms during acquisition or 350 ms during test. Moreover, trials in the test phase were further discarded if (d) the first fixation after the saccade was outside a 3° × 3° rectangle centered on the saccade target location. With these criteria, 5.0% of all acquisition trials and 20.5% of all test trials were discarded from analysis. The significance criterion was set to p < 0.05 for all analyses.
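The onset criterion and the test-phase exclusion rules can be expressed compactly. The sketch below assumes gaze samples already converted to degrees of visual angle at the 1,000-Hz sampling rate; it is a reconstruction of the stated criteria, not the authors' analysis code:

```python
import numpy as np

def find_saccade_onset(gaze_x, gaze_y, sample_rate_hz=1000,
                       threshold_dps=30.0):
    """Index of the first sample whose instantaneous gaze velocity
    exceeds the 30 deg/s onset criterion, or None if no saccade."""
    vx = np.diff(gaze_x) * sample_rate_hz    # deg/s, horizontal
    vy = np.diff(gaze_y) * sample_rate_hz    # deg/s, vertical
    speed = np.hypot(vx, vy)
    above = np.flatnonzero(speed > threshold_dps)
    return int(above[0]) if above.size else None

def keep_test_trial(latency_ms, fixation_deviation_deg,
                    landing_offset_x_deg, landing_offset_y_deg):
    """Apply exclusion criteria (a)-(d) for test-phase trials:
    latency in [100, 350] ms, gaze within 1.5 deg of display center
    at saccade onset, and landing within the 3 x 3 deg window
    (i.e., within +/-1.5 deg of the saccade target)."""
    return (100 <= latency_ms <= 350 and
            fixation_deviation_deg <= 1.5 and
            abs(landing_offset_x_deg) <= 1.5 and
            abs(landing_offset_y_deg) <= 1.5)
```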
Results
Acquisition phase
During acquisition participants looked at the to-be-swapped object and the normal object about equally often (50.2% vs. 49.8%) with a mean saccadic latency [± SD] of 275 [± 62] ms. Swapping occurred during the saccade (mean delay after saccade onset was 26.0 [± 3.9] ms; mean saccade duration was 44.3 [± 4.7] ms). 
Test phase
Mean frequency judgments (see Figure 3) and saccadic latencies of the test phase were analyzed as a function of the three within-subjects factors object status during acquisition (normal vs. swapped), saccade task (saccade target = perceptual target vs. saccade target ≠ perceptual target), and test location (learning location vs. new location) and the between-subjects factor change direction (low to high vs. high to low). In a second step we calculated the learning effect as the difference between judgments for the normal and swapped object separately for each participant (see Herwig & Schneider, 2014, for a related procedure). Differences were signed so that a positive value indicated a judgment shift in the direction of previously associated foveal input, whereas a negative value indicated a judgment shift in the reverse direction. An analysis of variance (ANOVA) on these learning effects including saccade task and test location as within-subjects factors revealed a significant main effect of test location, F(1, 31) = 6.341, p = 0.017, ηp2 = 0.17. As can be seen in Figure 3, judgments shifted in the direction of the previously associated foveal input at the previous learning location but not at the new location. Neither the main effect of saccade task nor the interaction reached significance (all Fs < 1.390, ps > 0.247). The analysis of latencies revealed a significant main effect of saccade task, F(1, 31) = 34.785, p < 0.001, ηp2 = 0.53, indicating increased saccadic latencies if the perceptual target was not the saccade target (187 [± 23] ms vs. 165 [± 21] ms). No other effects reached significance (all Fs < 2.619, ps > 0.116).
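The signed learning effect described above is a per-participant difference of mean judgments whose sign is flipped by change direction, so that a positive value always denotes a shift toward the previously associated foveal input. A minimal reconstruction of that computation (not the authors' analysis code):

```python
import numpy as np

def learning_effect(judged_normal_cpd, judged_swapped_cpd, change_direction):
    """Signed learning effect for one participant: the difference
    between mean frequency judgments for swapped and normal objects,
    signed so that positive = shift toward the associated foveal input.
    For 'low_to_high' the foveal input was higher, so a higher swapped
    judgment is positive; for 'high_to_low' the sign is flipped."""
    diff = np.mean(judged_swapped_cpd) - np.mean(judged_normal_cpd)
    return diff if change_direction == 'low_to_high' else -diff
```

For a Subgroup 1a participant judging the swapped object 0.75 cpd higher than the normal object, the effect is +0.75; a Subgroup 1b participant judging it 0.75 cpd lower yields the same positive value.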
Figure 3
 
Mean frequency judgments of the normal and swapped object in the test phase of Experiment 1 for each participant (filled dots) and mean values across participants (empty dots) as a function of saccade task, test location, and change direction (left side) and mean signed judgment differences across participants as a function of saccade task and test location (right side). Error bars represent standard errors of the mean. ST = saccade task, PT = perceptual target.
Experiment 2
Experiment 1 provided first evidence that transsaccadic feature prediction is specific to the trained location. However, it is not clear whether this specificity is tied to a retinal location (eye centered or retinotopic) or a location “out there” in space (world centered or spatiotopic). Answering this question might help to further specify the level at which peripheral perception is affected by transsaccadic feature prediction because phenomena occurring in a retinotopic frame of reference point to rather low-level, early stages of visual processing (e.g., Afraz & Cavanagh, 2009; Knapen, Rolfs, & Cavanagh, 2009; Mathôt & Theeuwes, 2013; Zhang & Li, 2010; but see Arcaro & Livingstone, 2017).
Experiment 2 was preceded by the same acquisition phase as the one in Experiment 1. To address the question about the frame of reference, we manipulated the starting position and the target position of the saccadic eye movement in the test phase. As a consequence, peripheral objects were presented either in the same or different retinotopic and the same or different spatiotopic coordinates as during learning. 
Materials and methods
Thirty-two new participants aged between 20 and 34 years took part in Experiment 2 (12 women, 20 men). Participants fulfilled the same criteria as those in Experiment 1. The method was the same as in Experiment 1, with the following modifications of the test phase. Each test trial started with the presentation of the fixation cross at one out of five possible starting locations (see Figure 4). Starting locations were positioned either at the center of the screen (one half of trials), or with a ±3° vertical and a ±0.8° horizontal offset from screen center. After a variable fixation interval of 500–1,000 ms, a saccade target (triangular or circular object filled with a sinusoidal grating of 2.45 or 3.95 cpd) appeared at one of six locations, all positioned on an imaginary circle with a radius of 6° surrounding the center of the screen. Importantly, this arrangement of starting locations and saccade target locations resulted in four different stimulus configurations composed of a factorial combination of spatiotopic (same vs. different) and retinotopic (same vs. different) coordinates. That is, saccades could be either directed (a) at the same spatiotopic and same retinotopic coordinates, (b) at the same spatiotopic but different retinotopic coordinates, (c) at different spatiotopic but the same retinotopic coordinates, and (d) at different spatiotopic and different retinotopic coordinates as in the acquisition phase.
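The factorial logic of these four configurations follows from simple vector geometry: spatiotopic identity compares the target's screen position with the learned one, while retinotopic identity compares the saccade vector (target minus starting position). A sketch of that classification, assuming a learned configuration of central fixation with a 6° horizontal saccade and using illustrative purely vertical fixation offsets (the actual offsets also included a ±0.8° horizontal component):

```python
def frame_condition(start, target,
                    learned_start=(0.0, 0.0), learned_vector=(6.0, 0.0)):
    """Classify a test trial of Experiment 2 by whether the saccade
    target shares spatiotopic (screen-centered) and/or retinotopic
    (eye-centered) coordinates with the learned configuration.
    Positions are (x, y) tuples in degrees of visual angle."""
    learned_target = (learned_start[0] + learned_vector[0],
                      learned_start[1] + learned_vector[1])
    same_spatiotopic = target == learned_target        # same place on screen
    saccade_vector = (target[0] - start[0], target[1] - start[1])
    same_retinotopic = saccade_vector == learned_vector  # same retinal place
    return same_spatiotopic, same_retinotopic
```

For instance, fixating 3° above center and saccading to the learned screen location reproduces the spatiotopic but not the retinotopic coordinates (configuration b), whereas executing the learned saccade vector from that offset fixation reproduces the retinotopic but not the spatiotopic coordinates (configuration c).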
Figure 4
 
Arrangement of possible saccade starting locations (depicted in green) and target locations (depicted in black) in the test phase of Experiment 2. This arrangement resulted in four different stimulus configurations composed of a factorial combination of spatiotopic (same vs. different) and retinotopic (same vs. different) coordinates. See text for more details.
The test phase consisted of 256 trials, which were run in four blocks of 64 trials. Each block was composed of a factorial combination of two target shapes (triangular vs. circular), two spatial frequencies (high vs. low), two target sides (left vs. right), and five starting locations distributed across eight trials (four trials started at center location and four trials with a ±3° vertical and a ±0.8° horizontal offset). 
Results
Applying the same criteria as specified in Experiment 1, 3.3% of all acquisition trials and 8.6% of all test trials were discarded from analysis. 
Acquisition phase
Participants looked at the to-be-swapped object and the normal object about equally often (50.1% vs. 49.9%) with a mean saccadic latency [± SD] of 263 ms [± 61]. Swapping occurred during the saccade (mean delay after saccade onset was 24.9 [± 2.0] ms; mean saccade duration was 44.6 [± 5.7] ms). 
Test phase
We analyzed mean frequency judgments (see Figure 5) and saccadic latencies as a function of the three within-subjects factors object status during acquisition (normal vs. swapped), spatiotopy (same vs. different), and retinotopy (same vs. different), and the between-subjects factor change direction (low to high vs. high to low). An analysis of variance (ANOVA) on the learning effect (see Experiment 1) including spatiotopy and retinotopy as within-subjects factors revealed a significant main effect of retinotopy, F(1, 31) = 4.490, p = 0.042, ηp2 = 0.13. As can be seen in Figure 5, judgments shifted in the direction of the previously associated foveal input if the target was presented at the same retinal position as during learning, but not if it was presented at another retinal position. Neither the main effect of spatiotopy, F < 1, nor the interaction, F(1, 31) = 2.151, p = 0.153, ηp2 = 0.06, reached significance. The analysis of latencies revealed a significant main effect of retinotopy, F(1, 31) = 18.371, p < 0.001, ηp2 = 0.37, indicating slightly decreased saccadic latencies for horizontal (Figure 4a and c; 156 [± 21] ms) as compared to oblique saccades (Figure 4b and d; 162 [± 23] ms), as well as a significant interaction of retinotopy and spatiotopy, F(1, 31) = 21.904, p < 0.001, ηp2 = 0.41, indicating an increased difference in saccadic latencies between horizontal and oblique saccades for nonspatiotopic coordinates (Figure 4c vs. d; Δ = 9 ms) as compared to spatiotopic coordinates (Figure 4a vs. b; Δ = 2 ms). No other effects reached significance (all Fs < 1.202, ps > 0.281). 
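The learning effect entering this ANOVA can be expressed as a signed judgment difference. A minimal sketch (the exact sign convention and aggregation follow Experiment 1 and are assumptions here):

```python
# Signed learning effect per participant (sketch; sign convention assumed).
# Judgments are mean frequency judgments in cpd. For the low-to-high group,
# the associated foveal frequency of the swapped object was higher than its
# peripheral frequency, so a positive (swapped - normal) difference indicates
# a bias toward the predicted foveal input; for high-to-low, the sign flips.

def learning_effect(judgment_swapped, judgment_normal, change_direction):
    """Positive values = judgments biased toward the predicted foveal input."""
    diff = judgment_swapped - judgment_normal
    return diff if change_direction == "low_to_high" else -diff

# Hypothetical participant: swapped object judged higher than normal object
# in the low-to-high group, i.e., a positive learning effect.
effect = learning_effect(3.1, 2.9, "low_to_high")
print(effect > 0)  # True
```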
Figure 5
 
Mean frequency judgments of the normal and swapped object in the test phase of Experiment 2 for each participant (filled dots) and mean values across participants (empty dots) as a function of spatiotopy, retinotopy, and change direction (left side) and mean signed judgment differences across participants as a function of spatiotopy and retinotopy (right side). Error bars represent standard errors of the mean.
Discussion
A number of recent eye tracking studies showed that peripheral perception depends not solely on the current input but also on memorized experiences, enabling predictions about the consequences of upcoming saccades (Bosco et al., 2015; Herwig & Schneider, 2014; Herwig et al., 2015; Köller et al., 2018; Paeye et al., 2018; Valsecchi & Gegenfurtner, 2016). Accordingly, peripheral perception is biased toward the predicted foveal input after performing saccades in an altered environment, where visual feature values were changed during the eye movement. In the present study, we assessed this transsaccadic feature prediction in two experiments composed of an acquisition and a test phase. During acquisition, objects systematically changed their spatial frequency during saccades. In the following test phase, participants had to judge the frequency of briefly presented peripheral objects. Using this protocol, the present study's main goal was to further specify the level at which peripheral perception is affected by transsaccadic feature prediction. To pursue this goal, we focused on three aspects of location specificity, a hallmark of perceptual learning studies, by asking (a) whether transsaccadic feature prediction is specific to the trained location, (b) whether transsaccadic feature prediction is specific to the saccade target object, and (c) whether transsaccadic feature prediction is based on retinotopic (eye-centered) or spatiotopic (world-centered) coordinates. 
Spatial specificity of transsaccadic prediction
Experiment 1 provided clear evidence that transsaccadic feature prediction is tied to the previous learning location. More specifically, we found a bias in peripheral perception toward the predicted foveal input only at the trained location (i.e., 6° to the left and right of the screen's center), whereas there was no indication of peripheral perception being biased at untrained locations with the same retinal eccentricity (i.e., 6° eccentricity above or below the horizontal meridian). Location specificity is often considered a clear signature of perceptual learning, where perceptual improvements acquired through repeated exposure to certain stimuli at certain locations typically remain spatially specific as well (e.g., Dill & Fahle, 1997; Karni & Sagi, 1991; Rolfs et al., 2018). This finding is commonly interpreted as evidence in favor of learning at early stages in the visual processing hierarchy (Fahle, 2004; Poggio, Fahle, & Edelman, 1992) and likely reflects a change in the stimulus representation (e.g., Adab & Vogels, 2011; Karni & Sagi, 1991; Schoups et al., 2001). Likewise, the observed location specificity of transsaccadic feature prediction suggests that peripheral perception is affected by transsaccadic feature prediction at early rather than late stages in the visual processing hierarchy. 
Recently, there has been some evidence that location specificity in perceptual learning can be overcome with certain training protocols. For example, location transfer can be observed when participants have to perform a second task during training with stimuli presented at the untrained location (Xiao et al., 2008). However, the transfer due to this double training may not be ubiquitous but depends on particularities of the training procedure (Hung & Seitz, 2014; Le Dantec & Seitz, 2012; Pilly, Grossberg, & Seitz, 2010), probably indicating that perceptual learning is not restricted to early levels of visual processing (Goldstone & Byrge, 2015). Moreover, the present finding differs from a recent report of a location transfer of transsaccadic feature prediction (Valsecchi & Gegenfurtner, 2016). This study showed that the repeated exposure to a transsaccadic change in size modified the perceived size of peripheral targets not only at the trained location (e.g., 20° to the left) but also at the mirrored location in the opposite hemifield (e.g., 20° to the right). The authors thus suggested that transsaccadic recalibration is due to a relatively high-level perceptual mechanism. There are, however, some differences between the present study and the study by Valsecchi and Gegenfurtner (2016) that need to be considered. First, the present study investigated the peripheral perception of spatial frequency instead of size. As noted by Valsecchi and Gegenfurtner, size might be a special feature because cortical magnification predicts a relatively uniform geometrical distortion of size as a function of eccentricity, which might not equally hold true for other visual features (Strasburger et al., 2011). Second, there were also differences in the transfer locations tested in the present study (i.e., 6° eccentricity above or below the horizontal meridian) and in the study of Valsecchi and Gegenfurtner (i.e., the mirror location in the opposite hemifield). 
Because participants in the present study learned transsaccadic associations for both hemifields (6° to the left and right), it was not possible to test transfer to the mirror location, which sometimes shows specific characteristics (e.g., in the pooling of attention as demonstrated by Tse, Sheinberg, & Logothetis, 2003). Finally, in Valsecchi and Gegenfurtner's study the trained and untrained positions were both relevant in the size comparison task, which might have induced some kind of double training. 
One obvious question is how the current observation of location specificity fits with the more general idea that predictions help to conceal acuity limitations in the periphery (Herwig & Schneider, 2014). To make the latter work, foveal features of peripheral objects should be predicted across a large range of retinal locations, not just a single location. However, generalization is only one possible solution to this problem, and it implies that the quantity or scope of applications is typically traded off against the quality of the prediction. The other possible solution relies on memory-intensive rather than computation-intensive processes and requires a multitude of position- and stimulus-specific learning events. Such a memory-intensive solution is also considered in the acquisition of position-invariant representations of an object (e.g., Cox et al., 2005; Dill & Fahle, 1997). Thus, an important question for future research will be to further determine the interplay of memory-intensive and computation-intensive processes in transsaccadic feature prediction. It might be worthwhile to also manipulate the number of learning locations to test whether generalization across locations requires a certain number of position-specific learning events. Along this line, such manipulations might also reveal whether learning at different locations occurs independently or not. 
No evidence for a special role of the saccade target location
Experiment 1 also addressed the question of whether transsaccadic feature prediction is bound to the saccade target only or whether it also applies to other peripheral locations. This question was addressed by disentangling the saccade target location and the peripheral location for the perceptual task. We found no indication of a special role of the saccade target location in transsaccadic feature prediction. More specifically, biases were still observable if the perceptual task involved the old learning location but saccades had to be directed to other locations at the same eccentricity. This finding is remarkable because visual attention is typically bound to the target of an imminent saccade in an obligatory and spatially selective fashion (Deubel, 2008; Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler, Anderson, Dosher, & Blaser, 1995; Schneider & Deubel, 2002). This does not, however, directly imply that feature prediction is independent of attention. There are also reports that attention can be directed to locations other than the saccade target, although this often leads to impaired saccade performance (e.g., longer saccade latencies as reported by Deubel, 2008). Likewise, we observed longer saccadic latencies for conditions where the perceptual target and the saccade target were disentangled, suggesting that participants tried hard to keep attention on the perceptual target. Thus, it is reasonable to assume that the perceptual target was initially covertly attended and stored in visual working memory even when it was not presented at the saccade target location. 
The present finding also fits with recent studies on postsaccadic integration of peripheral and foveal information (e.g., Ganmor, Landy, & Simoncelli, 2015; Oostwoud Wijdenes, Marshall, & Bays, 2015; Wittenberg, Bremmer, & Wachtler, 2008; Wolf & Schütz, 2015; for a review, see Herwig, 2015). In this line of research, postsaccadic perception of a visual feature is affected by presaccadic feature information at the same world-centered location, even if the saccade has been directed to a different location (Oostwoud Wijdenes et al., 2015; Wittenberg et al., 2008). Likewise, studies on location prediction across saccades showed that nonsaccade target locations, attended either due to a salient external event (Jonikaitis et al., 2013) or task instruction (Szinte et al., 2015), are also remapped. Moreover, recent studies on feature prediction suggest that covert shifts of attention might be sufficient for feature prediction to occur (Paeye et al., 2018; Valsecchi & Gegenfurtner, 2016). For example, Paeye and colleagues demonstrated that peripheral shape perception is biased toward associated foveal input even under steady fixation when no saccade has to be executed. Future studies should systematically investigate the role of covert attention in transsaccadic feature prediction. 
Spatial specificity is tied to retinal locations
Experiment 2 finally investigated whether the location specificity observed in Experiment 1 is tied to a retinal location (eye-centered or retinotopic) or a location "out there" in space (world-centered or spatiotopic). We addressed this question by manipulating the starting position and the target position of the saccadic eye movement in the test phase so that peripheral objects were presented either in the same or different retinotopic and the same or different spatiotopic coordinates as during learning. Experiment 2 provided initial evidence that the spatial specificity of transsaccadic feature prediction might be tied to a retinal location. More precisely, we only found biases in peripheral perception if the perceptual target was presented at the same retinotopic coordinates. This was true irrespective of whether this also corresponded to the same spatiotopic coordinates or not. It has to be noted, however, that a closer look at the descriptive pattern depicted in Figure 5 seems to indicate some degree of transfer to the same spatiotopic but different retinotopic condition. While this pattern was not backed up by a significant interaction of spatiotopy and retinotopy in the current study, further research is needed to clarify whether there is an additional role of spatiotopy or not. 
Retinotopic spatial specificity is also a key feature of perceptual learning studies (e.g., Karni & Sagi, 1991; Shiu & Pashler, 1992) as well as studies on visual adaptation (e.g., Afraz & Cavanagh, 2009; Knapen et al., 2009; Mathôt & Theeuwes, 2013; Zhang & Li, 2010). It points to effects at lower stages in the visual processing hierarchy where the retinotopic organization of the visual input is still retained. Demonstrating comparable retinotopic specificity in the present study thus provides further evidence that peripheral perception is affected by transsaccadic feature prediction at lower levels of visual processing. Such modifications in visual processing might finally help to conceal perceptual distortions due to the inhomogeneity of the visual field. 
Conclusions
Probing the location specificity of transsaccadic prediction, we have shown that spatial frequency perception is biased toward previously associated foveal input only at the previous learning location in retinotopic coordinates. Moreover, this location specificity was not bound to the saccade target location. These findings resemble hallmarks of perceptual learning, indicating that the underlying mechanisms might not be as different as previously thought (cf. Rolfs et al., 2018; Valsecchi & Gegenfurtner, 2016). Thus, our results point to retinotopically organized visual areas as the neural substrate where feature prediction enters visual processing. 
Acknowledgments
This work was supported by a grant from the German Research Council (Deutsche Forschungsgemeinschaft) to A.H. (He6388/1-2). We thank Janine Druhmann for help with data acquisition. 
Commercial relationships: none. 
Corresponding author: Arvid Herwig. 
Address: Department of Psychology, Bielefeld University, Bielefeld, Germany. 
References
Adab, H. Z., & Vogels, R. (2011). Practising coarse orientation discrimination improves orientation signals in macaque cortical area V4. Current Biology, 21, 1661–1666.
Afraz, A., & Cavanagh, P. (2009). The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations. Journal of Vision, 9 (10): 10, 1–17, https://doi.org/10.1167/9.10.10. [PubMed] [Article]
Arcaro, M. J., & Livingstone, M. S. (2017). Retinotopic organization of scene areas in macaque inferior temporal cortex. The Journal of Neuroscience, 37, 7373–7389.
Bosco, A., Lappe, M., & Fattori, P. (2015). Adaptation of saccades and perceived size after trans-saccadic changes of object size. Journal of Neuroscience, 35, 14448–14456.
Bundesen, C., Habekost, T., & Kyllingsbaek, S. (2005). A neural theory of visual attention: Bridging cognition and neurophysiology. Psychological Review, 112, 291–328.
Cavanagh, P., Hunt, A. R., Afraz, A., & Rolfs, M. (2010). Visual stability based on remapping of attention pointers. Trends in Cognitive Sciences, 14, 147–153.
Cox, D., Meier, P., Oertelt, N., & DiCarlo, J. (2005). Breaking position-invariant object recognition. Nature Neuroscience, 8, 1145–1147.
Deubel, H. (2008). The time course of presaccadic attention shifts. Psychological Research, 72, 630–640.
Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36, 1827–1837.
DiCarlo, J. J., Zoccolan, D., & Rust, N. C. (2012). How does the brain solve visual object recognition? Neuron, 7, 415–434.
Dill, M., & Fahle, M. (1997). The role of visual field position in pattern-discrimination. Proceedings of the Royal Society of London, B, 264, 1031–1036.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992, January 3). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Fahle, M. (1994). Human pattern recognition: Parallel processing and perceptual learning. Perception, 23, 411–427.
Fahle, M. (2004). Perceptual learning: A case for early selection. Journal of Vision, 4 (10): 4, 879–890, https://doi.org/10.1167/4.10.4. [PubMed] [Article]
Felleman, D. J., & Van Essen, D.C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
Fiorentini, A., & Berardi, N. (1981). Learning of grating waveform discrimination: Specificity for orientation and spatial frequency. Vision Research, 21, 1149–1158.
Ganmor, E., Landy M., & Simoncelli E. P. (2015). Near-optimal integration of orientation information across saccades. Journal of Vision, 15 (16): 8, 1–12, https://doi.org/10.1167/15.16.8. [PubMed] [Article]
Gegenfurtner, K. R. (2016). The interaction between vision and eye movements. Perception, 45, 1333–1357.
Goldstone, R. L., & Byrge, L. A. (2015). Perceptual learning. In The Oxford handbook of philosophy of perception (pp. 1001–1016). Oxford, England: Oxford University Press.
Herwig, A. (2015). Transsaccadic integration and perceptual continuity. Journal of Vision, 15 (16): 7, 1–6, https://doi.org/10.1167/15.16.7. [PubMed] [Article]
Herwig, A., & Schneider, W. X. (2014). Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 143, 1903–1922.
Herwig, A., Weiß, K., & Schneider, W. X. (2015). When circles become triangular: How transsaccadic predictions shape the perception of shape. Annals of the New York Academy of Sciences, 1339, 97–105.
Hoffman J. E., & Subramaniam, B. (1995). The role of visual-attention in saccadic eye-movements. Perception & Psychophysics, 57, 787–795.
Hollingworth, A., Richard, A. M., & Luck, S. J. (2008). Understanding the function of visual short-term memory: Transsaccadic memory, object correspondence, and gaze correction. Journal of Experimental Psychology: General, 137, 163–181.
Hung, S.-C., & Seitz, A. R. (2014). Prolonged training at threshold promotes robust retinotopic specificity in perceptual learning. The Journal of Neuroscience, 34, 8423–8431.
Jonikaitis, D., Szinte, M., Rolfs, M., & Cavanagh, P. (2013). Allocation of attention across saccades. Journal of Neurophysiology, 109, 1425–1434.
Jüttner, M., & Rentschler, I. (1996). Reduced perceptual dimensionality in extrafoveal vision. Vision Research, 36, 1007–1022.
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences, 88, 4966–4970.
Knapen, T., Rolfs, M., & Cavanagh, P. (2009). The reference frame of the motion aftereffect is retinotopic. Journal of Vision, 9 (5): 16, 1–6, https://doi.org/10.1167/9.5.16. [PubMed] [Article]
Köller, C. P., Poth, C. H., & Herwig, A. (2018). Object discrepancy modulates feature prediction across eye movements. Psychological Research, https://doi.org/10.1007/s00426-018-0988-5.
Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the programming of saccades. Vision Research, 35, 1897–1916.
Krauzlis, R. J., & Nummela, S. U. (2011). Attention points to the future. Nature Neuroscience, 14, 130–131.
Kravitz, D. J., Saleem, K. S., Baker, C. I., Ungerleider, L. G., & Mishkin, M. (2013). The ventral visual pathway: An expanded neural framework for the processing of object quality. Trends in Cognitive Sciences, 17, 26–49.
Le Dantec, C. C., & Seitz, A. R. (2012). High resolution, high capacity, spatial specificity in perceptual learning. Frontiers in Psychology, 3, 222.
Mathôt, S., & Theeuwes, J. (2013). A reinvestigation of the reference frame of the tilt-adaptation aftereffect. Scientific Reports, 3, 1152.
Melcher, D. (2007). Predictive re-mapping of visual features precedes saccadic eye movements. Nature Neuroscience, 10, 903–907.
O'Callaghan, C., Kveraga, K., Shine, J. M., Adams, R. B.,Jr., & Bar, M. (2017). Predictions penetrate perception: Converging insights from brain, behaviour and disorder. Consciousness and Cognition, 47, 63–74.
Oostwoud Wijdenes, L., Marshall, L., & Bays, P. M. (2015). Evidence for optimal integration of visual feature representations across saccades. Journal of Neuroscience, 35, 10146–10153.
Paeye, C., Collins, T., Cavanagh, P., & Herwig, A. (2018). Calibration of peripheral perception of shape with and without eye movements. Attention, Perception & Psychophysics, 80 (3), 723–737, https://doi.org/10.3758/s13414-017-1478-3.
Pilly, P. K., Grossberg, S., & Seitz, A. R. (2010). Low-level sensory plasticity during task-irrelevant perceptual learning: Evidence from conventional and double training procedures. Vision Research, 50, 424–432.
Poggio, T., Fahle, M., & Edelman, S. (1992, May 15). Fast perceptual learning in visual hyperacuity. Science, 256, 1018–1021.
Poth, C. H., & Horstmann, G. (2017). Assessing the monitor warm-up time required before a psychological experiment can begin. The Quantitative Methods for Psychology, 13, 166–173.
Rentschler, I., Jüttner, M., & Caelli, T. (1994). Probabilistic analysis of human supervised learning and classification. Vision Research, 34, 669–687.
Rolfs, M., Murray-Smith, N., & Carrasco, M. (2018). Perceptual learning while preparing saccades. Vision Research, https://doi.org/10.1016/j.visres.2017.11.009.
Rolfs, M., & Szinte, M. (2016). Remapping attention pointers: Linking physiology and behavior. Trends in Cognitive Sciences, 20, 399–401.
Schenck, W. (2013). Robot studies on saccade-triggered visual prediction. New Ideas in Psychology, 31, 221–238.
Schneider, W. X. (2013). Selective visual processing across competition episodes: A theory of task-driven visual attention and working memory. Philosophical Transactions of the Royal Society of London B, 368, 1–13.
Schneider, W. X., & Deubel, H. (2002). Selection-for-perception and selection-for-spatial-motor-action are coupled by visual attention: A review of recent findings and new evidence from stimulus-driven saccade control. In Attention and performance XIX: Common mechanisms in perception and action (pp. 609–627). Oxford: Oxford University Press.
Schoups, A., Vogels, R., Qian, N., & Orban, G. (2001, August 2). Practising orientation identification improves orientation coding in V1 neurons. Nature, 412, 549–553.
Shiu, L.-P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception & Psychophysics, 52, 582–588.
Sommer, M. A., & Wurtz, R. H. (2006, November 16). Influence of the thalamus on spatial visual processing in frontal cortex. Nature, 444, 374–377.
Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11 (5): 13, 1–82, https://doi.org/10.1167/11.5.13. [PubMed] [Article]
Summerfield, C., & Egner, T., (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13, 403–409.
Szinte, M., Carrasco, M., Cavanagh, P., & Rolfs, M. (2015). Attentional tradeoffs maintain the tracking of moving objects across saccades. Journal of Neurophysiology, 113, 2220–2231.
Tse, P. U., Sheinberg, D. L., & Logothetis, N. K. (2003). Attentional enhancement opposite a peripheral flash revealed using change blindness. Psychological Science, 14, 91–99.
Valsecchi, M., & Gegenfurtner, K. R. (2016). Dynamic re-calibration of perceived size in fovea and periphery through predictable size changes. Current Biology, 26, 59–63.
Wittenberg, M., Bremmer, F., & Wachtler, T. (2008). Perceptual evidence for saccadic updating of color stimuli. Journal of Vision, 8 (14): 9, 1–9, https://doi.org/10.1167/8.14.9. [PubMed] [Article]
Wolf, C., & Schütz, A. (2015). Trans-saccadic integration of peripheral and foveal feature information is close to optimal. Journal of Vision, 15 (16): 1, 1–18, https://doi.org/10.1167/15.16.1. [PubMed] [Article]
Wurtz, R. H., Joiner, W. M., & Berman, R. A. (2011). Neuronal mechanisms for visual stability: Progress and problems. Philosophical Transactions of the Royal Society of London, B, 366, 492–503.
Xiao, L.-Q., Zhang, J.-Y., Wang, R., Klein, S. A., Levi, D. M., & Yu, C. (2008). Complete transfer of perceptual learning across retinal locations enabled by double training. Current Biology, 18, 1922–1926.
Zhang, E., & Li, W. (2010). Perceptual learning beyond retinotopic reference frame. Proceedings of the National Academy of Sciences, USA, 107, 15969–15974.
Figure 1
 
(a) Trial structure of the acquisition phase. Participants freely decided to saccade to one out of two objects. The normal object did not change its frequency during the saccade, whereas the swapped object changed its frequency. (b) Frequency pairings used in the acquisition phase. (c) Trial structure of the test phase in Experiment 1. Participants were required to saccade to a peripheral saccade target (either the perceptual target or a plus character). Peripheral stimuli disappeared as soon as the eyes started to move. Following the saccade, a test object was presented, and participants had to match the frequency of the test object to the frequency of the presaccadic perceptual target. Note, stimuli in (a) and (b) are not drawn to scale. PT = perceptual target, ST = saccade target.
Figure 2
 
Experimental conditions in the test phase of Experiment 1. Manipulating the saccade task as well as the stimulus arrangement resulted in four different conditions composed of a factorial combination of PT location (old location vs. new location) and saccade task (ST = PT vs. ST ≠ PT). See text for more details. PT = perceptual target, ST = saccade target.
Figure 3
 
Mean frequency judgments of the normal and swapped object in the test phase of Experiment 1 for each participant (filled dots) and mean values across participants (empty dots) as a function of saccade task, test location, and change direction (left side) and mean signed judgment differences across participants as a function of saccade task and test location (right side). Error bars represent standard errors of the mean. ST = saccade task, PT = perceptual target.