Article  |   September 2014
Associating peripheral and foveal visual input across saccades: A default mode of the human visual system?
Journal of Vision September 2014, Vol.14, 7. doi:10.1167/14.11.7
Katharina Weiß, Werner X. Schneider, Arvid Herwig. Associating peripheral and foveal visual input across saccades: A default mode of the human visual system? Journal of Vision 2014;14(11):7. doi:10.1167/14.11.7.

Abstract

Spatial processing resolution of a particular object in the visual field can differ considerably due to eye movements. The same object will be represented with high acuity in the fovea but only coarsely in the periphery. Herwig and Schneider (in press) proposed that the visual system counteracts such resolution differences by predicting, based on previous experience, how foveal objects will look in the periphery and vice versa. They demonstrated that previously learned transsaccadic associations between peripheral and foveal object information facilitate performance in visual search, irrespective of the correctness of these associations. False associations were learned by replacing the presaccadic object with a slightly different object during the saccade. Importantly, participants usually did not notice this object change. This raises the question of whether perception of object continuity is a critical factor in building transsaccadic associations. We disturbed object continuity during learning with a postsaccadic blank or a task-irrelevant shape change. Interestingly, visual search performance revealed that neither disruption of temporal object continuity (blank) nor disruption of spatial object continuity (shape change) impaired transsaccadic learning. Thus, transsaccadic learning seems to be a very robust default mechanism of the visual system that is probably related to the more general concept of action–effect learning.

Introduction
While looking around, humans usually have a clear, precise, and homogeneous impression of their environment. However, this homogeneity of visual experience is very puzzling considering the inherent nonhomogeneity of the human visual system. For instance, the ability to process visual information is not evenly distributed over the retina. Visual acuity drops steeply from the fovea—which covers only 2° of visual angle at the center of gaze—toward the periphery. More precisely, at 20° eccentricity spatial resolution is reduced to a tenth of its foveal value (Land & Tatler, 2009). Thus, whereas a foveated object is represented with high visual acuity, objects in the periphery—the far larger part of the human retina—are only coarsely represented. Fortunately, saccades—fast, ballistic eye movements that are executed about three to four times a second—enable us to process any part of our environment in rich detail by orienting the fovea toward it. Nevertheless, saccades pose new challenges to the visual system, including spatial resolution differences within objects. Across a saccade, spatial resolution changes not only between different objects but also within a particular object, and these changes are most dramatic when a previously peripheral object is foveated after the saccade. 
How our visual system deals with such (sometimes dramatic) saccade-induced intraobject resolution differences is critical for many everyday tasks, as for instance in visual search (e.g., Treisman & Gelade, 1980; Wolfe, 1994; for a recent review see Eckstein, 2011). Imagine you are at the seaside, eating a sandwich. Unfortunately, you have to watch out for seagulls in pursuit of your sandwich. You probably have a relatively elaborate, richly detailed representation of a seagull in memory. But since visual input from the periphery has low spatial resolution, your visual system has to find your foveal search template of a seagull within low-resolution sensory input (for the idea that visual search is guided by foveal search templates, see Zelinsky, 2008, and see Figure 1 for an illustration of the visual system's inhomogeneity). How exactly could our visual system solve such a problem? Saccading randomly through the environment to foveate potential objects in the periphery would be a very poor and time-consuming strategy. Fortunately, our knowledge and previous experience can facilitate visual search substantially (e.g., Henderson, 2003; Hillstrom, Scholey, Liversedge, & Benson, 2012; Hollingworth, 2012). In the context of the above-mentioned visual search example, the knowledge that seagulls are birds and are therefore more likely to be found up in the sky than on the ground can restrict the area of your search (see Torralba, Oliva, Castelhano, & Henderson, 2006). As we shall see below, guidance by context is, however, just one way to solve the inhomogeneity problem of the visual system. Another important factor is that the visual system associates peripheral and foveal visual (object) input over saccades (i.e., transsaccadic learning,1 see Herwig & Horstmann, 2011) and uses these learned associations to predict the peripheral appearance of a search target (Herwig & Schneider, in press). 
In the present study, we investigate this second factor in more detail. More specifically, we address the question of whether spatial and/or temporal object continuity is a precondition for transsaccadic learning. 
Figure 1
 
Spatial resolution differences across a saccade. When the observer foveates the dome of the pier (left upper scene) the seagull in the periphery is only coarsely represented, whereas the seagull is represented in high resolution after it is foveated due to a saccade (right upper scene). The bottom row represents the retinal image of the seagull before and after the saccade that foveates the seagull.
Predictive mechanisms in visual search
There is ample evidence demonstrating that human search behavior is guided by context information, knowledge, and previous experience (e.g., Castelhano & Heaven, 2011; Chun & Jiang, 1998, 1999; Henderson, 2008; Henderson & Ferreira, 2004; Henderson, Weeks, & Hollingworth, 1999). Moreover, it has been repeatedly suggested that this knowledge and previous experience facilitate visual search performance by allowing the visual system to make predictions (see Bar, 2004, 2009) about future events (i.e., about the likely location or identity of a search target). Interestingly, however, only a few studies on visual search (Meinecke, 1989; Nuthmann, 2014; Wolfe, O'Neill, & Bennett, 1998) have taken the inhomogeneity of the human retina into account. One interesting exception is provided by Zelinsky (2008), who proposed a computational model of eye movements in visual search that transforms a search image to reflect the spatial resolution limitations of the visual system (see also Wischnewski, Belardinelli, Schneider, & Steil, 2010). Likewise, in an extension of ideal Bayesian observer theory, Najemnik and Geisler (2005) addressed differences in the detectability of the target across the retina and proposed that in visual search those eye movements are selected that gain the most information about possible target locations.2 In a recent study, Herwig and Schneider (in press) specifically took both the inhomogeneity of the human visual system and the role of predictions in visual search into account. They proposed that, due to a lifetime of experience with spatial resolution changes of objects or object features across saccades, humans learn associations between presaccadic peripheral and postsaccadic foveal visual input. These associations can then be used to predict peripheral or foveal visual input and thereby deal with the nonhomogeneity of the visual system. 
In the context of visual search, this means that peripheral information that has become associated with a foveal search template (e.g., a seagull) through previous transsaccadic experience is used to predict the target's peripheral appearance. This peripheral prediction is then compared with the actual peripheral sensory input from potential target candidates. 
To test this transsaccadic feature prediction mechanism, Herwig and Schneider adapted a transsaccadic learning paradigm (see Cox, Meier, Oertelt, & DiCarlo, 2005) to teach participants, in an acquisition phase, new—physically incorrect—associations between peripheral and foveal visual input. Incorrect associations were created by changing features of the saccade target—i.e., the spatial frequency of a sinusoidal grating—in midsaccade. Because humans are almost blind during a saccade (e.g., Burr, Morrone, & Ross, 1994; Matin, 1974; Thiele, Henning, Kubischik, & Hoffmann, 2002), this feature change went unnoticed by almost all participants. In a subsequent test phase, participants made use of the previously learned associations in visual search as well as in object recognition. In the visual search test phase, search performance was biased toward previously associated peripheral input, whereas in the object recognition test phase perceptual judgments were biased toward previously associated foveal input. Thus, participants used previously associated peripheral or foveal visual input to predict how objects or object features will look in the periphery (visual search) or the fovea (object recognition). Herwig and Schneider's findings thereby provide striking evidence for the plasticity of the human visual system: Half an hour of learning physically incorrect transsaccadic associations was sufficient to overrule physical feature correspondence. 
Object continuity in visual search and across saccades
Visual search is not only challenged by the inhomogeneity of the visual system but is also often complicated by occlusions (e.g., a seagull that flies behind a pier and is therefore momentarily out of sight). How does our visual system keep track of (temporarily) occluded objects? For instance, how do we know that the seagull that disappeared behind one side of the pier is the same seagull that appears at the other side? Interestingly, this object correspondence problem already exists at a microlevel of visual perception: Since processing of visual input is suppressed during saccadic eye movements as well as eye blinks (e.g., Burr et al., 1994; Matin, 1974; Thiele et al., 2002), short—internally triggered—occlusions of visual input occur several times a second. The most important cue for our visual system in solving this object correspondence problem is spatiotemporal continuity (e.g., Cox et al., 2005; Flombaum, Scholl, & Santos, 2009; Li & DiCarlo, 2008, 2010; Schneider, 2013; Schreij & Olivers, 2013). Spatial as well as temporal information about an object before its occlusion allows the visual system to predict its location and time of reappearance (e.g., when and where a seagull will reappear behind the pier). If this prediction is violated (e.g., by a seagull appearing behind the fish and chips shop instead of the pier), object correspondence is not established, because it is most likely a different object. The prominent role of spatiotemporal continuity in object persistence is evident from behavioral studies (e.g., Cox et al., 2005; Schreij & Olivers, 2013) as well as neurophysiological studies (e.g., Li & DiCarlo, 2008, 2010). For instance, Schreij and Olivers (2013) demonstrated that visual search benefits due to repetition of the target location from a previous trial disappeared if the spatial or temporal continuity of the search display's motion trajectory was disrupted. 
Object persistence across saccades seems to be mainly driven by spatiotemporal continuity as well. For example, Cox et al. (2005) demonstrated that humans are able to associate (slightly) different objects across saccades as long as the spatiotemporal continuity of the situation is preserved. Moreover, Li and DiCarlo (2008) demonstrated in a single-cell study with rhesus monkeys that under spatiotemporal continuity even completely different objects can be associated across saccades. An important mechanism contributing to the establishment of object continuity across saccades is predictive remapping, which has been demonstrated on a behavioral as well as a neuronal level (Duhamel, Colby, & Goldberg, 1992; Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011). In retinotopically organized brain areas, the activity of neurons increases if a planned saccade will bring a stimulus into their receptive fields (Duhamel et al., 1992). On a behavioral level, it has been shown that discrimination performance increases up to 75 ms before the saccade at the future, predicted retinotopic location of the saccade target, the fovea (for further discussion of location prediction and its possible relationship to feature prediction, see Herwig & Schneider, in press). Several recent behavioral studies (Boi, Ögmen, Krummenacher, Otto, & Herzog, 2009; Hein & Cavanagh, 2012; Hunt & Cavanagh, 2011; Szinte & Cavanagh, 2011) revealed that spatiotopic coordinates seem to be more important than retinotopic coordinates in establishing object continuity across saccades. Hunt and Cavanagh (2011) demonstrated that when the object continuity of a presaccadically presented Landolt C (oriented to the left or right) was disrupted by a presaccadic mask, a mask at the predicted/remapped location was much more effective than a mask at the presaccadic spatial location of the target. 
Moreover, the work by Szinte and Cavanagh (2011) indicates that space constancy is a result of predictive remapping and is obtained for only a few attended objects. Participants saw two dots: one before and one after the saccade. The second dot was displaced about 3° vertically from the first dot, but due to the saccade the second dot was additionally displaced by 10° horizontally from the first dot. Participants perceived apparent motion across the saccade to be stronger vertically than horizontally, indicating that the visual system corrects for the horizontal displacement due to the saccade. Furthermore, Boi et al. (2009) demonstrated that nonretinotopic processing plays an important role in motion perception and visual search. To summarize, spatiotemporal continuity has proven to be an important factor in establishing object persistence across frequently occurring interruptions of visual input, particularly saccadic eye movements. 
Spatiotemporal object continuity and transsaccadic learning: The present study
Herwig and Schneider used only an inconspicuous transsaccadic feature change (see Cox et al., 2005) to teach physically incorrect associations. Consequently, only a few participants detected the subtle, completely task-irrelevant change in spatial frequency. This raises the question of how general the discovered transsaccadic feature prediction mechanism is: How large a change between peripheral and foveal input can be tolerated such that the two are still associated across a saccade? Although there is ample empirical evidence that humans are usually insensitive to transsaccadic changes (i.e., location changes up to 2° of visual angle; e.g., Bridgeman, Hendry, & Stark, 1975), their change detection and discrimination performance can be strongly improved by a substantial temporal disruption of object continuity (postsaccadic blank: Deubel, Bridgeman, & Schneider, 1998, 2004; Deubel & Schneider, 1994; Deubel, Schneider, & Bridgeman, 1996; the effect is maximal between 200 and 300 ms) or a spatial one (transsaccadic shape change: Demeyer, De Graef, Wagemans, & Verfaillie, 2010). Note that Demeyer et al. (2010) showed that a temporal blank was more effective than a shape change (square, diamond, circle, and cross in all combinations): The shape change effect was about half as large as the effect of the temporal blank. These findings are sometimes explained by the idea that if object continuity is disrupted, a new postsaccadic internal object—a new “object file” (see Kahneman, Treisman, & Gibbs, 1992)—is created instead of updating the old presaccadic object file, which facilitates change discrimination (Schneider, 2013; Tas, Moore, & Hollingworth, 2012). 
But is object continuity across a saccade also a necessary precondition for learning transsaccadic associations for later feature prediction? The aim of the present study was to test this question by disrupting temporal (Experiment 1b) and spatial (Experiment 1c) object continuity; Experiment 1a, in which spatiotemporal object continuity was not disrupted, served as a baseline. Two possible outcomes could be expected: On the one hand, learning of transsaccadic associations between peripheral and foveal input could be a default mode of the human visual system, so that even substantial object discontinuity—a 200-ms postsaccadic blank (Experiment 1b) or a postsaccadic shape change from circle to triangle (Experiment 1c)—would neither prevent nor impair transsaccadic learning. There is indeed empirical evidence in favor of this assumption: Humans are astonishingly insensitive to transsaccadic changes (e.g., Bridgeman et al., 1975; Bridgeman & Stark, 1979; Grimes, 1996; Henderson & Hollingworth, 2003a); even global changes in a visual scene are rarely detected (Henderson & Hollingworth, 2003b). Such findings of transsaccadic change blindness are frequently explained by a strong default assumption of visual stability (e.g., MacKay, 1972; Mathôt & Theeuwes, 2011). The default assumption that our world is stable seems adaptive because, outside the laboratory, a saccade target—let alone a whole scene—rarely changes in the short time we need to make a saccade. 
On the other hand, transsaccadic learning could depend on object continuity, in which case disrupting object continuity should impair or even prevent it. There is indeed also empirical evidence in favor of such a crucial role of object continuity in transsaccadic learning. First, all previous behavioral transsaccadic learning studies (Cox et al., 2005; Herwig & Schneider, in press) used only a subtle, inconspicuous change in the incorrect-association trials of the learning phase; this change probably did not disrupt spatial object continuity. Second, temporal object continuity was always maintained across saccades in behavioral as well as neurophysiological learning studies (Cox et al., 2005; Herwig & Schneider, in press; Li & DiCarlo, 2008, 2010) and was assumed to be critical for (transsaccadic) learning (Li & DiCarlo, 2008, 2010; temporal contiguity hypothesis). For instance, Li and DiCarlo (2008) confronted monkeys with a conspicuous object change during a saccade (i.e., ship to cup) while critically maintaining temporal object continuity. The object selectivity of inferior temporal cortex neurons decreased at positions where conspicuous object changes took place but showed no decrease at positions where no object changes occurred across saccades. Thus, inferior temporal cortex neurons learned tolerance to conspicuous object changes after a short, 1-hr learning phase by associating neural activity patterns of different objects. Note that this finding implies that disruption of spatial object continuity by a shape change is not critical for transsaccadic learning in monkeys. Whether this finding also holds for humans is tested in the present study. 
To test the influence of spatiotemporal object continuity on transsaccadic learning as rigorously as possible, we used a postsaccadic blank (see Deubel et al., 1996; Experiment 1b) and a postsaccadic shape change (see Demeyer et al., 2010; Experiment 1c) to disrupt object continuity. 
Method
Experiment 1
We expected to replicate Herwig and Schneider's (in press) facilitation effect of transsaccadic learning on visual search: Visual search performance (percentage of correct first saccades) should be better if the pairing between search target and peripheral target in the test phase is congruent with the transsaccadic learning of the acquisition phase than if it is incongruent. If disrupting temporal and/or spatial object continuity prevents transsaccadic learning, this effect should emerge only in the baseline Experiment 1a. If, on the other hand, transsaccadic learning is a robust default mechanism of the human visual system, the findings should reveal a transsaccadic learning effect of comparable size in all parts of Experiment 1. 
Participants
Thirty-six participants aged between 19 and 32 years (mean age = 24.47 years) took part in Experiment 1; 27 of them were female. All participants were naïve with respect to the aim of the experiment and reported normal or corrected-to-normal vision. Since it was unclear whether disruption of transsaccadic learning would have any long-term effects, we manipulated the disruption of object continuity between participants; 12 participants took part in each part of Experiment 1. 
Apparatus and stimuli
The experiment was performed in a dimly lit room, with a viewing distance of 71 cm to a 19-in. display monitor running at 100 Hz. Screen resolution was set to 1024 × 768 pixels, corresponding to physical dimensions of 36 cm (width) × 27 cm (height). Eye movements were recorded with a video-based tower-mounted eye tracker (EyeLink 1000, SR Research, Ontario, Canada) at a sampling rate of 1000 Hz. In all participants, the right eye was monitored, and the head was stabilized by a combined forehead and chin rest. In all experiments, a black fixation cross was displayed in the middle of the screen (0.3° × 0.3°, line width 2 pixels). In Experiments 1a and 1b, the stimuli were circular objects (1.5°) filled with sinusoidal gratings of different spatial frequency (2.45 or 3.95 cpd, orientation 0° vs. 45°). In Experiment 1c, presaccadic stimuli were circular objects (1.5°) filled with sinusoidal gratings of different spatial frequency (2.45 or 3.95 cpd, orientation 0°), whereas postsaccadic stimuli were triangular objects (edge length 1.5°). In all parts of Experiment 1, stimuli were presented on a gray background with a mean luminance of 30 cd/m². Examples of the utilized stimuli are shown in Figure 2. 
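For concreteness, the conversion from the cpd values above to on-screen pixels follows directly from the reported geometry (1024 px over 36 cm, viewed at 71 cm). The sketch below is a minimal illustration of how such a grating patch could be generated; the function names and the circular-aperture handling are our assumptions, not the authors' stimulus code:

```python
import numpy as np

def pixels_per_degree(screen_px=1024, screen_cm=36.0, view_cm=71.0):
    # Pixels subtended by 1 degree of visual angle at the given viewing distance.
    cm_per_deg = 2 * view_cm * np.tan(np.deg2rad(0.5))
    return screen_px / screen_cm * cm_per_deg

def grating_patch(cpd, orientation_deg=0.0, diameter_deg=1.5, ppd=None):
    """Circular patch filled with a sinusoidal luminance grating.

    cpd: spatial frequency in cycles per degree of visual angle.
    Returns a 2-D array in [0, 1], with 0.5 (mean gray) outside the circle.
    """
    ppd = pixels_per_degree() if ppd is None else ppd
    size = int(round(diameter_deg * ppd))
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the grating's modulation axis, converted to degrees.
    u = (x * np.cos(theta) + y * np.sin(theta)) / ppd
    patch = 0.5 + 0.5 * np.sin(2 * np.pi * cpd * u)
    patch[x**2 + y**2 > (size / 2.0)**2] = 0.5  # circular aperture
    return patch
```

With this geometry, one degree of visual angle spans roughly 35 pixels, so the 1.5° objects are about 53 pixels wide.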
Figure 2
 
The acquisition phase for Experiments 1a to 1c. The left column shows trials in which a correct transsaccadic association is learned because the participant made a saccade to the “normal object” (here the vertical object), which did not change spatial frequency across the saccade. The right column shows acquisition trials in which an incorrect transsaccadic association (spatial frequency change high to low) is learned because the participant made a saccade to the “swapped object” (here the tilted object), which changed spatial frequency across the saccade. Orange circles depict the gaze position. Stimuli are not drawn to scale; their size is exaggerated for a better visibility of the frequency change.
Procedure and design
Each part of Experiment 1 consisted of an acquisition phase and a subsequent test phase (see Figures 2 and 3), which were run in a single session of approximately 45 min. Before each experimental phase, a 9-point grid-calibration procedure was conducted. After a fixed fixation interval (500 ms), each acquisition trial started with the presentation of two circular objects filled with sinusoidal gratings of vertical (0°) or tilted (45°) orientation. The objects were randomly displayed at 6° to the right or left of the center of the screen. Participants were instructed to saccade to one of the objects (vertical or tilted) of their own choice but to ensure that, over trials, they foveated each object equally often and in random order. After every 48th trial, feedback about the number of saccades made to each of the objects was provided. One of the foveal objects (vertical or tilted) was defined as the high spatial frequency object (3.95 cpd) and the other as the low spatial frequency object (2.45 cpd). The mapping of object orientation (tilted vs. vertical) and foveal spatial frequency (low vs. high) was counterbalanced over participants. Critically, in all parts of Experiment 1, one of the peripheral objects with the high spatial frequency (3.95 cpd; see Figure 2) was replaced by an object with the low spatial frequency (2.45 cpd) during the saccade. This “swapped” object had the same circular shape in Experiments 1a and 1b but a different, triangular shape in Experiment 1c. In Experiment 1c, the shape of the object that was not saccaded to was also replaced by a triangular object (see Figure 2). Since this replacement was conducted during the saccade, different objects (with respect to spatial frequency in Experiments 1a and 1b; with respect to spatial frequency and object shape in Experiment 1c: swapped objects) were presented to the presaccadic peripheral and the postsaccadic foveal retina. 
Saccades to the other object (the normal object) did not lead to a replacement in Experiments 1a and 1b; in Experiment 1c, the normal object was replaced by an object with the same frequency but triangular shape. With these manipulations we ensured that participants always saw objects with the same high spatial frequency in the periphery; the spatial frequency of the swapped and normal objects differed only in the fovea after the saccade. After the saccade, both objects were presented for 250 ms either immediately (Experiments 1a and 1c) or after a 200-ms postsaccadic blank (Experiment 1b) and were then replaced by a blank screen for 1500 ms. The acquisition phase consisted of 240 trials, run in five blocks of 48 trials. In the visual search task of the test phase, each trial began with the presentation of the search target in the center of the screen for 100 ms. The search target was always a foveated version of the normal or swapped object (vertical or tilted) of the acquisition phase and varied trial by trial. Thus, the search target was circular in Experiments 1a and 1b but triangular in Experiment 1c. After an interstimulus interval of 900 ms, the search display appeared, containing two peripheral objects (a tilted one and a vertical one) presented at a horizontal distance of 6° to the right and left of the fixation cross. The task of the participants was to search for the previously foveally presented search target and to saccade to it as fast as possible. Each object's spatial frequency was chosen randomly from 3.95 and 2.45 cpd. Thus, the spatial frequencies of both objects in the search display (target and distractor, identifiable by their orientation) were defined on two dimensions in relation to the previously presented search target: (1) physical match and (2) learning congruence with respect to the acquisition phase. 
This means that if participants were searching for the swapped object, a spatial frequency of 3.95 cpd would match their learning experience, because objects of different spatial frequency had been presented to the peripheral and foveal retina, but it would not be a physical match to the foveally presented spatial frequency of 2.45 cpd. By contrast, if they were searching for the normal object, a spatial frequency of 2.45 cpd would be learning incongruent as well as a physical mismatch (see Figure 3 for an illustration). To prevent renewed transsaccadic learning during the test phase, the objects in the search display were removed as soon as the participants' eyes started to move. A test trial was aborted if participants did not make a saccade within 1000 ms after the onset of the search display, in which case they received feedback to execute their eye movements faster. The test phase comprised 192 trials, divided into four blocks of 48 trials. After the test phase, participants were asked in a debriefing whether they had noticed the spatial frequency change (Experiments 1a–1c) or the shape change (Experiment 1c) during the acquisition phase or whether they had seen other things that seemed unusual. Note that we did not choose a more direct approach to investigating participants' awareness of the spatial frequency change—for instance, measuring detection performance after each trial of the acquisition phase—because we did not want to make participants explicitly aware of this change during transsaccadic learning. 
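The resulting two-by-two classification of test-phase pairings can be made explicit in a few lines. The following hypothetical helper (names are ours, not the authors') maps a search target type and a displayed peripheral spatial frequency onto the two design dimensions, following the design described above (normal object foveally 3.95 cpd, swapped object foveally 2.45 cpd, both objects always 3.95 cpd in the periphery during acquisition):

```python
HIGH_CPD, LOW_CPD = 3.95, 2.45  # spatial frequencies used in the experiment

def classify_pairing(target_type, peripheral_cpd):
    """Return (physical_match, learning_congruent) for one search trial.

    target_type: "normal" or "swapped" (which acquisition object was shown
    as the foveal search target).
    peripheral_cpd: spatial frequency of the peripheral target in the display.
    """
    foveal_cpd = HIGH_CPD if target_type == "normal" else LOW_CPD
    physical_match = peripheral_cpd == foveal_cpd
    # During acquisition, every object appeared at the high spatial frequency
    # in the periphery, so a high-frequency peripheral target is always
    # congruent with the learned transsaccadic association.
    learning_congruent = peripheral_cpd == HIGH_CPD
    return physical_match, learning_congruent
```

For the swapped target, a 3.95-cpd peripheral object is thus learning congruent but a physical mismatch, exactly as described above.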
Figure 3
 
The visual search test phases of all experiments. First, a visual search target was presented for 100 ms, which was either the foveated version of the normal or swapped object of the acquisition phase. After a 900-ms delay the participant had to make a saccade as fast as possible to the peripheral target (defined by orientation). For the normal search target, the pairing between search target and peripheral target was either a physical match and learning congruent or a physical mismatch and learning incongruent. For the swapped search target, the target pairing was either a physical match and learning incongruent or a physical mismatch and learning congruent. Stimuli are not drawn to scale; their size is exaggerated for a better visibility of the frequency change.
Data analysis
Trials were discarded from data analysis if they matched one or more of the following criteria: (1) saccade latency was shorter than 100 ms (anticipatory saccade), (2) the gaze deviated from the fixation point by more than 1.5° at saccade onset, (3) the landing position of the saccade deviated by more than 3° from the position of the target or distractor, or (4) saccade latency was longer than 1000 ms. Due to these criteria, 14.2% of all acquisition trials and 7.7% of test phase trials had to be discarded in Experiment 1a. In Experiment 1b, 7.9% of the trials in the acquisition phase and 5.7% of trials in the test phase had to be discarded. In Experiment 1c, 19.1% of the trials in the acquisition phase and 9.2% of the trials in the test phase had to be discarded. 
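As a sketch, the four criteria amount to a simple conjunction over per-trial measures. The column names below are assumptions for illustration, not the authors' data format:

```python
import pandas as pd

def discard_trials(trials: pd.DataFrame) -> pd.DataFrame:
    """Keep only trials passing all four exclusion criteria described above.

    Assumed columns: latency_ms (saccade latency), fix_error_deg (gaze
    deviation from fixation at saccade onset), landing_error_deg (deviation
    of the landing position from the nearest object).
    """
    keep = (
        (trials["latency_ms"] >= 100)           # (1) no anticipatory saccades
        & (trials["fix_error_deg"] <= 1.5)      # (2) fixation held at onset
        & (trials["landing_error_deg"] <= 3.0)  # (3) saccade landed on an object
        & (trials["latency_ms"] <= 1000)        # (4) no overly slow saccades
    )
    return trials[keep]
```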
Results
Overall data analysis
Acquisition phase
For all parts of Experiment 1, participants looked equally often at the normal and the swapped object; Experiment 1a: 50.35% versus 49.64%, t(11) = 0.37, p = 0.72; Experiment 1b: 49.77% versus 50.23%, t(11) = −0.42, p = 0.68; Experiment 1c: 50.22% versus 49.78%, t(11) = 0.23, p = 0.82. Furthermore, participants looked equally fast at the normal and the swapped object; Experiment 1a: normal object (M = 291 ms) versus swapped object (M = 283 ms), t(11) = 0.97, p = 0.35; Experiment 1b: normal object (M = 282 ms) versus swapped object (M = 284 ms), t(11) = −0.37, p = 0.72; Experiment 1c: normal object (M = 287 ms) versus swapped object (M = 289 ms), t(11) = −0.32, p = 0.76. 
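Each of the comparisons above is a paired (within-subject) t-test across the 12 participants of an experiment, yielding 11 degrees of freedom. A minimal sketch of such a comparison (the data are simulated for illustration and do not reproduce the reported values; scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant mean saccade latencies (ms) for the two objects;
# generated with no systematic difference between conditions.
normal_obj = rng.normal(290, 15, size=12)
swapped_obj = normal_obj + rng.normal(0, 8, size=12)

# Paired t-test across participants, df = n - 1 = 11, as in t(11) above.
t, p = stats.ttest_rel(normal_obj, swapped_obj)
print(f"t(11) = {t:.2f}, p = {p:.2f}")
```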
Test phase
A mixed analysis of variance (ANOVA) with the factors 2 (match: within-subject factor) × 2 (congruency: within-subject factor) × 3 (object continuity: between-subjects factor) was conducted. Replicating Herwig and Schneider's (in press) finding, we found a main effect of learning congruency, F(1, 33) = 19.14, p < 0.001, ηp2 = 0.37. Search performance was better when the combination between foveal search target and peripheral target was learning congruent (M = 71.3%) than when it was learning incongruent (M = 66.5%; see Figure 4). In contrast to our previous study, we now found a main effect of match, F(1, 33) = 8.42, p < 0.01, ηp2 = 0.20. Search performance was better if there was a spatial frequency match (M = 70.5%) than if there was none (M = 67.3%). Interestingly, we also found a significant interaction of congruency × match, F(1, 33) = 5.77, p < 0.05, ηp2 = 0.15 (see Figure 5). Post hoc Bonferroni-adjusted t-tests revealed that this interaction was due to the absence of a significant congruency effect when there was a physical match. When there was a physical mismatch, learning congruency had a strong effect: Search performance was better if the combination was learning congruent (M = 71.5%) than if the combination was learning incongruent (M = 63.2%). Most interestingly, the between-subjects factor object continuity was not significant, F(2, 33) = 0.25, p = 0.78, ηp2 = 0.01. Additionally, none of the interactions including the factor object continuity were significant, all Fs < 1, all ps > 0.42. The analysis of mean saccadic latencies revealed only a main effect of match, F(1, 33) = 7.74, p < 0.01, ηp2 = 0.19 (see Figure 6). Participants were faster to saccade to a physically matching target (M = 289 ms) than to a nonmatching target (M = 298 ms). None of the other main effects and none of the interactions were significant, all Fs < 1.31, all ps > 0.26. 
It is important to note that saccadic latency data did not counteract search performance data; thus, our results could not be explained by a speed–accuracy tradeoff. 
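The Bonferroni adjustment used for the post hoc tests simply multiplies each uncorrected p value by the number of comparisons, capped at 1. As an illustrative sketch (the p values below are made up, not those of the study):

```python
def bonferroni(p_values):
    """Bonferroni-adjust a list of p values: multiply each by the number of tests, cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Two post hoc comparisons (e.g., congruency effect under match vs. mismatch).
print(bonferroni([0.004, 0.30]))  # -> [0.008, 0.6]
```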
Figure 4
 
The percentage of correct first saccades in dependence of the between-subjects factor object continuity and the within-subject factors match and learning congruency. Error bars represent standard errors of the mean.
Figure 5
 
The interaction between match and congruency over all experiments (N = 36). Error bars represent the standard error of the mean.
Figure 6
 
The mean saccade latencies in dependence of the between-subjects factor object continuity and the within-subject factors match and learning congruency. Error bars represent standard errors of the mean.
Analysis of overt and covert swap detection
The postexperiment debriefing revealed that only a few participants detected the critical spatial frequency change, whereas most participants detected the manipulation used to disrupt spatial object continuity in Experiment 1c. In the baseline Experiment 1a, one participant out of 12 detected the frequency change and could report its correct direction (high to low). In Experiment 1b, only five of the 12 participants reported that they noticed the frequency change and could report its correct direction. We found a similar picture in Experiment 1c: Although nine of 12 participants noticed the task-irrelevant shape change, only one of 12 participants noticed the spatial frequency change and could report its correct direction (high to low). Thus, overt detection of the critical spatial frequency change was low, whereas overt detection of the task-irrelevant shape change in Experiment 1c was high. Taken together with the empirical finding of Demeyer et al. (2010) that participants detected transsaccadic blanks (50 ms) and shape changes in almost every trial (98%), this indicates that our manipulations to disrupt object continuity in Experiments 1b and 1c were successful. Accordingly, an overt report of a transsaccadic change was not necessary for an object continuity failure to take effect. Nevertheless, we were interested in whether detection of the spatial frequency change could be revealed by an indirect measure. To this end, we analyzed overall fixation durations for the normal and swapped objects in Experiment 1a. Indirect detection of the spatial frequency change would be revealed by prolonged fixation of the swapped object in the acquisition phase. Note that an analysis of fixation durations in Experiments 1b and 1c was not reasonable because the blank and the shape change were applied to both the normal and the swapped objects and should therefore conceal any differences. 
Henderson and Hollingworth (2003a, 2003b) demonstrated with a similar procedure that overt change detection measures can underestimate change detection. Even in trials in which participants did not report a change, elevated fixation duration indicated change detection. Although only one participant in Experiment 1a reported detection of the spatial frequency change in the acquisition phase, elevated fixation durations at the swapped target (M = 499 ms versus M = 483 ms, t(11) = −2.47, p < 0.05) revealed indirect, nonreportable detection of the spatial frequency change. 
Discussion
Associating peripheral and foveal visual input across saccades to predict object features is astonishingly robust. Visual search performance was biased toward the predicted peripheral input regardless of whether object continuity was preserved during transsaccadic learning. Neither disruption of temporal (Experiment 1b) nor of spatial (Experiment 1c) object continuity prevented transsaccadic learning. Furthermore, we demonstrated that transsaccadic feature prediction generalizes to conspicuous transsaccadic changes (Experiment 1c: shape change, circle to triangle). This indicates that associations between peripheral and foveal visual input can be learned not only for the same stimuli but also for stimuli with clearly different object shapes (see Li & DiCarlo, 2008, for similar results in monkeys). Taken together, these findings imply that transsaccadic learning is a default mode of the human visual system that cannot easily be disturbed. 
In contrast to Herwig and Schneider's (in press) study, we found feature prediction in visual search only when foveal search target and peripheral target were a physical mismatch, not when they were a physical match. This missing influence of transsaccadic learning history under physical match conditions can be ascribed to natural transsaccadic learning in everyday life. A lifetime of experience with transsaccadic changes in spatial resolution probably conceals transsaccadic learning for a physical match between foveal search target and peripheral target because there is no need to learn transsaccadic associations for the unaltered "normal object" of the acquisition phase. Therefore, it should not matter for visual search performance whether a participant had learned the physically correct association in the acquisition phase or not. Because the statistical power of the present study (N = 36) was higher than that of Herwig and Schneider's (in press) study, the influence of physical match was presumably concealed by the lower statistical power of the previous study. Before we turn to the question of how predictions might play a role in visual search, we first elaborate on the relationship between transsaccadic learning and discrimination performance as well as on the relationship between transsaccadic learning and more general forms of learning. 
Transsaccadic learning and transsaccadic discrimination
Our results on transsaccadic learning seem to be in disagreement with studies on transsaccadic discrimination (e.g., Demeyer et al., 2010; Deubel et al., 1996; Tas et al., 2012), which demonstrated that discrimination performance across a saccade can be strongly improved by disruption of different dimensions of object continuity (temporal object continuity, spatial object continuity, or continuity of surface features). Recently, this improvement was explained by a failure of transsaccadic object continuity testing (Schneider, 2013; Tas et al., 2012). The discontinuity in perceptual input due to a saccade triggers an object continuity test. If object continuity is detected, that is, if the visual input sampled after the saccade appears to be the same object that was sampled before the saccade, the presaccadic object representation is updated and overwritten with the newly sampled postsaccadic information. Therefore, discrimination between presaccadic and postsaccadic target features is not possible or is strongly impaired. Note that Schneider (2013) made more specific assumptions about object continuity testing across a saccade: He postulated that testing relies on midlevel features of priority map regions of objects (e.g., their rough shape or attentional weight). If, however, the visual system detects an object continuity failure, a new representation of the postsaccadic object is established, which allows a comparison between presaccadic and postsaccadic object information. Therefore, disruption of object continuity across saccades facilitates transsaccadic discrimination judgments. In the present study, however, learning of associations between peripheral and foveal visual input was neither impaired nor improved by disruption of object continuity. This might indicate that transsaccadic learning and transsaccadic discrimination recruit distinct mechanisms. 
Furthermore, our results cannot be attributed to an ineffective disruption of object continuity. Although only a few participants reported awareness of the spatial frequency change in the acquisition phase during the postsession debriefing (one in baseline Experiment 1a, five in Experiment 1b, and one in Experiment 1c), overall fixation duration as an indirect measure of change detection (for a similar procedure see Henderson & Hollingworth, 2003a, 2003b) revealed that the spatial frequency change for the swapped object of the acquisition phase was detected by the visual system. In the baseline Experiment 1a, participants looked longer at the postsaccadic swapped object (M = 499 ms) than at the postsaccadic normal object (M = 483 ms). 
Transsaccadic learning: One form of action–effect learning?
The results of the present study are in accordance with studies on response–outcome or action–effect learning. These studies address the more general question of how participants learn that a particular action produces a particular effect, typically by focusing on manual actions such as key presses and auditory effects such as tones (e.g., Elsner & Hommel, 2001; Herwig, Prinz, & Waszak, 2007; Herwig & Waszak, 2009, 2012; Waszak & Herwig, 2007). For example, Elsner and Hommel (2004) demonstrated that disruption of temporal contiguity (up to a delay of 1000 ms between action and effect) does not prevent learning of action–effect associations. That is, learning occurred even over relatively long disruptions of temporal contiguity and collapsed only after delays of 2000 ms were inserted. Comparable intervals have also been found to affect causality judgments, that is, the perceived causal effectiveness of a movement in bringing about a certain outcome. If movements and outcomes are separated by delays of 0 to 2000 ms, causality judgments are quite accurate, but accuracy drops quickly for delays longer than 2000 ms (Shanks, Pearson, & Dickinson, 1989). Moreover, in a recent transsaccadic learning study, Herwig and Horstmann (2011) showed that learning mechanisms similar to those involved in goal-directed hand movements that produce effects in the environment also apply to eye movements conducted to bring about effects in the animate social world. 
Although only rarely discussed in the context of action–effect learning (for an exception see Waszak, Cardoso-Leite, & Hughes, 2012), studies on chromatic adaptation induced by eye movements associated with the perception of particular colors also emphasize the similarities between transsaccadic learning and action–effect learning (Bompas & O'Regan, 2006a, 2006b; Richters, 2008; Richters & Eskew, 2009). For example, after repeated exposure to a pairing of leftward saccades with the appearance of a red spot and rightward saccades with the appearance of a green spot, observers judged a spot presented with leftward saccades as greener than they did before exposure to the pairing (Bompas & O'Regan, 2006b; Richters & Eskew, 2009). Two observations might be helpful in further specifying the kind of learning underlying these effects on chromatic judgments. First, learning is not restricted to eye movements, because the same effects were observed after associating leftward and rightward movements of a joystick with the presentation of red and green stimuli (Richters, 2008). Second, learning is, however, particularly tied to performing an action, because replacing the action with a sensory event (i.e., a tone) eliminated the effect (Richters & Eskew, 2009). This latter observation fits nicely with transsaccadic learning as investigated by Herwig and Schneider (in press; Experiment 2), where excluding the motor component also reduced the learning effect substantially. 
Taken together, these findings indicate that transsaccadic learning as demonstrated in the present study might be more closely related to action–effect learning and causality judgments than to the mechanisms underlying online transsaccadic change detection and object continuity testing. Thus, transsaccadic learning is probably better conceived as one form of action–effect learning than as an isolated learning phenomenon restricted to the oculomotor system. However, whether the time course of transsaccadic learning and of action–effect learning is comparable across action modalities is a topic for future research. 
Prediction in visual search
Making predictions is an important mechanism in visual search. For instance, context information and prior knowledge can be used to generate predictions for guiding visual search: Research on complex visual scenes as well as on simple stimuli (i.e., shapes and letters) demonstrated that search targets were found sooner if likely target locations or identities could be predicted from the scene context or from the context of surrounding distractors (contextual cueing: Chun & Jiang, 1998, 1999; see Chun, 2000, and Chun & Turk-Browne, 2008, for overviews of associative learning). In complex scenes, search targets that were semantically consistent with a scene (e.g., a loaf of bread in a kitchen vs. a microscope in a bar) were found sooner than semantically inconsistent search targets (Henderson et al., 1999). Interestingly, placing a search target at a spatially consistent location facilitates visual search even if the object is semantically inconsistent with the scene (e.g., a toothbrush on a kitchen table vs. a toothbrush on the floor; Castelhano & Heaven, 2011). In the present study, previously learned associations between peripheral and foveal object features were used to predict the peripheral sensory appearance of the search target and to guide visual search accordingly. An interesting question is whether transsaccadic feature prediction can explain at least part of visual search performance without previous transsaccadic learning in the laboratory. The results of Nuthmann (2014) point in that direction: She found that visual search performance in natural scenes was not impaired when foveal vision was blocked by artificial blind spots. Thus, foveal vision is not necessary for successful search performance. 
Conclusions
Associating peripheral and foveal input across saccades is a robust, default mode of the human visual system. In contrast to discrimination judgments across saccades (i.e., spatial displacement discrimination; Deubel et al., 1996), transsaccadic learning does not seem to depend on the outcome of transsaccadic object continuity testing. Presaccadic peripheral and postsaccadic foveal object features are associated regardless of whether an object continuity failure would be detected (postsaccadic blank, shape change) or not. This independence of transsaccadic learning from object continuity testing implies a close link between transsaccadic learning and the more general concept of action–effect learning, which has been shown to be unaffected by short disruptions of temporal contiguity. The same pattern was found for different actions such as eye movements, key presses, and joystick movements. In everyday life such a mechanism would not be a disadvantage, because objects rarely change during the time of a saccade. For instance, it is more likely that the same object reappears after a short occlusion than that a different object appears after 200 ms. 
Acknowledgments
This work was supported by a grant of Excellence Cluster “Cognitive Interaction Technology (CITEC)” to Werner Schneider and by a grant from the German Research Council (Deutsche Forschungsgemeinschaft; DFG) to Arvid Herwig and Werner Schneider (He6388/1-1). 
Commercial relationships: none. 
Corresponding author: Katharina Weiß. 
Email: katharina.weiss@uni-bielefeld.de. 
Address: Department of Psychology, Bielefeld University, Bielefeld, Germany. 
References
Bar M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5, 617–629.
Bar M. (2009). The proactive brain: Memory for predictions. Philosophical Transactions of the Royal Society B, 364, 1235–1243.
Boi M. Ögmen H. Krummenacher J. Otto T. U. Herzog M. H. (2009). A (fascinating) litmus test for human retino- vs. non-retinotopic processing. Journal of Vision, 9 (13): 5, 1–11, http://journalofvision.org/9/13/5/, doi:10.1167/9.13.5.
Bompas A. O'Regan J. K. (2006a). Evidence for a role of action in color perception. Perception, 35, 65–78.
Bompas A. O'Regan J. K. (2006b). More evidence for sensorimotor adaptation in color perception. Journal of Vision, 6 (2): 5, 145–153, http://www.journalofvision.org/content/6/2/5, doi:10.1167/6.2.5.
Bridgeman B. Hendry D. Stark L. (1975). Failure to detect displacement of the visual world during saccadic eye movements. Vision Research, 15, 719–722.
Bridgeman B. Stark L. (1979). Omnidirectional increase in threshold for image shifts during saccadic eye movements. Perception and Psychophysics, 25, 241–243.
Burr D. C. Morrone M. C. Ross J. (1994). Selective suppression of the magnocellular pathway during saccadic eye movements. Nature, 371, 511–513.
Castelhano M. S. Heaven C. (2011). Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin and Review, 18, 890–896.
Chun M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4, 170–178.
Chun M. M. Jiang Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71.
Chun M. M. Jiang Y. (1999). Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science, 10, 360–365.
Chun M. M. Turk-Browne N. B. (2008). Associative learning mechanisms in vision. In Luck S. J. Hollingworth A. (Eds.), Visual memory (Oxford series in visual cognition) (pp. 209–245). Oxford, UK: Oxford University Press.
Cox D. Meier P. Oertelt N. DiCarlo J. (2005). Breaking position-invariant object recognition. Nature Neuroscience, 8, 1145–1147.
Demeyer M. De Graef P. Wagemans J. Verfaillie K. (2010). Object form discontinuity facilitates displacement discrimination across saccades. Journal of Vision, 10 (6): 17, 1–14, http://www.journalofvision.org/content/10/6/17, doi:10.1167/10.6.17.
Deubel H. Bridgeman B. Schneider W. X. (1998). Immediate post-saccadic information mediates space constancy. Vision Research, 38, 3147–3159.
Deubel H. Bridgeman B. Schneider W. X. (2004). Different effects of eyelid blinks and target blanking on saccadic suppression of displacement. Perception and Psychophysics, 66, 772–778.
Deubel H. Schneider W. X. (1994). Perceptual stability and postsaccadic visual information: Can man bridge a gap? Behavioral and Brain Sciences, 17, 259–260.
Deubel H. Schneider W. X. Bridgeman B. (1996). Postsaccadic target blanking prevents saccadic suppression of image displacement. Vision Research, 36, 985–996.
Duhamel J. R. Colby C. L. Goldberg M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Eckstein M. P. (2011). Visual search: A retrospective. Journal of Vision, 11 (5): 14, 1–36, http://www.journalofvision.org/content/11/5/14, doi:10.1167/11.5.14.
Elsner B. Hommel B. (2001). Effect anticipation and action-control. Journal of Experimental Psychology: Human Perception and Performance, 27, 229–240.
Elsner B. Hommel B. (2004). Contiguity and contingency in action-effect learning. Psychological Research, 68, 138–154.
Flombaum J. I. Scholl B. J. Santos L. R. (2009). Spatiotemporal priority as fundamental principle of object persistence. In Hood B. Santos L. (Eds.), The origins of object knowledge (pp. 135–164). Oxford, UK: Oxford University Press.
Grimes J. (1996). On the failure to detect changes in scenes across saccades. In Akins K. (Ed.), Perception (Vancouver studies in cognitive science) (Vol. 5; pp. 89–110). Oxford, UK: Oxford University Press.
Hein E. Cavanagh P. (2012). Motion correspondence in the Ternus display shows feature bias in spatiotopic coordinates. Journal of Vision, 12 (7): 16, 1–14, http://www.journalofvision.org/content/12/7/16, doi:10.1167/12.7.16.
Henderson J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7, 498–504.
Henderson J. M. (2008). Eye movements and scene memory. In Luck S. J. Hollingworth A. (Eds.), Visual memory (pp. 87–121). Oxford, UK: Oxford University Press.
Henderson J. M. Ferreira F. (2004). Introduction to interface of vision, language, and action. In Henderson J. M. Ferreira F. (Eds.), The interface of language, vision and action: Eye movements and the visual world (pp. 1–58). New York: Psychology Press.
Henderson J. M. Hollingworth A. (2003a). Eye movements and visual memory: Detecting changes to saccade targets in scenes. Perception and Psychophysics, 65, 58–71.
Henderson J. M. Hollingworth A. (2003b). Global transsaccadic change blindness during scene perception. Psychological Science, 14, 493–497.
Henderson J. M. Weeks P. A. Hollingworth A. (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25, 210–228.
Herwig A. Horstmann G. (2011). Action-effect associations revealed by eye movements. Psychonomic Bulletin and Review, 18, 531–537.
Herwig A. Prinz W. Waszak F. (2007). Two modes of sensorimotor integration in intention-based and stimulus-based actions. The Quarterly Journal of Experimental Psychology, 60, 1540–1554.
Herwig A. Schneider W. X. (in press). Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General.
Herwig A. Waszak F. (2009). Intention and attention in ideomotor learning. The Quarterly Journal of Experimental Psychology, 62, 219–227.
Herwig A. Waszak F. (2012). Action-effect bindings and ideomotor learning in intention- and stimulus-based actions. Frontiers in Psychology, 3 (444), 1–18, doi:10.3389/fpsyg.2012.00444.
Hillstrom A. P. Scholey H. Liversedge S. P. Benson V. (2012). The effect of the first glimpse at a scene on eye movements during search. Psychonomic Bulletin and Review, 19, 204–210.
Hollingworth A. (2012). Guidance of visual search by memory and knowledge. In Dodd M. D. Flowers J. H. (Eds.), The influence of attention, learning, and motivation on visual search (pp. 63–89). New York: Springer Science and Business Media.
Hunt A. R. Cavanagh P. (2011). Remapped visual masking. Journal of Vision, 11 (1): 13, 1–8, http://www.journalofvision.org/content/11/1/1/13, doi:10.1167/11.1.13.
Kahneman D. Treisman A. Gibbs B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology, 24, 175–219.
Land M. F. Tatler B. W. (2009). Looking and acting: Vision and action in natural behaviour. Oxford, UK: Oxford University Press.
Li N. DiCarlo J. J. (2008). Unsupervised natural experience rapidly alters invariant object representation in visual cortex. Science, 321, 1502–1507.
Li N. DiCarlo J. J. (2010). Unsupervised natural visual experience rapidly reshapes size-invariant object representation in inferior temporal cortex. Neuron, 67, 1062–1075.
MacKay D. M. (1972). Visual stability. Investigative Ophthalmology, 11, 518–524.
Mathôt S. Theeuwes J. (2011). Visual attention and stability. Philosophical Transactions of the Royal Society B, 366, 516–527, doi:10.1098/rstb.2010.0187.
Matin E. (1974). Saccadic suppression: A review and analysis. Psychological Bulletin, 81, 899–917.
Meinecke C. (1989). Retinal eccentricity and the detection of targets. Psychological Research, 51, 107–116.
Najemnik J. Geisler W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391.
Nuthmann A. (2014). How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 40, 342–360.
Pomplun M. Reingold E. M. Shen J. (2003). Area activation: A computational model of saccadic selectivity in visual search. Cognitive Science, 27, 299–312.
Richters D. (2008). Hand-eye correlation: An arbitrary sensorimotor contingency can alter visual sensitivity. Psychology Dissertations, paper 6, http://hdl.handle.net/2047/d1001651x.
Richters D. P. Eskew R. T. (2009). Quantifying the effect of natural and arbitrary sensorimotor contingencies on chromatic judgments. Journal of Vision, 9 (4): 27, 1–11, http://www.journalofvision.org/content/9/4/27, doi:10.1167/9.4.27.
Rolfs M. Jonikaitis D. Deubel H. Cavanagh P. (2011). Predictive remapping of attention across eye movements. Nature Neuroscience, 14, 252–256.
Schneider W. X. (2013). Selective visual processing across competition episodes: A theory of task-driven visual attention and working memory. Philosophical Transactions of the Royal Society B, 368, 1–13.
Schreij D. Olivers C. N. L. (2013). The role of space and time in object-based visual search. Visual Cognition, 21, 306–329.
Shanks D. R. Pearson S. M. Dickinson A. (1989). Temporal contiguity and the judgment of causality by human subjects. Quarterly Journal of Experimental Psychology, 41B, 139–159.
Szinte M. Cavanagh P. (2011). Spatiotopic apparent motion reveals local variations in space constancy. Journal of Vision, 11 (2): 4, 1–20, http://www.journalofvision.org/content/11/2/4, doi:10.1167/11.2.4.
Tas A. C. Moore C. M. Hollingworth A. (2012). An object-mediated updating account of insensitivity to transsaccadic change. Journal of Vision, 12 (11): 18, 1–13, http://www.journalofvision.org/content/12/11/18, doi:10.1167/12.11.18.
Thiele A. Henning P. Kubischik M. Hoffmann K.-P. (2002). Neural mechanisms of saccadic suppression. Science, 295, 2460–2462.
Torralba A. Oliva A. Castelhano M. S. Henderson J. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786.
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
Waszak F. Cardoso-Leite P. Hughes G. (2012). Action effect anticipation: Neurophysiological basis and functional consequences. Neuroscience and Biobehavioral Reviews, 36, 943–959.
Waszak F. Herwig A. (2007). Effect anticipation modulates deviance processing in the brain. Brain Research, 1183, 74–82.
Wischnewski M. Belardinelli A. Schneider W. X. Steil J. J. (2010). Where to look next? Combining static and dynamic proto-objects in a TVA-based model of visual attention. Cognitive Computation, 2, 326–343.
Wolfe J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1, 202–238.
Wolfe J. M. O'Neill P. Bennett S. C. (1998). Why are there eccentricity effects in visual search? Visual and attentional hypotheses. Perception and Psychophysics, 60, 140–156.
Zelinsky G. J. (2008). A theory of eye movements during target acquisition. Psychological Review, 115, 787–835.
Footnotes
1  We use the term transsaccadic learning to refer to a mechanism that “learns to associate for which saccadic eye movement which peripheral input pattern leads to which foveal pattern” (Herwig & Schneider, in press, p. 4). This definition of transsaccadic learning bears some similarities to the more general term action–effect learning, which refers to the acquisition of associations between actions and their ensuing perceptual effects (e.g., Elsner & Hommel, 2001; Herwig & Waszak, 2012). In the Discussion we come back to the question of how transsaccadic learning and action–effect learning are related.
2  The model of Najemnik and Geisler (2005) can also be described as a model incorporating some kind of predictive processing. That is, to compute the optimal next fixation location, their model assumes that for each possible next fixation the predicted information gain is determined and compared across all possible next fixation locations. The two-phase model of transsaccadic feature prediction of Herwig and Schneider (in press) extends and specifies this and related models (e.g., Pomplun, Reingold, & Shen, 2003; Zelinsky, 2008) by incorporating a learning mechanism linking foveal and peripheral object representations in the first place and by specifying the prediction mechanism as a mechanism that is based on associated pre- and postsaccadic feature information of the saccade target object.
Figure 1
 
Spatial resolution differences across a saccade. When the observer foveates the dome of the pier (upper left scene), the seagull in the periphery is only coarsely represented; after a saccade brings it to the fovea, the seagull is represented in high resolution (upper right scene). The bottom row shows the retinal image of the seagull before and after the saccade that foveates it.
Figure 2
 
The acquisition phase for Experiments 1a to 1c. The left column shows trials in which a correct transsaccadic association is learned because the participant made a saccade to the "normal object" (here the vertical object), which did not change spatial frequency across the saccade. The right column shows acquisition trials in which an incorrect transsaccadic association (spatial frequency change from high to low) is learned because the participant made a saccade to the "swapped object" (here the tilted object), which changed spatial frequency across the saccade. Orange circles depict the gaze position. Stimuli are not drawn to scale; their size is exaggerated for better visibility of the frequency change.
Figure 3
 
The visual search test phases of all experiments. First, a visual search target was presented for 100 ms, which was either the foveated version of the normal or the swapped object of the acquisition phase. After a 900-ms delay, participants had to make a saccade as quickly as possible to the peripheral target (defined by orientation). For the normal search target, the pairing between search target and peripheral target was either a physical match and learning congruent, or a physical mismatch and learning incongruent. For the swapped search target, the pairing was either a physical match and learning incongruent, or a physical mismatch and learning congruent. Stimuli are not drawn to scale; their size is exaggerated for better visibility of the frequency change.
Figure 4
 
The percentage of correct first saccades as a function of the between-subjects factor object continuity and the within-subject factors match and learning congruency. Error bars represent standard errors of the mean.
Figure 5
 
The interaction between match and congruency across all experiments (N = 36). Error bars represent standard errors of the mean.
Figure 6
 
The mean saccade latencies as a function of the between-subjects factor object continuity and the within-subject factors match and learning congruency. Error bars represent standard errors of the mean.