Open Access
Article | February 2020
Introduction to special issue on “Prediction in Perception and Action”
Author Affiliations
  • Mary Hayhoe
    Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
  • Katja Fiehler
    Department of Psychology, Justus Liebig University, Giessen, Germany
    Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
  • Miriam Spering
    Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
  • Eli Brenner
    Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
  • Karl R. Gegenfurtner
    Department of Psychology, Justus Liebig University, Giessen, Germany
    Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Giessen, Germany
Journal of Vision February 2020, Vol. 20, 8. https://doi.org/10.1167/jov.20.2.8
Abstract

The wide diversity of articles in this issue reveals an explosion of evidence for the mechanisms of prediction in the visual system. When thought of as visual priors, predictive mechanisms can be seen as tightly interwoven with incoming sensory data. Prediction is thus a fundamental and essential aspect not only of visual perception but of the actions that are guided by perception.

Introduction
Prediction has long been recognized as an important feature of human behavior, and predictive mechanisms are found at many different levels of processing. Significant sensory-motor delays present a problem in a dynamically changing environment: by the time sensory information has been processed and the body set in motion, the world has moved on. Predictive mechanisms allow us to anticipate, on the basis of past experience, how the environment is likely to change over time and to adjust our behavior accordingly, taking account of the time required both to process sensory information and to move the body when interacting with our environment. Besides anticipating changes in the outside world, predictive mechanisms allow us to anticipate the future consequences of our own actions, making it easier to distinguish between external and self-generated sensory events. In general, prediction is important from basic levels of sensory-motor control, such as making an eye movement toward a moving object, to the most abstract levels of processing, such as predicting social behavior. 
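To make the computational problem concrete, consider the simplest possible compensation scheme: extrapolating a target's position over the delay using its current velocity. A minimal sketch in Python; the delay value and target speed are purely illustrative and are not taken from any study discussed here.

# Minimal sketch: compensating a sensorimotor delay by linear extrapolation.
# The delay and the target's speed are illustrative values only.

DELAY = 0.1  # assumed visuomotor delay, in seconds

def extrapolate(position, velocity, delay=DELAY):
    """Predict where a target will be by the time a delayed sensory
    sample can be acted on, assuming roughly constant velocity."""
    return position + velocity * delay

# A target at 0.5 m moving at 2 m/s has advanced 0.2 m by the time a
# 100-ms-old sensory sample is used to guide the movement.
print(extrapolate(0.5, 2.0))  # -> 0.7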
Some of the clearest examples of prediction come from motor control. Babies learn to predict a moving object's future position within the first year of life (von Hofsten, 2004; Kubicek, Jovanovic, & Schwarzer, 2017). These abilities are more advanced in adult observers, who even appear able to predict the trajectory of bouncing balls based on inferences about a ball's physical properties (e.g., Land & McLeod, 2000; Hayhoe & Ballard, 2005; Nusseck et al., 2007; Diaz et al., 2013). In the somatosensory system, it is commonly accepted that the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body's dynamics (Mulliken & Andersen, 2009; Shadmehr et al., 2010; Wolpert, Miall, & Kawato, 1998), and the comparison of actual and predicted somatosensory feedback is a critical component of the control of movement. Indeed, when somatosensory feedback is severely compromised by somatosensory loss, the consequences for movement can be devastating (Cole & Paillard, 1995). 
Prediction plays a role at many different levels in motor control. For example, in eye movements, efference-copy mechanisms serve as a very basic form of prediction for differentiating self-induced from external visual motion patterns. Accurate motor control for more complex actions is based on processes ranging from simple sensory predictions to internal simulations of complete action sequences (Jeannerod, 1997). Despite the importance of somatosensory predictions, evidence for predictive visual representations has been less clear. The current issue of Journal of Vision provides a wealth of evidence for the importance of visual prediction at all levels of visual representation. 
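The comparator logic behind efference copy can be captured in a single subtraction. A deliberately minimal sketch, assuming one-dimensional motion and a perfect internal prediction of self-induced retinal slip (both simplifications):

# Minimal sketch of an efference-copy comparator (1-D, illustrative values).
# Assumes the efference copy perfectly predicts self-induced retinal motion.

def external_motion(retinal_motion, predicted_self_motion):
    """Attribute to the world whatever retinal motion the efference
    copy does not account for."""
    return retinal_motion - predicted_self_motion

# 10 deg/s of retinal slip during a 10 deg/s eye movement: world is stable.
print(external_motion(10.0, 10.0))  # -> 0.0
# The same retinal slip with stationary eyes: the world itself moved.
print(external_motion(10.0, 0.0))   # -> 10.0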
A number of aspects of prediction are considered in this special issue and need to be distinguished. The most obvious is predicting future visual states of the world, such as the future location of a moving target. Another is the need to predict the visual consequences of self-motion, which is essential for separating externally generated from internally generated retinal motion. Prediction matters for the analysis of sensory information at all levels of abstraction, from using retinal motion signals to control pursuit eye movements to using cognitive information to make strategic decisions. Because prediction is based on past experience, the encoding of scene statistics (as visual priors or “internal models” of the world; Kersten et al., 2004; Fiser et al., 2010) is also important, both in terms of how such priors are acquired and maintained and in terms of how they are represented within the nervous system. Articles in this special issue cover all these aspects of prediction. This Introduction to the Special Issue provides an overview of the articles published in this issue in the context of the different ways in which visual predictions are important. It starts with predictions that allow one to anticipate changes in the world on the basis of visual information. Next are predictions that allow one to understand the sensory consequences of one's own motion, including eye and arm movements. It ends with studies that combine predictive mechanisms of self-motion and motion in the environment. 
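The notion of a visual prior can be made concrete in the simplest Gaussian case, where the percept corresponds to a reliability-weighted compromise between the prior and the current measurement. A minimal sketch; the slow-speed prior in the example is a standard textbook illustration, not a result from this issue:

# Minimal sketch: combining a Gaussian prior with a Gaussian measurement.
# The posterior mean weights each source by its reliability (1/variance).

def posterior_mean(measurement, sigma_m, prior_mean, sigma_p):
    w_m = 1.0 / sigma_m**2   # reliability of the measurement
    w_p = 1.0 / sigma_p**2   # reliability of the prior
    return (w_m * measurement + w_p * prior_mean) / (w_m + w_p)

# A noisy speed estimate of 10 deg/s (sigma 4) combined with a slow-speed
# prior centered on 2 deg/s (sigma 2): the estimate is biased toward slow.
print(posterior_mean(10.0, 4.0, 2.0, 2.0))  # -> 3.6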
Predicting sensory changes in the world
In its purest manifestation, prediction means that the visual system computes perceptual representations that correspond to a predicted future state. This is perhaps easiest to understand in the case of object motion. Assad and Maunsell (1995) showed that neurons in the parietal cortex responded to a moving target throughout a period of stimulus occlusion. That is, the neurons responded as if they were extrapolating the trajectory of the currently invisible target from previous exposure. A variety of perceptual phenomena might be related to a predictive representation of motion. Such mechanisms could lead to static stimuli being mislocalized relative to moving stimuli; one such phenomenon is the “flash-grab” effect. Two articles in this issue by Hogendoorn and colleagues (van Heusden et al., 2019; Blom et al., 2019) provide support for extrapolation using motion signals. They explore the mechanisms involved, suggesting that a predictive signal is active in monocular parts of the human visual pathway. 
When moving to more complex predictions, one must consider how such predictions arise. The statistics of the visual environment shape the visual system, and this in turn shapes visual perception to allow one to make perceptual inferences about the state of the world. Vullings and Madelain (2019) test the idea that predictive mechanisms control learning of temporal and spatial properties of the environment. They use a visual search task in which targets are presented contingent on saccadic reaction times, essentially reinforcing either short- or long-latency saccades by presenting the visual target at a specific time. Their results show that saccade latencies are finely tuned to prediction-driven reinforcement contingencies. Notaro et al. (2019) also provide a demonstration of the process of learning environmental statistics. They monitor such learning by examining small anticipatory drifts and saccades in the direction of the most likely upcoming target. Zoeller et al. (2019) show that such learning can be very specific. They show that somatosensory experience with hard or soft objects modifies the force observers use when interacting with the object. Interestingly, visual or semantic information does not, indicating that the predictions are purely somatosensory. 
Several articles in this Special Issue are concerned with the neural mechanisms underlying prediction. In a now-classic article, Rao and Ballard (1999) introduced the idea of predictive coding. In their model of object recognition, high-level object representations are propagated to early visual areas, where they are subtracted from the incoming visual signals. The mismatch, or residual, reflects the sensory input that the model fails to explain and that may therefore require the model to be revised. At all stages of processing, sensory information is compared against predictions of expected sensory events made by higher-level perceptual areas, and the residuals, or prediction errors, are propagated upward to update perceptual models of the environment. This model is the percept. The idea naturally allows prediction in time, although Rao and Ballard (1999) did not explicitly address that issue, and the stored memory representations can be thought of as Bayesian priors. 
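A toy version of this scheme, reduced to a single linear stage, conveys the core computation. This is a didactic sketch only; the published model is hierarchical and learns its weights.

import numpy as np

# Toy predictive-coding loop in the spirit of Rao and Ballard (1999):
# a higher level holds an estimate r of the causes of the input; its
# top-down prediction U @ r is subtracted from the input, and the
# residual (prediction error) drives updates to r.

rng = np.random.default_rng(0)
U = rng.normal(size=(16, 4))              # generative weights (fixed here)
x = U @ np.array([1.0, 0.0, -0.5, 0.2])   # sensory input with known causes

r = np.zeros(4)                           # higher-level estimate of causes
for _ in range(500):
    residual = x - U @ r                  # prediction error sent "upward"
    r += 0.01 * U.T @ residual            # revise the model to reduce it

print(np.round(r, 3))                     # recovers ~[1.0, 0.0, -0.5, 0.2]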
A number of recent articles have provided compelling evidence for low-level visual activity prior to stimulus presentation, consistent with the predictive coding hypothesis (Kok et al., 2017; de Lange et al., 2018). Two articles in the Special Issue provide evidence for high-level predictive representations using EEG. Oxner et al. (2019) show that prediction errors in surface segmentation are associated with the visual mismatch negativity and with the P2 wave of the event-related potential. Based on amplitude differences in the error-related negativity, Maurer et al. (2019) suggest that the brain generates error predictions that can dissociate relevant from irrelevant errors and that relevant errors lead to larger behavioral adjustments than irrelevant ones. Krala et al. (2019) show the involvement of high-level cortical regions in prediction, irrespective of sensory modality. These articles highlight the ubiquitous comparison of sensory information with predicted outcomes across low-level and high-level cortical areas. 
Predicting the consequences of one's own actions
Perhaps the most fundamental aspect of prediction is the need to take account of the visual consequences of self-motion—in particular, the image displacement on the retina that accompanies an eye movement. There has been a substantial body of work demonstrating remapping of visual receptive fields before a saccade (Duhamel et al., 1997; Melcher & Colby, 2008). Predictive remapping occurs not only in lateral intraparietal cortex but also in superior colliculus, frontal eye fields, and area V3. Evidence from neurophysiological studies indicates that predictive remapping is mediated by a corollary discharge signal originating in the superior colliculus and mediodorsal nucleus of the thalamus (Sommer & Wurtz, 2004, 2008). This predictive remapping might be part of a mechanism for visual stability that relates the pre- and postsaccadic images of a stimulus (Melcher & Colby, 2008; Cicchini et al., 2012; but see Maij et al., 2009). In the current issue, Murdison et al. (2019) demonstrate that predictive remapping not only takes account of vertical and horizontal displacements caused by saccadic eye movements but also extends to the torsional change that occurs during an oblique saccade to a new location. This suggests that observers have finely calibrated expectations resulting from their own movements and are able to learn the complex image remappings that result from self-movements. 
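Geometrically, such remapping amounts to more than a shift: for an oblique saccade with a torsional component, the predicted retinal image must be translated by the saccade vector and rotated by the expected torsion. A minimal sketch with hypothetical values, using a small-angle planar approximation of what is really a three-dimensional rotation of the eye:

import numpy as np

# Minimal sketch of predictive remapping as a planar transform:
# translate retinal coordinates by the saccade vector, then rotate by
# the expected torsion. Hypothetical values; planar approximation only.

def remap(retinal_xy, saccade_vector, torsion_deg):
    t = np.deg2rad(torsion_deg)
    rotation = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    return rotation @ (np.asarray(retinal_xy) - np.asarray(saccade_vector))

# A stimulus 5 deg right of fixation, after a (5, 3) deg oblique saccade
# that is expected to produce 1 deg of ocular torsion:
print(remap([5.0, 0.0], [5.0, 3.0], 1.0))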
Recent studies have shown that predictive remapping shifts the focus of attention prior to saccade onset (Rolfs et al., 2011) and leads to lingering attention after the saccade (Golomb et al., 2008). A novel computational model accounts for both types of attentional updating and shows that these phenomena rely on the same neural circuit (Bergelt & Hamker, 2019). Predictive remapping allows constancy of visual direction, but other aspects of integrating information across saccades need to be considered, such as relating the appearance of visual stimuli in peripheral and central vision, given that the spatial filtering in the retina provides such disparate signals. We do not typically perceive an object as entirely different when we shift our gaze to look at it directly. Valsecchi et al. (2018) used a novel method of manipulating images to show that peripheral stimuli appear to have sharper edges than would be expected on the basis of peripheral acuity losses. That is, they appear as they would when viewed foveally. Thus, humans appear to learn the relation between peripheral and central images in order to maintain constancy of appearance as well as constancy of direction. 
One interesting development revealed in this issue is the multisensory nature of predictive mechanisms that take account of self-motion. It has been demonstrated that when participants reach to grasp an object, somatosensory sensitivity is suppressed at movement-relevant locations shortly before and during the movement (Buckingham et al., 2010; Colino, Buckingham, Cheng, van Donkelaar, & Binsted, 2014). Voudouris, Broda, and Fiehler (2019) had participants reach to grasp an object whose distribution of mass was either predictable from its visual appearance or unpredictable. They found that somatosensory sensitivity was suppressed more when participants could predict the mass distribution from visual features. Thus, visual information can be used to generate somatosensory predictions for the control of reaching and grasping. 
Arikan et al. (2019) also demonstrate the multisensory nature of suppression during self-generated movements. They found reduced blood oxygenation level dependent (BOLD) activity in somatosensory, visual, and auditory regions during self-generated movements (vs. passive, externally generated movements). Moreover, they found stronger suppression for multisensory than unisensory movements and confirmed the role of the cerebellum in detecting delays between the action and its visual consequences. Such predictive mechanisms seem to be enhanced in older age when sensory input becomes increasingly noisy. Klever et al. (2019) found stronger suppression of somatosensory sensitivity in a group of older compared to younger participants in a visually guided reaching task. Interestingly, the strength of suppression correlated negatively with individual executive capacities, highlighting the interaction between sensory, motor, and predictive processes modulated by cognitive resources. 
Krugwasser et al. (2019) investigated how predictions of sensory action consequences are processed. Participants saw a virtual hand moving either in the same manner as their own or with a temporal, spatial, or anatomical alteration. They had to attribute an action to the self or an external source. There were similarities in the sense of agency across temporal, spatial, and anatomical manipulations, indicating joint processing of the sense of agency across different sensorimotor aspects. The review by Fiehler, Brenner, and Spering (2019) further discusses the implications of sensory attenuations of predicted movement consequences, how they are linked to task demands, and the processing of such signals. 
Combining predictions of eye, arm, and object movements
Prediction plays an important role in the oculomotor system. Both smooth pursuit and saccadic eye movements reveal predictions of the future visual stimulus in both laboratory and real-world contexts (see review in Kowler et al., 2014; Diaz et al., 2013). In this issue, Fiehler et al. (2019) review the role of prediction in goal-directed movements. This review covers classic paradigms and novel approaches investigating predictions in the planning of eye and hand movements and touches on many of the other aspects of visual prediction introduced here. 
Combining predictions of eye, arm, and object movements is challenging, as these movements differ in latencies and dynamics. For example, a predictive component is necessary for the smooth coordination of pursuit and saccades. Goettker et al. (2019) show that the pursuit and saccadic systems share a common internal representation of the target movement and interact closely to rapidly improve tracking responses. Consistent with this, Kwon et al. (2019) show that the integration of position and motion information extends to the ocular following response. Rothwell et al. (in press) compared the role of different cues in driving anticipatory smooth pursuit and anticipatory ocular torsion and found that the predictive drive for these two types of eye movements might be partly decoupled. Delle Monache et al. (2019) demonstrate a role for an internal model of gravity in oculomotor control, tailored to the requirements of the visual context, for both predictive saccades and pursuit. Eye movements scaled with gravitational acceleration, but only when observers tracked a ball in the context of a pictorial scene, not when faced with a uniform background. These results emphasize that predictive eye movements are tuned to realistic scene properties. 
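The difference an internal model of gravity makes can be illustrated with a one-dimensional extrapolation of an occluded ball. The numbers are illustrative, and this sketch is not the analysis used by Delle Monache et al.:

# Minimal sketch: extrapolating an occluded ball's vertical position
# with and without an internal model of gravity (illustrative values).

G = 9.81  # gravitational acceleration, m/s^2

def predict_height(y, vy, dt, use_gravity=True):
    """Predicted vertical position after dt seconds of occlusion."""
    if use_gravity:
        return y + vy * dt - 0.5 * G * dt**2
    return y + vy * dt  # constant-velocity extrapolation

# A ball at 2 m rising at 3 m/s, occluded for 0.5 s:
print(predict_height(2.0, 3.0, 0.5, use_gravity=True))   # ~2.27 m
print(predict_height(2.0, 3.0, 0.5, use_gravity=False))  # 3.5 m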
Another important aspect of the predictive component of pursuit is that it allows better performance in intercepting moving targets (Spering et al., 2011; Fooken et al., 2016). Binaee and Diaz (2019) show that predictive saccades and hand movements share a representation that is presumably important for interceptions. In their experiment, participants intercepted virtual balls with a racquet while the ball was occluded for varying periods. When occlusion increased the spatiotemporal demands of the task, some participants demonstrated a strong correlation between saccade prediction accuracy and the accuracy of hand placement. Fooken and Spering (2019) used a go/no-go manual interception paradigm to show that predictive eye and hand movements jointly signal the upcoming decision about a future target. Similar predictive behavioral responses are shown in a manual tracking task, where different cursor speeds were related to different perceptual decisions (Zeljko et al., 2019). Mann et al. (in press) used a virtual tennis environment to show how a combination of tracking and predictive eye and head movements helps keep gaze close to the ball despite the ball bouncing as it approaches. These studies focus on tracking and intercepting moving objects, but prediction is similarly important in other real-world tasks such as driving (Macuga et al., 2019; Smith & Loschky, 2019; Wolfe et al., 2019). 
It should be noted that the mechanisms underlying predictive eye movements or body movements are not entirely clear. Zhao and Warren (2015) argue that interception depends on some visual parameter that is monitored continuously and controlled by the interceptive action, such as maintaining a constant bearing angle, and that the role of spatial memory or prediction is very modest. In cases where the action precedes the moving object in time, it can be argued that the prediction is based purely on recent sensory data and that a visual representation mediating prediction is unnecessary. The most likely resolution of this issue is that the domains where these different mechanisms operate depend on factors such as internal and external noise and the time available for the response (see de la Malla & Lopez-Moliner, 2015; Hayhoe, 2017). Aguilar-Lleyda et al. (2018) used a timing coincidence task and presented a Kalman filter model that samples and updates the position of a moving target by optimally combining spatial and temporal information. The authors suggest that a single mechanism based on position tracking can account for both spatial and temporal relations between physical target and response. Finally, insights into the relation between manual actions and prediction abilities can be gained from developmental studies. In this issue, Gehb and colleagues (2019) show that object experiences gathered by specific manual exploratory actions might facilitate infants’ predictive abilities when reaching and grasping. 
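Returning to the position-tracking account above: the flavor of such a model can be conveyed by a generic one-dimensional constant-velocity Kalman filter. This is a sketch of the class of model, with illustrative noise parameters, not the implementation of Aguilar-Lleyda et al. (2018):

import numpy as np

# Generic 1-D constant-velocity Kalman filter: track a target from noisy
# position samples by optimally weighting predictions and measurements.

dt, q, r = 0.01, 1e-3, 0.05            # step, process noise, measurement noise
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])             # only position is observed
Q, R = q * np.eye(2), np.array([[r]])

x, P = np.zeros((2, 1)), np.eye(2)     # state estimate and its covariance

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # predict forward one step
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)          # correct with the residual
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
for t in np.arange(0.0, 1.0, dt):      # true target position: 2 * t
    x, P = kalman_step(x, P, 2.0 * t + rng.normal(0.0, np.sqrt(r)))
print(np.round(x.ravel(), 2))          # estimate approaches [2.0, 2.0]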
Conclusions
The wide diversity of articles in this issue provides a broad overview of the role and mechanisms of prediction in the visual system. This includes predicting ongoing motion as well as building visual priors about the world with which to anticipate future events. In all cases, predictive mechanisms are tightly interwoven with incoming sensory data. The articles in this issue show that prediction is a fundamental and essential aspect of visual perception, as well as of the actions that are guided by perception. 
Acknowledgments
Commercial relationships: None. 
Corresponding author: Mary Hayhoe. 
E-mail: hayhoe@utexas.edu. 
Address: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA. 
References
Aguilar-Lleyda, D., Tubau, E., & López-Moliner, J. (2018). An object-tracking model that combines position and speed explains spatial and temporal responses in a timing task. Journal of Vision, 18 (12), 12. [CrossRef]
Arikan, B. E., van Kemenade, B. M., Podranski, K., Steinsträter, O., Straube, B., & Kircher, T. (2019). Perceiving your hand moving: BOLD suppression in sensory cortices and the role of the cerebellum in the detection of feedback delays. Journal of Vision, 19 (14), 4, https://doi.org/10.1167/19.14.4.
Assad, J. A., & Maunsell, J. H. (1995). Neuronal correlates of inferred motion in primate posterior parietal cortex. Nature, 373 (6514), 518. [CrossRef] [PubMed]
Bergelt, J., & Hamker, F. H. (2019). Spatial updating of attention across eye movements: A neuro-computational approach. Journal of Vision, 19 (7), 10.
Binaee, K., & Diaz, G. (2019). Movements of the eyes and hands are coordinated by a common predictive strategy. Journal of Vision, 19(12):3, 1–16, https://doi.org/10.1167/19.12.3. [CrossRef]
Blom, T., Liang, Q., & Hogendoorn, H. (2019). When predictions fail: correction for extrapolation in the flash-grab effect. Journal of Vision, 19 (2), 3. [CrossRef]
Buckingham, G., Carey, D. P., Colino, F. L., Degrosbois, J., & Binsted, G. (2010). Gating of vibrotactile detection during visually guided bimanual reaches. Experimental Brain Research, 201 (3), 411–419. [CrossRef] [PubMed]
Cicchini, G. M., Binda, P., Burr, D. C., & Morrone, M. C. (2012). Transient spatiotopic integration across saccadic eye movements mediates visual stability. Journal of Neurophysiology, 109 (4), 1117–1125. [PubMed]
Cole, J., & Paillard, J. (1995). Living without touch and peripheral information about body position and movement: Studies with deafferented subjects. In Bermúdez, J. L., Marcel, A., & Eilan, N. (Eds.), The body and the self (pp. 245–266). Cambridge, MA: MIT Press.
Colino, F. L., Buckingham, G., Cheng, D. T., van Donkelaar, P., & Binsted, G. (2014). Tactile gating in a reaching and grasping task. Physiological Reports, 2 (3), 1–11.
de la Malla, C., & López-Moliner, J. (2015). Predictive plus online visual information optimizes temporal precision in interception. Journal of Experimental Psychology: Human Perception and Performance, 41 (5), 1271. [CrossRef] [PubMed]
de Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception? Trends in Cognitive Sciences, 22 (9), 764–779. [CrossRef] [PubMed]
Delle Monache, S., Lacquaniti, F., & Bosco, G. (2019). Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion. Journal of Vision, 19 (4), 13. [CrossRef]
Diaz, G., Cooper, J., Rothkopf, C., & Hayhoe, M. (2013). Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. Journal of Vision, 13 (1), 20. [CrossRef]
Duhamel, J. R., Bremmer, F., Hamed, S. B., & Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389 (6653), 845. [CrossRef] [PubMed]
Fiehler, K., Brenner, E., & Spering, M. (2019). Prediction in goal-directed action. Journal of Vision, 19 (9), 10. [CrossRef]
Fiser, J., Berkes, P., Orbán, G., & Lengyel, M. (2010). Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14 (3), 119–130. [CrossRef] [PubMed]
Fooken, J., & Spering, M. (2019). Decoding go/no-go decisions from eye movements. Journal of Vision, 19 (2), 5. [CrossRef]
Fooken, J., Yeo, S. H., Pai, D. K., & Spering, M. (2016). Eye movement accuracy determines natural interception strategies. Journal of Vision, 16 (14), 1. [CrossRef]
Gehb, G., Kubicek, C., Jovanovic, B., & Schwarzer, G. (2019). The positive influence of manual object exploration on predictive grasping for a moving object in 9-month-old infants. Journal of Vision, 19 (14), 13, https://doi.org/10.1167/19.14.13.
Goettker, A., Braun, D. I., & Gegenfurtner, K. R. (2019). Dynamic combination of position and motion information when tracking moving targets. Journal of Vision, 19 (7), 2. [CrossRef]
Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience, 28, 10654–10662. [CrossRef] [PubMed]
Hayhoe, M. (2017). Perception and action. Annual Review of Vision Science, 3 (4), 389–413. [CrossRef] [PubMed]
Hayhoe, M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9 (4), 188–194. [CrossRef] [PubMed]
Jeannerod, M. (1997). The cognitive neuroscience of action. Oxford, UK: Blackwell.
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304. [CrossRef] [PubMed]
Klever, L., Voudouris, D., Fiehler, K., & Billino, J. (2019). Age effects on sensorimotor predictions: What drives increased tactile suppression during reaching? Journal of Vision, 19 (9), 9. [CrossRef]
Kok, P., Mostert, P., & De Lange, F. P. (2017). Prior expectations induce prestimulus sensory templates. Proceedings of the National Academy of Sciences, 114 (39), 10473–10478. [CrossRef]
Kowler, E., Aitkin, C. D., Ross, N. M., Santos, E. M., & Zhao, M. (2014). Davida Teller Award Lecture 2013: The importance of prediction and anticipation in the control of smooth pursuit eye movements. Journal of Vision, 14 (5), 10. [CrossRef]
Krala, M., van Kemenade, B., Straube, B., Kircher, T., & Bremmer, F. (2019). Sensory specific BOLD enhancement and supramodal BOLD suppression in a multisensory path integration task. Journal of Vision, 19 (11), 13. [CrossRef]
Krugwasser, A. R., Harel, E. V., & Salomon, R. (2019). The boundaries of the self: The sense of agency across different sensorimotor aspects. Journal of Vision, 19 (4), 14. [CrossRef]
Kubicek, C., Jovanovic, B., & Schwarzer, G. (2017). The relation between crawling and 9-month-old infants’ visual prediction abilities in spatial object processing. Journal of Experimental Child Psychology, 158, 64–76. [CrossRef] [PubMed]
Kwon, S., Rolfs, M., & Mitchell, J. F. (2019). Presaccadic motion integration drives a predictive postsaccadic following response. Journal of Vision, 19 (11), 12. [CrossRef]
Land, M. F., & McLeod, P. (2000). From eye movements to actions: how batsmen hit the ball. Nature Neuroscience, 3 (12), 1340. [CrossRef] [PubMed]
Macuga, K. L., Beall, A. C., Smith, R. S., & Loomis, J. M. (2019). Visual control of steering in curve driving. Journal of Vision, 19 (5), 1. [CrossRef]
Maij, F., Brenner, E., & Smeets, J. B. J. (2009). Temporal information can influence spatial localization. Journal of Neurophysiology, 102, 490–495. [CrossRef] [PubMed]
Mann, D., Nakamoto, H., Logt, N., Sikkink, L., Brenner, E. (in press). Predictive eye movements when hitting a bouncing ball. Journal of Vision.
Maurer, L. K., Joch, M., Hegele, M., Maurer, H., & Müller, H. (2019). Predictive error processing distinguishes between relevant and irrelevant errors after visuomotor learning. Journal of Vision, 19 (4), 18. [CrossRef]
Melcher, D., & Colby, C. L. (2008). Trans-saccadic perception. Trends in Cognitive Sciences, 12, 466–473. [CrossRef] [PubMed]
Mulliken, G. H., & Andersen, R. A. (2009). Forward models and state estimation in posterior parietal cortex. The Cognitive Neurosciences, 4, 599–611.
Murdison, S., Blohm, G., & Bremmer, F. (2019). Saccade-induced changes in ocular torsion reveal predictive orientation perception. Journal of Vision, 19, 10. [CrossRef]
Notaro, G., van Zoest, W., Altman, M., Melcher, D., & Hasson, U. (2019). Predictions as a window into learning: Anticipatory fixation offsets carry more information about environmental statistics than reactive stimulus-responses. Journal of Vision, 19 (2), 8. [CrossRef]
Nusseck, M., Lagarde, J., Bardy, B., Fleming, R., & Bülthoff, H. H. (2007, July). Perception and prediction of simple object interactions. In Proceedings of the 4th symposium on applied perception in graphics and visualization (pp. 27–34). ACM.
Oxner, M., Rosentreter, E. T., Hayward, W. G., & Corballis, P. M. (2019). Prediction errors in surface segmentation are reflected in the visual mismatch negativity, independently of task and surface features. Journal of Vision, 19 (6), 9. [CrossRef]
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2 (1), 79. [CrossRef] [PubMed]
Rolfs, M., Jonikaitis, D., Deubel, H., & Cavanagh, P. (2011). Predictive remapping of attention across eye movements. Nature Neuroscience, 14, 252–256. [CrossRef] [PubMed]
Rothwell, A. C., Wu, X., Edinger, J., & Spering, M. (in press). On the relation between anticipatory ocular torsion and anticipatory smooth pursuit. Journal of Vision.
Shadmehr, R., Smith, M. A., & Krakauer, J. W. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33, 89–108. [CrossRef] [PubMed]
Smith, M. E., & Loschky, L. C. (2019). The influence of sequential predictions on scene gist recognition. Journal of Vision, 19(12):14, 1–24, https://doi.org/10.1167/19.12.14. [CrossRef]
Sommer, M. A. & Wurtz, R. H. (2004). What the brain stem tells the frontal cortex: II. Role of the SC-MD-FEF pathway in corollary discharge. Journal of Neurophysiology, 91, 1403–1423. [CrossRef] [PubMed]
Sommer, M. A., & Wurtz, R. H. (2008). Brain circuits for the internal monitoring of movements. Annual Review of Neuroscience, 31, 317–338. [CrossRef] [PubMed]
Spering, M., Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion. Journal of Neurophysiology, 105 (4), 1756–1767. [CrossRef] [PubMed]
Valsecchi, M., Koenderink, J., van Doorn, A., & Gegenfurtner, K. R. (2018). Prediction shapes peripheral appearance. Journal of Vision, 18 (13), 21. [CrossRef]
van Heusden, E., Harris, A. M., Garrido, M. I., & Hogendoorn, H. (2019). Predictive coding of visual motion in both monocular and binocular human visual processing. Journal of Vision, 19 (1), 3. [CrossRef]
von Hofsten, C. (2004). An action perspective on motor development. Trends in Cognitive Sciences, 8 (6), 266–272. [CrossRef] [PubMed]
Voudouris, D., Broda, M. D., & Fiehler, K. (2019). Anticipatory grasping control modulates somatosensory perception. Journal of Vision, 19 (5), 4. [CrossRef]
Vullings, C., & Madelain, L. (2019). Discriminative control of saccade latencies. Journal of Vision, 19 (3), 16.
Wolfe, B., Fridman, L., Kosovicheva, A., Seppelt, B., Mehler, B., Reimer, B., & Rosenholtz, R. (2019). Predicting road scenes from brief views of driving video. Journal of Vision, 19 (5), 8. [CrossRef]
Wolpert, D. M., Miall, R. C., & Kawato, M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences, 2 (9), 338–347. [CrossRef] [PubMed]
Zeljko, M., Kritikos, A., & Grove, P. M. (2019). Temporal dynamics of a perceptual decision. Journal of Vision, 19 (5), 7. [CrossRef]
Zhao, H., & Warren, W. H. (2015). On-line and model-based approaches to the visual control of action. Vision Research, 110, 190–202. [CrossRef] [PubMed]
Zoeller, A. C., Lezkan, A., Paulun, V. C., Fleming, R. W., & Drewing, K. (2019). Integration of prior knowledge during haptic exploration depends on information type. Journal of Vision, 19 (4), 20. [CrossRef]