In our previous research, we analyzed event-related potentials (ERPs) during the execution of a semivirtual, goal-oriented throwing task in order to probe forward model predictions in the course of motor learning (e.g., Maurer, Maurer, & Müller, 2015). The temporal separation between movement execution (throwing) and the observation of its outcome (hitting or missing a target) allows us to examine separately two ERP components that have been related to error processing: (a) the error-related negativity (Ne/ERN; Falkenstein, Hohnsbein, Hoormann, & Blanke, 1991; Gehring, Goss, Coles, Meyer, & Donchin, 1993), which refers to a fronto-central signal with negative polarity occurring shortly after the onset of an erroneous motor action and prior to feedback about the terminal action outcome, and (b) the feedback-related negativity (FRN; Miltner, Braun, & Coles, 1997), which shares brain topography and polarity with the Ne/ERN but occurs 100–200 ms after feedback about the action outcome (Holroyd et al., 2004).
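To make the analytic consequence of this temporal separation concrete, the following minimal sketch (simulated data and hypothetical event timings, not our actual processing pipeline) averages the same single-channel trace once time-locked to movement onset and once time-locked to feedback onset, yielding the two windows in which the Ne/ERN and the FRN, respectively, would be sought.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-channel EEG at 500 Hz with two kinds of event markers.
fs = 500                                              # sampling rate (Hz)
eeg = rng.standard_normal(60 * fs)                    # 60 s of simulated data
movement_onsets = np.array([5, 15, 25, 35, 45]) * fs  # throw execution (samples)
feedback_onsets = movement_onsets + fs                # outcome shown 1 s later

def epoch(signal, onsets, tmin=-0.2, tmax=0.6, fs=fs):
    """Cut fixed-length segments around each event; returns trials x samples."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    return np.stack([signal[o + lo:o + hi] for o in onsets])

# Response-locked average: the window in which the Ne/ERN is expected.
response_locked_erp = epoch(eeg, movement_onsets).mean(axis=0)
# Feedback-locked average: the window in which the FRN is expected (~100-200 ms).
feedback_locked_erp = epoch(eeg, feedback_onsets).mean(axis=0)
```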
According to Holroyd and Coles's (2002) reinforcement learning theory of the error-related negativity, the Ne/ERN is the first indicator that an action outcome will be worse than expected on the basis of pre-diction (a comparison between the intended and the predicted terminal action outcome). The FRN is the first indicator that an action result is worse than expected on the basis of post-diction (a comparison between the intended outcome and the actual sensory feedback about the result, modulated by the prediction).
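These two comparisons can be made explicit in a schematic sketch; the scalar outcome space, the linear forward model, and the particular way the prediction "modulates" the feedback comparison are illustrative assumptions rather than claims about the underlying neural computation.

```python
# Illustrative one-dimensional outcome space (e.g., where the throw lands).
intended_outcome = 0.0                 # aim at the target centre

def forward_model(efference_copy):
    # Assumed linear mapping from the copied motor command to a predicted outcome.
    return 0.8 * efference_copy

efference_copy = 0.5                   # copy of the motor command that was issued
predicted_outcome = forward_model(efference_copy)

# Pre-diction: intended vs. predicted outcome, available before any feedback
# (the comparison whose violation has been linked to the Ne/ERN).
prediction_error = predicted_outcome - intended_outcome        # 0.4 -> "will miss"

# Post-diction: intended vs. actually observed outcome, available only once
# feedback arrives (the comparison linked to the FRN); here "modulated by
# prediction" is read as: only the part not already predicted remains.
actual_outcome = 0.6
postdiction_error = (actual_outcome - intended_outcome) - prediction_error  # 0.2
```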
The efference copy on which the forward model bases its prediction originates from a so-called inverse model (Kawato, 1999; Wolpert, Miall, & Kawato, 1998). The inverse model selects motor commands that will produce a certain desired change in state or action outcome (in our biathlon example, hitting the target). Importantly, inverse and forward models need to be trained with respect to a certain task, and they learn differently (Cisek, 2005; Jordan & Rumelhart, 1992). Especially early in the learning phase, the inverse model can be inaccurate, sending out motor commands that ultimately fail to produce the desired action outcome. In addition, unsystematic fluctuations or environmental changes can affect the output. The forward model receives these erroneous efferences (in the form of the efference copy) and can predict the resulting failure to achieve the intended action outcome. Moreover, it continuously receives information and can update its prediction even after the sensory information inflow to the inverse model has terminated.
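The division of labour just described, and the sense in which the two models learn differently, can be illustrated with a toy scalar simulation; the linear models, the learning rule, and all parameters below are assumptions chosen only to make the logic explicit, with the inverse-model update merely echoing the distal-teacher idea of Jordan and Rumelhart (1992).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy set-up: a motor command is scaled by an unknown "plant" gain of 2.0 and
# perturbed by unsystematic fluctuations; all gains and rates are illustrative.
def plant(command):
    return 2.0 * command + rng.normal(scale=0.05)

w_forward = 0.1   # forward model: efference copy -> predicted outcome
w_inverse = 0.1   # inverse model: desired outcome -> motor command
lr = 0.05

for trial in range(500):
    desired = 1.0                        # intended outcome (hit the target)
    command = w_inverse * desired        # inverse model issues a motor command
    predicted = w_forward * command      # forward model, fed by the efference copy
    outcome = plant(command)             # actual result, known only after feedback

    # The forward model has a direct teacher: the discrepancy between the
    # predicted and the observed outcome (a simple gradient step on squared error).
    w_forward += lr * (outcome - predicted) * command

    # The inverse model has no direct teacher for its command; the outcome error
    # is passed back through the current forward model (distal-teacher style).
    w_inverse += lr * (desired - outcome) * w_forward * desired
```

In this toy loop the forward model improves from every trial's feedback, whereas the inverse model improves only via the forward model's current estimate; early in learning its commands therefore miss the target (here, any gain far from 0.5), and once the forward model is reasonably accurate the upcoming miss can be anticipated from the efference copy alone, before feedback arrives.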