We compared human performance to that of an ideal movement planner that maximizes expected gain. The model is a generalization of the model introduced by Trommershäuser et al. (2003). It determines which visuomotor strategy \(S\), resulting in mean movement endpoint \((\mu_x, \mu_y)\), yields the highest expected gain, taking into account the stimulus configuration, the associated gains and penalties, and the task-relevant endpoint variability.
Here we compute optimal performance under the assumption that movement endpoints \(\mathbf{r} = (x, y)\) are distributed according to a bivariate Gaussian distribution,

\[
p(\mathbf{r} \mid \boldsymbol{\mu}, \Sigma) = \frac{1}{2\pi\,\lvert\Sigma\rvert^{1/2}} \exp\!\left(-\tfrac{1}{2}\,(\mathbf{r} - \boldsymbol{\mu})^{\mathsf{T}} \Sigma^{-1} (\mathbf{r} - \boldsymbol{\mu})\right), \tag{1}
\]

where \(\boldsymbol{\mu} = (\mu_x, \mu_y)\) and \(\lvert\Sigma\rvert\) are the mean and the determinant of the covariance matrix of the movement endpoint distribution on the screen, respectively. Movement endpoints are thus scattered around the mean endpoint \(\boldsymbol{\mu}\) according to a bivariate Gaussian distribution, and \(\lambda_1\) and \(\lambda_2\) are the eigenvalues of the covariance matrix, whose square roots are the standard deviations along the direction of largest endpoint error and along the direction orthogonal to it (see also the Anisotropy of endpoint error section).
We will refer to \(\boldsymbol{\mu}\) as the "aim point." In our task, we find that the covariance matrices of the endpoint error distributions differ across target locations and target-to-penalty orientations (Figures 6, 7, 10, and S2). Therefore, the probability of hitting a specific region \(R_i\) (\(i = 1, 2\)), that is, the probability of hitting inside the target (\(R_1\)) or the penalty disk (\(R_2\)), or both, when aiming for \(\boldsymbol{\mu}\), varies for each target location \(t\) (\(t = 0, \ldots, 7\)) and is defined by

\[
P(R_i \mid S) = \int_{R_i} p(\mathbf{r} \mid \boldsymbol{\mu}_t, \Sigma_t)\, d\mathbf{r}. \tag{2}
\]
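As an illustration, the hit probability in Equation 2 can be approximated by Monte Carlo sampling from the fitted endpoint distribution. The following Python sketch is not part of the original analysis; the region radius, region centers, and covariance values are hypothetical placeholders chosen only for demonstration.

```python
# Illustrative sketch (not from the original study): Monte Carlo estimate of
# Equation 2 for circular target and penalty regions. The radius, centers,
# and covariance below are hypothetical placeholder values.
import numpy as np

rng = np.random.default_rng(0)

def hit_probability(aim_point, region_center, region_radius, cov, n_samples=200_000):
    """Estimate P(R_i | S): the probability that an endpoint drawn from a
    bivariate Gaussian centered on `aim_point` with covariance `cov`
    lands inside a circular region."""
    endpoints = rng.multivariate_normal(aim_point, cov, size=n_samples)
    dist = np.linalg.norm(endpoints - np.asarray(region_center), axis=1)
    return float(np.mean(dist <= region_radius))

# Hypothetical example: anisotropic endpoint variability, target at the
# origin, penalty disk displaced 9 mm to the left, both with radius 9 mm.
cov = np.array([[4.0, 1.0],
                [1.0, 9.0]])  # mm^2, assumed covariance
p_target = hit_probability(aim_point=[0.0, 0.0], region_center=[0.0, 0.0],
                           region_radius=9.0, cov=cov)
p_penalty = hit_probability(aim_point=[0.0, 0.0], region_center=[-9.0, 0.0],
                            region_radius=9.0, cov=cov)
print(p_target, p_penalty)
```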
In other words, the choice of aim point \(\boldsymbol{\mu}_t = (\mu_{x,t}, \mu_{y,t})\) determines the probability \(P(R_i \mid S)\) (\(i = 1, 2\)) of hitting regions \(R_i\). Here we used a single target region (gain \(G_1 = 1\)) and a single penalty region (gain \(G_2 = 0\) or \(-2\)) per trial. The expected gain of aiming at \(\boldsymbol{\mu}_t\) is then defined by

\[
\mathrm{EG}(S) = \sum_{i=1}^{2} G_i\, P(R_i \mid S). \tag{3}
\]
When aiming at target location \(t\), the optimal movement strategy is to aim at \(\boldsymbol{\mu}_t^* = (\mu_{x,t}^*, \mu_{y,t}^*)\), the aim point maximizing Equation 3. (The asterisks indicate optimal strategies.) When the penalty is zero, the optimal aim point is the center of the target region. For nonzero penalties, the optimal aim point shifts away from the penalty region and therefore away from the target center. The predicted optimal shift is larger in the direction of larger endpoint variability, as we show next.
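To make the optimization concrete, the following Python sketch (not from the original study) searches numerically for the aim point that maximizes the expected gain in Equation 3. Here the hit probabilities are computed by grid integration of the Gaussian density rather than by Monte Carlo sampling, and the region geometry, gains, and covariance are hypothetical placeholders; the original analysis may have used a different numerical procedure.

```python
# Illustrative sketch (not from the original study): numerical search for the
# aim point that maximizes expected gain (Equation 3). Region geometry, gains,
# and covariance are hypothetical placeholder values.
import numpy as np
from scipy.stats import multivariate_normal

def hit_probability(aim, center, radius, cov, grid_step=0.25):
    """Approximate P(R_i | S) by integrating the bivariate Gaussian density
    over a circular region on a fine spatial grid."""
    xs = np.arange(center[0] - radius, center[0] + radius + grid_step, grid_step)
    ys = np.arange(center[1] - radius, center[1] + radius + grid_step, grid_step)
    gx, gy = np.meshgrid(xs, ys)
    inside = (gx - center[0])**2 + (gy - center[1])**2 <= radius**2
    density = multivariate_normal(mean=aim, cov=cov).pdf(np.dstack((gx, gy)))
    return float(np.sum(density[inside]) * grid_step**2)

def expected_gain(aim, cov, target_center, penalty_center, radius,
                  g_target=1.0, g_penalty=-2.0):
    """Equation 3: expected gain of aiming at `aim` (target gain 1, penalty -2)."""
    return (g_target * hit_probability(aim, target_center, radius, cov)
            + g_penalty * hit_probability(aim, penalty_center, radius, cov))

# Hypothetical configuration: target at the origin, penalty disk 9 mm to the
# left, disk radius 9 mm, anisotropic endpoint covariance (mm^2).
cov = np.array([[4.0, 1.0],
                [1.0, 9.0]])
target, penalty, radius = np.array([0.0, 0.0]), np.array([-9.0, 0.0]), 9.0

# Grid search over candidate aim points; the optimum shifts away from the
# penalty region, away from the target center.
candidates = [np.array([dx, dy]) for dx in np.arange(-2.0, 8.1, 0.5)
              for dy in np.arange(-5.0, 5.1, 0.5)]
best = max(candidates, key=lambda a: expected_gain(a, cov, target, penalty, radius))
print("optimal aim point:", best,
      "expected gain:", round(expected_gain(best, cov, target, penalty, radius), 3))
```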