Abstract
The visuomotor system is highly optimised for visually guided reaching in simplified tasks that capture real-world problems (Battaglia & Schrater, 2007; Dean, Wu, & Maloney, 2007; Trommershäuser et al., 2005; Faisal & Wolpert, 2009). However, in typical whole-body movements, the number of task-relevant visual cues and motoric degrees of freedom increases dramatically. Can our visuomotor system still account for these complexities, or does visuomotor efficiency break down under more realistic circumstances? To test this, we asked 15 participants to 'pop' water balloons with a hand-held laser in virtual reality as the balloons fell from two chutes, one to their left and one to their right. Participants could move freely between the chutes to hit as many balloons as possible. The probability of intercepting a balloon at a chute increased as participants moved closer to it. However, balloon values differed between the two chutes, so the more valuable chute yielded more points at the same viewing distance (value conditions: 1:1, 1:2, 1:3 points). Participants therefore had to stand somewhere that optimally traded off balloon value against their own interception probability. Prior to the main task, participants intercepted balloons whilst standing at fixed locations across the game area, allowing us to interpolate, for each individual, how hit probability at each chute varied with location. Based on these measures, we computed where each participant should stand to maximise their score under each balloon value condition (ideal observer prediction). Participant behaviour was close to ideal in all conditions, with a slight tendency to stand too close to the less valuable chute for score maximisation. Interestingly, even in the 1:1 condition, where the chutes had equal value, participants positioned themselves optimally to account for idiosyncrasies in their own interception performance.
Thus, when positioning the body for action, the visuomotor system optimally accounts for idiosyncratic biases in visuomotor execution and for cost factors in the environment.
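The ideal observer prediction described above can be sketched in a few lines: for each candidate standing position, expected score is the value-weighted sum of the hit probabilities at the two chutes, and the ideal position is the one maximising that sum. The sketch below is hypothetical, not the authors' code; the Gaussian hit-probability falloff and the chute positions are illustrative assumptions (in the study, per-participant hit-probability curves were interpolated from the fixed-location pre-test).

```python
import numpy as np

# Hypothetical illustration of the ideal-observer position computation.
LEFT_CHUTE, RIGHT_CHUTE = -1.0, 1.0  # assumed chute positions (arbitrary units)

def hit_prob(x, chute_x, sigma=1.2):
    # Assumed Gaussian falloff: interception gets less likely with distance.
    return np.exp(-(x - chute_x) ** 2 / (2 * sigma ** 2))

def ideal_position(v_left, v_right):
    # Expected score at standing position x:
    #   E[score | x] = v_left * p_left(x) + v_right * p_right(x)
    xs = np.linspace(LEFT_CHUTE, RIGHT_CHUTE, 1001)
    expected = (v_left * hit_prob(xs, LEFT_CHUTE)
                + v_right * hit_prob(xs, RIGHT_CHUTE))
    return xs[np.argmax(expected)]

# As the right chute's value grows (1:1, 1:2, 1:3), the ideal standing
# position shifts from the midpoint toward the more valuable chute.
for v_left, v_right in [(1, 1), (1, 2), (1, 3)]:
    print(f"{v_left}:{v_right} -> stand at x = {ideal_position(v_left, v_right):+.2f}")
```

With equal values and symmetric hit-probability curves the predicted position is the midpoint; skewing the values shifts it toward the more valuable chute, mirroring the trade-off participants faced.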
Meeting abstract presented at VSS 2018