Abstract
In a previous study of rapid goal-directed reaches performed with brief visual samples, we observed that visual information gathered by the dominant eye (compared with the non-dominant eye) is sufficient to support online trajectory amendments. However, it is not clear whether this dominant eye advantage arises from perceptual or sensorimotor processes. To address this question, we asked participants to make perceptual judgments about their endpoint accuracy based on a very brief visual sample provided during the movement. If the dominant eye advantage for online control is perceptual in nature, then judgments should be more accurate with dominant than with non-dominant monocular information. In contrast, comparable judgments across vision conditions would point to a sensorimotor explanation. Participants (n = 12) performed 30 cm reaching movements to a target while wearing liquid crystal goggles. During each movement, the goggles provided monocular dominant, monocular non-dominant, or binocular vision for 20 ms. After each movement, participants reported whether their endpoint had undershot or overshot the target. A 2 Judgment (undershoot vs. overshoot) × 3 Vision condition (monocular dominant, monocular non-dominant, binocular) repeated-measures ANOVA was conducted on mean movement endpoint and endpoint variability. A main effect of judgment was observed for mean endpoint location, indicating that participants were able to use the 20 ms of vision to judge whether their limb was about to yield a shorter or longer movement amplitude. Trials judged as overshoots also yielded larger variable error than trials judged as undershoots. However, these effects held across all vision conditions, which did not differ from one another. Thus, judgments of target undershoot vs. overshoot can be made from brief monocular dominant, monocular non-dominant, and binocular samples alike. Consequently, the monocular dominant eye advantage for online trajectory amendments requires further investigation.
Meeting abstract presented at VSS 2016
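For readers wishing to see how the 2 Judgment × 3 Vision repeated-measures design described above could be analyzed, the sketch below illustrates one possible approach. It is not the authors' analysis code: the data are simulated, the column names (participant, judgment, vision, endpoint_cm) are hypothetical, and statsmodels' AnovaRM is used here only as a stand-in for whichever software was actually employed.

```python
# Minimal sketch of a 2 (Judgment) x 3 (Vision) repeated-measures ANOVA
# on simulated per-participant cell means; all names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
participants = range(1, 13)                                    # n = 12, as in the abstract
judgments = ["undershoot", "overshoot"]                        # 2-level within-subject factor
visions = ["mono_dominant", "mono_nondominant", "binocular"]   # 3-level within-subject factor

rows = []
for p in participants:
    for j in judgments:
        for v in visions:
            # Simulated mean endpoint amplitude (cm) for each cell; overshoot
            # trials are given a longer amplitude so a Judgment effect appears.
            endpoint = 30 + (1.5 if j == "overshoot" else -1.5) + rng.normal(0, 0.5)
            rows.append({"participant": p, "judgment": j, "vision": v,
                         "endpoint_cm": endpoint})

data = pd.DataFrame(rows)

# Fit the 2 x 3 repeated-measures ANOVA on mean endpoint location.
result = AnovaRM(data, depvar="endpoint_cm", subject="participant",
                 within=["judgment", "vision"]).fit()
print(result)
```

Under this simulation, only the Judgment main effect should reach significance, mirroring the pattern reported in the abstract; the same model could be refit with an endpoint-variability measure as the dependent variable.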