Abstract
Antipointing (i.e., reaching mirror-symmetrical to a stimulus) requires top-down executive control to inhibit a prepotent response (i.e., response suppression) and remap a target's visual coordinates (a 180° spatial transformation: i.e., visual vector inversion). Notably, antipointing displays an under- and overshooting bias for responses in the left and right visual fields, respectively. This visual-field-specific endpoint bias demonstrates that antipointing is mediated via the same relative visual cues as perceptual judgments (i.e., via ventral visuoperceptual networks) (Maraj and Heath 2010: Exp Brain Res). It is, however, important to recognize that other reaching responses involving decoupled stimulus-response (SR) relations (i.e., reaching to a spatial location parallel to a target) have been shown to be mediated via absolute visual information (Thaler and Goodale 2011: Front Hum Neurosci). The present work sought to determine whether the top-down demands of SR decoupling (i.e., vector inversion and parallel remapping of target coordinates) are sensitive to target-based perceptual asymmetries. Participants performed target-directed (i.e., propointing) and antipointing movements to targets in left and right space, and responses were completed in conditions wherein the movement and target vectors were overlapping or parallel (i.e., in the parallel condition the target was located 10 cm above the required movement vector). Importantly, for both overlapping and parallel conditions participants fixated on a central cross and performed reaching movements along the same horizontal axis to ensure that responses were biomechanically equivalent. Results indicated that reaction time and endpoint variability were greater for the antipointing and parallel reaching conditions. Most interestingly, antipointing, but not propointing, displayed a visual-field-specific pattern of endpoint bias regardless of whether responses were completed in overlapping or parallel conditions. Thus, results demonstrate that the top-down demands of vector inversion result in a movement plan that is supported via relative, perception-based visual information.
Meeting abstract presented at VSS 2017