To fully understand the integration of visual and motor processing, it is essential to know how retinocentric information is transformed into a body-centric frame of reference. This transformation allows organisms to use visual information about external objects to accurately perform motor tasks such as reaching and pointing. It is not required for tasks that involve making relative judgements about two stimuli in the environment.
Transformation of retinocentric information into a body-centric frame of reference requires intersensory integration, in which retinal inputs are combined with inputs from other senses such as proprioception (Harris, 1994; Lackner & Levine, 1979; Roll, Roll, & Velay, 1991). Proprioception informs about the movement and position of body parts. It is required for visually guided action because it specifies the position of the eyes relative to the head and the position of the head relative to the body (in fact, a change in the position of any body part is relevant to visually guided action if it also displaces the eyes relative to the environment; see Roll et al., 1991). A head turn that goes unaccounted for may result in misreaching toward an object.
The integration of visual and proprioceptive information discussed above is nonredundant: Proprioception does not inform about the position of visual objects, and we do not depend on vision for information about the position of the head relative to the body. The two senses are linked in a system that requires both components to produce an output such as the position of a visual object in the body-centric frame of reference. Howard (1997) called this kind of intersensory system a nested sensory system.1 One can test whether a sensory signal is part of a nested system for a particular task by disrupting that signal and observing the effect on task performance.
Proprioceptive signals can be disrupted using muscle vibration to investigate the role of neck proprioception in visually guided action. Vibration selectively activates muscle spindle receptors, and this induces an illusory sensation of body-part movement that is consistent with the extension of that muscle (Goodwin, McCloskey, & Matthews, 1972). When applied to the dorsolateral muscles in the neck, vibration induces a visual illusion in which observers perceive a stationary visual target as moving and displacing toward the hemifield contralateral to the vibration site (Biguer, Donaldson, Hein, & Jeannerod, 1988; Karnath et al., 1994; Lackner & Levine, 1979; Seizova-Cajic, Sachtler, & Curthoys, 2006; Strupp, Arbusow, Borges Pereira, Dieterich, & Brand, 1999; Taylor & McCloskey, 1991). This effect is a close relative of the oculogyral illusion, illusory motion seen as a consequence of vestibular stimulation (see Graybiel & Hupp, 1946). A parsimonious explanation views both illusions as products of the spatial constancy mechanism usually responsible for accurate perception of position and motion in the body-centric frame of reference (Lackner & Levine, 1979). Both neck proprioception and vestibular signals inform about head movement, and an illusory signal about this event creates an illusory end percept in vision (motion and displacement of the target).
The spatial constancy account of the vibration-induced visual illusion is robust: Similar displacing effects can be induced by vibrating extraocular muscles (Velay, Roll, Lennerstrand, & Roll, 1994), and the effects of neck vibration on localization generalize to the auditory domain (Lewald, Karnath, & Ehrenstein, 1999). Neck muscle vibration also induces monkeys to make systematic errors on a task in which they have been trained to make saccades to the remembered location of a target (Corneil & Andersen, 2004).
A puzzling result of vibration studies is that the illusory effect is absent or much reduced in an illuminated environment with rich visual cues. This was shown using subjective reports (Biguer et al., 1988; Velay et al., 1994), as well as quantitative measures of perceived motion (Seizova-Cajic & Sachtler, 2007, their Figure 6B). This finding poses a serious challenge to the proposed spatial constancy explanation, because a rich visual context is the normal context in which sensorimotor systems operate. The spatial constancy explanation holds that neck proprioception is a necessary component of a nested sensory system concerned with the body-centric position of visual targets, because no other cue can indicate where the head is relative to the body (the only exception is when we have vision of the body itself, but we can still reach accurately when we do not see our body). If neck proprioception is necessary, as we argue, then it should not be suppressed in the normal, fully illuminated operating environment.
This was our rationale, and our goal was to provide evidence that, even with full visual cues, neck proprioception influences the perceived body-centric position used to guide action. We used vibration of the dorsal neck muscles to bias the registered head position and, with it, the perceived position of visual objects. Observers attempted to point at a visual target with an unseen hand. In the critical condition, a relatively rich visual field was presented, and the question was whether the vibration-induced proprioceptive signal would affect pointing.