Joseph DeSouza, Aaron Kucyi, Laura Pynn, Cecilia Jobst, Paula Di Noto, Gerry Keith, Uta Wolfe; A multisensory visuotactile illusion induced by monocular occlusion with a black contact lens does not depend on touch signals on the face: Evidence from behavioural and modelling studies. Journal of Vision 2011;11(11):780. doi: 10.1167/11.11.780.
Neural integration of different sensory modalities provides a meaningful, unified representation of the world. However, conflict between modalities can cause illusory perceptions when inputs are sufficiently incongruent. As shown previously (Wolfe & Carpinella, 2008, ECVP), monocular blindness induced by an occluder contact lens, in the absence of congruent tactile input, causes ipsilateral facial paresthesias and, in some cases, neglect-like symptoms. The strength and spatial extent of the effect are greater when the dominant (rather than the non-dominant) eye is occluded. More recent work (Jobst et al., 2010, SfN) further shows that everyday experience modulates the strength of the effect: non-contact-lens wearers report larger facial areas of paresthesias than contact-lens wearers. The paresthesias are experienced without any corresponding elevation of the tactile detection threshold, as tested with an aesthesiometer (Di Noto & DeSouza, 2010, SfN). Consistent with findings by Wolfe and colleagues (2007, P&P), in all studies paresthesias were found mainly ipsilateral to the occlusion and were accompanied by an illusory ipsilateral eyelid droop. We developed a computational model of this illusion that receives inputs from both eyes and a somatosensory signal from the face. We trained the network to make gaze shifts to visual and somatosensory targets. After the network was trained, we removed the input from one eye to model the effect of an occluder lens. We discovered that the network could still make gaze shifts, but that the signals from the hidden- and output-layer units to the space ipsilateral to the occlusion were less efficient than signals to the contralateral space. Underlying mechanisms may include top-down signaling from bimodal visuo-tactile brain regions to somatosensory areas and/or bottom-up signaling from the superior colliculus and related structures.
Our results demonstrate that congruent inputs from visual, somatosensory, and proprioceptive modalities are necessary for the unified interpretation and efficient navigation of peripersonal space.
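The train-then-occlude manipulation described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual network: as an assumption, it stands in a simple linear readout (trained by gradient descent) for the hidden and output layers, with three redundant input channels (left eye, right eye, facial touch) that each report a gaze-target position with independent noise. After training, zeroing one "eye" channel models the occluder lens, and the resulting rise in gaze error parallels the degraded performance the model showed after occlusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: horizontal gaze targets (arbitrary "degrees").
targets = rng.uniform(-10, 10, size=(500, 1))

# Redundant sensory channels (an assumption for this sketch):
# left eye, right eye, and facial touch each report the target
# position corrupted by independent Gaussian noise.
noise = lambda: rng.normal(0.0, 0.5, size=targets.shape)
X = np.hstack([targets + noise(), targets + noise(), targets + noise()])

# Linear readout trained by gradient descent on squared gaze error.
w = np.zeros((3, 1))
b = 0.0
lr = 1e-3
losses = []
for _ in range(2000):
    pred = X @ w + b
    err = pred - targets
    losses.append(float(np.mean(err ** 2)))
    w -= lr * (X.T @ err) / len(X)
    b -= lr * float(np.mean(err))

# "Occlude" one eye after training by zeroing its channel,
# analogous to removing one eye's input from the trained network.
X_occ = X.copy()
X_occ[:, 1] = 0.0

mse_full = float(np.mean((X @ w + b - targets) ** 2))
mse_occ = float(np.mean((X_occ @ w + b - targets) ** 2))
print(f"gaze MSE, both eyes: {mse_full:.3f}")
print(f"gaze MSE, one eye occluded: {mse_occ:.3f}")
```

Because the trained weights spread responsibility across all three congruent channels, silencing one channel at test time leaves the readout systematically undershooting eccentric targets; gaze shifts are still produced, but with higher error, mirroring the abstract's finding that the occluded network remained functional yet less efficient.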