Abstract
Body ownership can be modulated through illusory visual–tactile integration (Botvinick and Cohen, 1998) or by observing mirror reflections of one's own motor actions (Gonzalez-Franco et al., 2010). Illusory ownership of an invisible body can also be induced by illusory visual–tactile integration (Guterstam et al., 2015). However, this method does not let observers perceive the shape or actions of their own body. We aimed to develop a method for perceiving the shape and actions of one's own invisible body through real-time visual motion of the hands and feet contingent on the observer's actions, and to evaluate its effect on perception. Twenty participants observed left and right white gloves and socks 2 m in front of them in a virtual room through a head-mounted display (Oculus Rift DK2, 90×110 deg). They wore white gloves and socks before the experiment and answered 8 questions on 7-point scales after 5 min of observation with voluntary actions. In half of the trials, the participants' actions were captured by a motion sensor (Kinect2), and the hands and feet in the virtual environment moved contingently with their own actions. In the remaining trials, the hands and feet were virtually attached to another person and therefore moved independently of the participant. We found that participants rated the perception of their own invisible body between the hands and feet significantly higher in the vision–action contingent condition than in the independent condition. Thus, this phenomenon required vision–action contingency and elicited the perception of a complex shape and action of an invisible body. After the invisible-body experience, we presented a knife to threaten the participants. Participants avoided the knife more often in the contingent condition, but the difference was not statistically significant. These results suggest that we can perceive our own invisible body, completing the unseen body parts from only the visible hands and feet, when their motions are contingent on our own actions.
Meeting abstract presented at VSS 2016