Abstract
There are many definitions of ‘up’. Body orientation, gravity and vision each provide their own estimate, but these are normally combined into a single percept. Often the cues coincide, as when standing in a well-lit environment. But what happens when they disagree? Does one dominate? Or do they all contribute to an average? We examined the contributions of body orientation, gravity and visual cues to ‘up’ perception when these cues were not in agreement. The perception of 3D shape from 2D shading served as an indirect measure of the perception of ‘up’, as light is normally assumed to come from above in the absence of illumination cues.
Observers were (i) sitting upright in an upright room, (ii) lying on their side in an upright room, (iii) sitting upright in a room tilted 90°, or (iv) lying on their side in a room tilted 90°. Stimuli were shown on a grey laptop screen, with the keyboard in its normal configuration relative to the body; the screen was surrounded by a clearly visible room. Each stimulus was a 2D disc shaded from black to white. Each trial started with the disc's shading axis randomly aligned. Observers rotated the disc until it appeared ‘most convex’.
The pattern of responses indicated that the perceived direction of ‘up’ is influenced by the direction of gravity, the orientation of the body and the orientation of the visual frame. The judgements were modelled as a weighted sum of vectors corresponding to the orientations of the body, gravity and the surrounding visual polarity. These data illustrate how the brain can resolve a common dilemma: how to deal with many sources providing normally redundant information about a single parameter. Knowing the relative weighting of these factors may be helpful in predicting performance on other related tasks, such as balancing, orienting or navigating in normal or unusual environments.
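The weighted-vector-sum model described above can be sketched as follows. Each cue (body, gravity, visual frame) is treated as a unit vector pointing in that cue's ‘up’ direction, scaled by a weight, and the perceived ‘up’ is the direction of the resultant. The function name and the weights used here are illustrative assumptions, not the fitted values from the study.

```python
import math

def perceived_up(cue_angles_deg, weights):
    """Weighted vector sum of orientation cues.

    cue_angles_deg: 'up' direction indicated by each cue, in degrees
                    (0 = gravitational up).
    weights:        relative weight of each cue.
    Returns the direction (degrees) of the resultant vector.
    """
    x = sum(w * math.cos(math.radians(a))
            for a, w in zip(cue_angles_deg, weights))
    y = sum(w * math.sin(math.radians(a))
            for a, w in zip(cue_angles_deg, weights))
    return math.degrees(math.atan2(y, x))

# Body tilted 90 deg (observer lying on their side), gravity and the
# visual frame upright; hypothetical weights (body 0.25, gravity 0.5,
# vision 0.25). The resultant is pulled slightly toward the body axis.
tilt = perceived_up([90.0, 0.0, 0.0], [0.25, 0.5, 0.25])  # ~18.4 deg
```

When all three cues agree the resultant simply points at the shared direction; when they conflict, the perceived ‘up’ falls between them in proportion to the weights.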