How people interpret the light reaching their eyes is influenced by the assumptions that they make about the environment. For instance, when the motion parallax that people create by moving around is used to judge objects' shapes, it is assumed that the objects themselves are static and rigid. Similarly, when people rely on shading to judge an object's shape, they are making assumptions about the reflectance of the object's surface and about the illumination. Without such assumptions the possibilities for interpreting visual stimulation would be very limited, so it makes sense to accept assumptions that are seldom violated in daily life. And indeed, people readily assume that surfaces' textures are homogeneous or isotropic in order to judge their slants (Knill,
2003) and that the illumination is from above in order to distinguish bumps on the surface from dents in the surface (Mamassian & Goutcher,
2001), in accordance with the texture of many natural surfaces being isotropic and the illumination usually being from above.
When an assumption is more likely to be violated, it is less certain that the assumption should be accepted, so the likelihood that it is correct to accept it must somehow be considered. This likelihood is expressed in the extent to which people rely on information based on an assumption to make a certain judgement, in comparison with alternative sources of information that do not rely on the same assumption, and in comparison with prior knowledge of the likelihood of certain judgements being true. These comparisons can readily be modelled within a Bayesian framework (for a review see Kersten, Mamassian, & Yuille,
2004). Doing so allows one to attribute biases in people's judgments to priors that are directly related to the assumptions that people make. This method can reveal less self-evident assumptions, such as that motion is likely to be slow (Weiss, Simoncelli, & Adelson,
2002).
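The way a prior can bias a judgement within such a framework can be sketched with the standard conjugate-Gaussian case: the posterior mean is a precision-weighted average of the prior mean and the observation, so an unreliable observation is pulled towards the prior. The code below illustrates this for a "slow motion" prior of the kind proposed by Weiss et al.; the model form and all numbers are illustrative assumptions, not values from the cited studies.

```python
# Minimal sketch of how a Gaussian prior biases a Bayesian estimate.
# Conjugate Gaussian prior and likelihood; parameters are illustrative
# assumptions, not taken from the cited studies.

def posterior_gaussian(prior_mean, prior_sd, obs, obs_sd):
    """Combine a Gaussian prior with a Gaussian observation.

    Returns the posterior mean and standard deviation. The posterior
    mean is a precision-weighted average of prior mean and observation.
    """
    wp = 1.0 / prior_sd ** 2          # precision of the prior
    wl = 1.0 / obs_sd ** 2            # precision of the observation
    mean = (wp * prior_mean + wl * obs) / (wp + wl)
    sd = (wp + wl) ** -0.5
    return mean, sd

# A "slow motion" prior: speeds near zero are a priori more likely.
# A sensed speed of 10 deg/s is pulled towards zero, and the pull is
# stronger when the measurement is less reliable (larger obs_sd).
reliable, _ = posterior_gaussian(0.0, prior_sd=8.0, obs=10.0, obs_sd=2.0)
noisy, _ = posterior_gaussian(0.0, prior_sd=8.0, obs=10.0, obs_sd=8.0)
print(round(reliable, 2))  # close to 10: the measurement dominates
print(round(noisy, 2))     # pulled halfway towards zero
```

The same arithmetic applies to any of the priors discussed above; only the variable being estimated and the centre of the prior change.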
The extent to which one relies on various assumptions presumably arises through interactions with the environment, during development and through evolution, just as various other aspects of the visual system develop or evolve to suit the environment (e.g., for colour see Brenner & Cornelissen,
2005; Párraga, Troscianko, & Tolhurst,
2002; Purves, Lotto, Williams, Nundy, & Yang,
2001). Thus one may expect to find priors that are related to one's interactions with the environment as well as ones related to the statistics of the environment itself. An obvious example is the viewpoint from which an object is seen. A circular outline in the distance is more likely to be judged to originate from a sphere than from a rod oriented exactly along one's line of sight, despite the fact that both would give this outline. The reason is clear: if it is a rod it is quite unlikely that it should be seen from exactly this angle. Thus even if it is just as likely to find a rod in that place as it is to find a sphere, the likelihood that the image is caused by a rod is smaller. The assumption is that one is not looking at the object from a special position.
In a recent study, moving targets were judged to be too near to where the observer was looking (Brenner, van Beers, Rotman, & Smeets,
2006). The bias was only along the target's path. This suggested that people are biased towards believing that they are looking at what they see. The reasoning that was presented was similar to that given above: people are more likely to see something if their eyes are directed towards it, so if they saw something it is likely to have been close to where they were looking. A bias towards localising targets near where one is looking is equivalent to a bias towards small eccentricities. In this case the origin of the bias is not to be found in the statistics of the scenes that we encounter in daily life. Its origin lies in the way in which visual information is processed within the eye and brain, with most neuronal resources being devoted to a small area on the retina, and eye movements directing this part of the retina (the fovea) towards selected parts of the scene. In the present study we demonstrate that people have such a bias.
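Such an eccentricity prior would also explain why the bias appeared only along the target's path: a prior centred on the point of gaze mainly shifts the estimate along whichever dimension the sensed position is most uncertain. The sketch below combines an isotropic fixation-centred Gaussian prior with a sensed position whose uncertainty is large along the path and small across it; all parameters are illustrative assumptions, not estimates from the experiment.

```python
# Sketch of why a prior for small retinal eccentricity mainly biases
# judgements along the most uncertain dimension. An isotropic 2-D
# Gaussian prior centred on gaze is combined axis-by-axis with a
# Gaussian sensed position; parameters are illustrative assumptions.

def shrink(gaze, sensed, prior_sd, obs_sd):
    """Posterior mean on one axis: precision-weighted average."""
    wp, wl = 1.0 / prior_sd ** 2, 1.0 / obs_sd ** 2
    return (wp * gaze + wl * sensed) / (wp + wl)

gaze = (0.0, 0.0)        # point of gaze (deg), prior centre
sensed = (6.0, 6.0)      # sensed target position (deg)
prior_sd = 6.0           # isotropic eccentricity prior
sd_along, sd_across = 6.0, 1.0   # temporal uncertainty acts along the path

est_along = shrink(gaze[0], sensed[0], prior_sd, sd_along)
est_across = shrink(gaze[1], sensed[1], prior_sd, sd_across)
print(est_along, est_across)  # large shift along the path, little across it
```

With these numbers the estimate along the path is pulled halfway towards gaze, while across the path it barely moves, mirroring the pattern of bias reported in the 2006 study.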
Since we do not expect a strong bias towards small retinal eccentricities, we can only expect such a bias to become apparent when there is considerable uncertainty about targets' positions. We did not want to achieve such uncertainty by using targets that are difficult to detect, because it is obvious that detection thresholds increase with eccentricity. We could have tried to correct for differences in detection across the retina, but that would require knowledge of the particular aspect that is limiting for our task (Raninen, Franssila, & Rovamo,
1991), and we could never be certain that the bias that we are looking for did not influence the tasks on which the correction is based. We therefore used clearly visible targets, and introduced temporal rather than spatial uncertainty about the target's position. Our experiment is somewhat similar to Murakami's (
2001) experiment in which he asked subjects to indicate whether a flash was to the left or to the right of a jumping target. He found that people systematically related the position of the flash to a slightly later position of the target. We asked subjects to indicate the position at which a jumping target had been at the moment of a flash or tone, so we expect to find a similar systematic temporal error. Our main interest, however, was whether we would also find a spatial bias.