Abstract
Motion extrapolation for multiple targets across the visual periphery is a necessary skill in many dynamic environments. For example, approaching a street intersection requires predicting the motion of multiple road users separated by large visual angles. This skill draws on multiple cognitive capacities, such as peripheral vision, the distribution of (covert) attention, and judging times-to-contact (TTC) for multiple targets. Models that merge these types of information processing under a unified description could offer insight into how observers accomplish this. We devised a “widescreen” visual task in which two objects appear near the corners of a rectangular display (spanning 90 degrees horizontally and vertically), approach the centre, and disappear after 0.5 s. The subjects’ task was to judge the relative TTC of the objects with the centre, taking into account both target positions and speeds. The brief appearance of the objects, combined with their large visual separation, effectively prevented saccadic strategies and required the use of peripheral vision, which we verified by recording eye movements. We found differences in performance when the objects approached from corners in opposite visual hemifields compared to when both appeared within the same (left or right) hemifield. This indicates a geometry-dependent asymmetry in the integration of speed and position information across the visual field. We model performance by considering the variability of the observer's TTC estimate, which results from uncertainty in perceived position and speed and is related to visual angle. The results shed new light on the integration of motion information across the wide visual periphery.
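One standard way to formalise such a model, offered here only as a minimal sketch (the symbols $d$, $v$, $\sigma_d$, and $\sigma_v$ and the first-order propagation are illustrative assumptions, not taken from the paper), is to treat the TTC estimate as perceived distance over perceived speed and to propagate independent position and speed noise to first order:
\[
  \widehat{T} = \frac{d}{v},
  \qquad
  \sigma_{\widehat{T}}^{2} \;\approx\;
  \left(\frac{\partial \widehat{T}}{\partial d}\right)^{2}\sigma_d^{2}
  + \left(\frac{\partial \widehat{T}}{\partial v}\right)^{2}\sigma_v^{2}
  = \frac{\sigma_d^{2}}{v^{2}} + \frac{d^{2}\,\sigma_v^{2}}{v^{4}},
\]
where $\sigma_d$ and $\sigma_v$ would themselves be allowed to vary with the target's visual eccentricity, linking the predicted variability of relative TTC judgements to display geometry.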