August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Motion Extrapolation Across the Visual Periphery
Author Affiliations
  • Tero Hakala
    Finnish National Defense University, Helsinki, Finland
  • Jami Pekkanen
    University of Helsinki, Helsinki, Finland
  • Otto Lappi
    University of Helsinki, Helsinki, Finland
  • Samuel Tuhkanen
    University of Helsinki, Helsinki, Finland
  • Lauri Oksama
    University of Turku, Turku, Finland
Journal of Vision August 2023, Vol.23, 5067. doi:
      Tero Hakala, Jami Pekkanen, Otto Lappi, Samuel Tuhkanen, Lauri Oksama; Motion Extrapolation Across the Visual Periphery. Journal of Vision 2023;23(9):5067.

      © ARVO (1962-2015); The Authors (2016-present)

Motion extrapolation for multiple targets across the visual periphery is a necessary skill in many dynamic environments. For example, approaching a street intersection requires predicting the motion of multiple road users separated by large visual angles. This skill draws on multiple cognitive capacities, such as peripheral vision, distribution of (covert) attention, and judging times-to-contact (TTC) for multiple targets. Models that merge these types of information processing under a unified description could offer insight into such situations. We devised a “widescreen” visual task in which two objects appear near the corners of a rectangular display (90 degrees horizontal and vertical), approach the centre, and disappear after 0.5 s. The subjects’ task was to judge the objects’ relative TTC with the centre, taking into account both target positions and speeds. The brief presentation combined with the large visual separation effectively prevents saccadic strategies and requires the use of peripheral vision, which we verified by recording eye movements. We found differences in performance when the objects approached from corners falling into separate visual hemifields, compared to when both appeared in the same (left or right) hemifield. This indicates a geometry-dependent asymmetry in the integration of speed and position information across the visual field. We model performance by considering the variability of the observer's TTC estimate (resulting from uncertainty in position and speed perception) as a function of visual angle. The results shed new light on the integration of motion information across the wide visual periphery.
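The modelling idea described above — TTC variability driven by perceptual uncertainty in position and speed — can be sketched with first-order error propagation on TTC = distance / speed. The following is a minimal illustration, not the authors' model: the assumption that noise scales linearly with eccentricity, and the scaling constants `k_pos` and `k_speed`, are hypothetical choices for the example.

```python
import math

def ttc_estimate_sd(d, v, ecc, k_pos=0.05, k_speed=0.1):
    """First-order error propagation for TTC = d / v.

    d: distance of the target from the contact point
    v: approach speed
    ecc: visual eccentricity of the target (degrees)

    Perceptual noise on position and speed is assumed (hypothetically)
    to grow linearly with eccentricity, via the constants k_pos and
    k_speed, which are not taken from the abstract.
    """
    sigma_d = k_pos * ecc * d      # position noise, grows with eccentricity
    sigma_v = k_speed * ecc * v    # speed noise, grows with eccentricity
    # Var(TTC) ~ (dTTC/dd)^2 * sigma_d^2 + (dTTC/dv)^2 * sigma_v^2
    #          = (1/v)^2 * sigma_d^2 + (d/v^2)^2 * sigma_v^2
    var = (sigma_d / v) ** 2 + (d * sigma_v / v ** 2) ** 2
    return math.sqrt(var)

# Under these assumptions, TTC judgements become noisier for targets
# at larger eccentricities, which is one way an eccentricity-dependent
# model could produce hemifield-dependent performance differences.
```

In a relative-TTC judgement, the probability of choosing the correct target could then be modelled from the two targets' TTC distributions, with the combined variance depending on each target's eccentricity.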

