December 2022 | Volume 22, Issue 14 | Open Access
Vision Sciences Society Annual Meeting Abstract
Do the saliency features of a scene fade over time?
Author Affiliations & Notes
  • Mahboubeh Habibi
    Philipps-University Marburg
    Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
  • Brian White
    Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
  • Wolfgang Oertel
    Philipps-University Marburg
  • Douglas Munoz
    Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
    Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
  • Footnotes
    Acknowledgements: IRTG 1901
Journal of Vision, December 2022, Vol. 22, 4241.
      © ARVO (1962-2015); The Authors (2016-present)

Specific components of a scene may draw attention simply by standing out from their surroundings. Numerous studies have attempted to construct models that capture all of the variables influencing visuomotor behavior. While most of these models perform well on static images, they cannot predict all salient events in a video. In particular, the models fail when an expected but not-yet-visible incoming item directs a person's attention to a region that may still be vacant. We employed video-based eye tracking during instruction-free viewing of video clips to assess the gaze patterns of control (CTRL) subjects. We recruited 280 CTRLs of varying ages (>20 years), who sat in front of a video-based monocular eye tracker (Eyelink-1000 Plus) in a light-controlled, quiet room. A monitor-mounted camera with a 500 Hz sampling rate was used to measure eye movements. All participants viewed 10 short videos (~1 minute each), consisting of 16-17 clippets of 3-5 s duration, without further instruction. We then focused on one specific clippet that contained multiple faces on screen; as the camera panned to the left, more faces entered the scene. We assessed each subject's scan path on the screen. Simultaneously, we used a deep-gaze model to estimate the salient locations in each frame and compared them to the participants' actual gaze locations. The findings indicated not only that faces are the most prominent locations in a scene, but also that newly arriving faces attracted more attention than faces already present. The deep-gaze model predicted all faces on the screen as salient objects but could not identify which face had priority. We conclude that time is critical for developing a gaze-prediction model and that the arrival time of each feature affects attention. Additionally, the scanned history of the scene and the positions of previously presented objects affect viewing preferences.
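The abstract does not specify how gaze locations were scored against the model's saliency maps. One common metric for this kind of comparison is Normalized Scanpath Saliency (NSS): z-score the saliency map, then average its values at the fixated pixels, so that fixations on model-predicted regions yield high scores. The sketch below is a minimal illustration under that assumption; the function name `nss`, the toy saliency map, and the fixation coordinates are all hypothetical, not taken from the study.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the map, then average
    the z-values at the fixated pixel locations (row, col)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return float(np.mean(s[list(rows), list(cols)]))

# Toy example: low-amplitude noise plus one high-saliency blob,
# standing in for a face that a deep-gaze model would highlight.
rng = np.random.default_rng(0)
smap = rng.random((60, 80)) * 0.1
smap[20:30, 30:40] += 1.0           # model-predicted "face" region

on_face = [(25, 35), (22, 33)]      # fixations landing on the blob
off_face = [(5, 5), (50, 70)]       # fixations elsewhere

print(nss(smap, on_face))           # large positive score
print(nss(smap, off_face))          # near zero
```

Scoring each frame this way, separately for newly arriving versus already-present faces, would expose the temporal priority effect the abstract describes, which a static per-frame saliency map alone cannot.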

