September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual Salience Model of Active Viewing in 360° Real-World Scenes
Author Affiliations
  • Caroline Robertson
    McGovern Institute for Brain Research, MIT, Cambridge, MA; Harvard Society of Fellows, Harvard, Cambridge, MA
  • Jefferey Mentch
    McGovern Institute for Brain Research, MIT, Cambridge, MA
  • Nancy Kanwisher
    McGovern Institute for Brain Research, MIT, Cambridge, MA
Journal of Vision September 2018, Vol.18, 1200.
Vision is an active process: we typically explore our 360° visual environment through self-directed movements, such as saccades and head turns. How does active vision affect the balance of semantic-level (meaning-based) and bottom-up (feature-level) signals during natural scene viewing? Here, we tested how well a traditional multi-level salience model of eye-tracking behavior captured viewing patterns during active viewing of 360° real-world scenes. Twelve adults participated in a visual salience experiment in which 360° viewing behavior was measured using a head-mounted display (HMD) (Oculus Rift; resolution: 960x1080; field of view: ~100°; 75 Hz) and an in-headset eye-tracker (120 Hz; 5.7 ms latency; 0.5° accuracy). We developed a stimulus bank of 300 complex, real-world 360° panoramic scenes and rendered each image in a virtual-reality environment built in Unity3D. During each trial of our Study Phase (duration: 15 s), participants actively explored one novel 360° panoramic scene, using head turns to change their viewpoint as they would in a real-world environment. Participants were instructed that a "memory test" would follow the Study Phase, and they were given ample breaks throughout the experiment. Previous studies of gaze behavior, in which participants viewed static, single-frame images on a fixed display, dispute whether gaze is guided more by semantic-level or by feature-level salience (e.g., Henderson et al., 2017; Anderson et al., 2015; Itti and Koch, 2001). Our results are consistent with the hypothesis that the balance of these contributions is mediated by active viewing: gaze behavior during active viewing is more closely aligned with meaning-based salience models than has been observed in previous studies. This study provides a quantitative measurement of visual behavior during active, real-world scene viewing.
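A core step in any analysis of this kind is projecting each in-headset gaze sample onto the 360° panorama itself. The abstract does not describe the authors' pipeline; the sketch below is only an illustration of the geometry, assuming a unit gaze-direction vector expressed in a Y-up frame with -Z as the forward viewing direction, and an equirectangular panorama image:

```python
import math

def gaze_to_equirect(direction, width, height):
    """Map a unit gaze-direction vector (x, y, z) in a Y-up,
    -Z-forward frame to pixel (col, row) coordinates on an
    equirectangular 360-degree panorama of size width x height.
    Illustrative only; coordinate conventions are assumptions."""
    x, y, z = direction
    # Longitude: rotation around the vertical axis, in [-pi, pi].
    lon = math.atan2(x, -z)
    # Latitude: elevation above the horizon, in [-pi/2, pi/2].
    lat = math.asin(max(-1.0, min(1.0, y)))
    # Normalize to [0, 1], then scale to pixel indices.
    u = (lon / (2 * math.pi)) + 0.5
    v = 0.5 - (lat / math.pi)
    return int(u * (width - 1)), int(v * (height - 1))
```

With these conventions, looking straight ahead lands at the horizontal midline of the panorama, and looking straight up lands on its top row.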
Down the road, this paradigm will enable us to isolate levels of visual representation that drive atypical visual behavior in clinical populations, such as autism.
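The abstract does not specify how gaze was scored against the salience models. One standard metric for this comparison in the eye-tracking literature is Normalized Scanpath Saliency (NSS): z-score the salience map, then average its values at the fixated pixels, so positive scores indicate that fixations fall on above-average salience. A minimal sketch of that metric, not the authors' method:

```python
import numpy as np

def nss(salience_map, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored salience
    map at fixated (row, col) locations. Higher values indicate
    better alignment between the model and observed gaze."""
    s = np.asarray(salience_map, dtype=float)
    z = (s - s.mean()) / s.std()
    rows, cols = zip(*fixations)
    return z[list(rows), list(cols)].mean()
```

Computing NSS separately for a feature-level map and a meaning-based map over the same fixations gives a direct way to compare how well each level predicts active-viewing gaze.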

Meeting abstract presented at VSS 2018

