September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Understanding the time course and spatial biases of natural scene segmentation
Author Affiliations & Notes
  • Ruben Coen-Cagli
    Albert Einstein College of Medicine
  • Jonathan Vacher
    Université Paris Cité
  • Dennis Cregin
    Albert Einstein College of Medicine
  • Tringa Lecaj
    Albert Einstein College of Medicine
  • Sophie Molholm
    Albert Einstein College of Medicine
  • Pascal Mamassian
    Ecole Normale Supérieure
  • Footnotes
    Acknowledgements  This research was supported by an NIH-ANR CRCNS grant (NIH-EY031166 to R.C.C. and ANR-19-NEUC-0003 to P.M.) and by NIH grant P50 HD105352 (support for the Rose F. Kennedy IDD Research Center, S.M.).
Journal of Vision September 2024, Vol. 24, 1056. https://doi.org/10.1167/jov.24.10.1056
Abstract

Image segmentation is central to visual function, yet the human ability to parse natural scenes into individual objects or segments remains largely unexplored because it is notoriously difficult to study experimentally. We present a new experimental paradigm that overcomes this barrier. We briefly flash two dots, before and during the presentation of a natural image, and observers report whether they perceive the image regions near the two dots as belonging to the same segment or to different segments. By repeatedly sampling multiple locations on the image, we then reconstruct a perceptual probabilistic segmentation map, namely the probability that each pixel belongs to each segment. Leveraging this method, we addressed two fundamental questions. First, strong spatial biases (a preference to group together items that are close in visual space) have been revealed with synthetic stimuli, but are they part of natural vision? Our data show, unsurprisingly but for the first time, direct evidence of spatial biases in human perceptual segmentation of natural images: the probability that participants reported two regions as grouped decreased with the distance between the two dots, regardless of whether the two regions belonged to the same or different segments in the perceptual segmentation maps. Second, is perceptual segmentation of natural images fast and parallel across the visual field, or a serial, time-consuming process? A prominent theory proposes that judging whether two regions are grouped requires a gradual spread of attention between those regions, and therefore takes longer at larger distances (e.g., Jeurissen et al., 2016, eLife). Surprisingly, whereas reaction times in our task increased with distance when the two regions were judged to belong to the same segment, consistent with the theory, reaction times decreased with distance when they were judged to belong to different segments. We show that a dynamic Bayesian ideal observer model unifies these findings through the interaction between spatial biases and evidence accumulation.
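To make the paradigm concrete, below is a minimal sketch of how repeated same/different judgments at probed dot pairs could be aggregated into empirical pairwise grouping probabilities. The trial format, variable names, and data values are illustrative assumptions, not the authors' code or data; the further step of combining these pairwise estimates across many sampled locations into a full pixel-wise probabilistic segmentation map is not specified in the abstract and is omitted here.

```python
# Hypothetical trial records: each trial gives the two probed image
# locations and the observer's binary "same segment" response.
# Each trial: ((x1, y1), (x2, y2), reported_same)
trials = [
    ((12, 40), (18, 44), True),
    ((12, 40), (18, 44), True),
    ((12, 40), (70, 15), False),
    ((55, 30), (60, 33), True),
]

def pairwise_grouping_probabilities(trials):
    """Fraction of trials on which each probed pair of image
    locations was reported as belonging to the same segment."""
    counts = {}
    for p1, p2, same in trials:
        key = tuple(sorted((p1, p2)))  # the order of the two dots is irrelevant
        n_same, n_total = counts.get(key, (0, 0))
        counts[key] = (n_same + int(same), n_total + 1)
    return {key: n_same / n_total for key, (n_same, n_total) in counts.items()}

for (p1, p2), p in pairwise_grouping_probabilities(trials).items():
    print(f"P(grouped) for {p1} vs {p2}: {p:.2f}")
```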

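The closing sentence, on the interaction between spatial biases and evidence accumulation, can be illustrated with a standard bounded evidence-accumulation (drift-diffusion) process in which a distance-dependent prior sets the starting point. This is a qualitative sketch under assumed functional forms and parameter values, not the authors' fitted dynamic Bayesian ideal observer: when the prior for "same" weakens with distance, "same" responses slow down with distance while "different" responses speed up, matching the reaction-time pattern reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(distance, drift, bound=1.0, dt=0.01, noise=1.0,
                   bias_scale=0.8, bias_decay=0.2):
    """One bounded evidence-accumulation trial.

    The spatial bias is modeled as a starting point that begins near
    the 'same' bound (+bound) for nearby probes and decays toward the
    neutral point with distance. The parameter names, values, and the
    exponential form are illustrative assumptions.
    """
    x = bias_scale * bound * np.exp(-bias_decay * distance)  # prior-driven start
    t = 0.0
    while abs(x) < bound:  # accumulate noisy evidence until a bound is hit
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("same" if x > 0 else "different"), t

# Mean reaction time by judged response and dot separation: 'same'
# responses get slower with distance, 'different' responses faster.
for distance in (2.0, 6.0, 10.0):
    for drift, stim in ((+0.5, "same-segment"), (-0.5, "diff-segment")):
        outcomes = [simulate_trial(distance, drift) for _ in range(1000)]
        for resp in ("same", "different"):
            rts = [t for r, t in outcomes if r == resp]
            if rts:
                print(f"d={distance:4.1f} stim={stim:12s} resp={resp:9s} "
                      f"mean RT={sum(rts)/len(rts):.2f} s (n={len(rts)})")
```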