August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Natural scene statistics of figure-ground motion in MT receptive fields
Author Affiliations & Notes
  • Clara Tenia Wang
    University of California, Berkeley, Berkeley, CA
  • Minqi Wang
    University of California, Berkeley, Berkeley, CA
  • Xin Huang
    University of Wisconsin-Madison, Madison, WI
  • Emily A. Cooper
    University of California, Berkeley, Berkeley, CA
  • Footnotes
    Acknowledgements  This work was supported by NIH grant R01EY022443.
Journal of Vision August 2023, Vol.23, 4934.
      Clara Tenia Wang, Minqi Wang, Xin Huang, Emily A. Cooper; Natural scene statistics of figure-ground motion in MT receptive fields. Journal of Vision 2023;23(9):4934.


      © ARVO (1962-2015); The Authors (2016-present)

Our ability to locate and identify objects in the surrounding environment supports a variety of tasks, such as guiding eye movements and directing attention. However, differentiating figural content such as objects and animals from their surroundings (figure-ground segregation) is challenging in natural environments that are cluttered with shapes, textures, and edges. Recent work suggests that neurons in the middle temporal cortex (area MT) support figure-ground segregation from motion in dynamic environments, but the principles underlying these neural computations are poorly understood. Here, we consider the hypothesis that there are natural statistical regularities in the motion of figure and ground regions that can be leveraged by MT neurons. We aimed to measure statistical features of motion within regions of natural scenes comparable to MT receptive fields (RFs), and to understand how these features differ between figure and ground regions. Natural movies that contained scene motion and simulated head motion were obtained from an existing dataset (Mély et al., 2016). Visual motion was quantified using standard optic flow algorithms. We then examined the distribution of speeds within simulated MT RFs at different eccentricities. We found that the speed distributions tended to have two peaks: one at relatively slow speeds and one at relatively fast speeds. We then used automated image segmentation to identify the locations of figure-ground borders. For simulated RFs occurring at figure-ground borders, the bimodality was larger than expected from randomly selected locations. We hypothesized that figure regions tend to move faster than nearby ground regions. However, while the average speed of figure and ground regions within an RF tended to differ more than expected by chance, we did not find consistent evidence that the figure regions were associated with faster speeds. These results can guide future studies examining whether visual neurons leverage these statistical regularities to facilitate figure-ground segregation.
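The core measurement described above can be sketched in a few lines of NumPy: collect the flow speeds inside a circular window standing in for an MT receptive field, then quantify how bimodal that speed distribution is. This is only an illustrative sketch, not the authors' pipeline: the abstract does not specify the optic flow algorithm or bimodality statistic used, so the synthetic flow field and the use of Sarle's bimodality coefficient (values above 5/9 suggest more than one mode) are assumptions made here for demonstration.

```python
import numpy as np

def rf_speeds(vx, vy, center, radius):
    """Speeds of flow vectors inside a circular simulated RF.

    vx, vy : 2-D arrays giving the horizontal/vertical flow components.
    center : (row, col) of the simulated RF; radius in pixels.
    """
    ys, xs = np.mgrid[:vx.shape[0], :vx.shape[1]]
    mask = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
    return np.hypot(vx[mask], vy[mask])

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient (an assumed choice of statistic).

    Computed from sample skewness and excess kurtosis; values above
    5/9 (~0.555, the value for a uniform distribution) hint at bimodality.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=0)
    g1 = np.mean((x - m) ** 3) / s ** 3          # skewness
    g2 = np.mean((x - m) ** 4) / s ** 4 - 3.0    # excess kurtosis
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

# Hypothetical flow field: a slowly drifting "ground" containing a
# faster-moving square "figure", so an RF straddling the border sees
# a mixture of slow and fast speeds.
h = w = 64
vx = np.full((h, w), 0.5)       # ground drifts slowly
vy = np.zeros((h, w))
vx[20:44, 20:44] = 4.0          # figure patch moves faster

speeds = rf_speeds(vx, vy, center=(32, 32), radius=30)
print(bimodality_coefficient(speeds) > 5 / 9)   # → True
```

In a real analysis, `vx` and `vy` would come from a dense optic flow estimator applied to consecutive movie frames, and the RF centers and radii would be scaled with eccentricity to match MT receptive field sizes; the comparison in the abstract is then between this statistic at segmentation-derived border locations and at randomly selected control locations.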

