September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
FlyingObjects: Testing and aligning humans and machines in gamified object vision tasks
Author Affiliations & Notes
  • Benjamin Peters
    School of Psychology & Neuroscience, University of Glasgow, UK
  • Eivinas Butkus
    Department of Psychology, Columbia University
  • Matthew H. Retchin
    Zuckerman Mind Brain Behavior Institute, Columbia University
  • Nikolaus Kriegeskorte
    Department of Psychology, Columbia University
    Zuckerman Mind Brain Behavior Institute, Columbia University
    Department of Neuroscience, Columbia University
  • Footnotes
    Acknowledgements  B.P. has received funding from the EU Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 841578.
Journal of Vision September 2024, Vol. 24, 1053. https://doi.org/10.1167/jov.24.10.1053
      Benjamin Peters, Eivinas Butkus, Matthew H. Retchin, Nikolaus Kriegeskorte; FlyingObjects: Testing and aligning humans and machines in gamified object vision tasks. Journal of Vision 2024;24(10):1053. https://doi.org/10.1167/jov.24.10.1053.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Tasks lend direction to modeling and drive progress in both cognitive computational neuroscience and AI. While these disciplines share some goals, they have traditionally navigated the space of possible tasks with different intentions in mind, leading to vastly different types of tasks. Cognitive scientists and neuroscientists often prioritize experimental control, leading them to use abstract tasks that strip away many of the complexities of real-world experience deemed unrelated to the question at hand. AI engineers, by contrast, often directly engage the complex structure and dynamism of the real world, trading explainability for performance under natural conditions. However, AI engineers, too, are interested in gaining an abstract understanding of their models, and cognitive computational neuroscientists ultimately want to model cognition under real-world conditions. If science and engineering are to provide useful constraints for each other in this area, it is essential that they engage a shared set of tasks. Here we attempt to bridge the divide for dynamic object vision. We present a conceptual framework and a practical software toolbox called "FlyingObjects" that enables the construction of task-generative models spanning a vast space of degrees of naturalism, interactive dynamism, and generalization challenge. Task generators enable procedural sampling of interactive experiences ad infinitum, scaling between abstracted toy tasks and the real-world appearance and complex dynamics of objects, access to and control over the task-generative variables, and sampling of atypical and out-of-distribution experiences. FlyingObjects connects science and engineering, enabling researchers to acquire large-scale human behavioral data through smartphones, web browsers, or in the lab, and to evaluate the alignment of humans and machines in dynamic object vision.
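The task-generator idea described in the abstract can be sketched in code. The sketch below is purely illustrative and is not the FlyingObjects API: the class names, the latent variables (`naturalism`, `dynamism`, `speed_range`), and the out-of-distribution sampling rule are all assumptions chosen to convey the pattern of a controllable generative model that can be sampled ad infinitum, both within and outside its training distribution.

```python
import random
from dataclasses import dataclass

@dataclass
class TaskConfig:
    """Latent task-generative variables (illustrative names, not the FlyingObjects API)."""
    naturalism: float = 0.0          # 0 = abstract shapes, 1 = photorealistic objects
    dynamism: float = 0.0            # 0 = static displays, 1 = fully interactive dynamics
    speed_range: tuple = (0.5, 2.0)  # in-distribution range of object speeds

@dataclass
class Trial:
    """One procedurally sampled experience."""
    object_id: int
    speed: float
    out_of_distribution: bool

class TaskGenerator:
    """Procedurally samples trials without limit from a controllable generative model."""

    def __init__(self, config: TaskConfig, n_objects: int = 10, seed: int = 0):
        self.config = config
        self.n_objects = n_objects
        self.rng = random.Random(seed)  # seeded for reproducible experiments

    def sample_trial(self, out_of_distribution: bool = False) -> Trial:
        lo, hi = self.config.speed_range
        if out_of_distribution:
            # Sample speeds beyond the in-distribution range to probe generalization.
            speed = self.rng.uniform(hi, 2 * hi)
        else:
            speed = self.rng.uniform(lo, hi)
        return Trial(object_id=self.rng.randrange(self.n_objects),
                     speed=speed,
                     out_of_distribution=out_of_distribution)

# Sample a small in-distribution set and one out-of-distribution probe.
gen = TaskGenerator(TaskConfig(naturalism=0.2, dynamism=0.8), seed=42)
trials = [gen.sample_trial() for _ in range(5)]
ood = gen.sample_trial(out_of_distribution=True)
```

Because every trial is drawn from explicit generative variables, the same generator can serve both a human participant (rendered as a game) and a machine model (as a training or evaluation environment), which is what makes alignment comparisons possible.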
