October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Active Observers in a 3D World: The 3D Same-Different Task
Author Affiliations & Notes
  • Markus D. Solbach
    York University, Department of Electrical Engineering and Computer Science
  • John K. Tsotsos
    York University, Department of Electrical Engineering and Computer Science
  • Footnotes
    Acknowledgements  We want to thank Khatoll Ghauss for helping to conduct the experiments and Bir Dey Bikram for his help with CAD. This research was supported by grants to John K. Tsotsos from the Air Force Office of Scientific Research USA, the Canada Research Chairs Program, and the NSERC Canadian Robotics Network.
Journal of Vision October 2020, Vol.20, 253. doi:https://doi.org/10.1167/jov.20.11.253

Most past and present research in computer vision involves passively observed data. Humans, however, are active observers outside the lab; they explore, search, and select what to look at and how. Here, we investigate active visual observation in a 3D world. To focus the problem, we ask subjects to decide whether two 3D objects are the same or different, with no constraints on how they view those objects. Such unconstrained, active 3D observation appears under-studied: while many studies explore human performance on this kind of judgment, they usually use line drawings portrayed in 2D, and no active observer is involved. The ability to compare two objects is a core visual capability, one we use many times a day, and it would be essential for any robotic vision system intended as a real assistant in a home, manufacturing, or medical setting. To explore the 3D 'same-different task', we designed a novel experimental environment and created a set of twelve 3D-printed objects of known complexity. Subjects are free to move around a 4 m x 3 m controlled environment, outfitted with an eye-gaze tracker and observed by head trackers. In this environment, two objects are presented at a time, at fixed 3D locations but with varying 3D poses. We track precise 6D head motion and gaze, and record video of all actions, synchronized at microsecond resolution. Additionally, each subject is interviewed about how they approached the task. Our results show that at least six strategies are employed to solve the task, not always independently. We found that the strategy used depends on three variables: object complexity, object orientation, and initial viewpoint. Furthermore, we show that performance improves over time as subjects refine their strategies throughout the study. Since no external feedback is given, an internal feedback mechanism must exist that refines those strategies.

