Vision Sciences Society Annual Meeting Abstract | September 2021
Volume 21, Issue 9 | Open Access
Building 3D scene representations from different viewpoints: A contextual cueing study
Author Affiliations & Notes
  • Yibiao Liang
    University of Massachusetts Boston
  • Zsuzsa Kaldy
    University of Massachusetts Boston
  • Erik Blaser
    University of Massachusetts Boston
  • Footnotes
    Acknowledgements: This project was supported by a grant from NIH (R15HD086658).
Journal of Vision September 2021, Vol.21, 2702. doi:https://doi.org/10.1167/jov.21.9.2702
Citation: Yibiao Liang, Zsuzsa Kaldy, Erik Blaser; Building 3D scene representations from different viewpoints: A contextual cueing study. Journal of Vision 2021;21(9):2702. https://doi.org/10.1167/jov.21.9.2702.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Observers can build representations of panoramic scenes from snapshots (Robertson et al., 2016) and can form scene memories incidentally during visual search (Utochkin & Wolfe, 2018). Here, we used contextual cueing (Chun & Jiang, 1998) to test whether observers could build and exploit a 3D scene representation, acquired from exposure to varied viewpoints of a naturalistic scene, during a visual search task. We tested 29 observers (25 female; mean age = 19.34 years) in an online study. On each trial, observers saw a snapshot of the same computer-generated room, but from a different viewpoint (8 viewpoints total, varied trial-to-trial). In the Same-place condition, the target (a tablet) always appeared in the same relative location (e.g., on the table), no matter the viewpoint. In the Different-place condition, the target appeared in a different location in each viewpoint. (In both conditions, the target’s absolute location on the screen varied due to the changes in viewpoint.) We collected reaction time data using two redundant measures: keypress (participants reported the left-right orientation of a small probe that appeared on the target) and gaze latency. Overall, search times improved, nearing asymptote within the first third of the 48-trial block. We performed a nonlinear (exponential decay) regression and found a significant effect of condition (keypress: F(3,84)=25.5, p<0.001; gaze: F(3,88)=12.4, p<0.001). For keypress data, the Same-place condition had a significantly lower asymptotic search time (1092 ms) than the Different-place condition (1240 ms), a difference of 148 ms. Gaze latency data showed the same pattern, with a significantly lower search time in the Same-place (778 ms) versus the Different-place (913 ms) condition, a difference of 135 ms. These results are consistent with contextual cueing and provide evidence that 3D scene representations (acquired from varied viewpoints) are built incidentally and used to facilitate search.
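Note on the analysis: the abstract specifies only a nonlinear (exponential decay) regression, not the exact parameterization or software. Below is a minimal Python sketch of one plausible implementation, assuming a three-parameter model, RT(n) = asymptote + gain * exp(-n / tau), fit separately per condition with SciPy's curve_fit. The simulated data, starting values, and bounds are hypothetical placeholders, not the authors' pipeline, and the sketch covers only the curve fit, not the reported F-tests.

    # Sketch of an exponential-decay fit to per-trial search times (assumed model).
    import numpy as np
    from scipy.optimize import curve_fit

    def exp_decay(n, asymptote, gain, tau):
        # Search time on trial n: starts near (asymptote + gain), decays toward asymptote.
        return asymptote + gain * np.exp(-n / tau)

    def fit_condition(trial_numbers, rts_ms):
        # Fit the decay model to one condition's per-trial RTs; returns (asymptote, gain, tau).
        p0 = [float(np.min(rts_ms)), float(np.ptp(rts_ms)), 10.0]  # rough starting values
        params, _ = curve_fit(exp_decay, trial_numbers, rts_ms, p0=p0,
                              bounds=([0.0, 0.0, 0.1], [5000.0, 5000.0, 48.0]))
        return params

    # Hypothetical data: 48 trials per condition, keypress RTs in ms (simulated here).
    rng = np.random.default_rng(0)
    trials = np.arange(1, 49)
    same_place = exp_decay(trials, 1092.0, 600.0, 8.0) + rng.normal(0, 80, trials.size)
    diff_place = exp_decay(trials, 1240.0, 600.0, 8.0) + rng.normal(0, 80, trials.size)

    a_same = fit_condition(trials, same_place)[0]
    a_diff = fit_condition(trials, diff_place)[0]
    print(f"Asymptotic RT  Same-place: {a_same:.0f} ms  "
          f"Different-place: {a_diff:.0f} ms  benefit: {a_diff - a_same:.0f} ms")

Under this parameterization, the reported contextual cueing benefit corresponds to a difference in the fitted asymptote parameter between conditions.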
