Vision Sciences Society Annual Meeting Abstract | July 2013
Viewpoint Independence in Implicit Scene Learning
Author Affiliations
  • Zhongting Wang
    Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
  • Shiyi Li
    Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
  • Haibo Yang
    Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
  • Deli Shen
    Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
  • Xuejun Bai
    Academy of Psychology and Behavior, Tianjin Normal University, Tianjin, China
  • Hong-jin Sun
    Department of Psychology, Neuroscience & Behaviour, McMaster University, Canada
Journal of Vision July 2013, Vol.13, 219. doi:https://doi.org/10.1167/13.9.219
Abstract

It has been well established that repeated configurations of random elements induce better search performance than displays of novel random configurations (the contextual cueing effect). However, whether spatial learning can transfer to a different viewpoint of a 3D scene has not been well studied. In this study we examined search behavior in a computer-rendered illustration of a realistic scene. Participants viewed the scene (with the viewpoint 30 degrees above the ground), which consisted of an array of chairs randomly positioned on the ground. Observers were presented with a sequence of trials in which they searched for and identified an arbitrarily located target letter positioned on the surface of the seat of a chair. In the training session, participants completed 20 blocks of 16 search trials. Eight of the trials in each block were in the repeated condition, where a particular target location was consistently paired with a particular array. The other 8 trials were in the novel condition, where the target was presented on a chair within a randomly generated search array. The training session was followed by a transfer session of 5 blocks, in which the viewpoint of the scene was rotated by 40 degrees on the ground plane. Significant contextual cueing was found in the training session, with faster RTs in the repeated condition than in the novel condition as participants learned the relationship between the repeated layouts and target locations. Contextual cueing of comparable magnitude was also found after the change of viewpoint, suggesting a view-independent representation of the scene. Contrary to the viewpoint dependency found by Chua and Chun (2003), our results suggest that when the scene contains clear indications of the view change (from the ground texture and individual chairs), the spatial relations learned during training can be mentally transformed to the new viewpoint.
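
As a rough illustration of the design described above, the Python sketch below generates the training and transfer blocks: 8 fixed array-target pairings repeat across blocks, while 8 novel arrays are regenerated on each block. The array size and coordinate ranges are assumptions (the abstract does not specify them), and all names are hypothetical rather than the authors' code.

    # Hypothetical sketch of the contextual-cueing design: 20 training blocks
    # of 16 trials (8 repeated, 8 novel), then 5 transfer blocks with the
    # scene viewpoint rotated by 40 degrees on the ground plane.
    import random

    N_CHAIRS = 12      # assumed array size; not stated in the abstract
    N_REPEATED = 8     # repeated target/array pairings per block

    def random_array():
        """Randomly position chairs on the ground plane (arbitrary units)."""
        return [(random.uniform(-10, 10), random.uniform(-10, 10))
                for _ in range(N_CHAIRS)]

    # Repeated condition: each of the 8 arrays is consistently paired
    # with one particular target chair across all blocks.
    repeated_set = [(random_array(), random.randrange(N_CHAIRS))
                    for _ in range(N_REPEATED)]

    def make_block(view_rotation_deg=0):
        """One block of 16 trials: the 8 fixed pairings plus 8 novel arrays."""
        trials = [{"array": arr, "target": tgt,
                   "condition": "repeated", "view": view_rotation_deg}
                  for arr, tgt in repeated_set]
        trials += [{"array": random_array(),
                    "target": random.randrange(N_CHAIRS),
                    "condition": "novel", "view": view_rotation_deg}
                   for _ in range(N_REPEATED)]
        random.shuffle(trials)
        return trials

    # Training: 20 blocks at the original viewpoint.
    training = [make_block(view_rotation_deg=0) for _ in range(20)]
    # Transfer: 5 blocks with the viewpoint rotated by 40 degrees. The
    # repeated pairings are unchanged, so an RT advantage that survives
    # the rotation indicates a view-independent scene representation.
    transfer = [make_block(view_rotation_deg=40) for _ in range(5)]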

Meeting abstract presented at VSS 2013
