Abstract
As we learn a navigational route through an unfamiliar environment, we may attend to different aspects of the route in order to remember it later. Some people may focus on the specific sequence of turns involved (an egocentric method), while others may encode distal landmarks and stores, along with the target's location relative to these items (an allocentric method). Our participants navigated through an unfamiliar city to a fixed target location using a desktop virtual environment that contained several distal landmarks but no local cues. We then varied the starting point and asked participants to mark the target's location, comparing each participant's navigational path both to the egocentric sequence of turns learned during training and to the actual target location (which the learned sequence would not reach from a different start point). We found that most participants adhered either to a strict egocentric method, following the learned turn sequence, or to an allocentric method, identifying the distal landmarks and reaching the correct location via a novel path. In addition, participants had the option of using a "view mode", which allowed them to raise, lower, and rotate the camera at each intersection. Participants who spent more time in view mode identified the target location more accurately than those who did not use it, even when placed at starting points that differed from initial training. Finally, participants who used view mode were also more likely to adopt an allocentric navigational style, suggesting that they were aware of the need to encode distal visual cues in order to find the target location accurately later. By allowing participants to control their view of the environment, we were able to more closely link navigational performance to the encoding of distal visual information.
Meeting abstract presented at VSS 2012