Abstract
Spatial navigation can be supported by visual cues that indicate which way to go. Before such cues can be useful, however, people must attend to them and extract the spatial direction they convey. Previous work revealed that people comprehend spatial directions faster from arrows and words than from scenes, despite a common neural representation of the presented direction (Weisberg, Marchette, & Chatterjee, 2018). We predicted that this speed advantage is supported by an enhancement of visual attention for arrows and words. To investigate this, we conducted a pre-registered experiment in which 50 participants completed a modified version of Posner’s (1980) spatial cueing paradigm with endogenous cues in the form of arrows, words, and scenes. These cues directed participants’ attention to the left or right side of the visual field. After the cue disappeared, a target (i) appeared on the side indicated by the cue (70% valid trials), (ii) appeared on the opposite side (15% invalid trials), or (iii) did not appear (15% catch trials). Participants’ task was to detect the filling-in of a target square. We observed a significant cue validity effect: responses were faster on valid than on invalid trials (d = 0.22). The interaction between cue validity and cue format was also significant. As predicted, significant cue validity effects emerged for arrows (d = 0.26) and words (d = 0.34) but not for scenes, supporting the notion that scenes require more costly computations to decode spatial direction (i.e., deriving egocentric direction by imagining the path of travel) and do not automatically engage visual attention. These findings confirm our hypothesis that spatial attention is captured by some, but not all, representational formats of spatial direction, which explains why arrows and words were comprehended faster in previous research. Moreover, arrows and words may support navigation more effectively than scenes.