Abstract
How does the mind represent locations in space? Some work has explored the possibility that spatial representations are supported by specific coordinate systems (i.e., Cartesian or polar coordinates; e.g., Yang & Flombaum, 2018). Recent work suggests that the coordinate systems people use may be recoverable from errors during localization tasks. For example, Yousif & Keil (2021) found that, in a visual localization task, people’s errors in the polar dimensions of space are uniquely uncorrelated, implying representational independence and thus pointing to polar coordinates as the format of location representations. Do the same representational formats support both vision and action? In a first experiment, we used the same ‘error correlation’ approach to study the use of coordinates in a task with no visual input at all. Participants completed a localization task with the assistance of a robotic arm. On each trial, the robot directed their hand to a location in space, then returned it to a central location while the arm was fully occluded. The participant then had to move the robot arm back to the previous target location. Replicating what Yousif & Keil (2021) found in vision, we found that errors in the polar dimensions, but not the Cartesian dimensions, were uncorrelated, indicating that polar coordinates underlie spatial representation for action. In two subsequent experiments, we tested whether any one coordinate system supports translation across modalities (i.e., from vision to action). In these tasks, participants saw targets on a horizontal screen corresponding to the action space below it, and were tasked with moving the robot arm to match the locations presented on the visual plane. In both variants of this task (one with performance feedback, one without), we again found evidence that polar errors were uncorrelated. These results suggest that polar coordinates may underlie spatial representation for both vision and action.
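The logic of the ‘error correlation’ approach can be illustrated with a small simulation. The sketch below is purely hypothetical (it is not the authors’ analysis pipeline, and the targets, noise magnitudes, and sample size are invented for illustration): if localization noise is independent in the polar dimensions (radius and angle), then the per-trial error magnitudes in radius and angle should be uncorrelated, while the same responses re-expressed in Cartesian coordinates tend to show correlated x and y errors, because angular noise scales with radius and contributes to both.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of trials

# Hypothetical target locations in polar coordinates around a central origin.
true_r = rng.uniform(5.0, 20.0, n)
true_theta = rng.uniform(0.0, 2.0 * np.pi, n)

# Assumption under test: independent noise in each polar dimension.
resp_r = true_r + rng.normal(0.0, 1.0, n)
resp_theta = true_theta + rng.normal(0.0, 0.1, n)

# Per-trial error magnitudes in the polar dimensions.
err_r = np.abs(resp_r - true_r)
err_theta = np.abs(resp_theta - true_theta)

# Re-express targets and responses in Cartesian coordinates.
tx, ty = true_r * np.cos(true_theta), true_r * np.sin(true_theta)
px, py = resp_r * np.cos(resp_theta), resp_r * np.sin(resp_theta)
err_x, err_y = np.abs(px - tx), np.abs(py - ty)

# Correlate error magnitudes across trials in each coordinate system.
corr_polar = np.corrcoef(err_r, err_theta)[0, 1]
corr_cart = np.corrcoef(err_x, err_y)[0, 1]
print(f"polar error correlation:     {corr_polar:.3f}")
print(f"Cartesian error correlation: {corr_cart:.3f}")
```

Under these assumptions, the polar error correlation hovers near zero while the Cartesian one is reliably positive, mirroring the signature pattern described in the abstract.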