Abstract
Teleoperation of robots and autonomous vehicles raises an interesting set of questions about perception and action at a distance. While the pragmatics of this problem have been considered in the human factors domain, little attention has been given to an overall theory of perception and action at a distance in the perceptual domain. Our work attempts to erect scaffolding for the development of such a theory. Classically, studies of perception and action take place in the 1st person, i.e., where the perceiver and the actor are embodied in the same entity. Our work considers the 2nd- and 3rd-person perspectives (e.g., watching a machine carry out our action, and watching from the machine carrying out the action, respectively). The framework is complicated by the fact that 2nd- and 3rd-person embodiments may have action capabilities different from those of the 1st person, and 3rd-person embodiments may have additional sensing mechanisms that provide information unavailable to the usual 1st-person senses. Our overall strategy consists of replicating classic 1st-person perception-action paradigms in the 2nd and 3rd person and investigating the resulting shifts (or lack thereof) in performance. Obviously, some types of performance will differ little or not at all across embodiments, while others should be significantly modified. From these results, we can model and predict expected performance in alternative perception-action embodiments. Here, we present initial results from an affordance-based experiment modeled on Warren & Whang (1987), as well as navigation experiments after Foo et al. (2005), along with their implications for our proposed theoretical framework.