Abstract
Visual experience involves not only physical features such as color and shape, but also higher-level properties such as animacy and goal-directed behavior. Perceiving animacy is an inherently dynamic experience, in part because agents' goals and mental states may be constantly in flux -- unlike many of their physical properties. How does the visual system maintain and update representations of agents' goal-directed behavior over time and motion? The present study explored this question in the context of a particularly salient form of perceived animacy: chasing, in which one shape (the ‘wolf’) pursues another shape (the ‘sheep’). The participants themselves controlled the movements of the sheep, and the perception of chasing was assessed in terms of their ability to avoid being caught by the wolf -- which looked identical to many moving distractors, and so could be identified only by its motion. In these experiments, the wolf's pursuit was periodically interrupted by short intervals during which it did not chase the sheep. When the wolf moved randomly during these interruptions, the detection of chasing was greatly impaired. This impairment could reflect either of two interpretations: decreased evidence in favor of chasing, or increased evidence against chasing. These interpretations were tested by having the wolf simply remain static (or jiggle in place) during the interruptions (among distractors that behaved similarly). In these cases, chasing detection was unimpaired, supporting the ‘evidence against chasing’ interpretation. Moreover, random-motion interruptions impaired chasing detection only when they were grouped into a small number of temporally extended chunks, rather than dispersed into a greater number of shorter intervals. These results reveal (1) how perceived animacy is determined by the character and temporal grouping (rather than just the brute amount) of ‘pursuit’ over time; and (2) how these temporal dynamics can lead the visual system to either construct or actively reject interpretations of chasing.