Abstract
To understand event perception, we must determine how people process the sequence of actions that make up an event. Event Segmentation Theory (Zacks et al., 2007) proposes that event segmentation and understanding are driven by a continuous cycle of perceptual predictions and error detection. According to this model, an error detection mechanism compares predictions with perceptual input. Increases in prediction error lead to an updating of event models, producing an event boundary. However, previous research may have overemphasized the importance of ongoing perceptual prediction in event perception. This series of experiments tested whether individuals use such moment-to-moment predictions in real time. Participants viewed videos (consisting of a series of eight to fourteen different shots, with each shot lasting an average of 820 ms) of actors performing everyday events that either did or did not contain a misordered action (for example, a shot of an object being used before a shot of it being picked up). When participants were instructed to look for misorderings, their detection of misordered events was low, and performance was close to floor when an incidental detection paradigm was used. Additionally, an interference task significantly lowered detection of misordered events, nearly to floor levels. Finally, participants were almost always able to detect the misordered events themselves (as opposed to detecting the fact that they were out of order), suggesting that error detection may not be an automatic process, as previously argued. Combined, these results suggest that participants were able to clearly perceive the individual actions within the misordered events, while perceiving the fact that they were misordered was far more difficult. These data suggest that automatic moment-to-moment predictions are not always the basis for understanding events.