Abstract
Research has shown that top-down processes produce different eye movement patterns when searching scene images (DeAngelus & Pelz, 2009; Yarbus, 1967). However, do these same top-down processes extend to film stimuli? Researchers have theorized that viewers have different modes of reception, such that some viewers become emotionally engaged in a film whereas others do not (Michelle, 2007; Suckfull & Scharkow, 2009). This suggests that individual differences in reception should produce different eye movement patterns while viewing film. Conversely, recent research has shown that viewers produce similar eye movement patterns even when their comprehension of a film differs (Loschky, Larson, Magliano, & Smith, 2015). This research suggests an alternative hypothesis: that bottom-up saliency in film produces similar eye movements across viewers. The current experiment was designed to determine whether viewer engagement predicts eye movements in short film clips. Emotional engagement was manipulated by presenting emotional versus non-emotional clips: five emotional clips depicted suspenseful scenes, whereas seven non-emotional clips depicted conversations between two or more individuals. Film clip durations ranged from 30 to 160 seconds. After each clip, participants rated how engaging/emotional it was on a five-point Likert scale (1 = not engaging; 5 = engaging) and reported whether they had seen the clip before. A scanpath score was calculated for each participant, measuring the degree to which that participant fixated the same locations at the same times as the remaining participants. The results show that emotional film clips were negatively related to scanpath scores, indicating that the bottom-up, emotional content depicted in film was associated with dissimilar scanpaths. Conversely, viewer engagement was positively related to scanpath scores, such that more similar scanpaths were associated with greater film engagement. Thus, attentional selection while viewing film was influenced by both bottom-up and top-down processing.
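The abstract does not specify how the scanpath scores were computed. Below is a minimal sketch of one common leave-one-out approach to time-locked gaze similarity, assuming gaze has already been resampled to a shared frame rate with missing samples handled; the array shape, the `scanpath_scores` name, and the negative-distance scoring are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def scanpath_scores(gaze: np.ndarray) -> np.ndarray:
    """Leave-one-out scanpath similarity (hypothetical sketch).

    gaze: array of shape (n_participants, n_frames, 2) holding each
    participant's (x, y) gaze position on every video frame.
    Returns one score per participant: the mean negative Euclidean
    distance between that participant's gaze and the leave-one-out
    group centroid, so higher (less negative) scores indicate a
    scanpath more similar to the remaining participants'.
    """
    n = gaze.shape[0]
    scores = np.empty(n)
    total = gaze.sum(axis=0)                    # per-frame sum over participants
    for i in range(n):
        centroid = (total - gaze[i]) / (n - 1)  # leave-one-out mean gaze per frame
        dist = np.linalg.norm(gaze[i] - centroid, axis=1)
        scores[i] = -dist.mean()                # closer to the group -> higher score
    return scores
```

Under this scoring, each score could then be regressed on clip type (emotional vs. non-emotional) and the participant's engagement rating to test the relationships reported above.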
Meeting abstract presented at VSS 2017