Abstract
Discerning intention from the observation of complex human behaviour typically requires eye movements that actively explore the visual context. We examined whether the gaze patterns of novice and experienced observers would differentially inform the intention judgements of naïve viewers. To achieve this, we first obtained real-life surveillance (CCTV) footage and organised it into 24 videos of 16-second duration across four categories ('fight', 'confront', 'nothing', 'play'). 'Fight' and 'confront' videos included aggressive behaviour, but only the 'fight' videos led to a violent incident (occurring after the video ended). 'Play' videos showed playful behaviour, and 'nothing' videos showed a variety of everyday activity that did not include aggressive behaviour. We then recorded eye-movement data from novice and experienced CCTV operators who viewed these videos with the goal of judging hostile intent. Next, a high-resolution foveal window with a blurred periphery was overlaid on each video according to the gaze coordinates of a novice or an experienced operator. Twenty-four such videos were created and shown to naïve participants, with half of the videos in each category processed through an experienced operator's eye movements and half through a novice's. Participants were instructed to follow the foveated area and to ignore the blurred surround. On each trial they made a Yes/No judgement of whether a violent incident occurred after the video ended, and rated their confidence on a 7-point scale. Signal detection analysis was carried out on the Yes/No responses, with a 'yes' to a 'fight' video coded as a hit and a 'yes' to any other video as a false alarm. Results showed no significant differences in sensitivity or bias between the two guidance conditions. Confidence judgements revealed a trend towards greater confidence when participants followed novices' eye movements. These preliminary results suggest that naïve participants were not able to extract more information from the experienced operators' scanpaths, although they subjectively experienced the two conditions differently.
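As an illustration of the gaze-contingent display described above, the following Python/OpenCV sketch blends a sharp foveal window into a blurred copy of each frame at the recorded gaze coordinates. The function name, window radius, and blur level are assumptions for illustration; the abstract does not report the parameters actually used.

```python
import cv2
import numpy as np

def foveate_frame(frame, gaze_xy, radius=80, blur_ksize=31):
    """Overlay a sharp foveal window at the gaze point and blur the periphery.

    `radius` and `blur_ksize` are illustrative values; the abstract does not
    state the window size or the degree of peripheral blur.
    """
    h, w = frame.shape[:2]
    # Heavy Gaussian blur stands in for degraded peripheral vision.
    blurred = cv2.GaussianBlur(frame, (blur_ksize, blur_ksize), 0)
    # Soft circular mask centred on the operator's gaze coordinates.
    mask = np.zeros((h, w), dtype=np.float32)
    cv2.circle(mask, (int(gaze_xy[0]), int(gaze_xy[1])), radius, 1.0, -1)
    mask = cv2.GaussianBlur(mask, (blur_ksize, blur_ksize), 0)[..., None]
    # Blend: sharp video inside the foveal window, blurred video outside.
    out = mask * frame.astype(np.float32) + (1 - mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```

Applied frame by frame with the gaze sample nearest each frame's timestamp, this yields a video in which only the region an operator fixated remains in high resolution.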
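The abstract does not give the sensitivity and bias formulas; under the standard equal-variance Gaussian signal detection model, with hit rate $H$ (proportion of 'yes' responses to 'fight' videos) and false-alarm rate $F$ ('yes' responses to any other video), they would be

$$
d' = \Phi^{-1}(H) - \Phi^{-1}(F), \qquad
c = -\tfrac{1}{2}\left[\Phi^{-1}(H) + \Phi^{-1}(F)\right],
$$

where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function.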
Meeting abstract presented at VSS 2016