Abstract
Most visual search studies use a 2D, passive observation task in which subjects search through artificial stimuli on a screen. In contrast, real-world search involves physical 3D scenes and searchers who choose relevant scene viewpoints as the search proceeds. Searchers employ eye, head, and body movements to investigate the scene; they are active observers. To investigate viewpoint selection during active observation in 3D search, we conducted an active search task in a controlled real-world environment: a 3x4 m space furnished with tables and wire cage shelving that served as surfaces for the stimuli. Stimuli were miniature everyday objects scattered in various orientations on the tables and cages. Targets were placed in upright, sideways, face-up, or diagonally tilted positions, but the target image probe was always presented upright. Observers moved freely and untethered, and their eye and head movements, reaction time, and accuracy were synchronized and measured over 12 trials each. Results indicate that, as in 2D search tasks, target-absent trials take longer than target-present trials and require more fixations and head travel. Interestingly, efficiency on these metrics increased over time only in target-present trials, not in target-absent trials. The eye and head movement data further revealed that subjects tilted their heads to match the canonical orientations of non-upright objects. Indeed, targets placed in non-canonical orientations required more fixations before subjects confirmed them as present. Subjects were also found to crouch in order to fixate objects placed at lower levels (such as the table surface, which was approximately 70 cm high). Our results provide novel analyses of eye and head movement metrics during search in an active observation environment, demonstrating the important nuances of movement that can be induced when completing a task requires viewpoint selection.