Abstract
Gaze has been shown to anticipate motor actions, making it promising for inferring an individual's intentions even before an action begins. The use of visual data in relation to motor behavior is thus encouraging for intuitive control of assistive technologies (e.g., prosthetic arms, exoskeletons) by people with impaired motor function. The vast majority of studies investigating interaction with objects, such as grasping and moving objects between locations, have focused on isolated items. Yet, in daily life we interact with numerous objects in rich environments. Emerging evidence demonstrates an effect of scene context on human visual search behavior, but it remains unclear whether eye-hand interaction is preserved in a realistic context. In the present study, we evaluated the impact of scene context on the temporal and spatial characteristics of object manipulation in a pick-and-place task. To this end, three realistic experimental scenarios were implemented in virtual reality (VR): congruent (the object fit the context), incongruent (the object did not fit the context), and isolated (no context was present). Three phases were examined: search, reach, and transport. Each of seven participants searched for the target object, picked it up, and placed it at a predefined final location. Linear mixed-effects model analysis revealed a significant increase in task duration and scene coverage in the incongruent condition. Furthermore, scene context had a significant effect on the search phase, indicated by longer search times in the incongruent condition. Reach phase duration was also longer in the incongruent condition, though not significantly, and the transport phase was not affected by scene context. Thus, this pilot study demonstrates that in a pick-and-place task, scene context primarily affects search behavior, with no significant effect on reach or transport. These findings inform the development of assistive technologies for pick-and-place tasks by demonstrating the impact of realistic scene context.
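As a minimal sketch of the kind of linear mixed-effects analysis the abstract describes (a fixed effect of scene context on phase duration with a by-participant random intercept), the snippet below uses Python's statsmodels. The paper does not specify its software or variable names, so the file name and column names ("search_duration", "condition", "participant") are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per trial, containing the
# search-phase duration, the scene-context condition (congruent /
# incongruent / isolated), and the participant ID.
df = pd.read_csv("trials.csv")

# Fixed effect of scene context on search-phase duration, with a
# random intercept per participant to account for repeated measures.
model = smf.mixedlm("search_duration ~ C(condition)", data=df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())
```

The same model form could be refit with reach or transport duration as the response to mirror the per-phase comparisons reported above.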