Abstract
Introduction: There is an increasing demand for experiments in which participants are presented with realistic stimuli, complex tasks, and meaningful actions. The Unified Suite for Experiments (USE) is a complete hardware and software suite for the design and control of dynamic, game-like behavioral neuroscience experiments, with support for human, nonhuman, and AI agents. We present USE along with an example feature-based learning experiment implemented in the suite.

Methods: USE extends the Unity3D game engine with a hierarchical, modular, state-based architecture that supports tasks of arbitrary complexity. Its hardware, built around an Arduino Mega 2560 board, governs communication between the experimental computer and any other experimental hardware. Participants' eyes were tracked as they navigated a virtual arena via joystick, choosing between two objects on each trial, only one of which was rewarded. Objects were composed of multiple features, each with two possible values. Each context, signaled by the pattern of the floor, had a single rewarded feature value (e.g., red objects might be rewarded on a grass floor, pyramidal objects on a marble one).

Results: USE's hardware enables the synchronization of all data streams with precision and accuracy well under 1 ms. Gaze was classified into behaviors (e.g., fixations and saccades) that displayed appropriate velocities and magnitudes, and showed ecologically meaningful patterns when re-presented over task videos. Rule learning was all-or-nothing, moving from chance to near-perfect performance within one or two trials. Participants displayed standard set-switching effects, including worse performance when the context differed from that of the previous trial, and when rules involved an extra-dimensional shift from the previous block.

Conclusions: USE enables the creation and temporally precise reconstruction of highly complex tasks in dynamic environments. Our example task shows that the costs associated with attentional set-switching generalize to such dynamic tasks.
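As a sketch of the state-based control described in the Methods, the following minimal C# example (Unity3D's scripting language) shows one way a hierarchical, modular state machine of this kind could be organized. The class and member names (ExperimentState, OnEnter, Step, CheckTermination) and the trial composition are illustrative assumptions, not the actual USE API.

    // Minimal, hypothetical sketch (not the actual USE API): a hierarchical
    // state machine in C#, the scripting language of Unity3D.
    using System.Collections.Generic;

    public class ExperimentState
    {
        public string Name { get; }
        private readonly List<ExperimentState> children = new List<ExperimentState>();
        private int active; // index of the currently running child state

        public ExperimentState(string name) { Name = name; }

        public void AddChild(ExperimentState child) => children.Add(child);

        // Called once when this state becomes active; activates its first child, if any.
        public virtual void OnEnter()
        {
            active = 0;
            if (children.Count > 0) children[0].OnEnter();
        }

        // Called once per frame; returns true when this state has finished.
        public virtual bool Step()
        {
            if (active < children.Count)
            {
                if (children[active].Step())
                {
                    active++;
                    if (active < children.Count) children[active].OnEnter();
                }
                return false; // still working through child states
            }
            return CheckTermination();
        }

        // Leaf states override this with their own termination condition.
        protected virtual bool CheckTermination() => true;
    }

    // Example composition: a trial made of fixation, choice, and feedback epochs.
    // In Unity, the root state's Step() would be driven from a MonoBehaviour's
    // per-frame Update() callback.
    // var trial = new ExperimentState("Trial");
    // trial.AddChild(new ExperimentState("Fixation"));
    // trial.AddChild(new ExperimentState("Choice"));
    // trial.AddChild(new ExperimentState("Feedback"));
    // trial.OnEnter();
    // while (!trial.Step()) { /* one iteration per frame */ }

Nesting states this way is what allows tasks of arbitrary complexity: each level (experiment, block, trial, trial epoch) is just another state whose children are stepped in sequence.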
Acknowledgement: This work was supported by Grant MOP 102482 from the Canadian Institutes of Health Research (TW) and by the Natural Sciences and Engineering Research Council of Canada Brain in Action CREATE-IRTG program (MRW and TW).