Zoltan Derzsi, Robert Volcic; Getting started with the MOTOM toolbox – an Optotrak-Matlab interface: From the first beeps to fingertip tracking in virtual reality. Journal of Vision 2018;18(10):58. doi: 10.1167/18.10.58.
The Optotrak is a motion capture system that has been widely used over the past two decades in both industry and academia. Here we present a Matlab toolbox that allows users to initialise the hardware and automate data collection. Previous Matlab-Optotrak interfaces required the user to manually compile C code in an external integrated development environment. The MOTOM toolbox performs the vast majority of the initialisation and set-up process without human interaction or programming skills, once all the software requirements are met. It works on both 32- and 64-bit systems. The toolbox also detects the configuration of the hardware it is controlling: it can automatically assign multiple cameras together, monitor memory usage and read diagnostic information. We present a number of code examples and what we perceive as best practices to allow fast, convenient and reliable data acquisition. In addition to core features such as rigid body tracking, which handles a set of assigned markers as a single entity, we introduce functions that are frequently used in the day-to-day life of an experimenter, such as proximity detection, the automated creation of rigid body definitions, and the handling of virtual markers (coordinates calculated with respect to a rigid body). Virtual markers are especially useful when placing a physical marker on a particular body segment is impossible; they can be used to track the tips of the fingers or the nodal points of the eyes. We present our implementation of fingertip tracking to demonstrate the use of custom-built rigid bodies and the general capabilities of the MOTOM toolbox. The MOTOM toolbox can be used in conjunction with other toolboxes, which makes it a valuable addition to studies that require position information: digitising 3D objects, studying grasping/reaching, calligraphy or virtual reality environments.
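The virtual-marker idea described above can be sketched as follows. This is a minimal illustration of the underlying geometry only, not the MOTOM toolbox API: the function name and example values are hypothetical. A virtual marker is a fixed offset expressed in a rigid body's coordinate frame; once the tracker reports the body's pose (a rotation R and translation t), the marker's world-frame position is R·p_local + t.

```python
def virtual_marker(R, t, p_local):
    """Transform a body-frame offset into a world-frame position.

    R       -- 3x3 rotation matrix of the rigid body (list of lists)
    t       -- translation of the rigid body, e.g. in mm (list of 3)
    p_local -- fixed offset of the virtual marker in the body frame

    Returns R @ p_local + t, computed without external libraries.
    """
    return [sum(R[i][j] * p_local[j] for j in range(3)) + t[i]
            for i in range(3)]

# Example (hypothetical values): identity rotation, body translated to
# (10, 20, 30) mm, fingertip offset of (1, 0, 0) mm in the body frame.
pos = virtual_marker([[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                     [10, 20, 30],
                     [1, 0, 0])
# pos == [11, 20, 30]
```

The offset p_local is measured once (for instance by touching the fingertip to a known point while the rigid body is tracked) and then re-applied to every reported pose, which is what makes the approach suitable for fingertips or eye nodal points where a physical marker cannot sit.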
Meeting abstract presented at VSS 2018