Abstract
Position sensors are widely used in virtual reality and other applications to track the location and orientation of the head or limbs. Measures of accuracy and precision are simple to obtain with a static sensor, or with a sensor moving along a controlled path. However, such measures fail to characterize performance under the conditions in which the sensor is typically used, when it is subjected to complex acceleration profiles and moves throughout the working volume of the system. We compared the spatial accuracy of an InterSense IS-900 acoustic/inertial sensor to that of a photogrammetric triangulation system. Two digital cameras streamed images of the sensor as it was moved freely within a 1 × 1 × 1 m volume. The coordinate frames of the optical tracker and the IS-900 were aligned by a least-squares fit that minimized point-to-point distances in the IS-900 frame. When the sensor was static, the standard deviation of its reported 3D position was close to the 1.5 mm stated in the device's specifications. However, when the position of a freely moving sensor was compared with that reported by the optical tracking system, substantially larger deviations were observed: occasional excursions of up to 3 cm, which could persist for several seconds before the reported position reverted to the reference path. Similar deviations, of smaller magnitude, also occurred when the sensor was static. The deviations were not consistently related to any particular feature of the motion track, such as points of rapid acceleration. Varying the parameters that control the smoothing and prediction filters used by the IS-900 produced the expected trade-off between high-frequency noise and smoothing of the output. These changes did not introduce significantly more lag or overshoot into the system, at least within the range of accelerations generated by a sensor carried in a human hand.
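The least-squares alignment of the two coordinate frames mentioned above can be sketched as a standard rigid-body (Kabsch) fit. The sketch below is illustrative only, not the authors' actual procedure; the function name `align_frames` and the use of NumPy are assumptions. It finds the rotation R and translation t that best map one set of 3D points onto paired points from the other tracker in the least-squares sense.

```python
import numpy as np

def align_frames(P, Q):
    """Rigid least-squares alignment (Kabsch algorithm).

    P, Q: (N, 3) arrays of paired 3D points (e.g. optical-tracker
    positions and the corresponding IS-900 positions). Returns the
    rotation matrix R and translation vector t minimizing
    sum ||(R @ p + t) - q||^2 over the point pairs.
    """
    # Center both point clouds on their centroids.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (P - cp).T @ (Q - cq)
    # SVD gives the optimal rotation; the sign correction D
    # guards against a reflection (det = -1) solution.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

After alignment, the residual distances `np.linalg.norm(P @ R.T + t - Q, axis=1)` give per-sample position deviations of the kind reported in the comparison.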