Abstract
Any Head Mounted Display (HMD) must be calibrated if it is to provide a realistic representation of space. This is especially true in visual psychophysics experiments, which require accurate rendering of objects at specified 3D locations. Previous attempts at calibrating HMDs have relied on human observers making position judgements, a procedure that is error-prone and works only for see-through HMDs.
Instead, we placed a camera inside a stationary HMD and recorded images of objects placed in the world. The camera also recorded a superimposed regular array of dots generated by the HMD, allowing the image locations of objects to be re-expressed in the coordinate frame of the HMD image. The position and orientation of the HMD and world objects were recorded by a six-degrees-of-freedom tracking system. We used standard camera calibration techniques to recover the optical parameters of the HMD (not the camera) and hence to derive appropriate software frustums for rendering virtual scenes in the binocular HMD. The recovered parameters comprised the aspect ratio, focal length, centre pixel, optic-centre location, principal ray, and radial and tangential distortion coefficients, giving a full description of the left- and right-display frustums.
We quantified the improvement in calibration by measuring re-projection errors between real-world points and virtual points rendered to appear at the same spatial locations, and found a root-mean-square error of less than 1 pixel. We have applied the same method to the calibration of a non-see-through HMD. This requires the camera's location to remain constant while an image of the HMD's dot grid is taken (HMD on) and then an image of the world is taken (HMD removed).
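The root-mean-square re-projection error used above is a standard metric; a minimal sketch of its computation (with illustrative point values, not data from the study) is:

```python
import numpy as np

def rms_reprojection_error(real_px, virtual_px):
    """RMS Euclidean distance between matched 2D image points (pixels)."""
    real_px = np.asarray(real_px, dtype=float)
    virtual_px = np.asarray(virtual_px, dtype=float)
    d = np.linalg.norm(real_px - virtual_px, axis=1)  # per-point distance
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative matched points: real-world image locations vs. where the
# calibrated frustum rendered the corresponding virtual points.
real = np.array([[100.0, 200.0], [340.5, 122.0], [512.0, 640.0]])
virtual = real + np.array([[0.5, -0.3], [-0.2, 0.4], [0.1, 0.6]])

print(f"{rms_reprojection_error(real, virtual):.3f} px")  # sub-pixel here
```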
We thus present an accurate, robust and fully automated method, based on established camera calibration techniques, that is applicable to both see-through and non-see-through HMDs.
Supported by Wellcome Trust and The Royal Society.