A 3D graphics scene is rendered correctly for only one viewpoint. Without laborious calibration, however, observers seldom view the monitor from that viewpoint. Even in visual experiments that use headrests, inter-subject variability in head size and eye position results in many subjects viewing the display off-axis, and off-axis viewing produces well-known distortions in perceptual judgments. Our goal is to render graphics displays correctly for the current user of an application or experiment on the basis of a simple set of perceptual judgments. Our approach has the user match points on a transparency, placed between observer and monitor, to adjustable points on the monitor, and uses these correspondences to derive a transformation matrix between observer and monitor. The solution adapts well-known calibration methods from computer vision, most prominently fundamental-matrix computation. A more challenging problem is calibrating a visual-haptic display, in which touchable 3D points must be in register with rendered graphics points and the observer views the display through a mirror. To solve this problem, we extend the previous approach: a transparency is placed on the mirror, so that a user on a headrest can match reflected monitor points to points on the transparency, thereby estimating the transformation between the reflected monitor frame and the mirror. In addition, observers make point correspondences using the haptic device. These correspondences are used to derive the mirror-monitor, eye-monitor, and monitor-haptic transformations, which can then be used to render the graphics correctly.