**Geometrically, stereoscopic 3-D (S3D) content should appear distorted unless viewed from the position for which the content was produced. Almost all commercial and laboratory S3D content is generated assuming that it will be presented on a screen frontoparallel to the viewer. However, in cinema and the home, S3D content is regularly viewed from oblique angles, and yet shapes are not usually perceived to be distorted. It is not yet known whether this is simply because viewers are insensitive to incorrect viewing angles or because viewers automatically compensate for oblique viewing, as they do for 2-D content. Here, we investigate this using a canonical-form paradigm. We show that S3D content can indeed appear warped when viewed from oblique angles, and that this effect is more pronounced than for 2-D content. We hypothesized that motion cues in the content would aid in the correct perception of S3D content, making it appear more natural even when viewed obliquely, but we find little support for this idea. However, the perceptual distortions are still small, and viewers do compensate to some extent for oblique viewing. We conclude that, at least as regards object distortion, oblique viewing is unlikely to be substantially more of a problem for S3D content than it already is for 2-D.**

^{2}, as measured through the 3-D glasses with a Minolta LS100 photometer. Interocular cross talk was 1.4% when measured with the screen frontoparallel to the photometer, rising to 2.0% for a viewing angle of 20° and 7.1% for a viewing angle of 45°.

We define the viewing angle *θ*_{view} to be the angle between the normal to the screen and the viewer's line of sight to the center of the screen (Figure 2A). In different experimental blocks, the turntable was rotated so that *θ*_{view} was either 0°, −45° (closer to the viewer on her right), or +20°. It was convenient to alter the viewing angle by moving the display screen rather than the participant (see Figure 2A). A chin rest was used to ensure that the participant's eyes were at the correct position, and the chair was adjustable to ensure that the participant was comfortable. In some experimental blocks, a fabric curtain with a hole in it was pulled across, occluding all four screen edges from the participant's view while still allowing them to see the stimuli.

On each trial, two virtual cubes were displayed: one rendered for perpendicular viewing, and one rendered for an oblique viewing angle between *θ*_{rend} = −45° and *θ*_{rend} = +45°. We will refer to these as the normal-rendered and obliquely rendered cube, respectively. When *θ*_{rend} = *θ*_{view}, the obliquely rendered cube was rendered for the actual viewing angle of the participant; we will refer to this as geometrically correct. In the S3D condition, the geometrically correct stimulus is orthostereoscopic, i.e., each eye ideally saw the retinal image which would have been projected by a physical cube in front of the viewer, apart from accommodation effects. On each trial, the orientation of each cube was random: Each virtual cube was rotated through a random angle about all three axes in succession before being rendered.

The on-screen image of the obliquely rendered cube depends on the rendering angle *θ*_{rend}: The apparent distortion increases monotonically as the rendering angle departs from frontoparallel. Additionally, a given cube has a wider horizontal extent on the screen when rendered for oblique viewing (Figure 3). To help ensure that participants did not simply judge the “more cube-like” object to be the one with the smallest extent on-screen, the size of the virtual cubes was chosen randomly on each trial. The side length *L* of one cube was picked from a uniform distribution between 6 and 14 cm, and the side length of the other cube was then set to 20 − *L* cm. The sum of the two side lengths was therefore always 20 cm, ensuring that the rendered cubes never overlapped on the screen. This manipulation meant that the obliquely rendered cube could be either larger or smaller than the normal-rendered one.
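The size randomization can be sketched in a couple of lines (a minimal illustration, not the authors' code; names are ours):

```python
import random

def draw_side_lengths(total_cm=20.0, lo=6.0, hi=14.0):
    """Pick one cube's side length uniformly from [lo, hi] cm; the other
    cube gets the remainder, so the two always sum to total_cm (which
    keeps the rendered cubes from overlapping on screen)."""
    L = random.uniform(lo, hi)
    return L, total_cm - L

a, b = draw_side_lengths()
assert abs((a + b) - 20.0) < 1e-9
assert 6.0 <= a <= 14.0 and 6.0 <= b <= 14.0
```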

Each of the six blocks had *θ*_{view} = −45°, 20°, or 0°, and had the curtain occluder either present or absent. In blocks where the occluder was present, it was always pulled across before the television's orientation was changed, so the participant had no prior knowledge of the screen orientation. Each participant did the six blocks in a random order chosen with a random number generator. In each block, the following four parameters were manipulated:

- The angle *θ*_{rend} used to project the obliquely rendered cube (eight possible values: ±45°, ±35°, ±20°, and ±10°; see Figure 3B)
- Whether the normal-rendered cube was at the top or bottom of the screen (two possible values)
- Object motion (two possible values: static or rotating)
- Binocularity (four possible values: S3D [binocular; each eye sees a different image], B2D [binocular; each eye sees the same image on the screen], or M2D [monocular; left or right eye])

Each block thus probed 48 stimulus combinations (8 values of *θ*_{rend} × 3 binocularity [S3D/B2D/M2D] × 2 object motion [static/rotating]). Altering the viewing and rendering angles enables us to assess the effectiveness of perceptual compensation for oblique viewing. Binocularity, object motion, and frame occlusion are the three viewing factors whose effect on compensation we wish to assess.

Objects appear veridical when they are rendered for the actual viewing angle, *θ*_{rend} = *θ*_{view}. However, both our data and the existing literature indicate a second mechanism: Objects also appear more veridical when rendered for frontoparallel viewing, *θ*_{rend} = 0°, even if the screen is in fact viewed obliquely. We assume that the perceived veridicality due to each mechanism declines according to a Gaussian function as *θ*_{rend} moves away from the optimum, and we further assume that the perceived veridicality of the object is simply the sum of the contributions from each factor. Accordingly, we model the perceived veridicality *V* of each object as

*V* = *A* exp(−*θ*_{rend}^{2}/2*s*^{2}) + *B* exp(−(*θ*_{rend} − *θ*_{view})^{2}/2*r*^{2}),    (1)

where the free parameters *s* and *r* determine each factor's sensitivity to *θ*_{rend}, and *A* and *B* determine the relative weight of each factor.
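As a concrete sketch, this two-Gaussian model can be written out as follows (our own illustration; the parameter values here are arbitrary placeholders, not fitted values):

```python
import math

def veridicality(theta_rend, theta_view, A, B, s, r):
    """Perceived veridicality V (Equation 1): a Gaussian preference for
    normal rendering (peak at theta_rend = 0, width s, weight A) plus a
    Gaussian preference for geometrical correctness (peak at
    theta_rend = theta_view, width r, weight B). Angles in degrees."""
    normal = A * math.exp(-theta_rend**2 / (2 * s**2))
    correct = B * math.exp(-(theta_rend - theta_view)**2 / (2 * r**2))
    return normal + correct

# With the screen viewed at 45 degrees, each mechanism peaks at a
# different rendering angle:
v_normal_peak = veridicality(0, 45, A=3, B=2, s=40, r=23)
v_correct_peak = veridicality(45, 45, A=3, B=2, s=40, r=23)
```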

*A* is the weight given to normal rendering, and *B* the weight given to geometrical correctness. In our experiments, one of the cubes was always rendered for perpendicular viewing, *θ*_{rend} = 0°. The difference in perceived veridicality between this normal-rendered cube and the obliquely rendered cube is therefore

Δ*V* = *A*[1 − exp(−*θ*_{rend}^{2}/2*s*^{2})] + *B*[exp(−*θ*_{view}^{2}/2*r*^{2}) − exp(−(*θ*_{rend} − *θ*_{view})^{2}/2*r*^{2})].    (2)

Because observers are only imperfectly sensitive to *θ*_{rend} and *θ*_{view}, and show trial-to-trial variation, we make the usual assumption that this signal is subject to internal noise, which we model as Gaussian. Without loss of generality, we set the standard deviation of the noise to 1, since this degree of freedom is already accounted for by the weights *A* and *B*. We assume that the viewer selects the normal-rendered object as most resembling a cube whenever their noisy internal estimate of Δ*V* is greater than zero. The probability that the viewer will select the normal-rendered object as most resembling a cube is then given by

*P* = Φ(Δ*V*),    (3)

where Φ is the cumulative distribution function of the standard normal. For *θ*_{rend} = *θ*_{view} = 0°, the model returns a probability of 0.5 for selecting either cube, which is correct, since at this point both cubes are rendered for the same viewing angle (they would not be identical on the screen, owing to the randomization of size and orientation described earlier).
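Putting the veridicality model, the difference signal, and the noise assumption together, the predicted choice probability can be sketched as (again our own illustration with placeholder parameters):

```python
import math

def normal_cdf(x):
    # Cumulative standard normal; the internal noise has SD 1.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_choose_normal(theta_rend, theta_view, A, B, s, r):
    """Probability of selecting the normal-rendered cube: Phi(dV), where
    dV compares the cube rendered for theta_rend = 0 with the obliquely
    rendered cube (Equations 2 and 3)."""
    def V(tr):
        return (A * math.exp(-tr**2 / (2 * s**2))
                + B * math.exp(-(tr - theta_view)**2 / (2 * r**2)))
    return normal_cdf(V(0.0) - V(theta_rend))

# Sanity check: with theta_rend = theta_view = 0, both cubes are rendered
# for the same (actual) viewing angle, so the model is at chance.
assert abs(p_choose_normal(0, 0, A=3, B=2, s=40, r=23) - 0.5) < 1e-12
```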

Figure 4 illustrates the model's predictions for the two extreme cases of perfect compensation (blue, *B* = 0) and no compensation (red, *A* = 0). With perfect compensation, the results are unaffected by viewing angle: The model always selects the normal-rendered cube when the obliquely rendered cube is rendered with a perceptibly different rendering angle. With no compensation, the model selects the obliquely rendered cube when this is closer to geometrically correct.

We assume that the model parameters *A*, *B*, *r*, and *s* do not change with the viewing angle *θ*_{view}. However, we allowed them to vary across the different viewing factors, i.e., frame occlusion, binocularity, and object motion, to account for any effect these may have on perceptual compensation. We fitted the model by maximum likelihood, assuming simple binomial statistics, as follows. Suppose that on the *j*th set of stimulus parameters, our subjects chose the normal-rendered object on *M*_{j} out of *N*_{j} trials. Then the log likelihood of the data set, apart from a constant which has no effect on the fitting, is

log *L* = Σ_{j} [*M*_{j} log *P*_{j} + (*N*_{j} − *M*_{j}) log(1 − *P*_{j})],    (4)

where *P*_{j} is the model probability for the *j*th data point, which in turn depends on the stimulus parameters *θ*_{view} and *θ*_{rend} and the four model parameters, as described by Equations 2 and 3. We adjusted the model parameters to maximize this likelihood. The mathematical properties of the model meant that many different sets of model parameters gave virtually the same value for Δ*V* and were thus indistinguishable. To avoid this degeneracy, we set the parameter *A* to 3 and allowed *B* to vary. We thus fitted sets of three model parameters (*B*, *r*, *s*) to sets of 24 data points (8 values of *θ*_{rend} × 3 values of *θ*_{view}).
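The fitting procedure can be sketched as follows. This is our own illustration in Python/SciPy with synthetic data; the "true" parameters (B = 2, s = 40°, r = 23°) and the trial counts are invented for the demonstration, not the authors' values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

A = 3.0  # fixed, to remove the degeneracy described above

def p_model(theta_rend, theta_view, B, s, r):
    # Equations 1-3: probability of choosing the normal-rendered cube.
    V0 = A + B * np.exp(-theta_view**2 / (2 * r**2))
    Vo = (A * np.exp(-theta_rend**2 / (2 * s**2))
          + B * np.exp(-(theta_rend - theta_view)**2 / (2 * r**2)))
    return norm.cdf(V0 - Vo)

def neg_log_likelihood(params, theta_rend, theta_view, M, N):
    B, s, r = params
    p = np.clip(p_model(theta_rend, theta_view, B, s, r), 1e-9, 1 - 1e-9)
    # Binomial log likelihood, dropping the constant binomial coefficient.
    return -np.sum(M * np.log(p) + (N - M) * np.log(1 - p))

# 24 conditions: 8 rendering angles x 3 viewing angles.
rend = np.tile([-45., -35., -20., -10., 10., 20., 35., 45.], 3)
view = np.repeat([0., 20., -45.], 8)
rng = np.random.default_rng(0)
N = np.full(24, 50)
M = rng.binomial(N, p_model(rend, view, B=2.0, s=40.0, r=23.0))

fit = minimize(neg_log_likelihood, x0=[1.0, 30.0, 30.0],
               args=(rend, view, M, N), method="Nelder-Mead")
B_hat, s_hat, r_hat = fit.x
```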

Results are plotted as a function of *θ*_{rend}, the viewing angle for which the obliquely rendered cube was drawn (Figure 2A). For *θ*_{rend} = 0, both cubes would be rendered for perpendicular viewing, so performance would necessarily be at chance. Figures 5 and 6 show results for the frame-visible and frame-occluded conditions, respectively. The three panels in each row show results for the three different viewing angles *θ*_{view}. The different colors and symbols show different binocularity conditions: Red squares = binocular viewing in S3D; blue triangles = binocular viewing in 2-D (same image on screen for both left and right eyes); green disks = monocular viewing (pooled left and right monocular results). The upper panels (A through C) show data for rotating stimuli, and the lower (D through F) for static.

One rendering angle in each panel corresponds to *θ*_{rend} = *θ*_{view}. In this case, for the S3D condition, the obliquely rendered cube should project the same image onto each retina as a real cube (the geometrically correct stimulus). The horizontal line at 0.5 marks chance (i.e., both cubes looked equally cube-like to the participant, who thus selected one at random). If objects look veridical when rendered for normal viewing, even when viewed obliquely, data points should lie above this line. If objects look veridical when they are geometrically correct on the retina, where data points should lie depends on the rendering and viewing angles. The white regions in each panel show where the normal-rendered cube is closer than the obliquely rendered cube to being geometrically correct for the particular viewing angle. Here the normal-rendered cube should look more veridical, so subjects should select it whenever they can detect a difference between the two render angles (probability ≥0.5). The fact that data points do lie in the white regions, rather than in the gray regions below them, confirms this, but it does not enable us to distinguish between a preference for normal rendering and a preference for geometrical correctness.

Overall, however, our results indicate that viewers compensate for oblique viewing: They tend to accept objects which *would* create the same image on the retina as a real object *if* the observer were viewing the screen perpendicularly. In the next two sections, we discuss in more detail several aspects of our data which confirm this conclusion.

We first consider the effect of the rendering angle *θ*_{rend} when *θ*_{view} = 0°, i.e., when the screen was frontoparallel in the usual way. If *θ*_{rend} = 0°, both cubes would have the same projection, so performance would be at chance. As the obliquely rendered cube is drawn at ever more extreme angles, it appears progressively more distorted, and subjects become more likely to choose the normal-rendered cube. The rendering angle *θ*_{rend} is significant when considering only this subset of the data (*χ*^{2} = 42,080.1, *p* < 0.0005). In agreement with previous studies (Cutting, 1987), subjects were fairly insensitive to incorrect rendering. At |*θ*_{rend}| = 10°, results do not differ significantly from chance for any binocularity condition (95% confidence intervals in Figure 5 overlap chance). Even when *θ*_{rend} was as large as 20°, the results are not significantly different from chance for a static cube viewed without S3D. For a rotating cube, or a static cube viewed in S3D, subjects were significantly more likely to choose the normal-rendered cube, but did so only about 75% of the time. Even when the obliquely rendered cube was drawn for a viewing angle as extreme as 45°, subjects still chose it as being “more cube-like” on nearly 10% of trials when viewing a static cube in 2-D. This is surprising, given that a rendering angle of *θ*_{rend} = 45° produces a very different image on the screen from one of 0° (Figure 3A).

When the screen was viewed obliquely, *θ*_{view} ≠ 0°, then for a given *θ*_{rend}, participants were less likely to select the normal-rendered cube than when the screen was frontoparallel to them. In the yellow-shaded regions, where a preference for normal rendering conflicts with a preference for geometrical correctness, data points lie in the bright region below chance rather than in the shaded region; i.e., participants were more likely to select the object which was closer to geometrically correct. This indicates that they were not able to compensate completely for the oblique viewing angle.

Consider the blocks where *θ*_{view} = −45°. At *θ*_{rend} = −45°, the obliquely rendered cube produced the geometrically correct image of a cube on the retina, whereas the normal-rendered cube was distorted. Figure 6B shows that subjects were quite capable of detecting a 45° error in rendering angle when the display was frontoparallel: They rejected the erroneous rendering over 80% of the time. However, when viewing obliquely at *θ*_{view} = −45° (Figure 6A), subjects did not show a comparably strong preference for the geometrically correct cube: They chose it only 25% of the time for the S3D stimulus at *θ*_{rend} = −45°, while for the 2-D stimuli, they picked both cubes equally often. This cannot be explained simply by a lack of sensitivity to distortion (Cutting, 1987; Gombrich, 1972), but must reflect a mechanism favoring normal rendering.

A further sign of compensation is the asymmetry about *θ*_{rend} = *θ*_{view} in Figure 5C. Geometrically, the obliquely rendered cube should appear equally distorted for viewing-angle discrepancies of equal magnitude, |*θ*_{view} − *θ*_{rend}|. Thus, it should appear more distorted for *θ*_{rend} = −10° (a discrepancy of 30° from the true viewing angle, *θ*_{view} = 20°) than for *θ*_{rend} = 35° (a discrepancy of only 15°). Yet Figure 5C shows that in fact, for 2-D stimuli, subjects could not perceive the distortion at all for *θ*_{rend} = −10° (they picked the obliquely rendered cube as often as the normal-rendered cube), whereas it was fairly obvious to them at *θ*_{rend} = 35° (they picked the normal-rendered cube on 75% of trials). This asymmetry, along with the lack of a clear preference for the geometrically correct rendering, is another indication of a compensation mechanism which corrects for oblique viewing and makes objects rendered for normal, perpendicular viewing tend to appear correct even if the retinal image is in fact distorted. However, this compensation works only up to a point: If it were perfect, Figures 5A and C would be identical to Figure 5B (compare Figure 4).

We analyzed the raw data with frame occlusion, binocularity, object motion, rendering angle (*θ*_{rend}), and viewing angle (*θ*_{view}) as variables. The five-way interaction was significant (*p* < 0.0005; Table 1), but this could simply reflect one specific combination of factors rather than the significance of the factors themselves. We therefore evaluate the main factors and the possible interactions between them in Table 1, at the end of the article, and discuss the nature and size of these differences in the following sections. Throughout, we report chi-square values with the degrees of freedom specified.

Our model posits two influences on perceived veridicality: whether the object is geometrically correct for the current viewing position (*θ*_{rend} = *θ*_{view}) and whether it would be correct if viewed perpendicularly (*θ*_{rend} = 0°). Much of our data confounds these two effects, because often both factors imply that the participant should select the normal-rendered cube. This situation corresponds to the white regions in Figures 5 and 6. To assess how the different experimental conditions (occlusion, binocularity, rotation) affected the competition between the two model components, we repeated the statistical analysis using only data where the two components pulled in opposite directions, i.e., the yellow regions in Figures 5 and 6. Here there is no overlap in the values of *θ*_{view} and *θ*_{rend}, so the statistical significance of *θ*_{view} cannot be determined. We therefore consider only the main effects and the interactions between frame occlusion, binocularity, rotation, and *θ*_{rend}. Table 2 shows the main effects and interaction terms for these four factors.

Interactions involving *θ*_{rend} return significant results, whereas interactions not including *θ*_{rend} do not. This makes sense, because the rendering angle *θ*_{rend} is clearly key to whether the object appears distorted. All analysis up to this point is independent of our model, and it implies that frame occlusion, binocularity, and object motion all affect the balance between the competing preferences for a geometrically correct and a normal rendering angle.

The model enables us to attribute this balance to a preference *either* for the geometrically correct viewing angle *or* for normal, perpendicular viewing. An advantage of the model is that it also allows us to make quantitative comparisons between the two mechanisms, as follows.

In our model fits, the parameter *A*, representing the weight given to normal rendering, is generally larger than *B*, the weight given to geometrically correct images. To quantify this, we define the compensation index (Table 3) as the ratio *C* = *A*/(*A* + *B*). A value of *C* = 0 would indicate no compensation: Perception would reflect only the geometrical correctness of the image on the retina, without regard for whether the on-screen image would appear correct when viewed normally. A value of *C* = 1 would indicate perfect compensation: Viewing angle would have no effect on perceived veridicality, and there would be no preference for geometrical correctness. Another interpretation of the compensation index becomes apparent when we consider how the perceived veridicality of an object rendered for frontoparallel viewing declines monotonically with viewing angle, relative to its veridicality at frontoparallel viewing. From Equation 1, this relative veridicality is

[*A* + *B* exp(−*θ*_{view}^{2}/2*r*^{2})]/(*A* + *B*) = *C* + (1 − *C*) exp(−*θ*_{view}^{2}/2*r*^{2}),    (5)

which falls monotonically towards *C* as the viewing angle becomes more extreme. Thus, in our model, the compensation index *C* describes how good a normally rendered picture looks when viewed at the most extreme viewing angles.

Figure 7 shows the fitted compensation index *C* for the different viewing conditions in our experiment. All 12 data points lie well above 0.5, indicating that the preference for normal rendering dominates. This may seem surprising, given that in the yellow regions of Figures 5 and 6, where the two preferences conflict, data and model fits both lie below 0.5; i.e., the geometrically correct cube is chosen preferentially. To see why this occurs, it is helpful to consider how the model compares cubes rendered for *θ*_{rend} = 0° (normal) and *θ*_{rend} = 30° when the viewing angle is 45°. To the normal-rendering mechanism (the *A* term in Equation 1), the normal-rendered cube is perfect and the other cube is less veridical, because it is 30° away from the peak of the Gaussian. However, because this Gaussian is broad, the difference is not extreme, so the normal-rendering mechanism has only a weak preference for the normal-rendered cube. Conversely, to the geometric-correctness mechanism (the *B* term in Equation 1), the obliquely rendered cube looks acceptable (the 15° error in render angle is less than one standard deviation), but the normal-rendered cube looks very poor, with a 45° error of two standard deviations. This mechanism therefore has a strong preference for the obliquely rendered cube. When the preferences of both mechanisms are summed, the strong preference for the obliquely rendered cube wins out over the weak preference for the normal-rendered cube.
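This competition can be checked numerically. The sketch below is our own: we assume a broad normal-rendering Gaussian (s = 40°, an arbitrary choice), and use r = 23° with A = 3 and B set so that C ≈ 0.62, the representative values reported in the text:

```python
import math

A, B, s, r = 3.0, 1.84, 40.0, 23.0  # B = A*(1 - C)/C for C ~= 0.62
view = 45.0  # screen viewed at 45 degrees

def V(theta_rend):
    # Equation 1 at this viewing angle.
    return (A * math.exp(-theta_rend**2 / (2 * s**2))
            + B * math.exp(-(theta_rend - view)**2 / (2 * r**2)))

# How each mechanism weighs theta_rend = 0 (normal) vs 30 (oblique):
a_pref = A * (1 - math.exp(-30.0**2 / (2 * s**2)))   # weak, favors normal
b_pref = B * (math.exp(-15.0**2 / (2 * r**2))
              - math.exp(-45.0**2 / (2 * r**2)))     # strong, favors oblique
dV = V(0.0) - V(30.0)  # equals a_pref - b_pref

# The geometric-correctness mechanism wins: dV < 0, so the obliquely
# rendered cube is preferred overall.
assert a_pref < b_pref and dV < 0
```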

The effect of viewing condition on the compensation index was significant (*p* < 0.0005). Most of this appears to come from the significant effect of binocularity on the compensation index. In particular, there was a highly significant difference between S3D and both the B2D and monocular conditions (both *p*s < 0.0005), and a smaller but still significant difference between the B2D and monocular conditions (*p* = 0.024). Frame occlusion (*p* = 0.022) and rotation (*p* < 0.0005) had significant effects in the monocular condition but not in the other binocularity conditions.

The compensation index fell from *C* = 0.66 for binocular 2-D viewing to *C* = 0.575 for S3D viewing (averaged over the other viewing conditions): a small but statistically significant difference. This effect can be seen in the raw data when we compare the S3D results in Figure 5 to the 2-D results (red squares vs. blue triangles). It is particularly clear in Figure 5D, where the viewing angle is extreme (*θ*_{view} = −45°). When the obliquely rendered cube is close to the correct retinal image (*θ*_{rend} close to *θ*_{view}), subjects perceive it as more cube-like than the normal-rendered cube when it is viewed in S3D, selecting it more than 75% of the time. However, when viewed in 2-D, it appears nearly as distorted as the normal-rendered cube and is selected only slightly more than half the time. The effect of S3D is also apparent in Figure 5E, where the screen is viewed perpendicularly. Viewers are more sensitive to errors in rendering angle with S3D than with 2-D or monocular content. In 2-D, a rendering-angle error as large as 20° cannot be distinguished from the correct rendering angle of 0°. In S3D, performance at ±20° is around 75%, suggesting that the error is detected on about half of trials.
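The step from 75% correct to detection on about half of trials follows from the usual guessing correction, assuming the observer picks the normal-rendered cube whenever the error is detected and guesses at chance otherwise (our own arithmetic check):

```python
def performance(p_detect):
    """Proportion correct if a detected rendering error always leads to
    choosing the normal-rendered cube, and an undetected one to a
    50/50 guess."""
    return p_detect + (1 - p_detect) * 0.5

# 50% detection yields 75% correct; inverting 75% gives 50% detection.
assert performance(0.5) == 0.75
assert 2 * (0.75 - 0.5) == 0.5
```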

The compensation index fell only from *C* = 0.63 when the frame was visible to *C* = 0.62 when it was occluded (averaged over the other viewing conditions), a difference which is not statistically significant. However, occluding the frame did produce a substantial, and significant, drop in compensation for the monocular static condition (Figure 7). This is in qualitative agreement with results from Vishwanath et al. (2005), who found some compensation with monocular viewing when the picture frame was visible, but none for monocular viewing through an aperture. We also saw a significant effect of occlusion when restricting our analysis to data where the normal-rendering and geometrical-correctness preferences make opposite predictions (*p* < 0.0005; Table 2).

This effect is clearest at *θ*_{view} = *θ*_{rend} = −45°. Pooling static and rotating stimuli in Figures 5 and 6, viewers were closer to chance when they could see the screen edges (choosing the obliquely rendered cube on 120 out of 352 trials) and preferentially chose the obliquely rendered cube when the edges were occluded (90 out of 352 trials). A similar effect persists at *θ*_{rend} = −35°. Elsewhere, the lack of an effect seems to be because our participants were relatively insensitive to the distortions caused by rendering angle, and thus did not notice when these distortions were corrected.

Object motion reached significance in two of these comparisons (*p* = 0.011 and *p* = 0.023, respectively). However, object motion did not have the effect we expected. We had speculated that structure-from-motion cues might contribute to the compensation mechanism, increasing the preference for normal-rendered objects. In fact, object motion decreased the compensation index for both monocular and binocular 2-D cubes (Figure 7); this decrease was significant in the monocular condition. In stereoscopic 3-D, object motion did tend to increase the compensation index, but the increase was not significant. As Table 1 shows, considering the full set of raw data, there is a significant interaction between object motion and binocularity (*p* < 0.0005). Pairwise comparison shows that object motion has a significant effect even when considering the individual binocularity conditions (*p* = 0.011 for all three conditions: S3D, B2D, and monocular). However, this interaction was not significant when we restricted our analysis to the subset of data in Table 2. We conclude that, overall, object motion has little consistent effect on perception.

When *θ*_{view} = 20°, viewers showed only a weak preference for the geometrically correct cube (*θ*_{rend} = *θ*_{view}), suggesting that compensation made the normally rendered cube appear nearly as veridical, whereas when *θ*_{view} = −45°, they showed a stronger preference for the geometrically correct cube (Figures 5 and 6, panels A and D vs. C and F). According to our model, pictures appear more veridical for small oblique viewing angles than for large ones (Equation 5). Our model assumes that compensation works equally well for all viewing angles (blue curves in Figure 4); the decline in veridicality comes from the preference for geometrical correctness against which the compensation mechanism is pitted. Taking *C* = 0.62 and *r* = 23° as representative values, veridicality never drops below 62% of optimal even at the most extreme angles, and remains above 80% out to viewing angles of 28°.
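These figures can be verified from the model's relative veridicality for a normally rendered object, *C* + (1 − *C*) exp(−*θ*_{view}^{2}/2*r*^{2}) (our own check of the quoted numbers):

```python
import math

def relative_veridicality(theta_view, C=0.62, r=23.0):
    """Veridicality of a normally rendered object at viewing angle
    theta_view, relative to frontoparallel viewing."""
    return C + (1 - C) * math.exp(-theta_view**2 / (2 * r**2))

assert relative_veridicality(0.0) == 1.0   # optimal at frontoparallel
assert relative_veridicality(90.0) > 0.62  # never drops below C
assert relative_veridicality(28.0) > 0.80  # still above 80% at 28 degrees
```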

Our preference for geometrical correctness (the *B* term in Equation 1) is effectively an implementation of regression which allows for the possibility that regression is more effective for small departures from geometrical correctness. The parameter *r* describes the range over which regression operates, with perfect regression corresponding to the case *r* → ∞ and *A* = 0.

This might increase *r*, i.e., make subjects more tolerant of departures from geometrical correctness. It might also boost the weight of *B* relative to *A*, thus reducing the compensation index *C*. If so, this could potentially be one reason we found less compensation with cubes than Banks et al. did with hinges.

Even at *θ*_{view} = 20°, our data show an asymmetry in the effect of render angle, indicating that objects looked more cube-like when the render angle erred towards frontoparallel than when it erred by an equal amount in the opposite direction. This must mean that subjects had access to some source of information about screen orientation; possible sources include accommodation, motion parallax from small head movements within the headrest, gradients in luminance across the screen, and so on. However, this limitation does not affect our main conclusion, which concerns the difference between binocular 2-D and S3D viewing. Less surprisingly, in this impoverished viewing condition, subjects had greater uncertainty and were less able to perceive any differences between the two cubes. Our model fits indicate lower sensitivity under monocular viewing in almost all cases.

*Information Display* (1975), 25(1), 12–16.
*SMPTE Motion Imaging Journal*, 121(4), 24–43.
*Perception & Psychophysics*, 61(8), 1555–1563.
*Spatial Vision*, 10(4), 433–436.
*Journal of Vision*, 12(5):8, 1–14, http://www.journalofvision.org/content/12/5/8, doi:10.1167/12.5.8.
*Journal of Experimental Psychology: Human Perception & Performance*, 13(3), 323–334.
*A treatise on painting*. Mineola, NY: Courier Dover.
*Traité de perspective linéaire contenant les tracés pour les tableaux, plans & courbes, les bas reliefs & les décorations théatrales, avec une théorie des effets de perspective* [A treatise on linear perspective, containing constructions for pictures, plans and curves, bas-reliefs and theatrical scenery, with a theory of perspective effects]. Dalmont et Dunod.
*Acta Psychologica (Amsterdam)*, 31(4), 365–374.
*Variation and extrema of human interpupillary distance*. Paper presented at Stereoscopic Displays and Virtual Reality Systems XI, San Jose, CA.
*Art and illusion: A study in the psychology of pictorial representation* (4th ed.). London, UK: Phaidon.
*Journal of Experimental Psychology: Human Perception & Performance*, 2(4), 479–490.
*Perception*, 7(6), 625–633.
*Perceptual compensation mechanisms when viewing stereoscopic 3D from an oblique angle*. Paper presented at the 2013 International Conference on 3D Imaging, Liege, Belgium.
*ACM Transactions on Graphics*, 2008, 23–32, doi:10.1145/1394281.1394285.
*Ophthalmic & Physiological Optics*, 31, 111–122.
*Child Development*, 45(4), 1042–1047.
*British Journal of Psychology*, 65(1), 141–149.
*What's new in Psychtoolbox-3?* Paper presented at the 30th European Conference on Visual Perception, Arezzo, Italy.
*Perception*, 33, 513–530, doi:10.1068/p3454.
*The psychology of perspective and Renaissance art*. Cambridge, UK: Cambridge University Press.
*Journal of the Optical Society of America*, 27(10), 323–339.
*Journal of Imaging Science and Technology*, 53(3), 1–14.
*Nature*, 297(5865), 376–378.
*London and Edinburgh Philosophical Magazine and Journal of Science*, 1(5), 329–337.
*Archives of Ophthalmology*, 20, 604–623.
*Child Development*, 47(4), 1175–1178.
*Spatial Vision*, 10(4), 437–442.
*Perception & Psychophysics*, 14(1), 13–18, doi:10.3758/Bf03198608.
*Optics, painting & photography*. London, UK: Cambridge University Press.
*Journal of Vision*, 9(13):11, 1–37, http://www.journalofvision.org/content/9/13/11, doi:10.1167/9.13.11.
*Nature*, 361(6409), 253–255.
*Perception & Psychophysics*, 28(6), 521–526.
*Journal of Vision*, 7(6):7, 1–11, http://www.journalofvision.org/content/7/6/7, doi:10.1167/7.6.7.
*Journal of Vision*, 11(8):11, 1–29, http://www.journalofvision.org/content/11/8/11, doi:10.1167/11.8.11.
*Nature*, 131, 261–263.
*Proceedings of the Royal Society of London B: Biological Sciences*, 203(1153), 405–426.
*Nature Neuroscience*, 8(10), 1401–1410.
*Philosophical Transactions of the Royal Society of London*, 128, 371–394.
*Image distortions in stereoscopic video systems*. Paper presented at Stereoscopic Displays and Applications IV, San Jose, CA.
*How are crosstalk and ghosting defined in the stereoscopic literature?* Paper presented at Stereoscopic Displays and Applications XXII, San Francisco, CA.
*Stereoscopic Displays and Applications IV (Proceedings of the SPIE)*, 1915, 36–48, doi:10.1117/12.157041.
*IEEE Transactions on Circuits and Systems for Video Technology*, 16(6), 744–752, doi:10.1109/TCSVT.2006.875213.
*Correction of geometric perceptual distortions in pictures*. Paper presented at the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA.