Abstract
Stereoscopy involves presenting two differentially offset images separately to the left and right eyes. This 2D image information is combined binocularly in the brain to generate 3D depth perception. Last year, static image pairs were used to introduce and demonstrate the ability to recover and perceive dynamic 3D structure from 2D movies. This year, 3D movies generated from 2D movies demonstrate the same phenomenon; these were created by introducing a delay between identical tracking shots presented to the left and right eyes. As before, the stereoscopic displays were generated from a variety of lateral and arc tracking movies, including: 1) dolly shots, 2) lateral shots taken while driving, flying, boating, traveling by rail, and orbiting planets, 3) 'bullet time'/Matrix time sequence shots, and 4) animations based on 3D models. These were scenes from classic motion pictures or archival footage of significant historical and strategic events. This year's demonstrations include new 3D versions of the first known tracking shots (Venice, 1896), the Hindenburg's last moments (1937), 'bullet time' animation from Speed Racer/MachGoGogo (1966), The Beatles' rehearsals (1969), and scenes generated while orbiting the Moon and distant planets. The method enables viewers to (1) see historically or strategically important scenes in 3D; (2) infer depth structure and estimate distance when static monocular cues to depth are sparse or non-existent; and (3) break static forms of camouflage. The method holds the potential to quantify real and perceived depth from motion parallax in historical and contemporary popular movie sequences. Moreover, we demonstrate that binocular disparity sequences derived from dolly-arc tracking shots rotating around a subject can generate robust 3D perception, despite the common practice of avoiding such binocular convergence in the stereoscopy field (e.g., Gao et al. 2018, PLoS One).
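The abstract specifies only the core manipulation: a fixed delay inserted between two identical copies of a tracking shot, one routed to each eye. The following minimal sketch illustrates that idea under stated assumptions (Python with OpenCV, a hypothetical function name make_temporal_offset_stereo, and a side-by-side stereo output format; none of these details are given in the abstract). It buffers frames and pairs each incoming frame with the frame delay_frames earlier:

```python
from collections import deque

import cv2


def make_temporal_offset_stereo(src_path, dst_path, delay_frames=4):
    """Pair each frame of a 2D tracking shot with the frame `delay_frames`
    earlier, writing the pair side by side; the camera translation between
    the two frames supplies the binocular disparity."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # Side-by-side output: twice the source width.
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (2 * width, height))
    buffer = deque()  # holds the most recent `delay_frames` frames
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buffer.append(frame)
        if len(buffer) > delay_frames:
            earlier = buffer.popleft()
            # Assuming a rightward-moving camera: earlier frame -> left eye,
            # current frame -> right eye (swap the two for leftward motion).
            out.write(cv2.hconcat([earlier, frame]))
    cap.release()
    out.release()


# Hypothetical usage on a lateral tracking shot:
make_temporal_offset_stereo("tracking_shot.mp4", "stereo_sbs.mp4",
                            delay_frames=4)
```

In this sketch the eye assignment depends on the direction of camera travel, and the delay plays the role of the interocular baseline: a longer delay corresponds to a wider effective camera separation, and hence larger disparities and more pronounced perceived depth.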