We modeled two scenes in Autodesk 3ds Max with the goal of creating two realistic and natural settings: an indoor home-office environment and an outdoor forest scene, both rendered in the game engine Unreal Engine (v4.27.1).
Figure 1 shows the two scenes from one angle. The indoor scene contains a desk, an office chair, a lounge chair, a door, a bookshelf, paintings, and curtains. Several small objects are placed around the scene, including plants and books, with a variety of material properties and reflectances, including specularities. We used a single point light source placed just above the observers' VR actor during gameplay (at the center of the room, above the office chair), such that even if they looked up, they would not see the light source.
The outdoor scene spans a larger area in VR space, containing trees, a large cliff, a lake, rocks, moss, grasses, and flowers. Apart from minor ripples across the lake water, all objects were static. The light sources were a directional light pointed toward the large cliff, along with a skylight set to the same color as the directional light. We additionally changed the color of the sky by multiplying its default material colors (bluish with white clouds) by the illuminant color, such that the sky color (that is, its reflected light) was influenced by the color of the illuminant. The water in the lake was translucent, its color was shifted toward that of the illuminant, and it reflected the sky as well as other surroundings (e.g., trees) at its surface.
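The per-channel multiplication used to tint the sky can be sketched as follows (a minimal illustration; the RGB values shown are hypothetical, not the values used in the experiment):

```python
def apply_illuminant(base_rgb, illuminant_rgb):
    """Tint a material's base color by multiplying it,
    channel by channel, with the illuminant color."""
    return tuple(b * i for b, i in zip(base_rgb, illuminant_rgb))

# Hypothetical colors: a bluish sky under a warm (reddish) illuminant.
sky_default = (0.55, 0.70, 0.95)   # assumed bluish base color
illuminant = (1.00, 0.80, 0.60)    # assumed warm illuminant
tinted_sky = apply_illuminant(sky_default, illuminant)
```

Under this scheme a neutral (white) illuminant leaves the sky's default color unchanged, while a chromatic illuminant shifts the sky's reflected light toward the illuminant color, as described above.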
To render the scenes, we used the two-step photon mapping algorithm (Jensen, 2001) available in Unreal Engine (Lightmass). The first step, computed offline, performs a lighting simulation by tracing packets of photons emitted by a light source, with each photon carrying a fraction of the power of the light. As photons scatter within the scene, they hit nonspecular surfaces and are either scattered further or partially absorbed; their energy, locations, and directions are stored in photon maps (surface lightmaps), enabling subsequent computation of reflected radiance at any point in the scene. Because dense lighting samples are also collected throughout the volume of the scene (Volumetric Lightmaps), lighting data can be interpolated and used to light dynamic objects. Mirror reflections are computed in the second step, which makes use of Monte Carlo ray tracing to render the final images. Since most of the illumination information is computed offline during the first step, the computational cost of the rendering step is significantly lower than that of pure Monte Carlo ray-tracing approaches, albeit at the cost of higher storage requirements. Overall, photon mapping tends to produce low-frequency noise rather than the high-frequency artifacts introduced by variance in Monte Carlo ray tracing. A potential downside of photon mapping is the bias it introduces in rendered images, due to the kernel used to estimate photon density from the stored data (Schregle, 2003). However, since the technique is consistent (Jensen, 2001), we reduced the bias by increasing the number of photons traced offline during the first step.
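The density estimation at the heart of the first step can be illustrated with a minimal k-nearest-neighbor radiance estimate (a sketch of the general technique from Jensen, 2001, not Lightmass's actual implementation; photon positions and powers below are hypothetical):

```python
import math

def radiance_estimate(photons, x, k):
    """Estimate irradiance at point x from stored photons.

    Each photon is ((px, py), power). The k nearest photons are
    gathered and their total power is divided by the area of the
    disc that encloses them. Averaging over this disc (the kernel)
    is the source of the bias: as more photons are traced, the
    gather radius shrinks and the estimate converges (consistency).
    """
    dists = sorted((math.dist(p, x), w) for p, w in photons)
    nearest = dists[:k]
    r = nearest[-1][0]  # radius enclosing the k nearest photons
    total_power = sum(w for _, w in nearest)
    return total_power / (math.pi * r * r)

# Hypothetical photon map: positions in the plane with stored powers.
photon_map = [((1.0, 0.0), 2.0), ((0.0, 1.0), 2.0), ((3.0, 0.0), 5.0)]
estimate = radiance_estimate(photon_map, (0.0, 0.0), k=2)
```

This also shows why increasing the offline photon count reduces bias: with more photons stored per unit area, the same k is reached at a smaller radius r, so the kernel averages over a smaller neighborhood.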