Six observers participated in each experiment. All had visual acuity of 20/20 or better on the standard Lighthouse test at 4 m and on a modified Lighthouse test at the experimental viewing distance of 61 cm.
Images were 18 cm square and were viewed on an iMac computer at 72-dpi resolution.
Images were generated using the Inventor library on a Silicon Graphics computer. The position of the object of interest was randomized about a mean location of (x, y, z) = (0, 0, 0), with its (x, y) coordinates varying by ±0.375. In each scene, 21 background objects had (x, y) coordinates within ±1.5 and z coordinates ranging from −1.8 to −9. The projection was perspective, with the virtual camera at (0, 0, 5). All units are arbitrary Inventor units.
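The scene layout above can be sketched as follows. This is an illustrative reconstruction, not the original Inventor code: the function names (`layout_scene`, `project`) are hypothetical, and the projection uses a standard pinhole model with the image plane at z = 0, which is an assumption about how the stated camera position was used.

```python
import random

def layout_scene(rng=random.Random(0)):
    """Sketch of the scene layout described above (arbitrary Inventor units)."""
    # Object of interest: (x, y) jittered by +/-0.375 around the origin, z = 0.
    target = (rng.uniform(-0.375, 0.375), rng.uniform(-0.375, 0.375), 0.0)
    # 21 background objects: (x, y) within +/-1.5, z from -1.8 to -9.
    background = [(rng.uniform(-1.5, 1.5), rng.uniform(-1.5, 1.5),
                   rng.uniform(-9.0, -1.8)) for _ in range(21)]
    return target, background

def project(point, camera_z=5.0):
    """Perspective projection onto the z = 0 plane for a camera at (0, 0, camera_z)."""
    x, y, z = point
    scale = camera_z / (camera_z - z)  # points farther from the camera shrink
    return (x * scale, y * scale)
```

Under this model, a background object at z = −9 is drawn at 5/14 of its frontal size, so background clutter recedes visibly behind the object of interest.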
Lighting was directional with a fixed direction vector (1, −1, −1). Objects were rendered with Phong shading using Inventor parameters diffuse color = (0.8, 0.8, 0.8), specular color = (1, 1, 1), ambient color = (0.5, 0.5, 0.5), and shininess = 1.
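For concreteness, a minimal sketch of the standard Phong model with the parameters listed above. This is not Inventor's internal shading pipeline: the `phong` function is hypothetical, the result is clamped per channel to [0, 1], and the shininess value of 1 is used directly as the specular exponent (Inventor's mapping of its [0, 1] shininess parameter to an exponent is an assumption not stated in the text).

```python
import math

# Material and light parameters as specified above.
DIFFUSE, SPECULAR, AMBIENT = (0.8,) * 3, (1.0,) * 3, (0.5,) * 3
SHININESS = 1.0
LIGHT_DIR = (1.0, -1.0, -1.0)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong(normal, view):
    """Per-channel Phong intensity for one directional light (illustrative)."""
    l = tuple(-c for c in normalize(LIGHT_DIR))   # direction toward the light
    n = normalize(normal)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Reflect the light direction about the surface normal.
    r = tuple(2 * ndotl * nc - lc for nc, lc in zip(n, l))
    rdotv = max(0.0, sum(a * b for a, b in zip(r, normalize(view))))
    spec = rdotv ** SHININESS if ndotl > 0 else 0.0
    return tuple(min(1.0, AMBIENT[i] + DIFFUSE[i] * ndotl + SPECULAR[i] * spec)
                 for i in range(3))
```

Note that with ambient color (0.5, 0.5, 0.5), even surfaces facing away from the light render at half intensity, so object contours are never lost to shadow.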
Texture wrapping used Inventor's default method: first, the bounding box for each object was computed; next, the texture image was projected onto each side of the box and then onto the object's polygons.
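A simplified sketch of this bounding-box mapping: each vertex's position is normalized within the box along its two largest extents to yield (s, t) texture coordinates. This approximates, rather than reproduces, Inventor's internal algorithm, and the function names are hypothetical.

```python
def bbox(points):
    """Axis-aligned bounding box of a list of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def default_tex_coords(points):
    """Illustrative bounding-box texture mapping: s runs along the box's
    largest extent, t along the second largest, each normalized to [0, 1]."""
    lo, hi = bbox(points)
    extents = [hi[i] - lo[i] for i in range(3)]
    s_axis, t_axis = sorted(range(3), key=lambda i: -extents[i])[:2]

    def coord(p, axis):
        e = extents[axis]
        return (p[axis] - lo[axis]) / e if e else 0.0

    return [(coord(p, s_axis), coord(p, t_axis)) for p in points]
```

One consequence of this scheme, relevant to the stimuli, is that the texture stretches with the object's bounding box rather than following its surface geometry.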
During each work week (5 days), observers were trained on day 1, tested for recognition, trained again on days 2–4, and tested for recognition on day 5. This schedule was repeated for each segmentation cue type in Experiment 1.
In Experiment 2, observers performed (a) a novel-object tracing task on day 1 of week 1; (b) a novel-object and a familiar-object tracing task on day 5 of week 1; and (c) a familiar-object tracing task on day 5 of week 2. During each week, the observers used a different set of three training objects. Object sets were permuted evenly among observers to control for object difficulty and ordering effects.