Vision Sciences Society Annual Meeting Abstract  |  September 2011
Journal of Vision, Volume 11, Issue 11
Searching Simulated Lungs in 3D with Stereoscopic Volume Rendering
Author Affiliations
  • Jeffrey Doon
    Department of Cognitive and Neural Systems, Boston University
    Department of Radiology, Brigham and Women's Hospital
  • David Getty
    Department of Radiology, Brigham and Women's Hospital
  • Ennio Mingolla
    Department of Cognitive and Neural Systems, Boston University
  • Jeremy Wolfe
    Department of Radiology, Brigham and Women's Hospital
    Harvard Medical School
Journal of Vision September 2011, Vol.11, 1335. doi:
Jeffrey Doon, David Getty, Ennio Mingolla, Jeremy Wolfe; Searching Simulated Lungs in 3D with Stereoscopic Volume Rendering. Journal of Vision 2011;11(11):1335.

© ARVO (1962-2015); The Authors (2016-present)

A modern Computed Tomography (CT) scan can yield 700 slices at 512 × 512 pixel resolution. Radiologists search through vast amounts of data looking for subtle visual targets by scrolling back and forth through such stacks of images. Can we make search less difficult and time-consuming? We compared stereoscopic volume rendering to traditional slice-by-slice viewing. Our new software uses GPU processors that enable real-time rotation of volume renderings. We created artificial stimuli designed to emulate the challenges of real medical search tasks. Targets and distractors were placed randomly in a 200 × 200 × 600 volume of 1/f^3 noise. Images were viewed with a Planar polarized mirror system in which 200 × 200 pixels subtended 8 deg of visual angle. Distractors were randomly oriented ellipsoids with two axes of length 15 and one of length 20 voxels. Observers searched for an egg-shaped target, created by fusing half of a randomly oriented ellipsoid with a sphere of diameter 15 voxels. Eggs and ellipsoids were twice the maximum intensity of the noise and blended into the background at their edges. We tested observers' ability to find the egg among ellipsoids in two conditions. The slice-by-slice condition allowed observers to scroll back and forth through 600 images, one at a time. In the stereo condition, 50-slice rendered stereo “slabs” could be sampled, rotated, and viewed under user control throughout the data volume. Stereo slabs were rendered by maximum intensity perspective projection, whereby, when multiple voxels project to a single view-plane pixel, only the highest intensity value is drawn. On every trial, observers searched for one target among seven distractors and responded with a mouse click on the suspected target. Stereo viewing was more accurate (98 vs. 48 percent correct) and faster (42 vs. 84 sec mean per trial) than “stack mode”. These would be dramatic improvements if they generalize to clinical settings.
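The two ingredients of the display pipeline described above — a 1/f^3 noise background with embedded bright shapes, and maximum intensity projection (MIP) of a slab — can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' software: the volume is small (64³ rather than 200 × 200 × 600), the ellipsoid is axis-aligned rather than randomly oriented and unblended at its edges, and the projection is orthographic rather than the perspective MIP used for the stereo slabs.

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise_volume(shape, exponent=3.0):
    """White noise shaped in the Fourier domain so its power spectrum
    falls off as 1/f^exponent (isotropic radial frequency)."""
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    radius = np.sqrt(sum(f**2 for f in freqs))
    radius[0, 0, 0] = 1.0  # avoid division by zero at the DC component
    # Amplitude ~ 1/f^(exponent/2)  =>  power ~ 1/f^exponent
    spectrum = np.fft.fftn(rng.standard_normal(shape)) / radius**(exponent / 2)
    vol = np.real(np.fft.ifftn(spectrum))
    return vol / np.abs(vol).max()  # normalize noise to [-1, 1]

def add_ellipsoid(vol, center, radii, intensity):
    """Stamp a solid axis-aligned ellipsoid into the volume."""
    grids = np.meshgrid(*[np.arange(n) for n in vol.shape], indexing="ij")
    d2 = sum(((g - c) / r)**2 for g, c, r in zip(grids, center, radii))
    vol[d2 <= 1.0] = intensity

vol = pink_noise_volume((64, 64, 64))
# Twice the maximum noise intensity, as in the abstract's stimuli.
add_ellipsoid(vol, center=(32, 32, 32), radii=(7, 7, 10), intensity=2.0)

# MIP slab: each output pixel is the brightest voxel along the depth
# axis within a 25-slice window.
slab = vol[:, :, 20:45]
mip = slab.max(axis=2)
print(mip.shape)  # (64, 64)
```

The Fourier-filtering step is a standard way to synthesize 1/f-family noise; the `max(axis=2)` reduction is exactly the MIP rule in the abstract, applied along a fixed axis instead of along perspective view rays.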

Supported in part by Toshiba Medical Systems Corporation. Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378 and OMA-0835976). 
