The US Supreme Court recently ruled that portions of the 1996 Child Pornography Prevention Act are unconstitutional, holding that computer-generated (CG) images depicting a fictitious minor are constitutionally protected. Judges, lawyers, and juries are now being asked to determine whether an image is CG, but there are no data on whether they can do so reliably. To test the ability of human observers to discriminate between CG and photographic images, we collected 180 high-quality CG images with human, man-made, or natural content. Because we were interested in tracking the quality of CG imagery over time, we collected images created over the past six years. Each CG image was paired with a photographic image matched as closely as possible in content. The 360 images were presented in random order to ten observers drawn from the introductory psychology subject pool at Rutgers, who were given unlimited time to classify each image. Observers correctly classified 83% of the photographic images and 82% of the CG images (d' = 2.21), inspecting each image for an average of 2.4 seconds. Among the CG images, those depicting humans were classified with the highest accuracy, 93% across all six years (d' = 2.60), although accuracy fell to 63% for images created in 2006. Because the experiment was self-paced, inspection times differed among observers, and the results show a strong speed-accuracy trade-off. The observer with the longest inspection time (3.5 seconds/image) correctly classified 90% of the photographic images and 96% of the CG images (d' = 3.00). This observer correctly classified 95% of CG images depicting humans (d' = 2.92); his only errors occurred with images from 2006, where his accuracy was 78%. Despite great advances in computer graphics technology, the human visual system remains very good at distinguishing computer-generated from photographic images.
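
The sensitivity measure d' reported above is the standard signal-detection statistic d' = z(H) - z(F), where z is the inverse standard-normal CDF, H is the hit rate, and F is the false-alarm rate. The sketch below (assuming the usual equal-variance Gaussian model; the function name d_prime is ours) illustrates the calculation using the pooled accuracy rates from the abstract. The reported d' values were presumably computed per observer and then summarized, so the pooled-rate result need not match them exactly.

    # Sketch: d' from hit and false-alarm rates under the standard
    # equal-variance Gaussian signal detection model.
    from scipy.stats import norm

    def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
        """d' = z(H) - z(F), where z is the inverse standard-normal CDF."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Treating "photographic" as the signal class: hits are photographic
    # images correctly classified (0.83); false alarms are CG images
    # misclassified as photographic (1 - 0.82 = 0.18).
    print(d_prime(0.83, 0.18))  # approx. 1.87 for the pooled rates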