Abstract
Does our visual system interpret the size of visual stimuli on computer screens the same way it interprets visual stimuli in the real world? In perceptual science, participants are often asked to make inferences based on visual stimuli presented on a two-dimensional computer screen. However, there is insufficient evidence that participants interpret such stimuli the same way they interpret real visual stimuli. Without the depth cues available in the real world, the size of on-screen stimuli could be over- or underestimated. In the present study, participants performed a visual matching task between real objects and images on a screen. Participants were instructed to take items out of a basket, place them on a table in front of them, and then manipulate a computer image of each object to match the object’s true size. Participants were allowed to adjust the image as many times as they pleased before continuing to the next trial. Each of the 10 objects was manipulated 5 times, for a total of 50 trials. Of the 10 objects used in the study, participants made the computerized image significantly smaller than the actual size for 5 of the objects (p < .05) and significantly larger than the actual size for 2 of the objects (p < .01). Participants matched the size accurately for the remaining 3 objects. Although histograms of the results suggest no clear overall bias toward over- or underestimation, it remains apparent that participants misjudge the size of objects in pictures compared with real objects. The results of the present study suggest that more research should focus on size estimation of visual images on computer screens before such images are used as substitutes for real-world stimuli.