Abstract
What can we perceive in a single glance of the visual world? To answer this question, we used an inattentional blindness paradigm to measure how much, and in what ways, we could alter the periphery of an image without observers noticing. For 10 trials, participants viewed a stream of images (288 ms/item, 288 ms SOA) and reported whether the last image contained a face in the middle. On the 11th trial, a modified image was unexpectedly presented at the end of the sequence. Our images were modified in several ways. In Experiment 1, we used a texture synthesis algorithm to generate images matched on several first- and second-order statistics within a series of receptive field-like pooling windows (Rosenholtz, 2016). By increasing the size of the pooling windows, we created a series of images that looked progressively more “scrambled.” How much could we scramble the images before observers noticed? With the least scrambling (Freeman & Simoncelli, 2011), 100% of participants failed to notice. In the most extreme case, when a single large pooling window scrambled the entire image while preserving only the central/foveal region (Portilla & Simoncelli, 2001), 48% of participants failed to notice. In Experiment 2, we placed the unscrambled peripheral portion of one image (e.g., a skyline) around the central/foveal portion of another (e.g., a dog). In this case, 75% of participants failed to notice. The only modification that virtually every observer noticed (only 5% failed to notice) was a completely blank periphery. However, even when participants did not notice the modifications, they still successfully identified the last image in the stream (i.e., “I saw this dog, not that dog”). Together, these results highlight how surprisingly impoverished a snapshot of perceptual experience is when observers attend to one location in space.
Acknowledgement: National Science Foundation