Abstract
The visual environment has a predictable, anisotropic distribution of content, which the human visual system appears to exploit after sufficient experience (Hansen & Essock, JoV, 2004; Schweinhart & Essock, Perception, 2013). While the general pattern of anisotropy has been found to match perceptual biases well (Haun, Kim, & Essock, JoV, 2009), there is substantial variation both between and within the distributions for different environments (e.g., Girshick, Landy, & Simoncelli, Nat. Neuro., 2011), which can be modeled via a mixture distribution (Eaves, Schweinhart, & Shafto, in press). Here we explored methods for automatically summarizing and "teaching" people about the clusters of variation within indoor and outdoor scenes based on their orientation distributions. A sample of images was taken from videos recorded as observers walked around different types of environments (a nature preserve, a house, a city, a university, etc.), and the amplitude at the characteristic orientations (0°, 45°, 90°, 135°) was extracted. We then compared observers' ability to categorize sample images based on pairs of exemplars that (1) best summarized the category means, (2) summarized the sufficient statistics of the distributions, or (3) were chosen to optimally teach (Shafto & Goodman, Proc. Cog. Sci., 2008; Eaves, Schweinhart, & Shafto, in press). When the categories were not well separated, and therefore difficult to distinguish, observers were more accurate when given exemplars chosen by the more complex methods than when the exemplars simply matched the mean of the orientation distribution. This work suggests that the orientation distribution of an image can capture meaningful information about the conceptual category of the scene. Furthermore, the perceptual results have potential implications for fields in which visual training and image categorization are important.
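As a rough illustration of the kind of measurement described above (not the authors' actual pipeline), the amplitude at each characteristic orientation can be estimated from an image's 2-D Fourier spectrum by summing amplitude within an angular band around 0°, 45°, 90°, and 135°. The function name, the 22.5° half-band, and the mean-subtraction step are assumptions for this sketch; note that spectral orientation is orthogonal to the orientation of the bars in the image.

```python
import numpy as np

def orientation_amplitudes(img, centers=(0, 45, 90, 135), half_band=22.5):
    """Sum Fourier amplitude in an angular band around each orientation (degrees).

    Hypothetical sketch: bins the amplitude spectrum by the orientation of
    each frequency component, folded onto the 180-degree orientation circle.
    """
    # Subtract the mean to discard the DC component, then take the 2-D FFT.
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    amp = np.abs(f)
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)),
                         indexing="ij")
    # Orientation of each frequency component, folded into [0, 180).
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    out = {}
    for c in centers:
        # Angular distance to the band center on the orientation circle.
        d = np.abs((theta - c + 90.0) % 180.0 - 90.0)
        out[c] = amp[d <= half_band].sum()
    return out
```

For example, an image of vertical bars (luminance varying along x) concentrates its spectral energy along the horizontal frequency axis, so the 0° band dominates the returned dictionary.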
Meeting abstract presented at VSS 2016