Abstract
What, if any, are the rules governing how vision interfaces with distantly related cognitive systems? We investigated a possible relationship between visual grouping mechanisms and expectations for words like "more" and "most". We walked up to strangers on the street (N=100), gave them an iPad, and asked them to create a picture depicting an English sentence that we would say to them. We created an iPad program that allowed subjects to use their fingers to create any number of yellow and blue dots and to place them anywhere on the screen. The program stored the position and color of every dot placed by the subjects. Half of the subjects were asked to create a scene in which "most of the dots are blue"; the other half were asked to create a scene in which "there are more blue dots than yellow dots". All subjects were native English speakers. Notice that these two sentences agree with one another when only two colors are present (i.e., both are true whenever #blue > #yellow). But the sentences may interface with vision in distinct ways: "most" seems to highlight the relationship of the "blues" to all of the dots, while "more" seems to highlight the separate groups of "yellow" and "blue" dots. Results were consistent with this hypothesis: subjects asked about "more" created images in which the centroids of the yellow and blue groups were more distantly separated and the groups' alpha shapes overlapped less than in images created by subjects asked about "most". Because these word meanings do not specifically highlight spatial relations, these effects appear to emerge from grouping mechanisms in spatial vision interacting with linguistic understandings of the relevant sets for each sentence.
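As a rough illustration of the kind of spatial analysis described above, the sketch below computes centroid separation and shape overlap for one subject's dot placements. It is a minimal sketch, not the authors' code: convex hulls (via shapely) stand in for the alpha shapes reported in the study, and the function name, data format, and normalization choice are illustrative assumptions.

```python
# Sketch: centroid separation and shape overlap between the blue and
# yellow dot groups for a single subject. Convex hulls are used here
# as a simple stand-in for the alpha shapes used in the study.
import numpy as np
from shapely.geometry import MultiPoint

def group_statistics(blue_xy, yellow_xy):
    """blue_xy, yellow_xy: lists or arrays of (x, y) dot positions."""
    blue = np.asarray(blue_xy, dtype=float)
    yellow = np.asarray(yellow_xy, dtype=float)

    # Euclidean distance between the two color groups' centroids.
    centroid_distance = np.linalg.norm(blue.mean(axis=0) - yellow.mean(axis=0))

    # Overlap of the two groups' hulls, normalized by the smaller hull's area.
    blue_hull = MultiPoint([tuple(p) for p in blue]).convex_hull
    yellow_hull = MultiPoint([tuple(p) for p in yellow]).convex_hull
    overlap_area = blue_hull.intersection(yellow_hull).area
    min_area = min(blue_hull.area, yellow_hull.area)
    overlap_fraction = overlap_area / min_area if min_area > 0 else 0.0

    return centroid_distance, overlap_fraction

# Example: a "more"-like layout with two well-separated groups.
if __name__ == "__main__":
    blue = [(50, 100), (80, 120), (60, 140), (90, 110), (70, 90)]
    yellow = [(300, 400), (320, 380), (310, 420), (290, 390)]
    dist, overlap = group_statistics(blue, yellow)
    print(f"centroid distance: {dist:.1f} px, hull overlap: {overlap:.2f}")
```

On this measure, a "more"-type image would tend to show a larger centroid distance and a smaller overlap fraction than a "most"-type image, matching the pattern reported in the abstract.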
Meeting abstract presented at VSS 2012