Abstract
Imagine composing an email or chopping carrots. These tasks, and many more, are performed in spaces of similar scale and structure, which we call "workspaces." We define workspaces as environments slightly deeper than arm's reach, containing task-relevant objects on a horizontal surface. Here, we examined whether views of workspaces have distinctive perceptual and semantic signatures compared to views of singleton objects and canonical scenes. First, using visual search as an index of perceptual similarity, we asked whether workspaces have perceptual features distinct from those of scenes and objects. If workspaces differ perceptually, they should be found faster among scenes or objects than among other workspaces. Indeed, response times showed evidence for a three-way dissociation among objects, workspaces, and scenes (Exp 1: stimuli matched for luminance; Exp 2: matched for luminance and spatial frequency), providing initial evidence that workspace views have perceptual features distinct from those of full scenes or single objects. Second, using semantic priming, we examined whether workspaces have stronger associations with action concepts than full scenes do. Participants indicated whether a target word was an action or emotion verb while ignoring a task-irrelevant image of either a scene or a workspace. Critically, the action word was either semantically congruent with the image (e.g., "chopping" for a kitchen workspace or scene) or incongruent (e.g., "ironing"). Congruent action words were categorized faster than incongruent ones, but only in the presence of workspace views, not scene views (interaction term: p < 0.05). These data show that workspace views automatically trigger action-related processing in a way that scene views do not. Together, these results suggest that workspace views have distinctive perceptual features and a privileged relationship to action-related concepts, indicating that workspaces constitute a division of visual space, distinct from both objects and scenes, in how it interfaces with our visual cognitive systems.
Meeting abstract presented at VSS 2017