Abstract
The respective roles of unconscious and conscious visual processing remain a major unresolved question. Some argue that unconscious processing is limited to simple features and objects, such as shape and color, whereas others claim that unconscious processing fully parallels conscious processing. Intermediate between these two extremes, the unconscious binding hypothesis proposes that feature information can be extracted and integrated across space and time without awareness. But just what kinds of information can be integrated? To address this question, we introduce a paradigm that combines metacontrast masking with a go–nogo task. Specifically, we create two types of target objects that can be either visible or invisible: when visible, one is associated with go (i.e., press a button when it appears; referred to as go-associated) and the other with nogo (i.e., withhold the response when it appears; referred to as nogo-associated); when invisible, both are go trials. The crucial question concerns the invisible targets: do participants respond more slowly when the target is nogo-associated than when it is go-associated, even though the two cannot be consciously differentiated and both are go trials? If so, this would indicate unconscious visual processes that can distinguish go-associated from nogo-associated targets. Using this logic, we report two main findings. First, shape relations (i.e., whether two objects are the same or different) can be processed without awareness, demonstrating unconscious processing of an abstract same–different concept. This finding suggests that integrating information from a single feature (i.e., shape) across objects is not a signature of conscious awareness. Second, in contrast, feature binding of shape and color within a single object requires a high degree of conscious awareness, even when shape information is unconsciously represented and color information is consciously accessible. This finding points to feature binding as a signature of conscious awareness.
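For readers sketching out a comparable analysis, the inference rests on a single contrast: on invisible trials, where every trial requires a go response, reaction times for nogo-associated targets are compared against those for go-associated targets. The minimal sketch below illustrates only that comparison; the trial structure, field names, and numbers are hypothetical and are not taken from the study.

```python
# Illustrative sketch of the key contrast in the invisible condition.
# All trial data and field names here are hypothetical.
from statistics import mean

# Each trial: (association, reaction_time_ms). Only go responses occur,
# since every invisible trial is a go trial by design.
invisible_trials = [
    ("go_associated", 412), ("nogo_associated", 438),
    ("go_associated", 398), ("nogo_associated", 431),
    ("go_associated", 405), ("nogo_associated", 425),
]

rt_go = [rt for assoc, rt in invisible_trials if assoc == "go_associated"]
rt_nogo = [rt for assoc, rt in invisible_trials if assoc == "nogo_associated"]

# A slower mean RT for nogo-associated targets, despite both target types being
# go trials and consciously indistinguishable, would indicate unconscious
# processing that differentiates the two.
print(f"mean RT (go-associated):   {mean(rt_go):.1f} ms")
print(f"mean RT (nogo-associated): {mean(rt_nogo):.1f} ms")
print(f"slowing effect:            {mean(rt_nogo) - mean(rt_go):.1f} ms")
```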
Meeting abstract presented at VSS 2014