Abstract
Research to date has shown that under some circumstances attentional selection is guided by object representations (Chen & Cave, 2006, 2008; Goldsmith & Yeari, 2003; Martinez et al., 2006; Richard, Lee, & Vecera, 2008; Shomstein & Behrmann, 2006, 2008; Shomstein & Yantis, 2002, 2004). However, a full understanding of why such guidance is possible only under some circumstances has remained elusive, partly due to the lack of a theoretical framework with clear corresponding predictions, and partly due to a somewhat dogmatic approach to testing alternative theories. Given the critical mass of knowledge acquired on the topic, a unifying framework is imperative. Here, we propose such a framework, termed the uncertainty reduction hypothesis, which holds that when uncertainty in the input is high (e.g., the location or identity of the target is unknown), the visual system integrates most of the available information embedded in the environment to guide attentional selection, thus yielding object-based effects. If, on the other hand, uncertainty is low, then resources are most efficiently allocated almost exclusively to the relevant information, thus reducing object-based effects. We will report evidence from a set of behavioral and fMRI experiments, utilizing different paradigms, in which uncertainty is manipulated with various external (sensory) and internal (reward) factors. Results from four experiments consistently show that as uncertainty in the input increases (e.g., the location or color of the upcoming target is unknown), so does the magnitude of the observed object-based effects. Additionally, our findings show that internal factors, such as reward, serve to reduce uncertainty, thereby minimizing object-based effects. We propose that the uncertainty reduction hypothesis has the potential to unify a large body of evidence on object-based guidance, opening new avenues for investigation into the mechanisms of attentional selection.
Meeting abstract presented at VSS 2012