Maxim Bushmakin, Thomas James; Feature fragments and evidence accumulation in object and face perception. Journal of Vision 2015;15(12):245. doi: 10.1167/15.12.245.
© 2017 Association for Research in Vision and Ophthalmology.
Recognition and identification of objects are ubiquitous in the daily lives of human beings, yet despite their importance we still do not understand how visual information arriving at the retina is processed by the human visual cortex (and the rest of the brain) in such a fast and flexible manner. A large body of neurophysiological work with non-human primates shows that certain neurons in the posterior and middle regions of the inferior temporal gyrus (area TE) respond preferentially to specific features of objects. Further studies exploring the brain’s response to a large set of objects decomposed into simpler shapes led a number of researchers to suggest that part of TE is organized in column-like structures, with all neurons in a column preferentially tuned to specific shapes. Assuming that object features are represented similarly in humans, the current study aimed to discover the features of objects and faces that produce the fastest and most accurate categorization responses. Participants viewed many single fragments selected from random locations in images of a set of objects and attempted to classify each fragment by object category. Accuracy and reaction time measures were analyzed within a sequential sampling framework, specifically the Drift Diffusion and Linear Ballistic Accumulator models (Ratcliff & Rouder, 1998; Brown & Heathcote, 2008). Modeling results showed that the variability in reaction times across features was accounted for by the rate of perceptual evidence accumulation, which depends on the amount of information in the fragment. The current study represents a fundamental step toward evaluating the functional organization of the human ventral visual stream within a feature-based theoretical framework.
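To illustrate the sequential sampling logic described above, the following is a minimal sketch of a two-boundary drift diffusion simulation. It is not the authors' fitting procedure; the function names, parameter values, and the mapping of "fragment informativeness" to drift rate are illustrative assumptions. The point it demonstrates is the qualitative prediction in the abstract: a higher rate of evidence accumulation (drift) yields faster and more accurate responses.

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial with Euler steps.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it hits +boundary (correct) or -boundary (error).
    Returns (choice, rt), where choice is 1 for correct and 0 for error.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sqrt_dt = dt ** 0.5
    while t < max_t:
        x += drift * dt + noise * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
        if x >= boundary:
            return 1, t
        if x <= -boundary:
            return 0, t
    return (1 if x > 0 else 0), t  # fallback if no boundary is reached

def summarize(drift, n=2000, seed=0):
    """Mean accuracy and mean RT over n simulated trials."""
    rng = random.Random(seed)
    trials = [simulate_ddm(drift, rng=rng) for _ in range(n)]
    acc = sum(c for c, _ in trials) / n
    mean_rt = sum(t for _, t in trials) / n
    return acc, mean_rt

# Hypothetical mapping: a more informative fragment -> higher drift rate.
acc_hi, rt_hi = summarize(drift=2.0)   # informative fragment
acc_lo, rt_lo = summarize(drift=0.5)   # uninformative fragment
```

Running the two conditions shows the expected ordering: the high-drift condition produces higher accuracy and shorter mean reaction times, mirroring the finding that RT variability across features tracks the evidence accumulation rate.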
Meeting abstract presented at VSS 2015