Abstract
One of the most impressive aspects of human visual processing is our ability to recognize objects despite severe degradations in image quality. In this study, we focused on the recognition of impoverished facial images. We were specifically interested in examining recognition latency as a function of the extent of degradation, to derive clues about the dynamics of processing. We applied varying levels of resolution reduction to building and face images, and participants performed either basic-level recognition (distinguishing faces from buildings) or subordinate-level recognition of celebrity faces. Perhaps not surprisingly, participants distinguished faces from buildings with near-perfect accuracy and in constant time at all levels of degradation. The subordinate-level recognition latencies, however, exhibited an interesting pattern: recognition time increased strongly and monotonically with the level of degradation. These results have at least two important implications. First, the finding that basic-level classification latency was unaffected by image degradation, while subordinate-level recognition latency was significantly affected, suggests that the two tasks may, at least under some circumstances, be dissociable. This degradation-induced temporal decoupling of the two processes can be exploited to identify their neural correlates (see Morash et al., VSS 2008). Second, the increased recognition latencies provide tentative support for the idea that purely feed-forward theories of recognition are likely to be incomplete as accounts of human visual processing, since such theories would not predict large latency differences across conditions. It remains unclear precisely which processes account for the increased time costs. One possibility is that recognition of degraded images involves a time-consuming iterative exchange of information between high- and low-level visual areas, effectively implementing a ‘hypothesize and verify’ analysis strategy.