Abstract
Regularities in visual experience allow learning of relationships between objects over space and time. Such visual statistical learning (VSL) is typically viewed as relying on the incremental accumulation of evidence about object co-occurrence. However, because of the large number of objects we encounter, the number of potential co-occurrences is astronomical. Here we test an alternative learning algorithm involving “hypothesis testing”, in which associations encountered even once are presumed real. These hypotheses become a source of predictions that refine learning: when the predictions are fulfilled, the current hypotheses are retained; when they are violated, the hypotheses are dropped and new ones adopted. To evaluate how this algorithm supports VSL, we presented participants with a series of scenes and objects. On double-object trials, which always followed trials containing a single scene, participants chose which of the two objects “went with” the preceding scene (their hypothesis). Each scene appeared three times, and participants were instructed that the correct object would consistently be shown after the same scene. Crucially, there were no correct object–scene pairings. On the second presentation of a scene, half of the object hypotheses proposed on the first presentation were violated. On the third presentation, half of these revised hypotheses were again violated, and half of the hypotheses that had been verified on the second presentation were now violated. Memory was tested by cuing with scenes and having participants choose which objects had appeared with them. Choice behavior was explained both by joint probabilities (irrespective of choice) and by hypothesis testing, with a slight advantage for hypothesis testing. Memory accuracy tracked what happened on the final presentation of a scene, increasing from baseline to proposed to verified, but with no difference between verified and reverified (implying that a single verification can saturate a hypothesis with evidence).
These findings suggest that hypothesis testing is a rapid but rigid alternative to brute-force statistical learning.
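The hypothesis-testing algorithm described above can be sketched in code. This is a minimal illustrative model, not the authors' implementation: it assumes one hypothesis per scene, adopted on first encounter, retained while confirmed, and replaced (at random among the currently shown objects) when violated. All class and method names here are our own.

```python
import random

class HypothesisTester:
    """One-shot 'hypothesis testing' learner (illustrative sketch):
    a single hypothesized object per scene, kept while predictions
    are fulfilled and replaced when they are violated."""

    def __init__(self):
        self.hypotheses = {}   # scene -> currently hypothesized object
        self.verified = set()  # scenes whose hypothesis has been confirmed

    def observe(self, scene, objects):
        """Present a scene with a set of candidate objects; return the chosen object."""
        current = self.hypotheses.get(scene)
        if current in objects:
            # Prediction fulfilled: retain the hypothesis and mark it verified.
            self.verified.add(scene)
            return current
        # No hypothesis yet, or prediction violated: adopt a new hypothesis.
        self.verified.discard(scene)
        choice = random.choice(sorted(objects))
        self.hypotheses[scene] = choice
        return choice
```

For example, a learner that sees scene “s1” with object “A”, then with objects {“A”, “B”}, retains and verifies the hypothesis “A”; a subsequent trial showing {“B”, “C”} violates it, and a fresh hypothesis is adopted.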
Acknowledgement: NIH R01 MH069456