Abstract
You can readily see at a glance how two objects spatially relate to each other. But seeing how 20 objects all relate seems impossible, due to combinatorial explosion (190 distinct pairs). Such situations require *visual routines*: dynamic visual procedures that efficiently compute various properties ‘on demand’, e.g. whether two points lie on the same winding path, in a busy scene containing many points and paths (‘path tracing’). Some surprisingly foundational questions about visual routines remain unexplored, including: what (if anything) remains in visual memory after the execution of a visual routine? Does path tracing result in a memory of the traced path itself? Or just of *whether* there was a path? Or nothing at all, after the moment has passed? We explored this for spontaneous path tracing in 2D mazes. Observers saw a maze in which two probes appeared in positions connected by a path. They were then shown two mazes, and had to select which was the initially presented maze. Across experiments, the incorrect maze could be (1) a Path-Obstruction maze, where a new contour blocked the initial inter-probe path; (2) an Irrelevant-Obstruction maze, where a new contour was introduced elsewhere; or (3) an Alternative-Path maze, where the same new Path-Obstruction contour was accompanied by the removal of an existing contour, providing an alternative inter-probe path. Performance on Path-Obstruction trials was much better than on Irrelevant-Obstruction trials (always controlling for lower-level contour properties across trial types). But Alternative-Path trials entirely eliminated this advantage. This suggests that a visual memory is formed by spontaneous path tracing, but that its content is only *whether* a path existed, not the path itself. If visual routines exist to answer on-demand questions during perception, then the resulting memories may consist only of the answers themselves, and not of the processing that generated them.
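(For reference, the 190 figure is simply the number of unordered pairs among 20 objects:)

$$\binom{20}{2} = \frac{20 \times 19}{2} = 190$$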