Abstract
In everyday life, we usually have no problem distinguishing liquids with different viscosities, such as water, honey and tar. However, it is not known exactly how we achieve this, or which image measurements have the most influence on our perception of viscosity. Here, we investigated how stable visual estimates of viscosity are over time, as liquids pour continuously into a container. This allows us to test the extent to which the visual system can extract properties of the liquid's shape and motion that are invariant across its volume and previous history, a basic requirement for achieving 'viscosity constancy'. We simulated seven liquids with different viscosities, ranging from highly viscous gel-like fluids to runny water-like fluids, and selected seven time frames from the animations that contained perceptually salient events (e.g. splash events). These static images were selected based on pilot studies in which subjects indicated the most informative frames from the animation. The scene consisted of a fluid source, a fixed solid sphere and an invisible reservoir, which filled up over time as the volume of fluid increased. On each trial, observers were presented with a static frame at a given time point from one of the seven viscosities, and had to indicate, in a 7AFC paradigm, which of seven frames from a standard time point showed the liquid with the corresponding viscosity. Thus, subjects had to identify viscosity from static shape cues alone, across differences in time. The results show that subjects make systematic errors across all stimuli, although accuracy increases over time. The pattern of errors suggests that subjects rely on a number of simple shape-based heuristics to perform the matching task, leading to performance substantially below 'viscosity constancy'.
Meeting abstract presented at VSS 2014