Abstract
The way liquids move and change shape is governed by complex physical laws. Despite this, we usually have vivid visual expectations about how liquids ooze or splash, taking into account their viscosity, velocity and surrounding objects. This ability to predict fluids' future states potentially involves both perceptual processes and mental simulation. Bates, Yildirim, Tenenbaum and Battaglia (2015) used observers' predictions about the future locations of liquids to argue for an "intuitive physics engine" based on mental simulation. Here, we focus on predicting the future shapes of liquids, suggesting that robust perceptual feature identification also plays an important role. We simulated 10-second animations of liquids pouring onto a plane. The optical properties were held constant, but viscosity varied (from water to honey) in seven steps. For each liquid, we created eight variations using random wind-like perturbations near the source, which caused the liquids to adopt distinct shapes as they poured. On each trial, observers viewed a 2-second clip from the beginning of one simulation ('test'), along with eight static frames ('matches') taken from a later time point, one for each variant with the same viscosity as the test. Their task was to rank the match stimuli according to how plausibly they could be future states of the test stimulus. Viscosity and the time offset between test and match stimuli varied across trials. Performance declined with time offset but was far above chance in all conditions. Lower viscosities were somewhat easier than higher viscosities. Analysis of the underlying geometry of the liquids revealed that sophisticated feature correspondence processes are required to predict perceived matches. Together, our findings suggest the visual system combines robust feature identification with internal models of liquid-related shape changes to predict the future states of liquids.
Meeting abstract presented at VSS 2017