Open Access
Article | May 2019
Manipulating patterns of dynamic deformation elicits the impression of cloth with varying stiffness
Wenyan Bi, Peiran Jin, Hendrikje Nienborg, Bei Xiao
Journal of Vision May 2019, Vol. 19, 18. doi: https://doi.org/10.1167/19.5.18
      Wenyan Bi, Peiran Jin, Hendrikje Nienborg, Bei Xiao; Manipulating patterns of dynamic deformation elicits the impression of cloth with varying stiffness. Journal of Vision 2019;19(5):18. https://doi.org/10.1167/19.5.18.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Cloth is a common material, and humans can visually estimate its mechanical properties by observing how it deforms under external forces. Here, we ask whether and how dynamic deformation affects the perception of the mechanical properties of cloth. In Experiment 1, we find that both intrinsic mechanical properties and optical properties affect stiffness perception when the stimuli are presented as static images. By contrast, with videos, humans can partially discount the effect of optical appearance and exhibit higher sensitivity to stiffness. We further identify an idiosyncratic deformation pattern (i.e., movement uniformity) that differentiates stiffness and can be reliably measured by six optical flow features. In Experiment 2, we isolate the deformation by creating dynamic dot stimuli from the 3-D mesh of the cloth. We directly alter the movement pattern by manipulating the uniformity of the displacement vectors of the dot stimuli and show that changing the pattern of dynamic deformation alone can alter the perceived stiffness of cloth in a variety of scene setups. Furthermore, by analyzing optical flow fields extracted from the manipulated dynamic dot stimuli, we confirm that the same six optical flow features are diagnostic of the degree of stiffness of moving cloth across different scenes. Overall, our study demonstrates that manipulating patterns of dynamic deformation alone can elicit the impression of cloth with varying stiffness, suggesting that the human visual system might rely on the idiosyncratic pattern of dynamic deformation for estimating stiffness.

Introduction
For humans to successfully navigate the environment, it is crucial to estimate the material properties of objects and predict how they will react under external forces. In the last two decades, substantial work has measured how humans perceive the material properties of rigid objects (Bi, Newport, & Xiao, 2018; Fleming, 2017; Fleming, Gegenfurtner, & Nishida, 2015; Maloney & Brainard, 2010). Numerous image features have been proposed for the perception of optical properties of solid objects (Fleming, Wiebel, & Gegenfurtner, 2013; Kim, Marlow, & Anderson, 2012; Motoyoshi, 2010; Motoyoshi, Nishida, Sharan, & Adelson, 2007; Sawayama & Nishida, 2018). However, many objects around us are nonrigid and deformable, such as cloth, food, and liquids. They deform and move under externally applied forces. In this situation, seeing how they deform over time is informative about their mechanical properties, such as whether they are soft or hard, heavy or light, liquid or solid, elastic or stiff (Paulun, Schmidt, van Assen, & Fleming, 2017). 
One way to quantify the dynamic deformation is to measure the movement pattern by analyzing features of the optical flow fields extracted from the videos. Previous research suggests that humans can use distortions in the optical flow fields to estimate mechanical properties (e.g., softness) of an object (Bi, Jin, Nienborg, & Xiao, 2018; Kawabe, Maruya, Fleming, & Nishida, 2015; Nishida, Kawabe, Sawayama, & Fukiage, 2018; Paulun et al., 2017; van Assen, Barla, & Fleming, 2018). However, visually estimating the mechanical properties of deformable objects is challenging because several factors, including optical properties (e.g., gloss, transparency), intrinsic mechanical properties (e.g., malleability), and external forces, jointly affect the perceived deformation pattern (see Figure 1) and cause distortions in the optical flow. For example, the deformation pattern generated by crumpling a skirt is very different from that of a flag flapping in the wind. Hence, to estimate the intrinsic mechanical properties of deformable objects, the visual system has to disentangle the causal contributions of these factors. 
Figure 1
 
Estimating the stiffness of cloth from images and videos. (A) Cloth samples rendered with the same 3-D mesh but with different optical properties. Optical properties affect perceived stiffness even for cloth samples with the same intrinsic mechanical properties. (B) A generative model for estimating cloth mechanical properties. Both intrinsic material properties and external scene properties influence the optical appearance (4) and the deformations (3). In Experiment 1, we compare static image conditions and videos in the estimation of cloth stiffness. Previous studies investigated how stiffness estimation is affected by optical appearance and deformations by directly manipulating intrinsic properties (1) and/or external forces (2). In Experiment 2, we directly manipulate the dynamic deformations (3) by using dynamic dot stimuli and measure how this manipulation affects the visual estimation of stiffness.
What information does the visual system use to achieve this? In what respect does dynamic information help disentangle the contributions of optical properties and mechanical properties? To what degree is dynamic deformation alone sufficient to convey impressions of different mechanical properties? In this paper, we first identify an idiosyncratic deformation pattern (i.e., movement uniformity) that differentiates stiffness and can be reliably measured by six optical flow features. We then develop a method that directly manipulates the deformation patterns of cloth using dynamic dot stimuli and show that manipulating the deformation pattern alone can alter the impression of stiffness of a moving cloth. Using this method, we can examine the causal role of dynamic deformation in stiffness perception. Furthermore, by analyzing optical flow fields extracted from the manipulated dynamic dot stimuli, we confirm that the same six optical flow features are diagnostic of the degree of stiffness of moving cloth across different scenes. 
Previous work
Deformation cues in material perception
One of the challenges for understanding material perception is that both an object's shape and its intrinsic physical properties influence its appearance (Fleming, Jäkel, & Maloney, 2011; Marlow & Anderson, 2015; Marlow, Todorović, & Anderson, 2015). This is especially true for deformable objects. In some situations, the visual system can infer material properties from static shape information alone, without any optical appearance. In a unique study, Pinna and Deiana (2015) showed that simply deforming the contours of an object's boundaries could elicit vivid impressions of different material properties. In our work, we follow a similar reductionist approach by directly manipulating dynamic dot stimuli generated from 3-D meshes to alter the perceived stiffness of cloth. Other studies have reported that shape cues are sufficient for judging liquid viscosity and the stiffness of deformable and elastic objects (Paulun, Kawabe, Nishida, & Fleming, 2015; Paulun et al., 2017; Spröte & Fleming, 2016). For example, a liquid of a given viscosity settles into shapes with characteristic features (Paulun et al., 2017; van Assen et al., 2018; van Assen & Fleming, 2016), and these features are diagnostic of its viscosity. Previous studies also show that optical properties affect the perception of shape deformations (Han & Keyser, 2015; Schmidt, Paulun, van Assen, & Fleming, 2017). 
Motion has been shown to influence the material perception of specular solid objects. For example, Sakano and Ando (2010) showed that the change in the angle of light refraction and reflection caused by head movements is a crucial cue in the perception of glossiness. Doerschner et al. (2011) observed that a moving glossy surface differed from a moving matte surface in three motion features: coverage, divergence, and 3-D shape reliability. They verified the critical role of these motion features by demonstrating that a model trained with these three motion cues could successfully predict both successes and some failures of human material perception. A follow-up study also showed that specular flow is important for estimating the 3-D shape of glossy objects (Dövencioğlu, Ben-Shahar, Barla, & Doerschner, 2017). Most recently, Tamura, Higashi, and Nakauchi (2018) discovered robust dynamic visual cues that can differentiate mirror from glass. Specifically, they found that glass objects produce more motion components opposite to the direction of the object's rotation. 
Because shape deformation already reveals a great deal about how deformable an object is under external forces, it is unclear to what degree motion matters. We propose that, even though observers can tell a stiff object from a soft one in a single static frame, this information alone might not be robust over time: as more evidence emerges, observers might change their judgments. Regarding the perception of elasticity, Kawabe and Nishida (2016) found that human observers were able to recover the elasticity of computer-rendered jelly-like cubes based on shape-contour deformation alone. This was still true even when the cube movies were replaced by dynamic random noise patterns, which retained the optical flow information but not the surface information. The researchers concluded that the elasticity judgment was based on the pattern of image motion arising from the contour and the optical deformations. In studies of liquid viscosity, Kawabe, Maruya, Fleming, et al. (2015) extracted the optical flow fields of liquids and presented them as a 2-D noise patch array. Even though the shape information had been largely excluded, observers could still distinguish liquids with different viscosities from the noise array. The authors further demonstrated that the visual system uses image motion speed in the optical flow field as a cue to estimate liquid viscosity. Motion is also important for achieving perceptual constancy of mechanical properties under varying external forces, such as the softness of cloth (Bi & Xiao, 2016) and liquid viscosity (van Assen et al., 2018). 
Recent studies measured the effects of optical properties, shape, and motion cues on the perception of stiffness and elasticity of deformable objects. Kawabe, Maruya, and Nishida (2015) analyzed the spatiotemporal frequencies of image deformation and reported that specific spatiotemporal frequencies allow the perception of transparent liquid layers. Studies of liquid viscosity found that viscosity is inferred primarily from shape and motion cues but that optical characteristics influence the recognition of specific liquids and the inference of other physical properties (van Assen & Fleming, 2016). Schmidt et al. (2017) investigated the interactions among optical, shape, and motion cues and their effects on stiffness judgments of unfamiliar objects using a material-attribute rating task. They found that optical appearances elicited a wide range of apparent properties. Using physical simulation, they additionally found that the softness of the objects was highly correlated with the extent of deformation. However, when combining the optical cues with the shape deformation, they found that optical cues were completely dominant. Finally, they presented motion sequences to observers and found significant effects of motion as well as optical cues. In this paper, we test the hypothesis that dynamic information is highly useful in discounting the effects of optical properties. In addition, unlike previous studies, we mathematically manipulate the deformation patterns of dot stimuli generated from the mesh instead of using physical simulation. This method allows us to dissect which aspect of deformation is critical for stiffness judgments. 
Cloth perception
Cloth is a common deformable material, yet little is known about how optical properties, shape, and motion affect the perception of its material properties. One open question is how important dynamic information is. Figure 1A shows that optical appearance affects perceived stiffness even for cloth samples with the same intrinsic mechanical properties. Most likely, observers would perceive the skirt rendered with a "silk" optical appearance (Figure 1A(1)) to be more flexible and softer than the one rendered with a "velvet" appearance (Figure 1A(3)) even though the two pieces of cloth are rendered from the same 3-D models. This might be because recognizing the fabric as "silk" biases its perceived stiffness downward. Another explanation might be that the specular highlights on the silky surface modify the perceived deformation and, hence, influence the stiffness impression. Aliaga, O'Sullivan, Gutierrez, and Tamstorf (2015) found that appearance, rather than motion, dominated the categorical judgment of cloth except for fabrics with extremely characteristic motion dynamics (i.e., silk). By contrast, other studies found that motion information is important in the perception of the mechanical properties of cloth. For example, in a study by Bouman, Xiao, Battaglia, and Freeman (2013), observers estimated the stiffness and mass of cloth samples in real scenes. The observers' responses were less correlated with the physical parameters in the image condition than in the video condition, supporting the importance of motion in the visual estimation of material properties. More recently, Bi, Jin, et al. (2018) reported that when the frame sequences were scrambled, observers' sensitivity to different stiffness values of cloth decreased, also suggesting an important role of multiframe motion information in the perception of cloth stiffness. To verify this hypothesis, they trained a machine learning model using dense trajectories over 15 consecutive frames and demonstrated the robustness of the model in predicting human-perceived stiffness of cloth. One reason we use cloth as a model is that its deformation is usually caused by a more complex external force (e.g., oscillating wind) rather than a simple bending or poking force. Inspired by previous studies showing that maximum deformation is a critical cue for the stiffness judgment of an elastic cube (Paulun et al., 2017), we conjecture that the human visual system might rely on the idiosyncratic pattern of dynamic deformation for estimating stiffness such that, under the same external force, a soft cloth deforms to a larger extent but less uniformly than a stiffer cloth. 
Study overview
Figure 1B illustrates the process by which human observers estimate mechanical properties from videos and gives an overview of our methods. In Experiment 1, we test the hypothesis that humans can partially discount the bias caused by optical appearance and exhibit higher sensitivity to stiffness in video conditions than in static image conditions. Second, we analyze the statistics of optical flow fields and discover that cloth with different stiffness values differs in optical flow features related to movement uniformity. In Experiment 2, we directly manipulate the movement patterns of dynamic dot stimuli generated from the 3-D mesh exported from the physics engine. By doing so, we isolate the dynamic information by removing the influence of optical properties. We investigate whether this manipulation can alter the stiffness judgment and whether the method generalizes to other scene setups, 3-D models, and generative physics models. The code and demo videos of this paper are available online (https://sites.google.com/site/wenyanbi0819/website-builder/jov_dotstimuli?authuser=0). 
Experiment 1: Effects of dynamics on stiffness judgment of cloth
Previous research shows that dynamic information is important in judging material properties. However, it is unclear in what respect dynamic stimuli have an advantage over static images in the perception of the material properties of cloth. We used cloth animations as stimuli and measured the perception of bending stiffness. First, we aimed to show that dynamic stimuli convey the stiffness of cloth better than static images. To do so, we compared stiffness judgments of the same cloth samples displayed in two conditions: static (image) and dynamic (video). We hypothesized that, in the static image condition, both intrinsic mechanical properties (i.e., ground truth stiffness value) and surface properties (e.g., textures, surface reflectance) affect stiffness perception. By contrast, in the dynamic condition, the perceived stiffness would be less affected by surface properties and, therefore, more aligned with the ground truth values. Second, we investigated what kind of deformation pattern could predict the perceived stiffness by analyzing the optical flow fields. We identified six optical flow motion descriptors that are diagnostic of the degree of stiffness. 
Materials and methods
Observers
Eight observers (seven women; mean age = 27.2 years, SD = 5.8 years) participated in the experiment on a voluntary basis and were not paid for their participation. All observers reported normal visual acuity and color vision. 
Stimuli
The stimuli were physics-based animations of cloth with various material properties under external forces. Figure 2A illustrates an example of the stimuli in the experiment. On the left is the target cloth, and on the right is the reference cloth. In the dynamic condition, both the reference and target cloth were shown as videos; in the image condition, the reference cloth was displayed as a video, but the target cloth was displayed as a static image consisting of a random frame extracted from the corresponding video. The animations were simulated using the Blender Cycles render engine (Blender version 2.7.6). Every target and reference cloth video was 8 s long at 24 frames per second. The animations varied along the following four rendering parameters. 
Figure 2
 
(A) The interface of the multiple-choice experiment. Observers adjusted the stiffness of the reference cloth to match that of the target cloth by selecting one of the small reference videos (rows below), which contained cloth with varying values of bending stiffness. When one of the small reference videos was selected, it appeared at the position of the reference cloth. (B) The target cloth was rendered with four different material appearances (i.e., cotton, felt, red gauze, brocade) and two different scenes (i.e., the ball scene and the wind scene). In the dynamic condition, the target was always presented as a video; in the image condition, the target was presented as a single static frame randomly chosen from the corresponding video. (C) A zoomed-in view of the material textures. "Neutral gray" refers to the reference cloth; the other four belong to the target cloth.
Stiffness
Bending stiffness describes how the cloth forms its wrinkles. Higher values result in bigger but not necessarily more wrinkles. Each target cloth was rendered with one of five bending stiffness levels: 0.01, 0.1, 1, 10, 100. Each reference cloth was rendered with one of 10 stiffness levels: 0.005, 0.01, 0.1, 0.5, 1, 5, 10, 25, 100, 300, which covered the whole range of the stiffness of the target cloth. 
Mass
Mass describes the weight of the cloth per unit area. The target cloth had two mass levels (0.1 and 0.7), and each reference cloth was rendered with a mass of 0.3. 
Material appearance
Each target cloth was rendered with one of four distinctive material appearances (see Figure 2B and C): felt, cotton, red gauze, and brocade. By contrast, the reference cloth was always rendered with a grayish fleece material (see Figure 2C: neutral gray), which differed from every material appearance of the target cloth. All material appearances were simulated to match impressions of real cloth samples rather than any particular set of measured optical parameters, so the resulting videos differed in texture, thickness, surface reflectance, roughness, and transparency, making them distinct along several dimensions of material appearance. 
Scene setup
Each target cloth was rendered with one of two dynamic scenes: a wind scene containing a piece of hanging cloth moving under oscillating wind forces (Figure 2B, right; see Bi, Jin, et al., 2018, for details) and a ball scene containing a rolling ball colliding with a piece of hanging cloth (Figure 2B, left). The reference cloth was rendered with only the wind scene. See Supplementary Movie S1 for a demonstration of the wind scene and Supplementary Movie S2 for a demonstration of the ball scene. 
Each observer completed 160 trials in each of the static and dynamic conditions (5 stiffness levels × 2 mass levels × 4 material categories × 2 scene setups × 2 repetitions). 
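For concreteness, the following is a minimal Blender-Python sketch of how one target-cloth animation could be parameterized, assuming Blender 2.7x's bpy cloth API; the object name and exact scripts are illustrative assumptions, not the authors' code.

```python
import bpy

cloth = bpy.data.objects["Cloth"]                     # assumed name of the cloth mesh object
mod = cloth.modifiers.new(name="Cloth", type='CLOTH')

mod.settings.bending_stiffness = 10.0                 # target levels: 0.01, 0.1, 1, 10, or 100
mod.settings.mass = 0.1                               # target levels: 0.1 or 0.7 (reference: 0.3)

scene = bpy.context.scene
scene.render.fps = 24                                 # 8-s animations at 24 frames per second
scene.frame_start = 1
scene.frame_end = 24 * 8
```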
Procedure
Observers were presented with videos and images of cloth simulations, and we compared their stiffness judgments across the two presentation conditions. Figure 2A shows the interface of the multiple-choice task used in this experiment. On each trial, observers adjusted the stiffness of a reference cloth by selecting one of the 10 small reference videos below it to match the stiffness of a target cloth. The 10 small reference videos were rendered with varying bending stiffness values, covering the whole range from the least to the most stiff. When one of the small reference videos was selected, it appeared as a full-sized video at the position of the reference cloth, replacing the default video. On each trial, the target cloth varied in one of the following four factors: stiffness, mass, material appearance, and scene setup. We used the same set of small reference videos for each target video across all trials. 
Results
Figure 3 plots the mean (A and B) and median (C and D) matched stiffness across all observers versus the ground truth stiffness values of the target cloth. First, we found that, in both image and video conditions, observers could distinguish cloth with different bending stiffness values. Across all conditions, the matched stiffness increases as the ground truth stiffness of the target cloth increases. Second, the figure shows that the lines in the video condition are steeper than those in the image condition, which is especially obvious when plotting the median (Figure 3C and D). This suggests that observers showed higher sensitivity to different stiffness values in the dynamic condition than in the static condition. 
Figure 3
 
Matched stiffness plotted as a function of ground truth stiffness levels in the image and video conditions. The x-axis shows the ground truth stiffness value of the target cloth. The y-axis shows the matched stiffness levels. Different colors indicate different material appearances. (A) Mean matched stiffness levels plotted as a function of ground truth stiffness for the image condition. (B) Mean matched stiffness levels plotted as a function of ground truth stiffness for the video condition. (C and D) Same as panels A and B, but the matched stiffness levels are plotted using the median value across all observers.
We analyzed observers' performance with a two-way, repeated-measures analysis of variance on the matched stiffness, with the material appearance (four levels) and the ground truth stiffness (five levels) of the target cloth as within-subject factors. We conducted separate analyses for the video and image conditions. 
The main effect of material appearance was significant in both the image condition, F(3, 21) = 14.73, p < 0.0001, and the video condition, F(3, 21) = 3.38, p = 0.038. Post hoc analysis indicated that cotton and felt were perceived to be stiffer than red gauze and brocade (ps < 0.05). However, the effect was much smaller in the video condition (η² = 0.026) than in the image condition (η² = 0.366). 
Additionally, the main effect of the ground truth stiffness of the target cloth on the matched stiffness was significant in both the image condition, F(4, 28) = 33.69, p < 0.0001, and the video condition, F(4, 28) = 41.94, p < 0.0001. This effect was much larger in the video condition (η² = 0.624) than in the image condition (η² = 0.287). 
Optical flow analysis
The perceptual results show that, in the video condition, observers exhibited enhanced sensitivity to stiffness and that the effect of material appearance was partially discounted. This suggests that the videos contain information that allows observers to estimate intrinsic stiffness without being much affected by material appearance. To determine what kind of motion information is available and important for the inference of stiffness, we analyzed statistics extracted from the optical flow fields of the cloth videos, using the method described in Kawabe, Maruya, Fleming, et al. (2015). Some of these statistics have been shown to be highly correlated with perceived liquid viscosity and cloth stiffness (Bi & Xiao, 2016; Kawabe, Maruya, Fleming, et al., 2015). 
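The sketch below illustrates the kind of per-frame flow extraction this analysis requires; it uses OpenCV's Farnebäck dense optical flow as a stand-in and is not a reimplementation of the exact flow algorithm used in Kawabe, Maruya, Fleming, et al. (2015).

```python
import cv2

def extract_flow_fields(video_path):
    """Return a list of (H, W, 2) dense optical flow fields, one per consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
        prev = gray
    cap.release()
    return flows
```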
Figure 4A shows that the optical flow fields of a soft moving cloth differ from those of a stiff one. The two cloth videos were rendered with exactly the same parameters except for the bending stiffness values. We observed that the flow vectors of a soft cloth are less uniform in both direction and magnitude than those of a stiff cloth. This motivated us to look for motion features that describe such movement uniformity. To do so, we compared the optical flow statistics between a soft cloth (bs = 0.1; solid lines in Figure 4B) and a stiff one (bs = 100; dotted lines in Figure 4B), with the wind forces kept the same in all cloth videos. We analyzed the 16 motion descriptors proposed by Kawabe, Maruya, Fleming, et al. (2015) and found that the mean and standard deviation of three motion features (divergence, gradient, and discrete Laplacian) typically differentiated the stiffness values and might account for the movement uniformity. Specifically, Figure 4B shows that videos containing a less stiff cloth typically have higher values of all these statistics than videos containing a stiffer cloth, regardless of optical appearance. The detailed calculation of these statistics can be found in Kawabe, Maruya, Fleming, et al. (2015). 
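As a rough illustration, the six statistics (mean and standard deviation of divergence, gradient, and discrete Laplacian of a flow field) could be computed per frame pair as in the sketch below; the exact formulations follow Kawabe, Maruya, Fleming, et al. (2015), so the discretizations here are plausible assumptions rather than a verified reimplementation. Averaging the per-frame values over time gives the descriptors compared in Figure 4B.

```python
import numpy as np
from scipy.ndimage import laplace

def motion_descriptors(flow):
    """flow: (H, W, 2) array of (u, v) optical flow vectors for one frame pair."""
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    divergence = du_dx + dv_dy                             # local expansion/contraction
    gradient = np.sqrt(du_dx**2 + du_dy**2 +               # overall spatial variation of the flow
                       dv_dx**2 + dv_dy**2)
    laplacian = np.abs(laplace(u)) + np.abs(laplace(v))    # local non-smoothness of the flow

    return {
        "div_mean": np.abs(divergence).mean(), "div_sd": divergence.std(),
        "grad_mean": gradient.mean(),          "grad_sd": gradient.std(),
        "lap_mean": laplacian.mean(),          "lap_sd": laplacian.std(),
    }
```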
Figure 4
 
Optical flow analysis of videos containing simulated cloth. (A) Two-frame optical flow vectors are less uniform for a soft cloth (upper panel) than for a stiff cloth (lower panel). The color maps are plotted together with the displacement vectors to show the length and direction of the displacements. In the vector fields, a longer vector indicates a larger displacement, and the arrow points in the movement direction. In the color map, the saturation indicates the magnitude of the movement, and the hue represents the direction of the movement. (B) Comparison of optical flow statistics between a soft cloth (bs = 0.1; solid lines) and a stiff one (bs = 100; dotted lines). Different colors indicate different material appearances. Across the whole plotted time period and for all plots, the solid lines are typically above the dotted lines, indicating that all the optical flow statistics have higher values for the softer cloth than for the stiffer one.
The optical flow analysis shows that the idiosyncratic deformation pattern (i.e., movement uniformity), measured by the six optical flow features, can be used to differentiate cloth stiffness across variations in optical appearance. However, the extraction of optical flow fields can itself be affected by optical properties. To directly test how the pattern of dynamic deformation affects stiffness judgments, we therefore isolate the deformation pattern by removing the influence of optical properties. In the next section, we present a method that directly alters the movement pattern by manipulating the displacement vectors of the 3-D mesh of the cloth and demonstrate that changing the pattern of dynamic deformation alone can alter the perceived stiffness. 
Experiment 2: Patterns of dynamic deformation affect inferred stiffness
In Experiment 1, we discovered that motion statistics associated with movement uniformity can be diagnostic of the degree of stiffness. This finding motivated us to propose a method that isolates and manipulates the dynamic deformation related to movement uniformity and to demonstrate that this manipulation alone can alter the inferred stiffness. 
To isolate the dynamic deformation information, we created dynamic dot stimuli from the 3-D mesh of a moving cloth (Figure 5). In the dynamic dot stimuli, the movements are generated by the displacement of each dot, so we can manipulate the pattern of dynamic deformation by directly varying the displacement vectors. One way to systematically vary the displacement vectors is to vary the moving velocity. We term the uniformity of the dots' velocities "velocity coherence," which describes the movement uniformity. 
Figure 5
 
Experiment 2: Using dynamic dot stimuli to isolate and manipulate the patterns of dynamic deformation. (A) Method of creating the dynamic dot stimuli. The input frames are exported from Blender. The output frame is generated by shifting the positions of the dots in the original frame according to the updating function defined in the right box. For each dot in the original frame t, its new position \({p^{\prime}_t} \in {R^3}\) in the new frame t is updated using both its current position (\({p_t} \in {R^3}\)) and its position at the first frame (\({p_0} \in {R^3}\)) (right panel). The α in the function determines the velocity coherence; a smaller α value makes the dots move more uniformly. (B) Examples of the dot stimuli generated with different α values. The x-axis represents the time period. The y-axis represents three α levels. The color hue represents the depth information.
The velocity is defined as the spatial displacement of a dot between two consecutive frames. The dynamic dots move more uniformly when a large number of dots move with similar velocities; in contrast, the movement uniformity is low when the velocities of the dots differ substantially from one another. In Experiment 2, we test the hypothesis that movement uniformity affects the perceived stiffness of dynamic dot cloth videos such that increasing the movement uniformity makes the dotted cloth appear stiffer. 
Materials and methods
Observers
Eight observers (six women; mean age = 24.75 years, SD = 4.3 years) participated in the study on a voluntary basis and were not paid for their participation. Four of them also participated in Experiment 1. 
Stimuli
Figure 5 shows examples of the dynamic dot stimuli and the method we used to generate them. To generate dot stimuli with different movement uniformity, we first needed a template dot video, which was generated from the 3-D mesh output of the Blender animation of a moving cloth in the wind scene described in Experiment 1 (see Figure 2B, wind scene). The outputs were 200 sequential Wavefront .obj files, and by default, the cloth 3-D mesh in each frame contained 21,626 vertices (i.e., dots). We then reduced the number of vertices in each frame to 664 using systematic sampling to make the dot stimuli appear less dense but still like a piece of cloth. 
Based on the template video, we generated new dynamic dot stimuli with different movement uniformity by directly manipulating the displacement of each dot, using the updating function shown in Figure 5A. Specifically, for each dot in any given frame t, its new position \({p^{\prime}_t} \in {R^3}\) in the new frame t was calculated using both its current position (\({p_t} \in {R^3}\)) and its position in the first frame (\({p_0} \in {R^3}\)). The α in the function determines the movement uniformity. When α < 1, the updating function decreases the displacement amplitudes of dots that move more than average and increases the displacement amplitudes of dots that move less than average, thereby making the movement more uniform. Conversely, when α > 1, the updating function increases the displacement amplitudes of dots that move more than average and decreases the displacement amplitudes of dots that move less than average, leading to decreased movement uniformity. Overall, smaller α values correspond to increased movement uniformity, which, as we hypothesized, should make the dot stimuli appear stiffer. 
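The sketch below shows one updating function with these properties; the exact function used in the experiment is the one given in Figure 5A, so the power-law remapping of displacement magnitudes about their mean shown here is an assumed stand-in for illustration.

```python
import numpy as np

def remap_frame(p_t, p_0, alpha, eps=1e-9):
    """Return remapped dot positions p'_t for one frame.

    p_t, p_0 : (N, 3) arrays of dot positions at frame t and at the first frame.
    alpha    : velocity-coherence parameter. alpha < 1 pulls displacement magnitudes
               toward their mean (more uniform motion, appears stiffer); alpha > 1
               pushes them away from the mean (less uniform, appears softer);
               alpha == 1 leaves the template unchanged.
    """
    d = p_t - p_0                                    # displacement of each dot from the first frame
    mag = np.linalg.norm(d, axis=1)                  # per-dot displacement magnitude
    mean_mag = mag.mean() + eps
    new_mag = mean_mag * ((mag + eps) / mean_mag) ** alpha
    unit = d / (mag[:, None] + eps)                  # keep each dot's displacement direction
    return p_0 + new_mag[:, None] * unit

# Usage: apply the remapping to every exported frame of the 664-dot template to
# obtain one dynamic dot video per alpha value.
```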
To test this hypothesis experimentally, we generated eight dynamic dot videos with uniformly sampled α values (0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6) and conducted a psychophysical experiment to measure the perceived stiffness of the object represented by these dot stimuli. If the perceived stiffness is correlated with the α values of the dot stimuli, then we have evidence that movement uniformity is used in estimating stiffness. 
Design and procedure
We used maximum likelihood difference scaling (MLDS) with the method of triads (Knoblauch & Maloney, 2008; Maloney & Yang, 2003) to measure the psychometric function relating changes in the α value to changes in perceived stiffness. On each trial, observers were presented with a triplet of videos and asked to judge which video pair—left and center versus right and center—appeared more different in terms of stiffness. They indicated their choice by pressing the "P" (left pair) or the "Q" key (right pair). On any given trial, the three videos in the triad always had different α values, and the α value of the center video was always between those of the left and right ones. Therefore, the movement uniformity of the three videos was either in ascending (left < center < right) or descending (left > center > right) order. 
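For intuition, the following is a minimal sketch of the triad decision rule that MLDS assumes (the scales reported below were fit with the MLDS package for R, not with this code); the psi values and the noise level sigma are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def triad_choice(psi_left, psi_center, psi_right, sigma=0.2):
    """Return 'right pair' if (center, right) appears more different in stiffness
    than (left, center), given latent perceptual scale values psi and decision noise."""
    delta = abs(psi_right - psi_center) - abs(psi_center - psi_left)
    return "right pair" if delta + rng.normal(0.0, sigma) > 0 else "left pair"
```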
Perceptual results
The perceptual scale for each observer was computed using the MLDS package for R (Knoblauch & Maloney, 2008). Figure 6A shows the estimated perceptual scale for each observer as a function of the α value, along with the mean across all observers, estimated by MLDS using the generalized linear model implementation (McCullagh, 1984). There was a strong negative correlation between the perceptual scale and the α value, r(64) = −0.97, indicating that observers were able to distinguish the dot stimuli with different velocity coherence values. A linear regression fitted to these data revealed that perceived stiffness decreased significantly and linearly as α increased, F(1, 62) = 1,060.7, p < 0.0001. Together, in support of our hypothesis, higher movement uniformity made the cloth in the dynamic dot stimuli appear stiffer. 
Figure 6
 
Results of Experiment 2. (A) Measured perceptual scale of stiffness from the dot videos as a function of α value. The black line represents the scale averaged over the eight observers, and the blue lines represent individual observers' scales. The x-axis represents the α values of the different dynamic dot stimuli. The y-axis represents the perceptual scale of stiffness. Perceived stiffness decreases linearly as the α value increases. The dotted red line is the linear fit between the perceptual scale and the α value. (B) Optical flow statistics and perceptual scale plotted as a function of α value. The x-axis represents the α values of the different dynamic dot stimuli. The y-axis represents the normalized values of the perceptual scale and of the six optical flow statistics averaged across time. The α value, the optical flow statistics, and the perceived stiffness from the dot stimuli are highly correlated with one another.
Optical flow analysis
We have demonstrated that directly manipulating the pattern of dynamic deformation can alter the perceived stiffness of cloth in the dynamic dot stimuli. Specifically, the cloth in dot videos is perceived to be stiffer when the dots move more uniformly. Next, we would like to confirm that the same six optical flow features that were proposed in Experiment 1 could also be diagnostic of the degree of stiffness of dot stimuli. 
To verify this, we used the same method as in Experiment 1 to extract optical flow statistics from the dynamic dot stimuli. Figure 7 plots the mean and standard deviation of the divergence, gradient, and discrete Laplacian for the eight dynamic dot stimuli generated with different α values. The figure shows that the blue lines are typically above the yellow lines, indicating that the values of the optical flow features increase as the α value increases (i.e., as the stimulus is perceived to be softer). This is consistent with the results of the optical flow analysis in Experiment 1. 
Figure 7
 
The optical flow statistics extracted from the eight dynamic dot stimuli with different α values. Each line represents optical flow statistics of a specific α value. Across the majority of the time period that has been plotted (i.e., 0 ∼ 200 frames), the bluish lines are above the yellowish lines, indicating that all six optical flow statistics are higher for the dot stimuli with higher α values (i.e., perceived to be softer).
Further correlation analysis demonstrated that the six optical flow statistics were highly correlated with both the perceived stiffness and the α value (r²s > 0.89; see Figure 6B). Together, these findings support our hypothesis that, when judging the stiffness of cloth in videos, human observers might rely on the idiosyncratic pattern of dynamic deformation such that a soft cloth moves less uniformly than a stiff cloth. More importantly, such a pattern of dynamic deformation can be directly manipulated by the updating function we proposed and can be measured by optical flow features extracted from the videos. 
Test of robustness
It is possible that our findings are restricted to cloth in the wind scene because the movement pattern, and hence the optical flow fields, are strongly affected by the scene setup. One could also argue that our findings are limited to the generative physics model used to simulate the cloth. Therefore, we used a different physics model to simulate cloth in two different dynamic scenes. In addition, we tested whether this method could also alter the perceived softness of a different 3-D object, an elastic cube. 
New generative physics model and scene setup
To address these concerns, we used Baraff and Witkin's (1998) method to model cloth dynamics, which differs from the cloth model used by Blender (Provot, 1995; Provot, 1997). Specifically, based on the implementation by Pritchard (Freecloth 0.7.1), we exported 3-D dot cloth animation sequences for two new dynamic scenes: the drape scene contained a piece of cloth draping over a square desk (Figure 8A, "3-D mesh" panel), and the corner scene contained a piece of hanging cloth with three corners pinned and the fourth corner released (Figure 8B, "3-D mesh" panel). We used the same method described in the Stimuli section of Experiment 2 to create the dynamic dot stimuli with different α values. As Figure 8A and B shows, the dot stimuli with a higher α value appeared softer than those with a lower α value. Additionally, consistent with the main findings of Experiments 1 and 2, the dynamic dot stimuli with lower α values had smaller values of all six optical flow statistics that we proposed. 
Figure 8
 
Results of optical flow analysis of dynamic dot stimuli in three new conditions: cloth in the drape scene (A), cloth in the corner scene (B), and an elastic bouncing cube (C). Each line in the leftmost panels represents an optical flow statistic as a function of time. The blue lines represent high α values, and the orange lines represent low α values. For all three conditions, the blue lines are above the orange lines, indicating that the six optical flow statistics are higher for the dot stimuli with higher α values. The rightmost panels plot the perceptual scales measured from four new observers. As in Experiment 2, the perceived stiffness is highly correlated with the α values.
To measure the perceptual scale of the dot stimuli in the two new scenes, we sampled eight α values (the same as in Experiment 2) and conducted an MLDS experiment with four new observers. In both scenes, the measured perceptual scales of stiffness (Figure 8A and B, rightmost panels) were highly correlated with the α values, r(32)² > 0.92. 
New 3-D model
Next, we asked whether our findings generalize to other nonrigid objects. To answer this, we created an elastic cube bouncing on the ground in Blender (see Figure 8C, "3-D mesh" panel). We then used the same method to modify the pattern of dynamic deformation, generated the dynamic dot stimuli, and extracted the optical flow statistics. Figure 8C shows that the results are consistent with the previous ones: the dynamic dot stimuli with higher velocity coherence had smaller values of all six optical flow statistics. Similarly, an MLDS experiment revealed that the measured perceptual scale of the dot stimuli was highly correlated with the α value, r(32)² = 0.915. 
Together, these results suggest that our findings generalize to other dynamic scene setups, generative physics models, and deformable objects. 
Discussion
This article aimed to understand whether and how dynamic deformation affects the inference of stiffness of deformable objects in dynamic scenes. First, we found that stiffness perception was strongly affected by optical appearance when the cloth was viewed as static images. However, when the cloth was presented as videos, the effect of optical appearance was partially discounted, leading to enhanced sensitivity to different stiffness values. Optical flow analysis showed that motion statistics associated with movement uniformity (i.e., divergence, gradient, and discrete Laplacian) can be diagnostic of the degree of stiffness when the appearances are the same. In Experiment 2, we demonstrated that directly manipulating the pattern of dynamic deformation of a dot cloth video with a mathematical function alone could alter the perceived stiffness. The same method altered the perceived stiffness of deformable objects across a variety of scene setups, a different 3-D model, and a different generative physics model. Finally, we confirmed that the same six low-level optical flow features could discriminate the degree of stiffness from the dot stimuli. 
Estimating mechanical properties using image-based features
Intrinsic mechanical properties interact with external forces in complex ways to determine the movement of cloth in a video. Because of the complexity of this generative process, it is unlikely that humans invert it to estimate the mechanical properties exactly. More likely, the human visual system infers stiffness by identifying image-based features (i.e., features that are directly extracted from images or videos) that are more affected by intrinsic stiffness than by other factors. 
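As a concrete illustration of what such an image-based measurement can look like, the sketch below extracts dense optical flow fields between consecutive frames of a video. It is a minimal example written for this discussion; the video file name and the Farnebäck parameters are illustrative placeholders, not the settings used in our analysis.

# Minimal sketch (not the authors' exact pipeline): dense optical flow between
# consecutive frames of a cloth video using OpenCV's Farneback estimator.
import cv2

cap = cv2.VideoCapture("cloth_video.mp4")        # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flow_fields = []                                  # one H x W x 2 field per frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): displacement of each pixel between the two frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5,      # pyramid scale
                                        3,        # pyramid levels
                                        15,       # averaging window size
                                        3,        # iterations per level
                                        5, 1.2,   # polynomial expansion parameters
                                        0)        # flags
    flow_fields.append(flow)
    prev_gray = gray
cap.release()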
Within the framework of statistical appearance models, one line of work has investigated the role of verbally reported mid-level features. For example, Schmidt et al. (2017) reported that perceived deformation is important for the stiffness perception of unfamiliar objects. van Assen et al. (2018) found that perceived liquid viscosity could be well predicted by four factors, namely verbal judgments relating to distribution, irregularity, rectilinearity, and dynamics. Even though these studies help us understand the perception of mechanical properties, they did not identify which image-based measurements the visual system might use to represent these verbally reported features. 
Another line of studies has identified multiple image-based cues that human observers might use to perceive deformable materials. Kawabe, Maruya, Fleming, et al. (2015) computed optical flow fields between consecutive frames of videos of running transparent liquid and decomposed them into horizontal and vertical vector components. After Fourier transforming each of the decomposed components, they found that the amplitude spectra of the image deformation, but not the phase spectra, are critical for identifying a transparent liquid. They also identified critical spatiotemporal frequencies that correlate with the impression of transparent liquid. Studies by the same group also showed that motion speed is critical for the perception of liquid viscosity (Kawabe, Maruya, Fleming, et al., 2015) and for judging the elasticity of deformable transparent cubes (Kawabe & Nishida, 2016). Previous work showed that image motion speed can also be used to discriminate rigid from nonrigid cylinders (Jain & Zaidi, 2011). Nevertheless, other studies found that mean speed alone is not sufficient. For example, the magnitude of phase differences in oscillating motion can influence the visual impression of an illusory object's elasticity (Masuda, Matsubara, Utsumi, & Wada, 2015; Masuda et al., 2013). 
Here, we discuss the role of motion speed in the perception of stiffness from the dynamic dot stimuli used in our study. In Figure 9A, we plot the mean speed of the optical flow vectors for dot stimuli with different α values. The figure shows that image motion speed alone is insufficient to discriminate stiffness during slow variation of the external forces (see the red circled area in Figure 9A). Compared with the other six optical flow statistics used in this paper (Figure 9E), image motion speed discriminates stiffness better during rapid variation of the external forces but worse when the external forces vary slowly. Our results suggest that motion speed and features associated with movement uniformity might be complementary to each other in discriminating stiffness. 
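For readers who wish to reproduce this kind of comparison, the sketch below computes, per frame, the mean speed together with the mean and standard deviation of the divergence, gradient magnitude, and discrete Laplacian of a dense flow field (such as one produced by the previous sketch). The finite-difference definitions used here are one reasonable reading of these statistics, not necessarily our exact implementation.

# Per-frame motion statistics from a dense optical flow field (H x W x 2 array).
# A sketch using numpy finite differences; not the exact implementation.
import numpy as np

def flow_statistics(flow):
    u, v = flow[..., 0], flow[..., 1]            # horizontal / vertical components
    speed = np.hypot(u, v)                       # per-pixel motion magnitude

    du_dy, du_dx = np.gradient(u)                # spatial derivatives of u
    dv_dy, dv_dx = np.gradient(v)                # spatial derivatives of v
    divergence = du_dx + dv_dy                   # local expansion / contraction
    grad_mag = np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)
    laplacian = (np.gradient(du_dx, axis=1) + np.gradient(du_dy, axis=0) +
                 np.gradient(dv_dx, axis=1) + np.gradient(dv_dy, axis=0))

    return {
        "mean_speed": speed.mean(),
        "div_mean": divergence.mean(),  "div_std": divergence.std(),
        "grad_mean": grad_mag.mean(),   "grad_std": grad_mag.std(),
        "lap_mean": laplacian.mean(),   "lap_std": laplacian.std(),
    }

# Example: trace each statistic over time for one video
# stats_over_time = [flow_statistics(f) for f in flow_fields]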
Figure 9
 
Comparing mean speed (the mean norm of the optical flow vectors) and the other optical flow statistics in stiffness discrimination across the time period of the applied force. (A) Mean speed of the optical flow vectors for dot stimuli with different α values. The area in the red circle marks an example period during which mean speed is not sufficient to distinguish different stiffness levels. (B) Mean magnitude of the force received by the cloth across the time period. (C) Configuration of the scene geometry. The location of the wind source is fixed, but it rotates around the cloth periodically. (D) Example plot of the mean speed of the optical flow vectors for dot stimuli with two α values, together with the magnitude of the received force across the time period. Mean speed discriminates the stiffness well during rapid variation of the force (e.g., 75th to 125th frame) but cannot reliably discriminate the stiffness when the force varies slowly (e.g., 25th to 75th frame). (E) The other optical flow statistics for dot stimuli with the same α values, together with the magnitude of the received force across the time period. These statistics discriminate the stiffness better during slow variation of the external forces.
Static deformation versus dynamic deformation
Previous work shows that the stiffness perception of deformable objects is dominated by the absolute magnitude of deformation and translation (Paulun et al., 2017; Schmidt et al., 2017; Warren, Kim, & Husney, 1987). For example, in a dynamic scene containing a bouncing ball, the perceived elasticity of the ball is barely affected by velocity information and is mainly determined by absolute translation information, such as relative bounce height (Warren et al., 1987). More recently, Paulun et al. (2017) used computer-rendered animations of an elastic cube being pushed downward to various extents and demonstrated that perceived elasticity was mainly determined by the absolute magnitude of deformation relative to the original nondeformed state, regardless of whether the stimuli were presented as static images or videos. They concluded that, to judge stiffness, the human visual system might rely on the extent to which an object changes its shape under external forces. 
The update function that we propose in this study changes both the dynamic deformation (i.e., movement uniformity) and the maximum deformation (i.e., the maximum magnitude of deformation). It could be argued that the maximum deformation alone is sufficient for inferring stiffness and that the pattern of dynamic deformation provides no additional information. We conjecture that the maximum deformation dominates perceived stiffness when two pieces of fabric differ greatly in their maximum deformation: humans tend to judge the fabric with the larger maximum deformation to be softer. However, when the maximum deformations of the two fabrics are similar, humans might rely on the pattern of dynamic deformation to distinguish their stiffness. 
With a small modification to the updating function, we created a demo that provides preliminary support for the importance of the pattern of dynamic deformation in inferring stiffness. Specifically, we partially decoupled the maximum deformation from the movement uniformity and created two new cloth videos: one cloth moves more uniformly but with a larger deforming speed (see Supplementary Movie S3), and the other moves less uniformly but with a smaller deforming speed (see Supplementary Movie S4). If stiffness judgments relied only on the absolute magnitude of deformation, observers would always judge the cloth with the larger speed to be softer. However, three new observers reported the cloth that moves less uniformly to be softer, indicating that the effect of movement uniformity outweighed that of the absolute magnitude of deformation. Interestingly, the observers also reported that the cloth with the larger deforming speed appeared to be blown by stronger wind. These pilot data suggest that, in a complex scene with varying and unknown forces, the stiffness perception of cloth might rely on both the static deformation (i.e., the absolute magnitude of deformation) and the pattern of dynamic deformation. 
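To make the notion of movement uniformity concrete, the toy sketch below blends each dot's frame-to-frame displacement with the mean displacement of all dots; the mixing weight plays a role analogous to α, with smaller values yielding more uniform, stiffer-looking motion. This is only an illustration of the general idea, not the updating function used in Experiment 2 or in the demo described above.

# Toy illustration of a movement-uniformity manipulation (not the updating
# function used in this study): blend each dot's own displacement with the
# mean displacement of all dots. alpha = 1 keeps the original motion; alpha
# close to 0 makes all dots move together (more uniform, stiffer-looking).
import numpy as np

def blend_displacements(positions, alpha):
    """positions: (T, N, 3) array of N dot positions over T frames."""
    new_positions = positions.copy()
    for t in range(1, positions.shape[0]):
        disp = positions[t] - positions[t - 1]           # per-dot displacement
        mean_disp = disp.mean(axis=0, keepdims=True)     # common (rigid) component
        blended = (1.0 - alpha) * mean_disp + alpha * disp
        new_positions[t] = new_positions[t - 1] + blended
    return new_positions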
Image synthesis without simulation
Besides computing and identifying perceptually salient image features, image synthesis is another approach to seeking a statistical description of visual appearance that is consistent with human perception. Portilla and Simoncelli (2000) developed a method that synthesizes a texture by iteratively adjusting an image of Gaussian white noise to match a given reference texture image: they first built a set of steerable pyramid subbands from the reference texture, iteratively matched the sample statistics of each subband, and reconstructed an image from the updated pyramid. A similar approach has been used to synthesize images with particular material appearances; Kawabe, Maruya, and Nishida (2015) created the impression of transparent liquid by synthesizing patterns of dynamic image deformation whose spatiotemporal frequency amplitude spectra matched those of real water. Recently, deep learning has been used to generate images without simulation, for example in image style transfer (Gatys, Ecker, & Bethge, 2016) and in the synthesis of complex materials such as smoke flows (Chu & Thuerey, 2017). In particular, generative adversarial networks (Goodfellow et al., 2014) can generate highly photorealistic images of specific categories, such as faces, album covers, and room interiors (Radford, Metz, & Chintala, 2015); dynamic textures (Xie, Zhu, & Nian Wu, 2017); and videos with scene dynamics (Vondrick, Pirsiavash, & Torralba, 2016). Our method can likewise serve as a framework for modifying the material properties of stimuli without rerendering. 
Implications for neuroscience, computer vision, and graphics
Our method can potentially be applied in other areas. Because the dynamic dot stimuli are spatially simple and their dynamic deformation pattern is quantifiable, they could serve as a paradigm for studying the role of visual motion in the material perception of objects in neuroscience. In addition to the optical flow features discussed in this paper, many other image features can represent visual motion. To test whether other physiologically plausible computations based on spatiotemporal filters can account for our results, we computed the output of a linear combination of motion energy features from the dynamic dot stimuli (Adelson & Bergen, 1985; Nishimoto et al., 2011; Vinken, Van den Bergh, Vermaercke, & Op de Beeck, 2016). Figure 10A illustrates the model. First, we convolved the video frames with a bank of quadrature pairs of Gabor filters, each with a particular spatiotemporal frequency and orientation. The output of each quadrature pair was then squared and summed to give the energy features. Figure 10B shows that the simplest linear combination (i.e., equal weights) can discriminate dotted cloth videos with different α values well. This suggests that motion processing at the earliest stages in the brain may be sufficient for discriminating stiffness in such stimuli. Future studies can examine the neural responses when these dotted videos are presented to the visual system and determine the neural basis of discriminating stiffness from videos. 
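The sketch below shows one channel of such a motion energy computation in the spirit of Adelson and Bergen (1985): a quadrature pair of space-time Gabor filters is convolved with the luminance video, and the two outputs are squared and summed. The filter size, frequencies, and orientation are placeholder values chosen for illustration; the model in Figure 10 combines a bank of such channels with equal weights.

# One channel of a motion energy model (in the style of Adelson & Bergen, 1985).
# Filter parameters are illustrative placeholders, not those used in the paper.
import numpy as np
from scipy.ndimage import convolve

def gabor3d(shape, fx, fy, ft, sigma, phase):
    """Space-time Gabor: sinusoidal carrier (spatial freqs. fx, fy; temporal freq. ft)
    under an isotropic Gaussian envelope of width sigma."""
    t, y, x = np.meshgrid(*[np.arange(s, dtype=float) - s // 2 for s in shape],
                          indexing="ij")
    carrier = np.cos(2 * np.pi * (fx * x + fy * y + ft * t) + phase)
    envelope = np.exp(-(x**2 + y**2 + t**2) / (2 * sigma**2))
    return carrier * envelope

def motion_energy(video, fx=0.1, fy=0.0, ft=0.15, sigma=3.0, size=(9, 9, 9)):
    """video: (T, H, W) luminance array. Returns the spatially averaged motion
    energy over time for a single spatiotemporal frequency/orientation channel."""
    even = convolve(video, gabor3d(size, fx, fy, ft, sigma, phase=0.0))
    odd = convolve(video, gabor3d(size, fx, fy, ft, sigma, phase=np.pi / 2))
    energy = even**2 + odd**2                    # quadrature pair: square and sum
    return energy.mean(axis=(1, 2))              # keep the time course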
Figure 10
 
A linear combination of motion energy features can also be used to discriminate stiffness from the dot stimuli. (A) Flowchart of the motion energy model. Cloth videos with different α values were concatenated as the input stimuli. We first converted the input to the LAB color space and then convolved the luminance channel with a bank of quadrature pairs of Gabor filters, each with a particular spatiotemporal frequency and orientation. The output of each quadrature pair is squared and summed to give the energy features. The outputs from all Gabor filter channels are then standardized, summed, and plotted as the predictions in panel B. (B) Outputs of the linear combination of motion energy features with equal weights for dot stimuli with different α values across the time period. The prediction from the motion energy features is also diagnostic of the degree of stiffness.
Previous work suggests that visual motion often provides powerful cues for scene segmentation in both human and artificial systems (Shi & Malik, 1998; Wagemans et al., 2012). More recently, an fMRI study found that, for a class of bistable moving stimuli, perceptual scene segmentation was associated with increased activity in the posterior parietal cortex (PPC) together with a decreased signal in early visual cortex (Grassi, Zaretskaya, & Bartels, 2018). This suggests that the PPC is a hub involved in structuring visual scenes based on motion cues. Using our dynamic dot stimuli, one could test the hypothesis that the PPC is involved in estimating stiffness from movement patterns, which would further advance the search for the neural substrates of material perception. 
From a computer vision perspective, our method could serve as a fast way to modify the apparent stiffness of nonrigid objects, which can benefit real-time rendering and apparent-motion editing. Recently, Punpongsanon, Iwai, and Sato (2018) used a similar approach to efficiently change perceived fabric-bending stiffness with an optical flow enhancement technique in spatial augmented reality: they extracted apparent motion from a real cloth, directly modified that motion, and then used a projector to map the new apparent motion back onto the original fabric, thereby changing its perceived stiffness. Future studies could develop our method into a standard way to change the perceived stiffness of general nonrigid objects in a wide range of scenes. 
Our work could also contribute to computer graphics by providing a parameter that manipulates the perceived stiffness of nonrigid objects linearly. The stiffness parameter in most physics engines (e.g., Blender) has a highly nonlinear effect on perceived stiffness. Thus, in previous studies, the authors had to sample the parameter based on experience to generate stimuli with perceptually plausible stiffness levels (Bi, Jin, et al., 2018; Bi & Xiao, 2016; Paulun et al., 2017; Schmid & Doerschner, 2018). In our method, the α value is linearly related to perceived stiffness (see Figure 6B), which allows us to easily create nonrigid stimuli that are perceptually uniform in stiffness. 
Conclusion
In conclusion, we found that both intrinsic mechanical properties and optical properties affect the stiffness perception of cloth when the stimuli are displayed as static images. In the video conditions, humans can partially discount the bias caused by optical appearance and exhibit higher sensitivity to stiffness. Analysis of the optical flow fields shows that motion statistics associated with the pattern of dynamic deformation (i.e., movement uniformity) are diagnostic of the stiffness of cloth. To further test how the pattern of dynamic deformation affects the inference of stiffness, we isolated the deformation information by removing the influence of optical properties and created dynamic dot stimuli from the 3-D mesh of cloth animations. We proposed a method to directly manipulate the pattern of dynamic deformation of the dot stimuli and demonstrated that changing the movement uniformity of the dots alters the perceived stiffness. We evaluated the robustness of this method by showing that it generalizes to other scene setups, 3-D models, and cloth physics. Finally, we confirmed that the same six optical flow features (the mean and standard deviation of divergence, gradient, and discrete Laplacian) are reliable image-based measurements for differentiating the stiffness of nonrigid objects and that a linear combination of motion energy features is also diagnostic of the degree of stiffness of the dynamic dot stimuli. Together, our study demonstrates that manipulating patterns of dynamic deformation alone can elicit the impression of cloth with varying stiffness, suggesting that the brain can infer mechanical properties from image cues related to dynamic image deformation. 
Acknowledgments
Hendrikje Nienborg acknowledges funding by the Deutsche Forschungsgemeinschaft (German Research Foundation) Projektnummer 276693517 – SFB 1233. 
Commercial relationships: none. 
Corresponding author: Wenyan Bi. 
Address: Department of Computer Science, American University, Washington, DC, USA. 
References
Adelson, E. H., & Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America. A, Optics and Image Science, 2 (2), 284–299.
Aliaga, C., O'Sullivan, C., Gutierrez, D., & Tamstorf, R. (2015). Sackcloth or silk? The impact of appearance vs dynamics on the perception of animated cloth. In L. Trutoiu & M. Geuss (Eds.), Proceedings of the ACM Siggraph Symposium on Applied Perception (pp. 41–46). New York, NY: ACM.
Baraff, D., & Witkin, A. (1998). Large steps in cloth simulation. In S. Cunningham, W. Bransford, & M. Cohen (Eds.), Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (pp. 43–54). New York, NY: ACM.
Bi, W., Jin, P., Nienborg, H., & Xiao, B. (2018). Estimating mechanical properties of cloth from videos using dense motion trajectories: Human psychophysics and machine learning. Journal of Vision, 18 (5): 12, 1–20, https://doi.org/10.1167/18.5.12. [PubMed] [Article]
Bi, W., Newport, J., & Xiao, B. (2018). Interaction between static visual cues and force-feedback on the perception of mass of virtual objects. In C. Grimm & P. Willemsen (Eds.), Proceedings of the 15th ACM Symposium on Applied Perception (pp. 12:1–12:5). New York, NY: ACM, http://doi.acm.org/10.1145/3225153.3225177.
Bi, W., & Xiao, B. (2016). Perceptual constancy of mechanical properties of cloth under variation of external forces. In E. Jain & S. Joerg (Eds.), Proceedings of the ACM Symposium on Applied Perception (pp. 19–23). New York, NY: ACM.
Bouman, K. L., Xiao, B., Battaglia, P., & Freeman, W. T. (2013). Estimating the material properties of fabric from video. In L. Davis & R. Hartley (Eds.), Proceedings of the IEEE International Conference on Computer Vision (pp. 1984–1991). Washington, DC: IEEE.
Chu, M., & Thuerey, N. (2017). Data-driven synthesis of smoke flows with CNN-based feature descriptors. ACM Transactions on Graphics (TOG), 36 (4), 69.
Doerschner, K., Fleming, R. W., Yilmaz, O., Schrater, P. R., Hartung, B., & Kersten, D. (2011). Visual motion and the perception of surface material. Current Biology, 21 (23), 2010–2016.
Dövencioğlu, D. N., Ben-Shahar, O., Barla, P., & Doerschner, K. (2017). Specular motion and 3D shape estimation. Journal of Vision, 17 (6): 3, 1–15, https://doi.org/10.1167/17.6.3. [PubMed] [Article]
Fleming, R. W. (2017). Material perception. Annual Review of Vision Science, 3 (1), 365–388.
Fleming, R. W., Gegenfurtner, K. R., & Nishida, S. (2015). Visual perception of materials: The science of stuff. Vision Research, 109, 123–124.
Fleming, R. W., Jäkel, F., & Maloney, L. T. (2011). Visual perception of thick transparent materials. Psychological Science, 22 (6), 812–820.
Fleming, R. W., Wiebel, C., & Gegenfurtner, K. (2013). Perceptual qualities and material classes. Journal of Vision, 13 (8): 9, 1–20, https://doi.org/10.1167/13.8.9. [PubMed] [Article]
Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In R. Bajcsy, F. F. Li, & T. Tuytelaars (Eds.), Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2414–2423). Washington, DC: IEEE.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S.,… Bengio, Y. (2014). Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (pp. 2672–2680). Cambridge, MA: MIT Press.
Grassi, P. R., Zaretskaya, N., & Bartels, A. (2018). A generic mechanism for perceptual organization in the parietal cortex. Journal of Neuroscience, 38 (32), 7158–7169.
Han, D., & Keyser, J. (2015). Effect of appearance on perception of deformation. In F. Faure & C. Larboulette (Eds.), Proceedings of the 14th ACM Siggraph/Eurographics Symposium on Computer Animation (pp. 37–44). New York, NY: ACM.
Jain, A., & Zaidi, Q. (2011). Discerning nonrigid 3D shapes from motion cues. Proceedings of the National Academy of Sciences, USA, 108 (4), 1663–1668.
Kawabe, T., Maruya, K., Fleming, R. W., & Nishida, S. (2015). Seeing liquids from visual motion. Vision Research, 109, 125–138.
Kawabe, T., Maruya, K., & Nishida, S. (2015). Perceptual transparency from image deformation. Proceedings of the National Academy of Sciences, USA, 112 (33), E4620–E4627.
Kawabe, T., & Nishida, S. (2016). Seeing jelly: Judging elasticity of a transparent object. In E. Jain & S. Joerg (Eds.), Proceedings of the ACM Symposium on Applied Perception (pp. 121–128). New York, NY: ACM.
Kim, J., Marlow, P. J., & Anderson, B. L. (2012). The dark side of gloss. Nature Neuroscience, 15 (11), 1590–1595.
Knoblauch, K., & Maloney, L. T. (2008). MLDS: Maximum likelihood difference scaling in R. Journal of Statistical Software, 25 (2), 1–26.
Maloney, L. T., & Brainard, D. H. (2010). Color and material perception: Achievements and challenges. Journal of Vision, 10 (9): 19, 1–6, https://doi.org/10.1167/10.9.19. [PubMed] [Article]
Maloney, L. T., & Yang, J. N. (2003). Maximum likelihood difference scaling. Journal of Vision, 3 (8): 5, 573–585, https://doi.org/10.1167/3.8.5. [PubMed] [Article]
Marlow, P. J., & Anderson, B. L. (2015). Material properties derived from three-dimensional shape representations. Vision Research, 115, 199–208.
Marlow, P. J., Todorović, D., & Anderson, B. L. (2015). Coupled computations of three-dimensional shape and material. Current Biology, 25 (6), R221–R222.
Masuda, T., Matsubara, K., Utsumi, K., & Wada, Y. (2015). Material perception of a kinetic illusory object with amplitude and frequency changes in oscillated inducer motion. Vision Research, 109, 201–208.
Masuda, T., Sato, K., Murakoshi, T., Utsumi, K., Kimura, A., Shirai, N.,… Wada, Y. (2013). Perception of elasticity in the kinetic illusory object with phase differences in inducer motion. PloS One, 8 (10), e78621.
McCullagh, P. (1984). Generalized linear models. European Journal of Operational Research, 16 (3), 285–292.
Motoyoshi, I. (2010). Highlight-shading relationship as a cue for the perception of translucent and transparent materials. Journal of Vision, 10 (9): 6, 1–11, https://doi.org/10.1167/10.9.6. [PubMed] [Article]
Motoyoshi, I., Nishida, S., Sharan, L., & Adelson, E. H. (2007, May 10). Image statistics and the perception of surface qualities. Nature, 447 (7141), 206–209.
Nishida, S., Kawabe, T., Sawayama, M., & Fukiage, T. (2018). Motion perception: From detection to interpretation. Annual Review of Vision Science, 4, 501–523.
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21 (19), 1641–1646.
Paulun, V. C., Kawabe, T., Nishida, S., & Fleming, R. W. (2015). Seeing liquids from static snapshots. Vision Research, 115, 163–174.
Paulun, V. C., Schmidt, F., van Assen, J. J. R., & Fleming, R. W. (2017). Shape, motion, and optical cues to stiffness of elastic objects. Journal of Vision, 17 (1): 20, 1–22, https://doi.org/10.1167/17.1.20. [PubMed] [Article]
Pinna, B., & Deiana, K. (2015). Material properties from contours: New insights on object perception. Vision Research, 115, 280–301.
Portilla, J., & Simoncelli, E. P. (2000). A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40 (1), 49–70.
Provot, X. (1997). Collision and self-collision handling in cloth model dedicated to design garments. In P. C. van der Kruit & G. Gilmore (Eds.), Computer Animation and Simulation '97 (pp. 177–189). Dordrecht, Netherlands: Kluwer.
Provot, X. (1995). Deformation constraints in a mass-spring model to describe rigid cloth behaviour. In P. Prusinkiewicz (Ed.), Graphics Interface (pp. 147–147). Mississauga, Canada: Canadian Information Processing Society.
Punpongsanon, P., Iwai, D., & Sato, K. (2018). Flexeen: Visually manipulating perceived fabric bending stiffness in spatial augmented reality. IEEE Transactions on Visualization and Computer Graphics. Washington, DC: IEEE.
Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434.
Sakano, Y., & Ando, H. (2010). Effects of head motion and stereo viewing on perceived glossiness. Journal of Vision, 10 (9): 15, 1–14, https://doi.org/10.1167/10.9.15. [PubMed] [Article]
Sawayama, M., & Nishida, S. (2018). Material and shape perception based on two types of intensity gradient information. PLoS Computational Biology, 14 (4), e1006061.
Schmid, A. C., & Doerschner, K. (2018). Shatter and splatter: The contribution of mechanical and optical properties to the perception of soft and hard breaking materials. Journal of Vision, 18 (1): 14, 1–32, https://doi.org/10.1167/18.1.14. [PubMed] [Article]
Schmidt, F., Paulun, V. C., van Assen, J. J. R., & Fleming, R. W. (2017). Inferring the stiffness of unfamiliar objects from optical, shape, and motion cues. Journal of Vision, 17 (3): 18, 1–17, https://doi.org/10.1167/17.3.18. [PubMed] [Article]
Shi, J., & Malik, J. (1998). Motion segmentation and tracking using normalized cuts. In S. Chandran & U. Desai (Eds.), Sixth International Conference on Computer Vision, 1998 (pp. 1154–1160). Washington, DC: IEEE.
Spröte, P., & Fleming, R. W. (2016). Bent out of shape: The visual inference of non-rigid shape transformations applied to objects. Vision Research, 126, 330–346.
Tamura, H., Higashi, H., & Nakauchi, S. (2018). Dynamic visual cues for differentiating mirror and glass. Scientific Reports, 8 (1): 8403.
van Assen, J. J. R., Barla, P., & Fleming, R. W. (2018). Visual features in the perception of liquids. Current Biology, 28 (3), 452–458.
van Assen, J. J. R., & Fleming, R. W. (2016). Influence of optical material properties on the perception of liquids. Journal of Vision, 16 (15): 12, 1–20, https://doi.org/10.1167/16.15.12. [PubMed] [Article]
Vinken, K., Van den Bergh, G., Vermaercke, B., & Op de Beeck, H. P. (2016). Neural representations of natural and scrambled movies progressively change from rat striate to temporal cortex. Cerebral Cortex, 26 (7), 3310–3322.
Vondrick, C., Pirsiavash, H., & Torralba, A. (2016). Generating videos with scene dynamics. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (pp. 613–621). Red Hook, NY: Curran Associates.
Wagemans, J., Elder, J. H., Kubovy, M., Palmer, S. E., Peterson, M. A., Singh, M., & von der Heydt, R. (2012). A century of gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization. Psychological Bulletin, 138 (6), 1172.
Warren, W. H.,Jr., Kim, E. E., & Husney, R. (1987). The way the ball bounces: Visual and auditory perception of elasticity and control of the bounce pass. Perception, 16 (3), 309–336.
Xie, J., Zhu, S.-C., & Nian Wu, Y. (2017). Synthesizing dynamic patterns by spatial-temporal generative ConvNet. In R. Chellappa, Z. Zhang, & A. Hoogs (Eds.), Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7093–7101). Washington, DC: IEEE.