Research Article  |   April 2007
Texture and object motion in slant discrimination: Failure of reliability-based weighting of cues may be evidence for strong fusion
Journal of Vision April 2007, Vol.7, 3. doi:https://doi.org/10.1167/7.6.3
Pedro Rosas, Felix A. Wichmann, Johan Wagemans; Texture and object motion in slant discrimination: Failure of reliability-based weighting of cues may be evidence for strong fusion. Journal of Vision 2007;7(6):3. https://doi.org/10.1167/7.6.3.
Abstract

Different types of texture produce differences in slant-discrimination performance (P. Rosas, F. A. Wichmann, & J. Wagemans, 2004). Under the assumption that the visual system is sensitive to the reliability of different depth cues (M. O. Ernst & M. S. Banks, 2002; L. T. Maloney & M. S. Landy, 1989), it follows that the texture type should affect the influence of the texture cue in depth-cue combination. We tested this prediction by combining different texture types with object motion in a slant-discrimination task in two experiments. First, we used consistent cues to observe whether our subjects behaved as if linearly combining independent estimates from texture and motion in a statistically optimal fashion (M. O. Ernst & M. S. Banks, 2002). Only 4% of our results were consistent with such an optimal combination of uncorrelated estimates, whereas about 46% of the data were consistent with an optimal combination of correlated estimates from cues. Second, we measured the weights for the texture and motion cues using perturbation analysis. The results showed a large influence of the motion cue and an increasing weight for the texture cue for larger slants. However, in general, the texture weights did not follow the reliability of the textures. Finally, we fitted the correlation coefficients of the estimates individually for each texture, motion condition, and observer. This allowed us to fit our data from both experiments to an optimal cue combination model with correlated estimates, but inspection of the fitted parameters shows no clear, psychophysically interpretable pattern. Furthermore, the fitted motion thresholds as a function of texture type are correlated with the slant thresholds as a function of texture type. One interpretation of such a finding is a strong coupling of cues.

Introduction
Depth-cue combination: Models and psychophysical methods
In an ordinary visual scene, there are multiple sources of information indicating depth; in psychophysics, they are often labeled as “cues.” Examples are object motion, occlusion, or texture. Because, usually, there are many different sources of information but only a single depth percept at a time, we assume that the information obtained from different cues is fused or combined in a systematic way, typically called the combination rule. This problem is frequently named depth-cue combination, whereas in machine vision research, the term sensor fusion or data fusion is used. The problem of how different sources of information interact in depth perception takes the form of trying to find an expression for the combination rule of depth cues available in a given scene. 
The literature offers several possible types of combination rules. Parker, Cumming, Johnston, and Hurlbert (1995), for instance, propose the following classification:
  •  
    Veto: one cue overrides the information that other cues might be contributing.
  •  
    Weak fusion: independent modules compute depth estimates based on each available cue; the estimates are then fused into a depth map by a weighted linear combination. 1
  •  
    Strong fusion: the depth-cue modules interact prior to the depth estimations; thus, there is dependency between them.
One might argue that veto is a special case of weak fusion, in which all but one cue have zero weight, and then distinguish only between weakly and strongly coupled data fusion as in Clark and Yuille (1990). In such a classification, the two classes have mutually exclusive independence assumptions for the "sensory modules." Landy, Maloney, Johnston, and Young (1995) proposed a hybrid model called modified weak fusion, which admits a single type of strong-fusion interaction between cues, denominated "promotion," to solve the currency problem posed by the different types of information provided by cues (Cutting & Vishton, 1995, distinguish three types of cue information: ordinal, scaled, or metric). This mechanism promotes the status of (nonmetric) cues to that of a metric or "absolute depth cue" (Landy et al., 1995, p. 392), allowing averaging between their estimates. 
The linear combination of cues present in the weak-fusion model has received empirical support from several studies using different cues. Some examples are studies by Dosher, Sperling, and Wurst (1986) for stereo and proximity luminance and by Bruno and Cutting (1988) for relative size, height, occlusion, and motion parallax as cues. Landy, Maloney, and Young (1991) specifically tested the assumptions of the modified weak fusion using texture and motion cues in a curvature-discrimination task. More recently, Oruç, Maloney, and Landy (2003) studied two cues that might be correlated: linear perspective and texture gradients2 in a slant judgment task. Evidence of strong coupling between cues has also been reported: Bülthoff and Mallot (1988) for stereo and shading, Curran and Johnston (1994) for shading and texture, and Johnston, Cumming, and Landy (1994) for stereo and motion, finding evidence of the promotion of motion by stereo. 
One common psychophysical methodology to determine the combination rule is to put the cues in conflict; that is, some sources of information within the stimuli indicate a certain depth, whereas others indicate another, substantially different, depth. For example, Cutting and Millard (1984) studied three texture gradients as different cues for perceiving a flat or curved surface. Their stimuli contained zero, one, two, or three gradients “appropriately” depicting a particular surface geometry (a flat or curved surface receding in the distance) and three, two, one, or zero “inappropriate” gradients, respectively. In the case of flat surfaces, the inappropriate gradients depicted a surface orthogonal to the line of sight, whereas for curved surfaces, the inappropriate gradients depicted a flat plane. Then, the conflict in the first case was “between gradients that specify horizontal flatness versus near vertical flatness,” whereas in the second case, it was “between gradients appropriate for curvature and those appropriate for horizontal flatness” (Cutting & Millard, 1984, p. 203). 
In such studies, the discrepancies among the different cues are much larger than any encountered in the natural environment. One obvious criticism is that the human visual system probably does not handle such incoherent information accurately because such conflicts do not occur in natural scenes. This point was argued by Blake, Bülthoff, and Sheinberg (1993) and by Gibson (1950a, 1979). Cutting and Millard (1984) argued that the technique is nevertheless useful because it would show whether the visual system is capable of processing the cues independently. However, this line of reasoning critically depends on the linear nature of the combination rule (e.g., see the robust influence function discussed below). Gepshtein and Banks (2003) and Gepshtein, Burge, Ernst, and Banks (2005) report that integration may fail with larger conflicts. One might therefore speculate that Cutting and Millard's failure to find "synergies of information" in their data (p. 214) could reflect abnormal rather than regular performance of the visual system, given the large conflict introduced between the cues studied. 
Landy et al. (1995) and Maloney and Landy (1989) suggested that the human visual system behaves like a robust estimator, in the sense that depth estimation would be "resistant to outlier observations" (Landy et al., 1995, p. 394). Thus, the influence of a discrepant cue in the cue combination varies linearly as long as the cue is not excessively discrepant from the rest; otherwise, its influence decreases nonlinearly. Such a model could, in principle, account for the lack of synergies of information reported by Cutting and Millard and, at the same time, predict significant cue interaction when the discrepancy between cues is minor. Thus, the more appropriate experimental methodology for observing interactions between sources of information is to introduce only slight differences between the cues presented in a stimulus. Landy et al. (1991) name this method perturbation analysis. Perturbation analysis allows one to estimate the weight of each cue in the stimuli if the depth-combination mechanism follows a weighted average of independent estimates based on each available cue. 
Reliability-sensitive weighting of depth cues
There is evidence that depth estimation is sensitive to the reliability of the cues (Ellard, Goodale, & Timney, 1984; Goodale, Ellard, & Booth, 1990) and that the visual system increases the weight of more reliable cues (e.g., Ernst & Banks, 2002; Landy et al., 1995). Such a mechanism is desirable under normative statistical considerations because if the weights are inversely proportional to the variance of the cue, the weighted average corresponds to the minimum variance unbiased estimator of depth (Landy et al., 1995, p. 395). In fact, this normative approach is common in the sensor fusion literature, where a number of sensors provide the independent cues to be combined (e.g., McKendall, 1990). Furthermore, a number of recent studies report that human observers combine depth cues precisely in this "statistically optimal fashion": Ernst and Banks (2002), with a grasping task using haptic and disparity information, and Oruç et al. (2003), who report some evidence for optimal combination of texture gradients and linear perspective in slant judgments. Henceforth, "optimal" refers to obtaining the minimum variance unbiased estimator. 
In Rosas, Wichmann, and Wagemans (2004), we studied the problem of surface-slant-from-texture by measuring the performance of five human subjects in a slant-discrimination task with a number of different types of textures: uniform lattices, randomly displaced lattices, circles (also known as "polka dots"), Voronoi tessellations, orthogonal sinusoidal plaid patterns, fractal or 1/f noise, "coherent" noise, and a "diffusion-based" texture (leopard-skin-like texture). The results showed that the slant discrimination of textured planes is affected by both the slant level and the texture mapped on the surface. The first effect, already reported by Knill (1998), is such that the more slanted the surface is, the easier the discrimination becomes. Also, we observed that there are considerable differences between texture patterns and that such differences are clearer when the surface slant is closer to the vertical plane. This allowed us to determine a unique rank order of texture types for the five subjects in our study that also held in an experiment using the method of probe adjustment. Circles tended to allow the best slant-discrimination performance, whereas noise patterns tended to allow the worst. An example of this effect is shown in Figure 1.
Figure 1
 
An example of the effect of texture type on slant discrimination. On the top part, the psychometric functions obtained with four texture types (error bars representing 68% confidence intervals) for slant discrimination around 37° slant (left plot) and 66° (right plot) are shown. The steepness of the psychometric function reflects how helpful the texture type is for slant discrimination. On the bottom part, the patterns used to obtain the previous data are shown. From left to right: circle, leopard-skin-like, Perlin noise, and 1/f noise textures. For this subject, as for all subjects in this study, the task was easier when the texture based on circles was mapped onto the slanted planes, reflected in the steepest psychometric function. The worst performance was obtained using 1/f noise.
This effect allows us to change the reliability of the texture cue by interchanging texture types. This manipulation lies within the normal operational range of the visual system because all samples are valid instances of textures. Also, because the texture types elicit different precision of depth perception, this manipulation should elicit significant weight changes in a reliability-sensitive cue-combination mechanism. 
In the present study, we used such a manipulation to study the combination of texture and motion. 
Outline of the present study
To test a reliability-sensitive combination of motion and texture cues to slant, we conducted two experiments in which subjects had to discriminate the slant of moving textured planes. We manipulated the reliability of the texture cue by changing the texture type on the stimuli.
  1.  
    In the next section, we describe a consistent-cues experiment, which directly tests the hypothesis that texture and motion cues are optimally combined, assuming the motion threshold to be independent of the texture type. In the analysis, we first assume the texture and motion cue estimates to be uncorrelated and thereafter extend our analysis to allow for correlated estimation of cues. However, the optimal cue combination model, with uncorrelated or correlated cues, cannot account for our data.
  2.  
    Because of this failure of the optimal cue combination model, we tested whether a suboptimal, but reliability-sensitive, weak-fusion model could account for our data, as we previously found for haptic and visual cues in slant perception (Rosas, Ernst, Wagemans, & Wichmann, 2005). Thus, we conducted a perturbation analysis to measure the weight changes induced by the perturbations for texture and motion. However, we observed that the change in weights did not follow the reliability of the individual texture types, although the texture weights did increase with slant across textures, consistent with the greater reliability of the texture cue for increased slant levels.
  3.  
    Combining the data from the previously described consistent-cues experiment and the perturbation-analysis experiment, we were in a position to relax the assumption that the motion threshold is independent of the texture type used. Thus, we fit the optimal cue combination model—again for both uncorrelated and correlated cue estimates—to our data, allowing the motion threshold to be a free parameter.
  4.  
    In the Summary and general discussion section, we will summarize and discuss our results.
Experiment 1: Slant from consistent texture and motion
Methods
Stimuli
We created movies representing translating textured planes (rigid motion) by repeatedly mapping a texture onto a slanted plane, displacing the texture patch by a fixed amount on each rendering. We used the algorithm for texture mapping under perspective projection described by Heckbert (1989). The movies, whose duration was 282 ms, consisted of eight frames; each frame was repeated three times during the stimulus presentation to avoid flickering.3 We generated motion in two orientations: a "vertical" movement, in which the direction of motion was aligned with the viewing direction, that is, the surface moved either away from or toward the observer while maintaining a certain slant, and a "horizontal" motion, in which the displacement of the plane was orthogonal to the viewing direction. In the latter case, the surface could move either from the left to the right of the observer or vice versa, maintaining a given slant. We used four texture types: circles, leopard, Perlin noise, and 1/f noise. Because we did not assume a priori that the horizontal and vertical motions were equivalent, these were independent conditions in the experiment. The two possible directions for each type of motion (up and down for vertical movement, left–right and right–left for horizontal movement) were assumed to be equivalent and used as further randomization in the experiment. 
Setup and task
The experimental methodology was similar to that reported in Rosas et al. (2004): The experimental setup was designed to avoid, as much as possible, a cue-conflict situation arising from the physical flatness of the screen. The monitor, a Sony GDM-F500R running at 85 Hz, was located behind a black wooden plate with a 10-cm-diameter circular aperture. A viewing tube was placed against the wooden plate. At the viewing tube's end opposite to the monitor, a head and chin rest was located, approximately 60 cm from the screen. This device was aligned such that the subject's open eye (monocular viewing) was positioned at the center of the viewing aperture. The stimuli subtended approximately 10° of visual angle at the chosen viewing distance. The setup was enclosed by black curtains, and subjects never saw, at any time during experimentation, the casing of the monitor or the computer driving it. 
We used standard temporal two-alternative forced-choice (2AFC) methodology. Each stimulus lasted 282 ms. Given the temporal resolution of our equipment, we defined the pixel displacement between frames such that the stimulus depicted a plane moving at a speed of approximately 4.8 visual deg/s in the central part of the image. Such speed is well above the threshold for motion detection and depicts a smoothly moving plane. The movies depicted textured planes at physically different levels of slant, and the subjects had to indicate which of the two stimuli appeared more slanted in depth. Subjects had 750 ms to answer, and no feedback was given. Horizontal and vertical motion conditions were randomly interleaved. On every trial, one possible direction of motion (up or down for vertical movement, left–right or right–left for horizontal movement) was selected and used in both intervals of a single 2AFC trial. The display of images, timing, and answer collection was controlled using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). 
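As a concrete arithmetic check on these display parameters, the sketch below computes the stimulus duration and the per-update texture displacement from the values reported above (85-Hz refresh, three repeats per frame, roughly 60-cm viewing distance, 4.8 deg/s speed). The pixel pitch is an assumed value, not reported in the text.

```python
import math

REFRESH_HZ = 85.0        # monitor refresh rate (from the text)
REPEATS = 3              # each frame shown three times (from the text)
VIEW_DIST_CM = 60.0      # approximate viewing distance (from the text)
SPEED_DEG_S = 4.8        # approximate speed in the central part of the image
PIXEL_PITCH_CM = 0.025   # assumed pixel size; NOT given in the text

# time between frame updates (~35 ms) and total duration of 8 frames
update_interval_s = REPEATS / REFRESH_HZ
duration_ms = 8 * REPEATS / REFRESH_HZ * 1000.0  # ~282 ms, as reported

# small-angle conversion from degrees of visual angle to centimeters on screen
cm_per_deg = VIEW_DIST_CM * math.tan(math.radians(1.0))

# texture displacement per frame update, in cm and (assumed) pixels
shift_cm = SPEED_DEG_S * cm_per_deg * update_interval_s
shift_px = shift_cm / PIXEL_PITCH_CM
```

With these values, the duration works out to the 282 ms stated in the text, and the per-update shift is on the order of a few pixels.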
Three subjects who had participated in our previous study using texture as a cue to slant (Rosas et al., 2004) also participated in this experiment. One subject (H.Z.) was a paid observer, naive to the goals of the experiment; another (B.W.) was a highly trained observer in psychophysical experiments, with a general notion of the purposes of this study, who volunteered to participate; and one subject (P.R.) was an author. The data collected for the study in Rosas et al. (2004) for these subjects were used here as the texture-only condition. Each texture was tested independently; that is, we collected all the data corresponding to a given texture type before proceeding to the next one. The order in which the different textures were tested was different for each subject. All subjects had normal or corrected-to-normal vision. 
We used four texture types (circles, leopard, Perlin noise, and 1/f noise) and two standards (see Figure 1): The first standard (37° slant) was selected to observe the combination of texture and motion where the slant-discrimination performance clearly differs for different texture types (and where the texture cue is, in general, less reliable). At the second standard (66° slant), performance is similar for different texture types and the texture cue is more helpful for the discrimination task. Thus, if the weak-fusion model is correct, we should observe the following:
  •  
    different weights for different texture types at the low slant and
  •  
    larger and similar texture weights for different texture types at the higher slant.
A combination of adaptive and constant-stimuli procedures was used to collect data: The adaptive procedure (QUEST; Watson & Pelli, 1983) was used to obtain a crude first estimate of the psychometric function. From this estimate, four stimulus levels were determined to be collected in a constant-stimuli procedure, typically the levels that yielded approximately 63%, 77%, 91%, and 96% correct performance in the estimated psychometric function. The final estimate of each psychometric function was made from the combined data of both procedures, typically representing 350 trials per psychometric function. The fits were obtained using the Psignifit Toolbox (Wichmann & Hill, 2001a, 2001b). All fits were forced to cross chance performance (50% correct in 2AFC) at the slant level of the standard, and the lapse rate was a (highly constrained) free parameter to avoid bias in the estimation, as shown in Wichmann and Hill (2001a). Thus, not all psychometric functions asymptote at 1.0. 
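The shape constraints described above can be illustrated with a toy model of one side of such a psychometric function. The parameterization below (a cumulative Gaussian forced through chance at the standard, with a lapse rate capping the upper asymptote) is an illustrative sketch in the spirit of Wichmann and Hill (2001a), not the exact model fitted with the Psignifit Toolbox; all names are ours.

```python
import math

def proportion_correct(slant, standard, sigma, lapse):
    """2AFC proportion correct for discriminating `slant` from `standard`.

    The function passes through chance (0.5) at the standard and
    asymptotes at 1 - lapse rather than 1.0, mirroring the two
    constraints used in the fits described in the text.
    Hypothetical parameterization, chosen for illustration.
    """
    delta = abs(slant - standard)  # distance from the standard, in degrees
    # standard normal CDF via the error function
    phi = 0.5 * (1.0 + math.erf(delta / (sigma * math.sqrt(2.0))))
    # rescale so that delta = 0 gives 0.5 and large delta gives 1 - lapse
    return 0.5 + (0.5 - lapse) * (2.0 * phi - 1.0)
```

For example, with a nonzero lapse rate the function never reaches 1.0, which is why not all fitted psychometric functions asymptote at 1.0.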
Results
In Figure 2, we show two typical examples of the estimated psychometric functions for slant discrimination with texture only (black) and texture and motion (gray). The data for subject H.Z. and the circle texture show enhanced performance for texture and motion, whereas the data for subject B.W. and the Perlin noise texture show no improvement. Each plot contains the psychometric functions around one standard (66° and 37°, respectively). On each plot, the vertical axis shows performance as the fraction of correct responses, and the (linear) horizontal axis shows slant. The curves depict the estimated psychometric functions for discriminating the standard against planes with smaller slants (left side of the standard) and bigger slants (right side of the standard). Because we plot the results for larger and smaller slants together, the plots of performance versus slant form a "U" shape composed of two psychometric functions touching at the 0.5 performance level. Error bars represent one standard deviation computed by a parametric bootstrap procedure (Wichmann & Hill, 2001b). The size of each data point is proportional to the number of trials collected at that level (to avoid clutter, we do not show data points with fewer than five trials, which were obtained with the adaptive procedure). In Figure 3, we show one example of the slant discrimination for both types of motion. 
Figure 2
 
Examples of performance with texture and motion cues together. The top panel shows an example of better performance with texture and motion cues, whereas the bottom panel shows an example of performance that was not enhanced by motion. On the top, the estimated psychometric functions for slant discrimination around 66° slant with circle-only texture (black) and circle texture and vertical motion (gray) for subject H.Z. On the bottom, the estimated psychometric functions for slant discrimination around 37° slant with Perlin-noise-only texture (black) and Perlin noise texture and horizontal motion (gray) for subject B.W. Error bars depict 68% confidence intervals.
Figure 3
 
Estimated psychometric functions for slant discrimination around 37° slant with leopard texture and horizontal motion (black) and vertical motion (gray) for subject B.W. Error bars depict 68% confidence intervals.
Discussion
Types of motion
In Rosas et al. (2004), we introduced the area enclosed between the psychometric functions around a standard, bounded by two performance levels (60% and 80% correct), as a measure of the difficulty of the task. This measure takes into account both the slopes and the relative shifts of the psychometric functions ("thresholds"). It is exemplified in Figure 4. Lower values of area indicate a more rapidly improving performance. 
Figure 4
 
The area between the psychometric functions for bigger and smaller slants around a certain standard (indicated by a solid vertical line), enclosed by two performance levels, is taken as the measure for the helpfulness of the cue for the discrimination.
We quantify the benefit of motion as the difference between the area values obtained in the slant-from-texture discrimination experiment and those obtained with texture and motion, normalized by the area value from the slant-from-texture discrimination. In other words, we measure the percentage of change in area induced by the motion cue. The results (in percentage values) for each type of motion and each type of texture are displayed in Figure 5. The differences between the types of motion are generally small (between 1% and 14%) and not consistent across types of texture or observers. Across types of texture and types of motion, the benefit from adding motion is about 24%, with some variability among observers ranging from 18% to 34%. 
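The area measure and the benefit-of-motion percentage can be sketched as follows, assuming a cumulative-Gaussian psychometric function with a lapse rate on each side of the standard. This is a hypothetical parameterization chosen for illustration; the actual areas were computed from the fitted functions, and all function names are ours.

```python
from statistics import NormalDist

def offset_at(p, sigma, lapse):
    """Slant offset from the standard at which the model psychometric
    function (cumulative Gaussian with lapse rate; hypothetical
    parameterization) reaches proportion correct p."""
    q = (p - 0.5) / (0.5 - lapse)  # rescale p into the (0, 1) range of 2*Phi - 1
    return sigma * NormalDist().inv_cdf((q + 1.0) / 2.0)

def area_between(sigma_small, sigma_big, lapse, lo=0.6, hi=0.8, n=200):
    """Area enclosed between the two psychometric functions (smaller- and
    bigger-slant sides of the standard), bounded by performance levels
    lo and hi, integrated numerically over the performance axis."""
    dp = (hi - lo) / n
    total = 0.0
    for i in range(n):
        p = lo + (i + 0.5) * dp  # midpoint rule
        total += (offset_at(p, sigma_small, lapse) + offset_at(p, sigma_big, lapse)) * dp
    return total

def motion_benefit(area_texture_only, area_texture_motion):
    """Percentage reduction in area when motion is added to texture."""
    return 100.0 * (area_texture_only - area_texture_motion) / area_texture_only
```

Steeper psychometric functions (smaller sigma) yield a smaller area, and `motion_benefit` returns the percentage change reported in Figure 5.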
Figure 5
 
The benefit of adding motion as percentage of change of area between the psychometric functions. Each plot corresponds to the data of one subject.
Cue combination rule: Veto
A veto that favors the texture cue would produce slant-discrimination performance with texture and motion identical to the performance obtained with texture only. If the veto favored the motion cue, the discrimination performance would be identical for all texture types. We do not observe either possibility systematically in our data. For example, the performance of B.W. for the Perlin noise texture is consistent with a veto of texture over motion (see Figures 2 and 5). However, the same subject shows better performance when motion is added to the circle and leopard textures (see Figure 5). Also, subject P.R. shows improvement for Perlin noise and motion, ruling out a general veto rule for this specific type of texture. 
In the next subsections, we will test whether the optimal cue combination model explains our data. We will focus on the predictions of the optimal cue combination model for discrimination thresholds based on motion only. We will first study the case of uncorrelated estimates from cues; then, we will consider a correlation factor between the estimates derived from motion and texture. 
Optimal cue combination with uncorrelated estimates
Assuming that the (noises associated with the) estimates from cues are normally distributed and uncorrelated, the weighted-average cue combination corresponds to the minimum variance unbiased estimator if the weights are inversely related to the variance of each cue (Ernst & Banks, 2002; Landy et al., 1995): 
\omega_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}.
(1)
Here, ωi and σi² denote the weight and the variance of the ith cue, respectively. Ernst and Banks (2002) derived the relationship between the variances of the estimators and the discrimination thresholds obtained from psychometric functions with an underlying cumulative-Gaussian shape that yields this "statistically optimal" combination: 
\tau_{tm}^2 = \frac{\tau_t^2\,\tau_m^2}{\tau_t^2 + \tau_m^2},
(2)
where τt denotes the discrimination threshold when texture is the only available cue, τm denotes the threshold when motion is the only cue in the stimuli, and τtm denotes the threshold when both cues are combined. Here, the thresholds are defined as the difference between the comparison stimulus judged more slanted 84% of the time and the point of subjective equality (PSE), as in the study of Ernst and Banks. 
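Equations 1 and 2 can be sketched in a few lines; the function names below are ours, chosen for illustration.

```python
import math

def optimal_weights(sigmas):
    """Equation 1: inverse-variance (reliability-proportional) cue weights.

    Takes the standard deviation of each cue's estimate and returns
    weights that sum to 1, with more reliable cues weighted more.
    """
    reliabilities = [1.0 / s**2 for s in sigmas]
    total = sum(reliabilities)
    return [r / total for r in reliabilities]

def combined_threshold(tau_t, tau_m):
    """Equation 2: predicted threshold for texture and motion combined
    optimally, assuming uncorrelated estimates."""
    return math.sqrt(tau_t**2 * tau_m**2 / (tau_t**2 + tau_m**2))
```

Note that the combined threshold is always below the smaller of the two single-cue thresholds, which is the property tested against the data below.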
We have not measured slant-discrimination performance based on motion only because such a condition cannot be constructed without introducing some type of texture into the stimulus. Some authors (e.g., Jacobs, 1999) have argued that because texture density is a weak cue for depth (Buckley, Frisby, & Blake, 1996), it is possible to use a low-density texture to measure a motion-only condition; in particular, they have used low-density random dots. However, considering that the slant level seems to interact even with a weak texture, improving performance markedly (cf. the improved performance for 1/f noise in Rosas et al., 2004), we chose not to measure τm. Nevertheless, because the weak-fusion model, as opposed to strong fusion, assumes that the sensory modules based on each cue are independent (Clark & Yuille, 1990, p. 72), the threshold for motion only should not be affected by the threshold based on texture only. Further, if the texture types used in this study are sufficiently similar that the motion threshold does not change when texture types are interchanged, the values of the motion threshold derived from the optimal cue combination model should be the same for every texture type used in this experiment.4 From Equation 2, we can obtain 
\tau_m = \frac{\tau_t\,\tau_{tm}}{\sqrt{\tau_t^2 - \tau_{tm}^2}}.
(3)
The conditions tested represent 24 sets of predictions (3 subjects × 2 types of motion × 2 standards × 2 sides around the standard) for τm derived from the texture types. We visually inspected whether the predictions overlapped for different textures, that is, whether a unique motion threshold existed for a given slant level and type of motion, considering the overlap of 95% confidence intervals. In only one case (subject P.R., vertical motion, slants larger than 66°) could we accept a single value for motion threshold for the four texture types. In the rest of our data, either there was no overlap for the predictions based on Equation 3 across the four texture types (12 cases) or the results obtained with Equation 3 were not valid threshold values (11 cases). Indeed, the denominator in Equation 3 implies that in all cases, the threshold for texture only should be strictly larger than the threshold for the combined cues.5 Hence, the addition of motion should always improve the discrimination performance, which does not hold in general for our data, as shown in Figure 6.
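Equation 3, including the validity check implied by its denominator, might be sketched as follows (the function name is ours):

```python
import math

def inferred_motion_threshold(tau_t, tau_tm):
    """Equation 3: the motion-only threshold implied by the texture-only
    threshold tau_t and the combined threshold tau_tm under optimal,
    uncorrelated combination.

    Returns None when tau_tm >= tau_t, i.e., when adding motion did not
    improve performance and no real-valued threshold exists.
    """
    if tau_tm >= tau_t:
        return None
    return tau_t * tau_tm / math.sqrt(tau_t**2 - tau_tm**2)
```

The None branch corresponds to the 11 cases in the text where Equation 3 produced no valid threshold value.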
Figure 6
 
Comparing the performance between texture-only and texture and motion conditions. Difference in performance was measured as overlap of 68% confidence intervals at 75% correct. (A total of 32 comparisons [4 textures × 2 types of motion × 2 standards × 2 sides around the standard] per subject are reported. Subject P.R. had no cases of better performance with texture only.)
Optimal cue combination with correlated estimates
The failure of the relationship given by Equation 3 implies either a suboptimal combination of cues or a violation of the underlying assumptions of the model. In particular, a more relaxed model allows a correlation ρ between the noises associated with the estimates derived from the cues. From Oruç et al. (2003), we have the following relationship for the reliability (r) of the combined cues as a function of the reliabilities of the individual cues: 
r_{tm} = \frac{r_t + r_m - 2\rho\sqrt{r_t r_m}}{1 - \rho^2}.
(4)
Given that the reliability r is defined as the reciprocal of the variance of the cue, r = 1/\sigma^2, this equation can be rewritten as 
\frac{1}{\sigma_{tm}^2} = \frac{\frac{1}{\sigma_t^2} + \frac{1}{\sigma_m^2} - \frac{2\rho}{\sigma_t\sigma_m}}{1 - \rho^2}.
(5)
Using the relationship between threshold and standard deviation, \tau = \sqrt{2}\,\sigma, defined in Ernst and Banks (2002), a more general expression of Equation 2 considering a correlation ρ can be derived: 
\tau_{tm}^2 = \frac{(1 - \rho^2)\,\tau_t^2\,\tau_m^2}{\tau_t^2 + \tau_m^2 - 2\rho\,\tau_t\,\tau_m},
(6)
from which we derive the generalization of Equation 3 considering a correlation term ρ: 
\tau_m = \frac{\tau_t\,\tau_{tm}}{\rho\,\tau_{tm} + \sqrt{(1 - \rho^2)(\tau_t^2 - \tau_{tm}^2)}}
(7)

\tau_m = \frac{\tau_t\,\tau_{tm}}{\rho\,\tau_{tm} - \sqrt{(1 - \rho^2)(\tau_t^2 - \tau_{tm}^2)}}
(8)
 
Figure 7 shows one example of the variability of τm as a function of ρ given these equations. 
Figure 7
 
Example of motion threshold predictions with correlation term varying between −1 and 1. The black solid line is given by Equation 8, which we have used in our analysis. The gray dotted line depicts the alternative solution given by Equation 7.
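The relationship in Equation 6 and its two inversions can be checked numerically; the following is a sketch with hypothetical thresholds (the root labels follow Equations 7 and 8):

```python
import math

def combined_threshold(tau_t, tau_m, rho):
    """Equation 6: texture-and-motion threshold for correlated estimates."""
    num = (1 - rho ** 2) * tau_t ** 2 * tau_m ** 2
    den = tau_t ** 2 + tau_m ** 2 - 2 * rho * tau_t * tau_m
    return math.sqrt(num / den)

def motion_threshold_roots(tau_t, tau_tm, rho):
    """The two solutions of Equation 6 for the motion-only threshold
    (Equations 7 and 8); only a positive root is a candidate threshold."""
    s = math.sqrt((1 - rho ** 2) * (tau_t ** 2 - tau_tm ** 2))
    return (tau_t * tau_tm / (rho * tau_tm + s),
            tau_t * tau_tm / (rho * tau_tm - s))

# Round trip: generate a combined threshold at a known tau_m, then check
# that one of the recovered roots reproduces it.
t_t, t_m, rho = 10.0, 7.0, 0.3
t_tm = combined_threshold(t_t, t_m, rho)
roots = motion_threshold_roots(t_t, t_tm, rho)
print(any(abs(r - t_m) < 1e-9 for r in roots))  # True
```

At ρ = 0, the first root reduces to Equation 3, as expected.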
Considering the thresholds obtained from our data and ρ values between −1 and 1, the solution from Equation 7 approaches its asymptote so quickly that it loses its meaning as a psychophysical threshold. Therefore, in the following analysis, we consider only τm values obtained with Equation 8. This equation presents two asymptotes: one associated with ρ = 1 and another associated with the point where its denominator becomes zero. The values of τm obtained in the neighborhood of the asymptotes are doubtful as meaningful thresholds, not only because of their large magnitude but especially because of the large variation of the prediction given small variations of ρ. To avoid variations of the motion threshold predictions that might be numerical artifacts, we set the maximum acceptable value for ρ as the point where a change of 5% in ρ produced a change in τm by a factor of 2. 6 
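The ρ cap just described can be found by grid search; below is a sketch with a hypothetical prediction function (the 5% step and factor-2 criterion are from the text; the function and grid are illustrative only):

```python
def max_acceptable_rho(predict, rho_grid, step=0.05, factor=2.0):
    """Largest rho on the grid for which increasing rho by `step` (5%)
    changes the predicted motion threshold by less than `factor` (2x)."""
    acceptable = None
    for rho in rho_grid:
        p0, p1 = predict(rho), predict(rho * (1 + step))
        if p0 > 0 and p1 > 0 and max(p0 / p1, p1 / p0) < factor:
            acceptable = rho
    return acceptable

# Hypothetical prediction with an asymptote at rho = 0.9; the criterion
# caps rho near 0.82 (where a 5% increase doubles the prediction).
predict = lambda rho: 1.0 / (0.9 - rho)
grid = [i / 100 for i in range(82)]  # 0.00 ... 0.81
print(max_acceptable_rho(predict, grid))  # 0.81
```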
The correlation term denotes a relationship between the noises associated with the estimates derived from the cues, but it does not imply a coupling between the cues such that the motion threshold depends on the texture type. Therefore, the optimal combination of correlated estimates from cues can still be tested by observing whether a unique τm exists for different texture types. As indicated before, the conditions tested represent 24 sets of predictions (3 subjects × 2 types of motion × 2 standards × 2 sides around the standard) derived from the texture types. We proceed by examining whether, in each of these sets, the predictions derived from Equation 8 for the different texture types are valid and statistically equal. 
First, as with Equation 3, the cases in which the threshold for texture and motion is larger than the threshold for texture alone cannot be explained by Equation 8. Of the 24 sets of predictions, 11 include at least one texture whose data show a texture-and-motion threshold statistically larger than the texture-only threshold. 7 In those cases, the term inside the square root becomes negative, given that 1 − ρ² is always positive. Those cases have a single valid solution at ρ = 1, namely, that the motion-only threshold should match the texture-only threshold. However, we do not consider those solutions because they seem to be a mathematical singularity of the equations rather than a psychophysically valid result. 
Second, for the 13 remaining cases, we proceed by visually inspecting whether the predictions overlap for different textures, that is, whether a unique motion threshold exists for a given slant level and type of motion. To observe the range of change of predictions induced by the correlation factor, we considered the predictions for zero correlation, for the correlation factor that minimizes τm, and for the maximum acceptable ρ. The variability of those predictions was estimated using the 1,999 bootstrap values obtained from the Psignifit Toolbox for every psychometric function to compute a similar number of predictions for τm. The error bars were obtained from the limits containing the central 95% of the distribution of predictions. Examples of plots are shown in Figures 8 and 9. The data points with circles represent the predictions with zero correlation. Data points marked with triangles pointing downward correspond to the ρ values that yield the minimum values predicted for τm, whereas data marked with triangles pointing upward represent the maximum values predicted for τm given the restrictions on ρ mentioned above. Data points with squares represent intermediate ρ values that were selected to match the predictions for τm for different textures. 
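A minimal sketch of how such central-interval error bars can be obtained from a set of bootstrap predictions (illustrative values only, not this study's data):

```python
def central_interval(values, coverage=0.95):
    """Limits containing the central `coverage` fraction of the
    bootstrap predictions (used here as error bars)."""
    v = sorted(values)
    cut = int((1 - coverage) / 2 * len(v))
    return v[cut], v[len(v) - 1 - cut]

# 100 illustrative bootstrap predictions of a motion threshold:
preds = list(range(1, 101))
print(central_interval(preds))  # (3, 98)
```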
Figure 8
 
Example of motion threshold predictions for different texture types with correlated cues. In this example, a single motion threshold (approximately 5°) can be accepted for the different texture types at both sides of standard 66° for subject H.Z. To avoid crowding, we do not show the maximum values predicted for τ m because a single motion threshold can be found without resorting to those values. Also, for Perlin noise at smaller slants (left panel) and 1/ f noise at bigger slants (right panel), we selected the minimum value for ρ, and thus, we do not show intermediate results. Error bars are 95% confidence intervals obtained by bootstrapping.
Figure 9
 
Example of significantly different motion threshold predictions from different texture types. Error bars are 95% confidence intervals obtained by bootstrapping.
The visual inspection yields 11 cases in which certain values of ρ induce approximately the same predicted motion threshold for the different textures, considering 95% confidence intervals. Figure 8 shows one example. Details of these cases can be found in Table 1, namely, the values of correlation for the different texture types that render similar predictions for the motion threshold and the limits of the predictions associated with those values of ρ. The last two columns of this table show the intersection of those predictions. In three cases, we selected the maximum value for ρ according to the criterion indicated above. Seven cases were found at the most slanted standard (66°). There were four cases paired around one standard (subjects H.Z. and P.R. with vertical motion at 66°), for which the predicted motion thresholds for smaller and larger slants around the standard intersect. 
Table 1
 
Unique motion threshold for different texture types. For every ρ, the range of predictions for motion threshold ( τ m) is given. The intersection of these predictions are given in the rightmost column (see also Figure 8). α denotes slant angle.
Condition | Circles (ρ, min, max) | Leopard (ρ, min, max) | Perlin (ρ, min, max) | 1/f noise (ρ, min, max) | Intersection (min, max)

B.W.
Horizontal (α > 66°) | −0.35, 3.13, 13.05 | 0.20, 3.73, 10.93 | 0.30, 3.69, 9.18 | 0.87, 4.29, 5.83 | 4.29, 5.83
Vertical (α > 66°) | −0.35, 2.70, 11.99 | 0.25, 3.21, 7.81 | 0.45, 3.42, 6.98 | 0.77, 3.82, 5.82 | 3.82, 5.82

H.Z.
Horizontal (α > 37°) | −0.67, 8.89, 40.62 | −0.40, 9.69, 33.70 | 0.99, 12.42, 16.84 | 0.80, 11.91, 17.30 | 12.42, 16.84
Vertical (α > 37°) | −0.39, 9.48, 43.07 | −0.06, 10.45, 62.32 | 0.48, 11.54, 23.56 | 0.92, 13.73, 18.09 | 13.73, 18.09
Vertical (α < 66°) | −0.51, 3.64, 11.21 | −0.26, 3.31, 12.30 | 0.64, 4.45, 6.82 | 0.90, 4.39, 5.70 | 4.45, 5.70
Vertical (α > 66°) | −0.57, 3.36, 11.47 | −0.35, 3.33, 13.46 | 0.70, 4.47, 6.87 | 0.65, 4.66, 6.28 | 4.66, 6.28

P.R.
Horizontal (α > 37°) | −0.35, 9.01, 53.15 | −0.68, 8.70, 37.13 | 0.00, 10.21, 19.37 | 0.79, 13.23, 19.22 | 13.23, 19.22
Vertical (α < 37°) | −0.58, 15.03, 160.89 | −0.67, 20.37, 67.66 | 0.40, 25.60, 39.04 | 0.53, 27.78, 41.75 | 27.78, 39.04
Horizontal (α > 66°) | 0.00, 3.92, 16.73 | 0.00, 4.77, 18.89 | −0.60, 4.45, 11.06 | 0.85, 5.57, 7.75 | 5.57, 7.75
Vertical (α < 66°) | −0.40, 4.82, 32.60 | −0.50, 5.31, 13.83 | 0.93, 6.96, 9.58 | −0.35, 5.59, 16.12 | 6.96, 9.58
Vertical (α > 66°) | 0.00, 3.26, 6.76 | 0.00, 3.60, 6.98 | 0.00, 3.56, 6.66 | 0.67, 4.30, 6.07 | 4.30, 6.07
Finally, in the remaining two cases, which represent 8% of our data, the correlated-cues expression did not yield the same prediction for motion threshold for different textures. Figure 9 shows one example. 
In summary, 46% of our data (11 of 24 cases) are consistent with the optimal combination of correlated estimates from texture and motion cues. We consider this value an upper bound for the statistically optimal model, given that the analysis was biased toward accepting a common motion threshold prediction: First, we allowed the ρ value to come close to its asymptote, which yields a wide variation of the predictions. Second, the overlap of 95% confidence intervals is a weak test for discarding the hypothesis of equal values. These results could indicate either that subjects are not able to perform optimally in every condition or that they choose strategies without applying a statistically optimal criterion. Alternatively, they might indicate a coupling between motion and texture not captured by the correlation introduced in Equation 8. In particular, it might be that the motion threshold itself depends on the texture type in the stimulus. We will return to this point in the Coupling of texture and motion section. 
Experiment 2: Slant from inconsistent texture and motion (perturbation analysis)
The results of the previous experiment cannot be explained by a statistically optimal combination of correlated or uncorrelated estimates from texture and motion. However, the combination rule for these cues could still be sensitive to the reliability of different texture types as predicted by the weak-fusion model. According to a reliability-sensitive cue combination rule, a higher weight for the motion cue should be given for the less slanted standard, and the different texture types should elicit different weights for the texture cue, reflecting the rank order of texture types already described. Also, at the most slanted standard, the texture weight should increase (for all texture types) as the texture cue becomes more reliable. 
Only in the case of an optimal combination of motion and texture would it be possible to compute the weights of both cues by applying Equation 1. Hence, a perturbation-analysis experiment was conducted to empirically estimate the weights for texture and motion in slant discrimination. In perturbation analysis, the central idea is to introduce a small discrepancy between the cues: The stimulus intensity given by one cue is changed, whereas the rest of the cues are kept at a fixed stimulus intensity. The larger the change in the depth percept induced by the perturbation level, the larger the influence of the perturbed cue. The weight of the perturbed cue is derived from the amount of change in percept as a function of the amount of perturbation. 
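The logic of perturbation analysis can be sketched as a slope estimate; the PSE shifts below are hypothetical, not data from this study:

```python
def cue_weight(perturbations, pse_shifts):
    """Weight of the perturbed cue: the least-squares slope of the
    PSE shift as a function of the perturbation level."""
    n = len(perturbations)
    mx = sum(perturbations) / n
    my = sum(pse_shifts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(perturbations, pse_shifts))
    den = sum((x - mx) ** 2 for x in perturbations)
    return num / den

# If perturbing the texture cue by +/-2 deg and +/-4 deg shifts the PSE
# by 70% of the perturbation, the texture cue carries a weight of 0.7:
print(round(cue_weight([-4, -2, 2, 4], [-2.8, -1.4, 1.4, 2.8]), 3))  # 0.7
```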
Methods
The experimental methodology was similar to that of Experiment 1. Given the perturbation-analysis methodology, on every trial, one of the two images (the standard) corresponded to an inconsistent-cues stimulus, whereas the other (the comparison) was a consistent-cues stimulus. The three subjects who participated in the previous experiment took part in this experiment. None of them reported noticing a difference between the stimuli of this experiment and those of the previous one. Only the method of constant stimuli was used. We divided the procedure into two parts to adjust, if necessary, the stimulus levels to be tested. Typically, 250 trials were collected for each psychometric function. To obtain the “common model” used to estimate the weights in the Results section, we combined the data from the different perturbation levels with those from the previous experiment. Thus, the slope of the model used to estimate the change in PSE was typically estimated from 1,700 trials. 
Stimuli
The stimuli for the inconsistent-cues experiment were created by defining two planes at different slants. One plane corresponded to the slant depicted by the motion cue, whereas the other represented the slant of the texture cue. For every frame, the points on the image plane were displaced according to the (linear) displacement of the motion plane, given the slant of the motion plane and the viewing distance (perspective projection). The pixel values of the points on the image plane were determined by the intersection of the line from the observer's eye through each point on the image plane with the slanted texture plane. A similar method was employed by O'Brien and Johnston (2000) and is analogous to the method of Young, Landy, and Maloney (1993) for moving cylinders. In this experiment, we tested only the horizontal type of motion. 
As in Young et al. (1993), we defined four perturbation levels around the two standards studied in Experiment 1. These perturbation levels define the standards of the present study, against which slanted planes defined by consistent cues were compared to obtain psychometric functions in a slant-discrimination task. Henceforth, “standard” will refer to perturbed-cues stimuli, whereas we will refer to the standards of the previous experiment (37° and 66°) as “slant levels.” Because we conducted the consistent-cues experiment (Experiment 1) before the perturbation-analysis experiment (Experiment 2), we were able to restrict the perturbation levels to the 80% correct level of discrimination with consistent cues, to avoid the robust-estimator effect produced by large conflicts between the cues. In general, the perturbation levels, measured in degrees of slant, were different for every texture and every subject, reflecting the differences between texture types as well as the interindividual differences in performance. 
Results
In Figure 10, we show an example of the obtained psychometric functions for perturbed texture and perturbed motion. These psychometric functions were fitted to each data set independently, that is, before obtaining a common model for the combined data from which we estimate the weights as described below. 
Figure 10
 
An example of the perturbation effect on slant discrimination. Each plot shows the data and estimated psychometric functions for the perturbation of one cue, whereas the other was fixed at a given slant (66°): perturbed motion in the top panel and perturbed texture in the bottom panel. Each standard represents a different perturbation level (color coded). The vertical axis depicts the fraction of trials in which a slant depicted by consistent cues was perceived as more slanted than each of the standards. Each data set shown on this figure was fitted independently, whereas, in Figure 11, we used a common model to obtain the weights.
The linearity of the weak-fusion model predicts that the perturbation should produce only a displacement of the psychometric functions along the stimulus axis. To test this prediction of parallelism, we used the same methodology as in Rosas et al. (2004): First, the stimulus intensities of the data sets were shifted toward one (arbitrary) slant level using the estimated PSE (the slant defined by consistent cues judged 50% of the time as equal to the perturbed-cues standard). The four data sets corresponding to one perturbed cue and one texture type at one slant level (37° or 66°) were combined, also including the corresponding data from the consistent-cues experiment. We used the resulting combined-data fit to refit the individual data sets, leaving only the horizontal shift as a free parameter, that is, forcing on the data sets a model in which the psychometric functions were parallel. 
The goodness of fit of these new refits was assessed with the deviance and the correlation between the deviance residuals and the model predictions. The deviance, or log-likelihood ratio statistic, measures how much a model deviates from a full or saturated model containing as many parameters as data points; larger values of deviance indicate a poorer fit. To determine the critical region for interpreting a particular value of deviance, we generated the deviance distribution using Monte Carlo techniques and performed a one-sided test with a 5% significance level (deviance takes only positive values). Analogously, we compared the observed correlation between the deviance residuals and the model predictions with the Monte-Carlo-estimated distribution for the correlation; a two-sided test with a 5% significance level was performed in this case (see Wichmann, 1999, for further details). The goodness of fit was used as an indicator of parallelism: A failure of the fit indicates that the data could not have been generated by a parallel psychometric family. We observed a lack of fit for 34 of 192 sets (4 textures × 3 subjects × 2 slant levels × 2 perturbed cues × 4 perturbation levels). Dismissing the data sets with a poor fit, we repeated the process of obtaining a combined-data model. In this case, all the common models showed a good fit. For example, in the data depicted in Figure 10, the psychometric function corresponding to the texture cue at 68° and the motion cue at 66° was dismissed because of lack of fit. Thus, we did not include that data set when obtaining the texture weight in this case, as shown in Figure 11. 
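The Monte Carlo deviance test described above can be sketched as follows; this is a generic binomial deviance test, not the exact procedure of Wichmann (1999), and the data are hypothetical:

```python
import math
import random

def deviance(ks, ns, ps):
    """Log-likelihood ratio of the fitted model against the saturated
    model, for binomial data (ks successes out of ns trials, model ps)."""
    def term(count, expected):
        return count * math.log(count / expected) if count > 0 else 0.0
    return 2 * sum(term(k, n * p) + term(n - k, n * (1 - p))
                   for k, n, p in zip(ks, ns, ps))

def mc_deviance_test(ks, ns, ps, n_sim=999, alpha=0.05, seed=1):
    """One-sided Monte Carlo test: the fit is rejected when the observed
    deviance falls in the upper alpha tail of simulated deviances."""
    rng = random.Random(seed)
    obs = deviance(ks, ns, ps)
    sims = sorted(
        deviance([sum(rng.random() < p for _ in range(n))
                  for n, p in zip(ns, ps)], ns, ps)
        for _ in range(n_sim))
    crit = sims[int((1 - alpha) * n_sim)]
    return obs, crit, obs > crit

# Hypothetical data lying exactly on the model: deviance ~ 0, fit accepted.
obs, crit, reject = mc_deviance_test([30, 50, 70, 90], [100] * 4,
                                     [0.3, 0.5, 0.7, 0.9])
print(reject)  # False
```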
Figure 11
 
An example of the estimation of weights using perturbation analysis. On the top row, we show the psychometric functions estimated using the model obtained by fusing the data sets supporting a parallel family of psychometric functions so that the only variation is the PSE. In this particular case, the data set corresponding to texture cue set at 68° and motion cue set at 66° was dismissed because of lack of fit with a same-slope model. On the bottom row, we show how the slope of change of the PSEs, given an amount of perturbation, determines the weight of the perturbed cue.
Using the new common models, we refitted the selected data sets, leaving only the horizontal shift as a free parameter. The goodness of fit for all these refits indicated that those data sets were well modeled by a parallel family of psychometric functions. We used these refits to obtain the weights for each perturbed cue. The cue weights correspond to the slope of change in the PSEs obtained with the perturbation of one cue. To estimate this slope, we minimized the least squared error of a straight-line fit, weighting each PSE point by its bootstrap standard deviation, and constrained the fit such that the weights for texture and motion should be positive and add up to unity for every slant level and texture type. In Figure 11, we show one example of this procedure. The psychometric functions in this figure represent the same data as in Figure 10, with the exception of the data corresponding to the texture cue at 68° and the motion cue at 66°, which were dismissed because of lack of fit as indicated before. Many data sets were dismissed for the 1/f noise texture, preventing the estimation of weights from the slope of change in the PSEs. When possible, in such cases, we estimated the weights given the restriction that both weights should add up to unity. To measure the variability of a cue weight, we computed the linear fit 250 times using samples from the full distributions of bootstrap values obtained with Psignifit, and report 95% confidence intervals. As a measure of the goodness of fit of the straight line, we computed the mean of the normalized residuals of the linear fit, using 1,000 samples of the bootstrapped PSEs obtained with Psignifit. In Table 2, we show the weights obtained in this experiment, as well as the aforementioned measures of variability and goodness of fit. To facilitate visualization, the texture weights are also plotted in Figure 12. 
Table 2
 
Cue weights obtained with perturbation analysis. In some cases (all subjects, 1/f noise, 37 degrees) we could not estimate the weights because of the lack of fit of the parallel family of psychometric functions. For these cases, when possible, we report the weight using the restriction that both weights should add up to unity (marked with asterisk). See text for further details and Figure 12
For each slant level (37° and 66°), the columns are: ω m, 95% CI, mean normalized residual, ω t, 95% CI, mean normalized residual.
B.W.
Circles 1.00 0.01 0.17 0.00 0.01 0.14 0.88 0.03 0.10 0.12 0.03 0.11
Leopard 0.89 0.08 0.03 0.11 0.08 −0.02 0.79 0.05 −0.06 0.22 0.05 −0.03
Perlin noise 0.64 0.06 −0.02 0.36 0.06 −0.02 0.49 0.08 −0.11 0.51 0.08 −0.18
1/ f noise 0.69 0.06 −0.14 0.31 0.06 −0.32

H.Z.
Circles 1.00 0.01 0.01 0.00 0.01 −0.02 0.95 0.04 −0.03 0.05 0.04 −0.05
Leopard 0.96 0.05 −0.01 0.04 0.05 0.01 0.75 0.05 0.03 0.25 0.05 −0.11
Perlin noise 1.00 0.02 −0.06 0.00 0.02 0.00 0.52 0.10 −0.10 0.48 0.10 −0.09
1/ f noise 0.96* 0.04 0.07 −0.04 0.71 0.07 0.15 0.30 0.07 0.12

P.R.
Circles 0.72 0.10 0.02 0.28 0.10 −0.08 0.61 0.08 −0.06 0.39 0.08 −0.09
Leopard 0.59 0.09 0.06 0.41 0.09 0.01 0.59 0.05 −0.19 0.41 0.05 −0.10
Perlin noise 0.64 0.11 0.03 0.36 0.11 −0.04 0.77 0.09 0.00 0.23 0.09 −0.05
1/ f noise 0.63* 0.37 0.14 0.09 0.17 0.07 −0.13 0.83 0.07 −0.24
Figure 12
 
Texture weights obtained with perturbation analysis (with the exception of subject B.W., 1/ f noise texture at 37°, because of lack of fit. See text for further details). Each plot contains the results for one subject at both slant levels tested. Error bars represent 95% confidence intervals.
Discussion
The prediction of a combination rule that is sensitive to the reliability of a cue is straightforward: Less reliable cues should be given smaller weights (less influence) in the combination. In our case, this means, first, that the weight for a particular texture type should increase with the slant level tested and, second, that more reliable textures should receive higher weights than less reliable ones. 
Our results support the first prediction: Considering 68% confidence intervals, in 9 of 11 cases the mean estimated texture weight is larger at the more slanted standard. Considering the 95% variation of the estimates, these cases reduce to 6 of 11 pairs of weights (see Figure 12). However, the texture weights do not follow the rank order of texture types, contrary to the second prediction. In fact, many of the estimated weights follow the opposite order: a larger motion weight for the more reliable texture type. For example, in all cases, the motion weight obtained for circles is larger than the motion weight obtained for 1/f noise. This observation again suggests a dependency of the motion threshold on the texture type, which appears to be supported by a fraction of the data from the previous experiment. We further explore this possibility in the next section by combining the data obtained in both experiments. 
Coupling of texture and motion
When discussing the results of the consistent-cues experiment (Experiment 1) and the perturbation-analysis experiment (Experiment 2), we have suggested that the partial failure of the optimal cue combination model might be due to a dependency of the slant estimates based on the motion cue on the texture type. In Experiment 1, we assumed the motion threshold to be the same for the different texture types used in this study. Now, we relax this assumption and test the optimal cue combination model by combining the data obtained in both experiments. In particular, we apply the optimal cue combination model and observe whether the values for the motion threshold obtained with data from Experiment 1 match those obtained with data from Experiment 2, allowing the motion threshold to take any positive value. 
Uncorrelated estimates
We start by considering the case of uncorrelated estimates. As shown in the Optimal cue combination with uncorrelated estimates section, Equation 3 provides an expression for the motion threshold as a function of the thresholds obtained with texture only and with texture and motion combined. 
Also, we can relate the texture weight obtained with the perturbation-analysis experiment and the motion threshold with the following equation (from Ernst & Banks, 2002): 
\omega_t = \frac{\tau_m^2}{\tau_m^2 + \tau_t^2},
(9)
from which we can obtain a relationship that is analogous to Equation 3 and is applicable to the data from the perturbation-analysis experiment: 
\tau_m = \frac{\tau_t}{\sqrt{\frac{1}{\omega_t} - 1}}.
(10)
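Equations 9 and 10 are mutual inverses, which can be verified numerically with hypothetical thresholds:

```python
import math

def texture_weight(tau_t, tau_m):
    """Equation 9: texture weight from thresholds (uncorrelated estimates)."""
    return tau_m ** 2 / (tau_m ** 2 + tau_t ** 2)

def motion_threshold(tau_t, w_t):
    """Equation 10: motion threshold recovered from the texture weight."""
    return tau_t / math.sqrt(1 / w_t - 1)

# Round trip with hypothetical thresholds (degrees of slant):
w = texture_weight(10.0, 7.0)
print(round(motion_threshold(10.0, w), 6))  # 7.0
```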
 
When analyzing the results of Experiment 1 in the Optimal cue combination with uncorrelated estimates section, we applied Equation 3 to the data on each side of the standards independently. Now, to compare both experiments, we fuse the data from Experiment 1 around the standards for horizontal motion. Thus, we obtain a set of 24 motion thresholds (3 subjects × 4 textures × 2 slant levels) to compare. As already pointed out, a fraction of the data obtained in Experiment 1 cannot be explained by Equation 3, namely, when the performance with both cues is poorer than the performance with texture alone. This also affects the fused data: We observe five cases in which the motion thresholds from Equation 3 are invalid. Setting those cases aside, in Figure 13, we compare the (valid) values for the motion threshold obtained with Equations 3 and 10. Observing whether the confidence intervals of the values derived from both experiments overlap the identity line, we see that the values generally do not match. That is, the optimal cue combination of uncorrelated cues does not seem to model our data accurately. 
Figure 13
 
Comparing the motion threshold values obtained with Experiment 1 (consistent cues) and Experiment 2 (perturbation analysis) using the optimal cue combination of uncorrelated estimates. On the horizontal axis, we show the values obtained using the texture and motion thresholds and texture-only thresholds ( Equation 3), whereas, on the vertical axis, we show the values obtained from the texture weights and texture-only thresholds ( Equation 10). The error bars depict the standard deviation of values obtained by randomly sampling from the empirical distribution of thresholds. The diagonal line (identity) is plotted for reference.
Correlated estimates
Thus, we relax the model and allow for a correlation term between the estimates. We know already from the Optimal cue combination with correlated estimates section that Equation 3 then becomes Equations 7 and 8. 
Analogously, Equation 10 becomes  
\tau_m = \frac{\tau_t\left(2\rho\,\omega_t - \rho - \sqrt{\rho^2 + 4(\rho^2 - 1)(\omega_t - 1)\,\omega_t}\right)}{2(\omega_t - 1)},
(11)

\tau_m = \frac{\tau_t\left(2\rho\,\omega_t - \rho + \sqrt{\rho^2 + 4(\rho^2 - 1)(\omega_t - 1)\,\omega_t}\right)}{2(\omega_t - 1)}.
(12)
 
Next, to obtain the values of ρ for which the motion threshold values coincide between the experiments, we equate Equations 7 and 8 with Equations 11 and 12. For all possible pairings of the four equations, the solutions for ρ are 
\rho = \frac{\tau_{tm}^2 - \tau_t^2\,\omega_t}{\tau_t\sqrt{\tau_t^2\,\omega_t^2 + \tau_{tm}^2\,(1 - 2\omega_t)}}
(13)
and  
\rho = \pm 1.
(14)
 
Applying the measured weights and thresholds, we obtain values for the correlation term (−1 ≤ ρ ≤ 1), which are plotted in Figure 14
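Equation 13 can be checked by generating consistent hypothetical data at a known ρ and recovering it; here the weight expression for correlated estimates follows Oruç et al. (2003), and all numerical values are illustrative:

```python
import math

def combined_threshold(tau_t, tau_m, rho):
    """Equation 6."""
    return math.sqrt((1 - rho ** 2) * tau_t ** 2 * tau_m ** 2
                     / (tau_t ** 2 + tau_m ** 2 - 2 * rho * tau_t * tau_m))

def texture_weight(tau_t, tau_m, rho):
    """Texture weight for correlated estimates (after Oruc et al., 2003)."""
    return (tau_m ** 2 - rho * tau_t * tau_m) / (
        tau_t ** 2 + tau_m ** 2 - 2 * rho * tau_t * tau_m)

def rho_from_data(tau_t, tau_tm, w_t):
    """Equation 13: correlation that reconciles both experiments."""
    return (tau_tm ** 2 - tau_t ** 2 * w_t) / (
        tau_t * math.sqrt(tau_t ** 2 * w_t ** 2 + tau_tm ** 2 * (1 - 2 * w_t)))

# Round trip at a known correlation:
t_t, t_m, rho = 10.0, 7.0, 0.3
r = rho_from_data(t_t, combined_threshold(t_t, t_m, rho),
                  texture_weight(t_t, t_m, rho))
print(round(r, 6))  # 0.3
```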
Figure 14
 
Values of ρ such that valid and identical motion thresholds would be obtained with data from Experiments 1 and 2.
The five cases in which the solution from Equation 13 is not a valid correlation term can be assigned a value of 1 given Equation 14. However, that implies that the motion threshold equals the texture threshold exactly, which, as we argued in the Optimal cue combination with correlated estimates section, we suspect to be a mathematical singularity rather than a psychophysically valid result. Furthermore, there is no clear pattern in the obtained correlation values. Considering the variation across textures, the ρ values for the circles texture are the most similar across subjects and slant levels, spanning a range between 0.37 and 0.73. For the rest of the texture types, there is wider variation, especially for leopard and 1/f noise. This seems to suggest that the motion thresholds derived using the optimal cue combination in this manner lack psychophysical value. Nevertheless, we proceed with those correlation values to obtain the motion thresholds that satisfy either Equation 7 or 8, related to the consistent-cues data, and either Equation 11 or 12, related to the perturbation-analysis data. The obtained values are depicted in Figure 15. 
Figure 15
 
Values of motion thresholds that match the results from (at least) one equation for the consistent-cues data and (at least) one equation for the perturbation-analysis data. (The lines are plotted to facilitate visualization.)
These values follow the same trend as the texture thresholds (lower values for the larger slants), and they also follow the rank order of texture types. If we accept these values as having not only mathematical but also psychophysical meaning, they suggest that whatever statistical properties of a texture help in perceiving its slant also support motion extraction. Alternatively, this may suggest a strong coupling between the sensory modules for texture and motion in the terms of Clark and Yuille (1990). 
Summary and general discussion
To test a reliability-sensitive combination of motion and texture cues to slant, we conducted two experiments in which subjects had to discriminate the slant of moving textured planes using a temporal 2AFC procedure. We manipulated the reliability of the texture cue by changing the texture type in the stimuli, using textures that elicit different performance in slant discrimination, as reported in Rosas et al. (2004). 
In brief, the main observations in this article are the following:
  •  
    First, we conducted a consistent-cues experiment assuming that the motion threshold was independent of the texture types used. We observed that an optimal combination of texture and motion, either as uncorrelated or as correlated cues, could not account for our data.
  •  
    Second, an experiment that performed perturbation analysis showed that the change in weights did not follow the reliability of the individual texture types. However, the texture weights, in general, did increase for larger slants as predicted by a reliability-sensitive cue combination mechanism.
  •  
    Third, we relaxed the assumption that the motion threshold was independent of texture type by analyzing the data from both experiments together, allowing the motion threshold to be a free parameter. We tested the optimal cue combination model by observing whether the motion thresholds predicted from the first experiment matched those predicted from our second experiment. We were able to match them, but because of the large number of free parameters, such matches are trivial to obtain: This is bound to be due to overfitting. Furthermore, the fitted motion thresholds consistent with the optimal cue combination model differed widely across observers, textures, and slant levels. Thus, we are inclined to attribute the "matches" to overfitting rather than take our results as an indication of large, and largely unsystematic, individual differences between observers optimally combining cues.
We will now discuss each of these points in more detail. 
Experiment 1: Testing the optimal cue combination
In the first experiment, with cues consistently depicting slant and with motion sideways or receding or approaching in depth, we tested whether the cues were optimally combined in the sense of constructing a minimum-variance unbiased estimator of depth, as shown by Ernst and Banks (2002) for haptic and disparity cues. Because we chose not to measure a motion-only condition, we tested the optimal combination of cues indirectly, by observing whether a single motion threshold for the different texture types could be obtained from our data using equations from the optimal model. Very few of the data (approximately 4%) supported the optimal combination model. We then introduced a correlation term between the estimates from the cues, combining expressions from Oruç et al. (2003) with those from the study of Ernst and Banks. However, still only about 46% of our data are consistent with such a model. 
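The benchmark being tested here is the minimum-variance combination rule. As an illustration (a sketch in our own notation, not the authors' code), the predicted combined threshold can be written so that ρ = 0 recovers the Ernst and Banks (2002) case and ρ ≠ 0 the correlated extension in the spirit of Oruç et al. (2003):

```python
import math

def combined_threshold(tau_t, tau_m, rho=0.0):
    """Threshold predicted for the minimum-variance combination of two
    cues with single-cue thresholds tau_t, tau_m and correlation rho."""
    denom = tau_t ** 2 + tau_m ** 2 - 2.0 * rho * tau_t * tau_m
    return math.sqrt(tau_t ** 2 * tau_m ** 2 * (1.0 - rho ** 2) / denom)
```

With ρ = 0 the combined threshold can never exceed the smaller of the two single-cue thresholds, which is what makes the rule testable against measured two-cue performance.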
Experiment 2: Testing a reliability-sensitive cue combination
In the second experiment, we conducted a perturbation analysis to estimate the weights for texture and motion, using planes translating sideways. Approximately 83% of the data (158 of 192 data sets) collected with this procedure were well modeled by parallel psychometric functions, as predicted by the linearity of the weak-fusion model. We used those data to estimate the weights and to observe whether the texture weight followed the change of reliability induced by the amount of slant and by the rank order of texture types, as predicted by a reliability-sensitive weak fusion of cues. In most cases, the texture weight increased for larger slants, which is consistent with Knill (1998) and Knill and Saunders (2003). However, we observed a variation that is inconsistent with the rank order of texture types because, in many cases, the weight for the motion cue was larger for those textures that allow better performance in slant discrimination. Young et al. (1993) used texture and motion in a curvature-discrimination experiment with elliptical cylinders rotating in depth and found a decreased texture weight when they made the texture less reliable by randomly changing the shape of the circular texels that composed their stimuli. Besides the differences in geometry and motion flow between rotating cylinders and translating planes, the reliability manipulations employed are different: One could speculate that subjects in the study of Young et al. interpreted the manipulated texture as a deformed version of a more reliable, undeformed texture and changed its weight accordingly, whereas such an interpretation is not possible when changing the type of texture. In a study using a manipulation of the texture cue more similar to ours, O'Brien and Johnston (2000) found no changes in the weight for texture using plaids or a "cloud-like pattern" as textures in a slant-discrimination task based on texture and motion cues. 
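The logic of the weight estimate can be sketched as follows (a simplified illustration, not the fitting procedure actually used, which constrained a parallel family of psychometric functions): under the weak-fusion linear model, perceived slant is a weighted average of the cues, so perturbing one cue by Δ shifts the PSE by w·Δ, and the weight of the perturbed cue is the slope of the PSE shifts against the perturbation:

```python
def weight_from_pses(perturbations, pse_shifts):
    """Least-squares slope of PSE shift vs. cue perturbation; under the
    weak-fusion linear model this slope is the perturbed cue's weight."""
    n = len(perturbations)
    mean_x = sum(perturbations) / n
    mean_y = sum(pse_shifts) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(perturbations, pse_shifts))
    var = sum((x - mean_x) ** 2 for x in perturbations)
    return cov / var
```

For example, PSE shifts of −0.6°, 0°, and +0.6° for texture perturbations of −2°, 0°, and +2° would yield a texture weight of 0.3.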
Testing the optimal cue combination again
The apparent failure of the reliability-sensitive model would imply a suboptimal combination of cues. Our data showed an increasing texture weight for larger slants, where the texture cue is more reliable, suggesting that observers did take reliability into account, at least qualitatively. In the Coupling of texture and motion section, we used the optimal cue combination model to find the motion threshold values that could be derived from the data of both our experiments simultaneously. To achieve this, we had to allow a wide variation of correlation terms between the estimates from cues: Mathematically, we can fit the data, but we clearly have to worry about overfitting. If we accept those values as psychophysically valid, we can explain our data as an optimal cue combination of correlated estimates, but at a price: We have to allow a different correlation coefficient for each slant level, texture, and observer. Furthermore, there is little consistency between observers, implying, if correct, very large individual differences in a low-level vision task. 
When Maloney and Landy (1989) introduced the weak-fusion model for depth-cue combination, they acknowledged that "the assumption that different (…) cues are processed independently is suspect." But they argued that "if we choose appropriate experimental conditions, the performance of each module may be studied psychophysically, or analyzed computationally in isolation" (Maloney & Landy, 1989, p. 1155). It might be that such appropriate experimental conditions are not fulfilled in experiments using a more natural manipulation of reliability, such as the interchanging of texture type used in our study. We find that the variation of the motion thresholds matches closely the variation of texture thresholds with slant level and texture type, as shown in Figure 16.
Figure 16
 
Comparison of motion thresholds estimated with data from Experiments 1 and 2 and empirical texture thresholds.
One reason could be that texture-extraction and motion-extraction algorithms use similar stimulus information and, hence, the dependency: It is in the world. Alternatively, the processing of the cues may not be independent—a strong coupling between the respective sensory modules in the terminology of Maloney and Landy: The dependency we find is generated in our visual system. 
Supplementary Materials
Supplementary Movies 1–4.
Acknowledgments
Part of this research was presented as a poster at the Vision Sciences Society Annual Meeting, Sarasota, FL, May 2003. Financial support was provided by the Research Council at the Katholieke Universiteit Leuven, Belgium (IDO/98/002); by the Fund for Scientific Research-Flanders (FWO G.0189.02 and G.0095.03), and by the Chilean National Commission for Scientific and Technological Research (Fondecyt 3050022). 
We are grateful to Marc Ernst, to an anonymous reviewer, and to our editor, Dr. Laurence T. Maloney, for their insightful comments that have helped improve our manuscript considerably. 
Commercial relationships: none. 
Corresponding author: Pedro Rosas. 
Email: pedro.rosas@tuebingen.mpg.de. 
Address: Instituto de Ciencias Biomédicas, Independencia 1027, Santiago, Chile. 
Footnotes
1  Some authors argue that a different kind of depth representation (other than a depth map) might be used by the visual system. For example, (1994) suggested a curvature map. Given that the question we asked in this study is related to the weak-fusion model, we assume that, regardless of the depth representation in the visual system, it is possible to model depth perception as an average of (metric) estimates.
2  A texture gradient is a notion introduced by Gibson (1950b) to describe the contribution of the texture cue to the perception of depth: On a textured surface receding in depth, certain features, like the shape of the basic component of the texture, change gradually and systematically from point to point in the plane. Such a pattern of change, the gradient, is directly related to the distance from the viewer.
3  We opted for repeating a set of 8 frames instead of generating 24-frame movies because of the extended time needed to generate the frames. By visual inspection, an 8-frame movie whose frames were shown three times was equivalent to a 24-frame movie whose frames were shown once, given the same total spatial displacement.
4  As a counterexample for this last assumption, one can think of the extreme case of a blank texture, whose reliability is zero, but that would also prevent the use of motion for the task. We will relax this assumption in the Coupling of texture and motion section.
5  When evaluating violations to this restriction, we have taken into account the limited precision of the psychophysical measurements (see footnote 7).
6  Given that the range of variation of ρ is 2, a 5% change in ρ is 0.1. Then, the criterion is that, given Δρ = ρ_a − ρ_b = 0.1, if we have τ_m(ρ_a) = 2τ_m(ρ_b), the maximum ρ would be ρ_a.
7  We measured the variability of τ m by bootstrapping the predictions from Equation 8 over the 1,999 bootstrap values obtained from the Psignifit Toolbox for every psychometric function. We accepted the predictions for one texture if at least 50% of the predictions were valid thresholds. In the problematic sets, on average, 13% of the bootstrap values generated valid predictions for τ m.
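The acceptance rule in this footnote amounts to the following check (a sketch; names are ours), with invalid predictions represented as NaN or non-positive values:

```python
def accept_predictions(bootstrap_taus, min_fraction=0.5):
    """Accept the motion-threshold predictions for a texture if at least
    min_fraction of the bootstrap values are valid (finite, positive)."""
    valid = sum(1 for t in bootstrap_taus
                if t == t and t > 0 and t != float("inf"))  # t == t filters NaN
    return valid / len(bootstrap_taus) >= min_fraction
```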
References
Blake, A. Bülthoff, H. H. Sheinberg, D. (1993). Shape from texture: Ideal observers and human psychophysics. Vision Research, 33, 1723–1737. [PubMed] [CrossRef] [PubMed]
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436. [PubMed] [CrossRef] [PubMed]
Bruno, N. Cutting, J. E. (1988). Minimodularity and the perception of layout. Journal of Experimental Psychology: General, 117, 161–170. [PubMed] [CrossRef] [PubMed]
Buckley, D. Frisby, J. P. Blake, A. (1996). Does the human visual system implement an ideal observer theory of slant from texture? Vision Research, 36, 1163–1176. [PubMed] [CrossRef] [PubMed]
Bülthoff, H. H. Mallot, H. A. (1988). Integration of depth modules: Stereo and shading. Journal of the Optical Society of America A, Optics and Image Science, 5, 1749–1758. [PubMed] [CrossRef] [PubMed]
Clark, J. J. Yuille, A. L. (1990). Data fusion for sensory information processing systems. Boston, MA: Kluwer Academic Publishers.
Curran, W. Johnston, A. (1994). Integration of shading and texture cues: Testing the linear model. Vision Research, 34, 1863–1874. [PubMed] [CrossRef] [PubMed]
Cutting, J. E. Millard, R. T. (1984). Three gradients and the perception of flat and curved surfaces. Journal of Experimental Psychology: General, 113, 198–216. [PubMed] [CrossRef] [PubMed]
Cutting, J. E. Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Handbook of perception and cognition (pp. 69–117). San Diego, CA: Academic Press.
Dosher, B. A. Sperling, G. Wurst, S. A. (1986). Tradeoffs between stereopsis and proximity luminance covariance as determinants of perceived 3D structure. Vision Research, 26, 973–990. [PubMed] [CrossRef] [PubMed]
Ellard, C. G. Goodale, M. A. Timney, B. (1984). Distance estimation in the Mongolian gerbil: The role of dynamic depth cues. Behavioural Brain Research, 14, 29–39. [PubMed] [CrossRef] [PubMed]
Ernst, M. O. Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433. [PubMed] [CrossRef] [PubMed]
Gepshtein, S. Banks, M. (2003). Viewing geometry determines how vision and haptics combine in size perception. Current Biology, 13, 483–488. [PubMed] [Article] [CrossRef] [PubMed]
Gepshtein, S. Burge, J. Ernst, M. O. Banks, M. S. (2005). The combination of vision and touch depends on spatial proximity. Journal of Vision, 5, (11):7, 1013–1023, http://journalofvision.org/5/11/7/, doi:10.1167/5.11.7. [PubMed] [Article] [CrossRef]
Gibson, J. J. (1950a). The perception of the visual world. Boston: Houghton Mifflin.
Gibson, J. J. (1950b). The perception of visual surfaces. American Journal of Psychology, 63, 646–664. [PubMed] [CrossRef]
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin Company.
Goodale, M. A. Ellard, C. G. Booth, L. (1990). The role of image size and retinal motion in the computation of absolute distance by the Mongolian gerbil (Meriones unguiculatus). Vision Research, 30, 399–413. [PubMed] [CrossRef] [PubMed]
Heckbert, P. S. (1989). Fundamentals of texture mapping and image warping. Berkeley, CA: University of California, Berkeley.
Jacobs, R. A. (1999). Optimal integration of texture and motion cues to depth. Vision Research, 39, 3621–3629. [PubMed] [CrossRef] [PubMed]
Johnston, E. B. Cumming, B. G. Landy, M. S. (1994). Integration of stereopsis and motion shape cues. Vision Research, 34, 2259–2275. [PubMed] [CrossRef] [PubMed]
Knill, D. C. (1998). Discrimination of planar surface slant from texture: Human and ideal observers compared. Vision Research, 38, 1683–1711. [PubMed] [CrossRef] [PubMed]
Knill, D. C. Saunders, J. A. (2003). Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Research, 43, 2539–2558. [PubMed] [CrossRef] [PubMed]
Landy, M. S. Maloney, L. T. Johnston, E. B. Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412. [PubMed] [CrossRef] [PubMed]
Landy, M. S. Maloney, L. T. Young, M. J. (1991). Psychophysical estimation of the human depth combination rule. In P. S. Schenker (Ed.), Sensor fusion III: 3-D perception and recognition (Vol. 1383, pp. 247–254).
Maloney, L. T. Landy, M. S. (1989). A statistical framework for robust fusion of depth information. In W. A. Pearlman (Ed.), Visual communications and image processing IV (Vol. 1199, pp. 1154–1163).
McKendall, R. (1990). Statistical decision theory for sensor fusion. In Proceedings of the 1990 DARPA Image Understanding Workshop (pp. 861–866). San Mateo, CA: Morgan Kaufmann Publishers, Inc.
O'Brien, J. Johnston, A. (2000). When texture takes precedence over motion in depth perception. Perception, 29, 437–452. [PubMed] [CrossRef] [PubMed]
Oruç, I. Maloney, L. T. Landy, M. S. (2003). Weighted linear cue combination with possibly correlated error. Vision Research, 43, 2451–2468. [PubMed] [CrossRef] [PubMed]
Parker, A. J. Cumming, B. G. Johnston, E. B. Hurlbert, A. C. (1995). Multiple cues for three-dimensional shape. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 351–364). Cambridge, MA: MIT Press.
Pelli, D. G. (1997). The video toolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [PubMed] [CrossRef] [PubMed]
Rosas, P. Wagemans, J. Ernst, M. O. Wichmann, F. A. (2005). Texture and haptic cues in slant discrimination: Reliability-based cue weighting without statistically optimal cue combination. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 22, 801–809. [PubMed] [CrossRef] [PubMed]
Rosas, P. Wichmann, F. A. Wagemans, J. (2004). Some observations on the effects of slant and texture type on slant-from-texture. Vision Research, 44, 1511–1535. [PubMed] [CrossRef] [PubMed]
Watson, A. B. Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33, 113–120. [PubMed] [CrossRef] [PubMed]
Wichmann, F. A. (1999). Some aspects of modelling human spatial vision: Contrast discrimination.
Wichmann, F. A. Hill, N. J. (2001a). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293–1313. [PubMed] [CrossRef]
Wichmann, F. A. Hill, N. J. (2001b). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63, 1314–1329. [PubMed] [CrossRef]
Young, M. J. Landy, M. S. Maloney, L. T. (1993). A perturbation analysis of depth perception from combinations of texture and motion cues. Vision Research, 33, 2685–2696. [PubMed] [CrossRef] [PubMed]
Figure 1
 
An example of the effect of texture type on slant discrimination. On the top part, the psychometric functions obtained with four texture types (error bars representing 68% confidence intervals) for slant discrimination around 37° slant (left plot) and 66° (right plot) are shown. The steepness of the psychometric function reflects how helpful the texture type is for slant discrimination. On the bottom part, the patterns used to obtain the previous data are shown. From left to right: circle, leopard-skin-like, Perlin noise, and 1/f noise textures. For this subject, as for all subjects in this study, the task was easier when the texture based on circles was mapped onto the slanted planes, reflected in the steepest psychometric function. The worst performance was obtained using 1/f noise.
Figure 2
 
Examples of performance with texture and motion cues together. The top panel shows an example of better performance with texture and motion cues, whereas the bottom panel shows an example of performance that was not enhanced by motion. On the top, the estimated psychometric functions for slant discrimination around 66° slant with circle-only texture (black) and circle texture and vertical motion (gray) for subject H.Z. On the bottom, the estimated psychometric functions for slant discrimination around 37° slant with Perlin-noise-only texture (black) and Perlin noise texture and horizontal motion (gray) for subject B.W. Error bars depict 68% confidence intervals.
Figure 3
 
Estimated psychometric functions for slant discrimination around 37° slant with leopard texture and horizontal motion (black) and vertical motion (gray) for subject B.W. Error bars depict 68% confidence intervals.
Figure 4
 
The area between the psychometric functions for bigger and smaller slants around a certain standard (indicated by a solid vertical line), enclosed by two performance levels, is taken as the measure for the helpfulness of the cue for the discrimination.
Figure 5
 
The benefit of adding motion as percentage of change of area between the psychometric functions. Each plot corresponds to the data of one subject.
Figure 6
 
Comparing the performance between texture-only and texture and motion conditions. Difference in performance was measured as overlap of 68% confidence intervals at 75% correct. (A total of 32 comparisons [4 textures × 2 types of motion × 2 standards × 2 sides around the standard] per subject are reported. Subject P.R. had no cases of better performance with texture only.)
Figure 7
 
Example of motion threshold predictions with correlation term varying between −1 and 1. The black solid line is given by Equation 8, which we have used in our analysis. The gray dotted line depicts the alternative solution given by Equation 7.
Figure 8
 
Example of motion threshold predictions for different texture types with correlated cues. In this example, a single motion threshold (approximately 5°) can be accepted for the different texture types at both sides of standard 66° for subject H.Z. To avoid crowding, we do not show the maximum values predicted for τ m because a single motion threshold can be found without resorting to those values. Also, for Perlin noise at smaller slants (left panel) and 1/ f noise at bigger slants (right panel), we selected the minimum value for ρ, and thus, we do not show intermediate results. Error bars are 95% confidence intervals obtained by bootstrapping.
Figure 9
 
Example of significantly different motion threshold predictions from different texture types. Error bars are 95% confidence intervals obtained by bootstrapping.
Figure 10
 
An example of the perturbation effect on slant discrimination. Each plot shows the data and estimated psychometric functions for the perturbation of one cue, whereas the other was fixed at a given slant (66°): perturbed motion in the top panel and perturbed texture in the bottom panel. Each standard represents a different perturbation level (color coded). The vertical axis depicts the fraction of trials in which a slant depicted by consistent cues was perceived as more slanted than each of the standards. Each data set shown on this figure was fitted independently, whereas, in Figure 11, we used a common model to obtain the weights.
Figure 11
 
An example of the estimation of weights using perturbation analysis. On the top row, we show the psychometric functions estimated using the model obtained by fusing the data sets supporting a parallel family of psychometric functions so that the only variation is the PSE. In this particular case, the data set corresponding to texture cue set at 68° and motion cue set at 66° was dismissed because of lack of fit with a same-slope model. On the bottom row, we show how the slope of change of the PSEs, given an amount of perturbation, determines the weight of the perturbed cue.
Figure 12
 
Texture weights obtained with perturbation analysis (with the exception of subject B.W., 1/ f noise texture at 37°, because of lack of fit. See text for further details). Each plot contains the results for one subject at both slant levels tested. Error bars represent 95% confidence intervals.
Figure 13
 
Comparing the motion threshold values obtained with Experiment 1 (consistent cues) and Experiment 2 (perturbation analysis) using the optimal cue combination of uncorrelated estimates. On the horizontal axis, we show the values obtained using the texture and motion thresholds and texture-only thresholds ( Equation 3), whereas, on the vertical axis, we show the values obtained from the texture weights and texture-only thresholds ( Equation 10). The error bars depict the standard deviation of values obtained by randomly sampling from the empirical distribution of thresholds. The diagonal line (identity) is plotted for reference.
Figure 14
 
Values of ρ such that valid and identical motion thresholds would be obtained with data from Experiments 1 and 2.
Figure 15
 
Values of motion thresholds that match the results from (at least) one equation for the consistent-cues data and (at least) one equation for the perturbation-analysis data. (The lines are plotted to facilitate visualization.)
Figure 16
 
Comparison of motion thresholds estimated with data from Experiments 1 and 2 and empirical texture thresholds.
Table 1
 
Unique motion threshold for different texture types. For every ρ, the range of predictions for motion threshold ( τ m) is given. The intersection of these predictions are given in the rightmost column (see also Figure 8). α denotes slant angle.
| Observer / condition | Circles: ρ, min, max | Leopard: ρ, min, max | Perlin noise: ρ, min, max | 1/f noise: ρ, min, max | Intersection: min, max |
|---|---|---|---|---|---|
| **B.W.** | | | | | |
| Horizontal (α > 66°) | −0.35, 3.13, 13.05 | 0.20, 3.73, 10.93 | 0.30, 3.69, 9.18 | 0.87, 4.29, 5.83 | 4.29, 5.83 |
| Vertical (α > 66°) | −0.35, 2.70, 11.99 | 0.25, 3.21, 7.81 | 0.45, 3.42, 6.98 | 0.77, 3.82, 5.82 | 3.82, 5.82 |
| **H.Z.** | | | | | |
| Horizontal (α > 37°) | −0.67, 8.89, 40.62 | −0.40, 9.69, 33.70 | 0.99, 12.42, 16.84 | 0.80, 11.91, 17.30 | 12.42, 16.84 |
| Vertical (α > 37°) | −0.39, 9.48, 43.07 | −0.06, 10.45, 62.32 | 0.48, 11.54, 23.56 | 0.92, 13.73, 18.09 | 13.73, 18.09 |
| Vertical (α < 66°) | −0.51, 3.64, 11.21 | −0.26, 3.31, 12.30 | 0.64, 4.45, 6.82 | 0.90, 4.39, 5.70 | 4.45, 5.70 |
| Vertical (α > 66°) | −0.57, 3.36, 11.47 | −0.35, 3.33, 13.46 | 0.70, 4.47, 6.87 | 0.65, 4.66, 6.28 | 4.66, 6.28 |
| **P.R.** | | | | | |
| Horizontal (α > 37°) | −0.35, 9.01, 53.15 | −0.68, 8.70, 37.13 | 0.00, 10.21, 19.37 | 0.79, 13.23, 19.22 | 13.23, 19.22 |
| Vertical (α < 37°) | −0.58, 15.03, 160.89 | −0.67, 20.37, 67.66 | 0.40, 25.60, 39.04 | 0.53, 27.78, 41.75 | 27.78, 39.04 |
| Horizontal (α > 66°) | 0.00, 3.92, 16.73 | 0.00, 4.77, 18.89 | −0.60, 4.45, 11.06 | 0.85, 5.57, 7.75 | 5.57, 7.75 |
| Vertical (α < 66°) | −0.40, 4.82, 32.60 | −0.50, 5.31, 13.83 | 0.93, 6.96, 9.58 | −0.35, 5.59, 16.12 | 6.96, 9.58 |
| Vertical (α > 66°) | 0.00, 3.26, 6.76 | 0.00, 3.60, 6.98 | 0.00, 3.56, 6.66 | 0.67, 4.30, 6.07 | 4.30, 6.07 |
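The threshold ranges in Table 1 rest on the standard minimum-variance linear combination of two estimates with correlated errors (which, at ρ = 0, reduces to the familiar reliability-based weighting of Ernst & Banks, 2002). As a minimal sketch of that formula — the function and variable names below are ours, not from the paper:

```python
def optimal_combination(sigma_t, sigma_m, rho):
    """Minimum-variance linear combination of two slant estimates whose
    errors have standard deviations sigma_t (texture) and sigma_m (motion)
    and correlation rho. Returns (w_t, w_m, combined_variance)."""
    denom = sigma_t**2 + sigma_m**2 - 2 * rho * sigma_t * sigma_m
    w_t = (sigma_m**2 - rho * sigma_t * sigma_m) / denom
    w_m = (sigma_t**2 - rho * sigma_t * sigma_m) / denom
    # Combined variance; with rho = 0 this is the usual
    # sigma_t^2 * sigma_m^2 / (sigma_t^2 + sigma_m^2).
    var_c = (sigma_t**2) * (sigma_m**2) * (1 - rho**2) / denom
    return w_t, w_m, var_c
```

For two equally reliable cues with ρ = 0 the weights are 0.5 each and the combined variance is half the single-cue variance; a nonzero ρ changes both the weights and the predicted combined threshold, which is what allows a value of ρ to be fitted per texture and condition as in Table 1.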
Table 2. Cue weights obtained with perturbation analysis. In some cases (all subjects, 1/f noise, 37°) we could not estimate the weights because the parallel family of psychometric functions failed to fit. For these cases, when possible, we report the weight derived from the restriction that both weights sum to unity (marked with an asterisk). See text for further details and Figure 12.
| Observer / texture | 37°: ω_m (95% CI, mean norm. residual) | 37°: ω_t (95% CI, mean norm. residual) | 66°: ω_m (95% CI, mean norm. residual) | 66°: ω_t (95% CI, mean norm. residual) |
|---|---|---|---|---|
| **B.W.** | | | | |
| Circles | 1.00 (0.01, 0.17) | 0.00 (0.01, 0.14) | 0.88 (0.03, 0.10) | 0.12 (0.03, 0.11) |
| Leopard | 0.89 (0.08, 0.03) | 0.11 (0.08, −0.02) | 0.79 (0.05, −0.06) | 0.22 (0.05, −0.03) |
| Perlin noise | 0.64 (0.06, −0.02) | 0.36 (0.06, −0.02) | 0.49 (0.08, −0.11) | 0.51 (0.08, −0.18) |
| 1/f noise | — | — | 0.69 (0.06, −0.14) | 0.31 (0.06, −0.32) |
| **H.Z.** | | | | |
| Circles | 1.00 (0.01, 0.01) | 0.00 (0.01, −0.02) | 0.95 (0.04, −0.03) | 0.05 (0.04, −0.05) |
| Leopard | 0.96 (0.05, −0.01) | 0.04 (0.05, 0.01) | 0.75 (0.05, 0.03) | 0.25 (0.05, −0.11) |
| Perlin noise | 1.00 (0.02, −0.06) | 0.00 (0.02, 0.00) | 0.52 (0.10, −0.10) | 0.48 (0.10, −0.09) |
| 1/f noise | 0.96* | 0.04 (0.07, −0.04) | 0.71 (0.07, 0.15) | 0.30 (0.07, 0.12) |
| **P.R.** | | | | |
| Circles | 0.72 (0.10, 0.02) | 0.28 (0.10, −0.08) | 0.61 (0.08, −0.06) | 0.39 (0.08, −0.09) |
| Leopard | 0.59 (0.09, 0.06) | 0.41 (0.09, 0.01) | 0.59 (0.05, −0.19) | 0.41 (0.05, −0.10) |
| Perlin noise | 0.64 (0.11, 0.03) | 0.36 (0.11, −0.04) | 0.77 (0.09, 0.00) | 0.23 (0.09, −0.05) |
| 1/f noise | 0.63* | 0.37 (0.14, 0.09) | 0.17 (0.07, −0.13) | 0.83 (0.07, −0.24) |
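The logic of perturbation analysis is that, under a linear combination rule, perturbing one cue's specified slant by a known amount shifts the point of subjective equality (PSE) in proportion to that cue's weight. A minimal sketch of the inference step — an illustration of the general method, not the paper's fitting code:

```python
def weights_from_perturbation(pse_shift, perturbation):
    """Cue weights implied by a PSE shift when the texture-specified
    slant is perturbed by `perturbation` while the motion-specified
    slant is held fixed: pse_shift = w_t * perturbation. Assumes the
    weights sum to unity (the restriction used for the asterisked
    entries in Table 2)."""
    w_t = pse_shift / perturbation
    w_m = 1.0 - w_t
    return w_t, w_m
```

For example, a 1.2° PSE shift under a 4° texture perturbation implies a texture weight of 0.3. In practice the weights in Table 2 were estimated by fitting a parallel family of psychometric functions rather than from a single PSE difference, which is why the fit can fail (all subjects, 1/f noise, 37°).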
Supplementary Movie