Journal of Vision, December 2019, Volume 19, Issue 14
Open Access Article
Depth cue reweighting requires altered correlations with haptic feedback
Author Affiliations
  • Evan Cesanek
    Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
  • Fulvio Domini
    Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
Journal of Vision December 2019, Vol.19, 3. doi:https://doi.org/10.1167/19.14.3
Abstract

Depth cue reweighting is a feedback-driven learning process that modifies the relative influences of different sources of three-dimensional shape information in perceptual judgments and/or motor planning. In this study, we investigated the mechanism supporting reweighting of stereo and texture information by manipulating the haptic feedback obtained during a series of grasping movements. At the end of each grasp, the fingers closed down on a physical object that was consistent with one of the two cues, depending on the condition. Previous studies have shown that this style of visuomotor training leads to cue reweighting for perceptual judgments, but the time course has never been documented for a single training session, and many questions remain regarding the underlying mechanism, such as the pattern of feedback signals required to drive reweighting. We address these issues in two experiments, finding short-term changes in the motor response consistent with cue reweighting: the slope of the grip aperture with respect to the reliable cue increased, whereas the slope with respect to the unreliable cue decreased. Critically, Experiment 2 shows that slope changes do not occur when one of the cues is rendered with a constant bias; the grip aperture simply becomes uniformly larger or smaller. Our findings support a model of cue reweighting driven by altered correlations between haptic feedback and individual cues, rather than simple mismatches, which can be resolved by other mechanisms such as sensorimotor adaptation or cue recalibration.

Introduction
When interacting with real objects, the motor system depends on estimates of three-dimensional (3D) shapes that are derived from multiple sources of visual information, including motion parallax, texture gradients, binocular disparities (stereo), shading, occlusion, and many others, generally known as depth cues. However, the particular set of available depth cues and the quality of these cues can vary from situation to situation, leading to bias and/or noise in single-cue processing. For example, when viewing an object from a close distance, it will often be perceived as deeper than it truly is due to a constant bias in the perception of depth from binocular disparities (Johnston, 1991). Likewise, when objects have unusual surface markings or reflectance properties, perception of depth from texture can be affected by variable errors, sometimes overestimating and other times underestimating the true object depth (Rosenholtz & Malik, 1997; Todd, 2004). Since 3D shape estimates are routinely used to plan movements like grasping, sudden changes to the amount of bias or noise in each depth cue can pose real problems for fluent motor control. How, then, do we manage to avoid fumbling with objects despite frequent changes in viewing conditions?
Previous work on 3D shape perception has produced a few different computational models of depth-cue integration, the neural process that combines multiple cues into a single 3D shape estimate (Landy, Maloney, Johnston, & Young, 1995; Tassinari, Domini, & Caudek, 2008). Whatever the correct model may be, it can always be locally approximated with a linear function where each depth cue has an associated slope; linear slopes thus serve as convenient measures of the influence (or weight) that each cue has on the elicited response. Evidence that the influence of each cue is continuously updated based on experience comes from several previous studies that measured such slopes before and after visuomotor training (Ernst, Banks, & Bülthoff, 2000; Atkins, Fiser, & Jacobs, 2001; Knill, 2007; Ho, Serwe, Trommershäuser, Maloney, & Landy, 2009). In these experiments, haptic feedback was manipulated to be consistent with a "reinforced" cue but inconsistent with one or more "faulty" cues, all of which were viewed simultaneously as part of a single visual surface. For example, a texture gradient specifying a 30° surface slant can be combined with a binocular-disparity gradient specifying a 10° slant, and the resulting visual stimulus displayed in the same location as a real physical surface. If the physical surface were also slanted by 30°, consistent with the texture slant, then one sensible way to limit the detrimental impact of the faulty stereo cue on perception and motor control would be to reduce the influence of stereo while maintaining or increasing the influence of texture. This form of supervised learning is known as cue reweighting.
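For concreteness, a minimal statement of this local linear approximation, written in the notation introduced later in the Methods (our paraphrase, not an additional model assumption), is \(\hat z = {k_S}{z_S} + {k_T}{z_T} + c\), where \({z_S}\) and \({z_T}\) are the depths specified by the stereo and texture cues, \({k_S}\) and \({k_T}\) are the corresponding slopes (the cue influences estimated throughout this study), and \(c\) is an intercept.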
The main aim of this study was to investigate the haptic feedback conditions necessary to elicit cue reweighting. Our main hypothesis was motivated by a striking commonality in the stimulus design of all previous experiments on this topic. In the seminal paper on cue reweighting by Ernst et al. (2000), the relative influences of stereo and texture cues to slant were slightly modulated after repeatedly touching visual-haptic surfaces where the simulated slant from the faulty cue varied uniformly around the haptic slant. Thus, on every training trial there was a mismatch between the haptic feedback and the faulty cue. Critically, however, by varying the faulty cue uniformly around the haptic slant, Ernst et al. (2000) reduced the correlation between these two sources of information, thus creating mismatches that varied in sign and magnitude from one trial to the next. Notably, the presence of trial-by-trial variability in the mismatch between haptic feedback and the faulty depth cue, as opposed to a constant mismatch, also characterizes the training stimuli used in later studies by Atkins et al. (2001), Knill (2007), and Ho et al. (2009). Meanwhile, no study has tested whether cue reweighting occurs when the faulty cue is affected by a constant bias with respect to haptic feedback, despite the fact that reducing the influence of a biased cue would help to resolve the associated errors. 
Based on these considerations, we aimed to directly test the idea that cue reweighting depends on a learning mechanism sensitive to the correlation of each cue with haptic feedback. We test this hypothesis against the more generic claim that cue reweighting is driven by mere “consistency” or “alignment” with haptic feedback, which predicts that it will occur even in response to a suddenly biased cue that maintains the same correlation with haptic feedback. As motivation for our hypothesis, consider that when a depth cue is affected by a constant bias, but its reliability has not changed, other learning processes are available to resolve the resulting error signals. For instance, simply shifting the motor output via sensorimotor adaptation is a particularly efficient way to resolve movement errors resulting from a constant bias in one depth cue (Cesanek, Taylor, & Domini, 2019). Likewise, another learning process known as cue recalibration, which shifts the mapping of individual sensory signals onto world property estimates, would suffice to restore internal consistency in the face of a constant bias (Adams, Banks, & van Ee, 2001; Atkins, Jacobs, & Knill, 2003; Zaidel, Turner, & Angelaki, 2011). However, when one or more depth cues becomes unreliable (i.e., has a reduced correlation with haptic feedback), then the only way to minimize errors is via cue reweighting. Thus, we tested the prediction that cue reweighting would occur only when a faulty cue became uncorrelated with haptic feedback, and not when the faulty cue was biased. 
A secondary aim of this study was to document the time course of cue reweighting with a finer temporal resolution than in previous studies. To achieve this, we opted to focus on the trial-by-trial motor responses in our visuomotor training task, rather than conducting extensive perceptual tests before and after training as in most previous studies. To our knowledge, only one previous study (Knill, 2007) has examined cue reweighting in a visuomotor task. In that study, down-weighting of a foreshortening cue to slant was shown in session-wise averages across five days of training. Here, we take a closer look at short-term changes in the motor response within a single training block consisting of 99 grasping movements, performed over only 10-15 minutes. 
To measure cue reweighting, we analyzed changes in the maximum grip apertures (MGAs) of grasping movements. In a mirror-based virtual reality environment (Figure 1A), participants repeatedly grasped 3D paraboloid objects defined by stereo and texture cues (Figure 1B). At the end of each grasp, the hand closed down on a real object with a physical depth set to match one or both of the cues, depending on the feedback condition (Figure 1C). We regressed the MGA, our kinematic measure, against the simulated stereo and texture depths, taking the regression slope as an indicator of the influence of each cue. Our results show that cue reweighting reliably occurred when a single depth cue suddenly became uncorrelated with haptic feedback (Experiments 1 and 2), similar to our findings in another study looking at two-finger placement on slanted surfaces (experiment 2 of Cesanek et al., 2019). However, when the depth specified by a particular cue was biased to consistently under- or overestimate physical object depth, motor outputs were uniformly shifted to accurately target the physical depth, but cue reweighting was absent (Experiment 2). 
Figure 1
 
Photographs of the tabletop virtual reality setup. (A) The observer looks into a slanted mirror while wearing stereoscopic glasses, seeing a compelling 3D object on the far side of the mirror. This visual object is aligned with a motorized physical apparatus in the workspace that provides haptic feedback of different depths. During the experiment, the room was completely dark and a back panel was placed on the mirror. The participant reaches with the right hand to grasp the rendered object so the thumb lands on the tip and the index finger lands on the base. Infrared light-emitting diodes (IREDs) attached to the fingernails provide precise location information about the fingertips, allowing us to compute the in-flight grip aperture. (B) Frontal view of a rendered paraboloid (cyclopean rather than stereoscopic view for visualization). The tip of the paraboloid is perfectly centered on a small rounded nub to provide haptic feedback of the tip. (C) Side view of the physical apparatus for providing haptic feedback. A stepper motor spins a screw in order to slide a large round washer back and forth along the screw. This allowed us to create a physical object of any depth on each trial. The thumb landed on the rounded nub aligned with the tip of the paraboloid, while the index finger pinched down on the rear surface, which could be aligned with either the stereo or texture depth.
Methods
Participants
Sixty-five participants were recruited for Experiments 1 (N = 25) and 2 (N = 40; 22 in the Adapt+ condition, 18 in the Adapt− condition). Participants were between 18 and 35 years old and right-handed, with normal or corrected-to-normal vision. They were either granted course credit or paid hourly as compensation. Informed consent was obtained from each participant prior to their participation, in accordance with protocol approved by the Brown University Institutional Review Board and with the ethical standards set forth in the Declaration of Helsinki. 
Apparatus
Figure 1 presents a few photographs of the lab setup. Participants were seated in a height-adjustable chair so that the chin rested comfortably in a chinrest. Movements of the right hand were tracked using an Optotrak Certus motion-capture system (NDI, Waterloo, Canada). Small, lightweight posts containing three infrared-emitting diodes were attached to the fingernails of the index finger and thumb, and the system was calibrated prior to the experiment to track the extreme tips of the distal phalanges of each finger. This motion-capture system was coupled to a tabletop virtual reality environment: Participants looked into a half-silvered mirror slanted at 45° relative to the sagittal body midline, which reflected the image displayed on a 19-in. cathode-ray tube monitor (Sony, Tokyo, Japan) placed directly to the left of the mirror at the correct distance to provide consistent accommodative and vergence information. 
Participants viewed stereoscopic renderings of 3D paraboloid objects, where stereo and texture information were controlled independently via back-projection. We used a texture-generation model similar to that of Young, Landy, and Maloney (1993) but with sphere centroids constrained to occur on the paraboloid surface, creating a more regular pattern. The paraboloids were rendered with their tips at a viewing distance of 40 cm at eye level. This arrangement made the rendered 3D objects appear to be floating in space beyond the mirror. The bases of the paraboloids always subtended 6.5° of visual angle. By keeping the fixation point near the thumb's contact point, we mimicked the natural fixation patterns obtained when using a precision grip to grasp objects at eye level (Voudouris, Smeets, Fiehler, & Brenner, 2018). Stereoscopic presentation was achieved with a frame interlacing technique in conjunction with liquid-crystal goggles synchronized to the frame rate. Stereoscopic visual feedback of the thumb was provided throughout the experiment, to help participants keep track of their hand position. We presented only the thumb to prevent visual comparison of the stereo-rendered fingertips with the stereo depth of the object, which might have unintentionally reinforced stereo information in our haptic-for-texture conditions. Participants were shown a rotating 3D view of several cue-consistent paraboloids with varying depths prior to the experiment, so they were aware of the global shape of the paraboloids and knew their index finger would land on a flat rear surface circumscribed by the base contour, and not an occluded protrusion.
To provide haptic feedback, a custom-built motorized apparatus was placed in the workspace. This apparatus consisted of a stepper motor with its shaft extended by a long screw. On the end of this screw, we attached a round metal nub to simulate the rounded tip of the paraboloid objects—perfect alignment between the physical and rendered paraboloid tips was established during the calibration phase at the start of each session. To simulate the flat, round rear end of the paraboloids, we threaded a metal washer (approximately 6 cm in diameter, equal to the average base diameter of the rendered objects) onto the screw. As the stepper motor spun, the washer traveled back and forth along the length of the screw, anchored on one side to ensure that one rotation of the stepper motor would linearly displace the washer by one thread pitch. On every trial, the resulting depth of the physical object was double-checked using additional Optotrak markers mounted on the physical apparatus and corrected if necessary. 
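To illustrate the control arithmetic implied by this design (one motor revolution displaces the washer by one thread pitch), the following sketch converts a desired change in physical object depth into motor steps. The thread pitch and steps-per-revolution values are placeholders rather than the actual hardware specifications, and this is not the authors' control code.

    # Sketch of the haptic-apparatus positioning arithmetic (hypothetical parameters).
    THREAD_PITCH_MM = 1.0   # washer travel per motor revolution (assumed value)
    STEPS_PER_REV = 200     # stepper motor steps per revolution (assumed value)

    def steps_for_depth_change(current_depth_mm: float, target_depth_mm: float) -> int:
        """Signed number of motor steps needed to move the washer so that the
        physical object depth changes from current_depth_mm to target_depth_mm."""
        travel_mm = target_depth_mm - current_depth_mm
        revolutions = travel_mm / THREAD_PITCH_MM
        return round(revolutions * STEPS_PER_REV)

    # Example: reconfigure from a 30-mm object to a 45-mm object.
    n_steps = steps_for_depth_change(30.0, 45.0)  # 3000 steps with these assumed parameters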
Procedure
Both experiments began with a perceptual Matching task, where participants adjusted the depth of a cue-consistent stimulus to match the perceived depth of each cue-conflict stimulus in the training set. In Experiment 1, participants matched the six off-diagonal objects in the uncorrelated set (Figure 2a); the three cue-consistent objects of this set did not require matching as they were already composed of consistent cues. In Experiment 2, the target stimuli were six cue-conflict objects where stereo depth and texture depth differed by a constant conflict of 10 mm. In one group of subjects (Adapt+), the biased cue was 10 mm shallower than the haptic depth, while in the other group (Adapt−), the biased cue was 10 mm deeper than the haptic depth. On each Matching trial, participants could switch freely between the fixed cue-conflict stimulus and the adjustable cue-consistent stimulus, using keypresses to make incremental changes to the depth of the cue-consistent stimulus until it appeared to match the depth of the cue-conflict stimulus. To prevent the use of motion information, we displayed a blank screen with a small fixation dot for an interstimulus interval of 750 ms whenever the stimulus was changed. Participants performed two repetitions for each of the six cue-conflict stimuli in each experiment, for a total of 12 Matching trials. 
Figure 2
 
Experiment 1: Stereo-texture paraboloid stimuli and Matching task results. (a) Nine paraboloid objects were rendered by independently manipulating texture and stereo cues. For ease of viewing, stereo depth is coded by a color gradient. The main diagonal of the matrix corresponds to the normally occurring covariation of stereo and texture information (i.e., cue-consistent stimuli), while the off-diagonal objects are cue-conflicts. Two oblique views of rendered 3D objects are shown on the far right—the dots are circular on the cue-consistent stimulus (bottom-right), while the dots appear stretched on the cue-conflict stimulus (top-right) such that the frontally viewed projection of the texture specifies a shallower stimulus. (b) At the beginning of each session, participants adjusted the depth of a cue-consistent stimulus (comparison) to create a perceived depth match with each of the cue-conflict stimuli (standards). In the Grasping task, these cue-consistent stimuli were presented in a Baseline phase to calibrate grasping behavior prior to introducing the cue-conflicts, and afterwards in a Washout phase. (c) Average depth setting of the cue-consistent object when adjusted to match the perceived depth of each stereo-texture conflict object. The cue-consistent stimuli (black dots) are plotted as reference points. Error ribbons are ±1 SEM across subjects.
The resulting sets of visual stimuli (six pairs of matched cue-conflict and cue-consistent paraboloids) were presented in the Baseline, Adaptation, and Washout phases of the Grasping task. During the Grasping task, participants used a precision grip to grasp the paraboloid objects from front to back. Trials were presented in a pseudo-random “binned” trial order, where each of the target objects in a given phase of the experiment was presented once before any one was presented again; as a result, each bin contains one presentation of each target object. Since there were nine target objects in the uncorrelated set, each bin of Experiment 1 contained nine trials. Since there were six target objects in the biased sets, each bin of Experiment 2 contained six trials, except during the Pretest and Posttest phases where we presented the uncorrelated set. On each trial, participants were shown the target object for 500 ms, then heard the “go” signal, and reached to grasp the target. There was no explicit time limit on these grasps, but the total elapsed time from movement onset to object contact never exceeded 1.5 seconds. 
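The binned trial order can be made concrete with a short sketch (illustrative only; the object identifiers, depth values, and random seed are placeholders, not the actual experiment script):

    import random

    def binned_trial_order(target_objects, n_bins, seed=None):
        """Pseudo-random 'binned' sequence: every target object appears exactly once
        per bin, with the order shuffled independently within each bin."""
        rng = random.Random(seed)
        sequence = []
        for _ in range(n_bins):
            bin_trials = list(target_objects)
            rng.shuffle(bin_trials)
            sequence.extend(bin_trials)
        return sequence

    # Example: the nine objects of the uncorrelated set as (stereo depth, texture depth)
    # pairs with hypothetical depth values, presented for 11 Adaptation bins.
    objects = [(s, t) for s in (25, 35, 45) for t in (25, 35, 45)]
    trials = binned_trial_order(objects, n_bins=11, seed=1)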
Following the Matching task, the Grasping procedure of Experiment 1 was as follows. In the Baseline phase, participants grasped their personalized set of nine cue-consistent paraboloids, perceptually matched to the nine objects of the uncorrelated set, for three trial bins. Participants then proceeded immediately into the Adaptation phase, where the cue-consistent paraboloids were suddenly replaced by the perceptually matched cue-conflicts, and haptic feedback matched the depth of the reinforced cue (either haptic-for-stereo or haptic-for-texture). Following 11 bins of exposure to the uncorrelated set, Experiment 1 concluded with a two-bin Washout phase, identical to Baseline. 
The procedure of Experiment 2 was designed to be similar to Experiment 1, but with exposure to the biased set, which had a constant cue-conflict of 10 mm, rather than the uncorrelated set, which had variable positive and negative cue-conflicts. As in Experiment 1, participants began with a Baseline phase, grasping their personalized set of six cue-consistent paraboloids, which were perceptually matched to the six objects of the biased set, for three trial bins. Instead of proceeding directly into the Adaptation phase, where they would interact with the cue-conflict objects of the biased set, they first completed a Pretest phase consisting of two bins of trials where we presented the uncorrelated set. Next, in the Adaptation phase, we presented the six objects of the biased set for 10 bins of trials when testing the Adapt+ group, but only for five bins of trials when testing the Adapt− group. We shortened the Adaptation phase for Adapt− because this version of the experiment was run after the Adapt+ group, where we had already observed rapid convergence on the reinforced cue—longer adaptation periods were clearly not necessary to eliminate movement errors, while cue reweighting had been observed in only 18 trials of exposure to the uncorrelated set. Following Adaptation, participants completed a two-bin Posttest, identical to the Pretest, and concluded with a two-bin Washout phase, identical to Baseline. In Experiment 2, haptic feedback matched the depth of the reinforced cue (haptic-for-stereo or haptic-for-texture) during Adaptation as well as during Pretest and Posttest. 
Analysis
Raw motion-capture position data was processed and analyzed offline using custom software. Missing frames due to marker dropout were linearly interpolated, and the 85 Hz raw data was smoothed with a 20 Hz low-pass filter. The time series data from each trial was cropped by defining the start frame as the final frame where the thumb was more than 25 cm from its contact location on the tip of the object, and the end frame as the first frame where (a) the thumb came within 1 cm of its contact location, or (b) the index finger entered into a 3 cm wide by 3 cm high bounding box, extending 10 cm in depth (well beyond the rear edge of the deepest object). The grip aperture profile was computed for each trial by taking the vector distance between the index finger and thumb locations on each frame. The MGA, a widely used kinematic measure of grasp planning (Jeannerod, 1981), was extracted from this time series. 
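A simplified sketch of this preprocessing pipeline, assuming the thumb and index fingertip positions for one trial are given as N × 3 arrays sampled at 85 Hz (marker-dropout interpolation and trial cropping are omitted for brevity, and the filter order is an assumption):

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS_HZ = 85.0       # motion-capture sampling rate
    CUTOFF_HZ = 20.0   # low-pass cutoff used for smoothing

    def lowpass(positions, fs=FS_HZ, cutoff=CUTOFF_HZ, order=2):
        """Zero-phase low-pass filter applied to each coordinate column."""
        b, a = butter(order, cutoff / (fs / 2.0))
        return filtfilt(b, a, positions, axis=0)

    def max_grip_aperture(thumb_xyz, index_xyz):
        """Maximum grip aperture: peak 3D distance between the fingertips."""
        thumb = lowpass(np.asarray(thumb_xyz, dtype=float))
        index = lowpass(np.asarray(index_xyz, dtype=float))
        aperture = np.linalg.norm(index - thumb, axis=1)  # grip aperture profile
        return aperture.max()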
Two criteria were used for trial exclusion: the percentage of frames missing due to marker dropout exceeded 90%, or fewer than five frames were visible. In Experiment 1, neither criterion was met for any movement, so no trials were excluded from analysis. In Experiment 2, 72 out of a total 9,480 trials were excluded by these criteria (∼0.7%). We used relatively liberal exclusion criteria based on the reasoning that it is preferable to obtain some estimate of the MGA on as many trials as possible, even if the extracted MGA does not perfectly match the true MGA. Moreover, an analysis of the frequency of missing frames in each valid trial demonstrated that our criteria were not overly liberal. In Experiment 1 (7,200 trials total), we found that (1) only 63 trials had more than 10 missing frames; (2) only 31 trials had fewer than 22 visible frames (i.e., less than 250 ms of visible trajectory); and (3) only 49 trials had greater than 20% missing frames. In Experiment 2 (9,408 valid trials), the counts in these categories were 49, 33, and 65 trials, respectively.
The factorial design of the uncorrelated set of stimuli allowed us to measure the relative influence of stereo and texture information (\({z_S}\) and \({z_T}\), respectively) in the Grasping task by estimating slopes for each cue (\({k_S}\) and \({k_T}\)) via multiple linear regression with the MGA as the response variable (\(y\)):
\begin{equation}\tag{1}{y_n} = {k_{{S_n}}}{z_S} + {k_{{T_n}}}{z_T} + {x_n}\end{equation}
 
A regression was computed for each bin of nine trials (bin number denoted by subscript \(n\)) within the Adaptation phase of Experiment 1, and in the Pretest and Posttest phases of Experiment 2. In the Baseline and Washout phases of Experiment 1, we computed the slope of the MGA with respect to the perceptually matched cue-consistent depths using simple linear regression in each bin.
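A minimal sketch of this per-bin estimation, assuming the rendered stereo depths, texture depths, and MGAs for one bin are available as equal-length arrays (not the authors' analysis code):

    import numpy as np

    def cue_slopes(stereo_mm, texture_mm, mga_mm):
        """Ordinary least squares fit of Equation 1 for one bin of trials,
        returning the stereo slope k_S, the texture slope k_T, and the intercept."""
        X = np.column_stack([stereo_mm, texture_mm, np.ones(len(mga_mm))])
        coef, *_ = np.linalg.lstsq(X, np.asarray(mga_mm, dtype=float), rcond=None)
        k_S, k_T, intercept = coef
        return k_S, k_T, intercept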
Results
Experiment 1
Figure 2a depicts the uncorrelated set of stimuli, represented in a three-by-three matrix where rows correspond to different texture depths and columns correspond to different stereo depths. Along the main diagonal, we obtain three cue-consistent stimuli, where the two cues are rendered based on the same depth value. The six off-diagonal stimuli are cue-conflicts: Texture depth is greater than stereo depth in the lower-left region but less than stereo depth in the upper-right. In Experiment 1, 25 participants repeatedly grasped these nine stimuli along their depth dimension in two conditions: a haptic-for-texture condition and a haptic-for-stereo condition, where the depth specified by the indicated cue always matched the physical object encountered at the end of the grasp. Consequently, the other cue was uncorrelated with physical depth. 
To obtain a set of cue-consistent stimuli that could be used to calibrate grasping behavior before introducing the cue-conflicts, we asked participants to perform a Matching task at the start of each session (Figure 2b). For each of the six cue-conflicts from the test set, they adjusted the depth of a cue-consistent paraboloid until the two appeared to have the same depth. Each participant grasped objects from their personalized set of perceptually matched cue-consistent stimuli in a Baseline phase before the test stimuli were introduced, and again afterward in a Washout phase. The average depth settings from the Matching task are shown in Figure 2c; these settings correspond to an average relative weight on stereo information \({w_S}\) of 0.75 (SEM = 0.019) according to \({w_S} = \left( {{z_{{\rm{match}}}} - {z_T}} \right)/\left( {{z_S} - {z_T}} \right)\) (see Appendix for derivation; Maloney & Landy, 1989; Young et al., 1993). Notice that here we have used cue weights that sum to one, instead of freely varying coefficients as in our grasp planning model. This is because our psychophysical procedure relied on a comparison with a fixed standard, and thus cannot indicate the exact metric depth that was perceived—an independent metric probe would be required to do this (Young et al., 1993). The Matching procedure only allows a measurement of the relative influences of the two cues in perception.
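As a worked example of this formula with illustrative numbers (not an individual participant's data): for a conflict stimulus with \({z_S} = 45\) mm and \({z_T} = 35\) mm, a matched cue-consistent depth of \({z_{{\rm{match}}}} = 42.5\) mm yields \({w_S} = \left( {42.5 - 35} \right)/\left( {45 - 35} \right) = 0.75\), meaning the match falls three quarters of the way from the texture-specified depth to the stereo-specified depth.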
Having obtained a precise perceptual match for each of the target objects, in the subsequent Grasping task we were able to test whether these matches were treated as such by the visuomotor system, or if a switch from cue-consistent to cue-conflict stimuli would cause an immediate change in grasp performance due to reliance on a different cue-combination rule in visuomotor control versus perception. Figure 3a demonstrates that the MGA scaled roughly linearly with the cue-consistent depths presented during Baseline. Figure 3b plots these Baseline MGAs for cue-consistent stimuli against the MGAs for the first nine trials (i.e., first bin) of the Adaptation phase, where we replaced the cue-consistent stimuli with perceptually matched cue-conflicts. The values are nearly identical across the switch, suggesting that the cue-combined estimate used by the visuomotor system was the same as the perceptual encoding used for the Matching task. The cue-consistent depths presented during Baseline account for 97% of the variability in Baseline grip apertures, compared with 95% of the variability in early Adaptation (adjusted R2). This suggests that the motor responses analyzed in this study reflect the same cue-combination process that supports perceptual judgments.
Figure 3
 
(a) Approximately linear scaling of maximum grip apertures (MGAs) across the range of cue-consistent object depths presented in Baseline. (b) Comparison of MGAs during Baseline and in the first bin of the Adaptation phase. Across this transition, the component stereo and texture depths of each stimulus changed from consistent to conflicting, but the perceived depth of each object remained the same due to the Matching procedure (see Figure 2c for average cue-consistent depths). The strong correlation between the MGAs supports the idea that the visuomotor system relies on the same analysis of depth as the perceptual Matching task. Error bars are ±1 SEM across subjects.
Most importantly, cue reweighting was revealed by changes in the slope of the MGA with respect to stereo and texture depth over the course of Adaptation (Figure 4; gray denotes slopes computed by regressing MGA on cue-consistent stimuli in Baseline and Washout, red and blue denote stereo and texture slopes computed by regressing on cue-conflict stimuli). Note that when analyzing grasping performance, we are able to estimate the slopes of the individual cues because the MGA is an absolute measure, unlike in the perceptual task above, where we could only estimate the relative weights. Restricting our analysis to the first and last bins of Adaptation, we performed a three-way repeated-measures analysis of variance (ANOVA; Condition × Bin × Cue) and found a significant main effect of Cue, F(1, 24) = 60.98, p < 0.0001, representing the stronger influence of stereo information, as well as a significant three-way interaction, F(1, 24) = 5.36, p = 0.029. The latter statistic is the critical one with respect to cue reweighting: It reflects our finding that the difference between stereo and texture slopes becomes smaller over time in the haptic-for-texture condition, and larger over time in the haptic-for-stereo condition. A follow-up two-way ANOVA restricted to the texture slopes yielded no significant effects (all ps > 0.5), whereas restricting the test to the stereo slopes yielded a significant interaction of Condition × Bin, F(1, 24) = 5.14, p = 0.033. This indicates that the three-way interaction of the omnibus test was driven primarily by opposing changes in the stereo slope. This finding was supported by a bin-by-bin analysis of the coefficients, shown in Figures 4a and 4d. When analyzing the difference in these estimates of stereo slope change per bin across haptic feedback conditions, we found that the rate of change was significantly modulated by condition [one-tailed t-test; t(24) = 2.20; p = 0.019]. Linear regressions on the stereo slopes as a function of bin number estimated an average change of +0.01 per bin in the haptic-for-stereo condition, and an average change of −0.01 per bin in the haptic-for-texture condition. 
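For readers who wish to reproduce this style of analysis, the following sketch shows how a three-way repeated-measures ANOVA of this form can be run in Python with statsmodels, assuming a long-format table containing one slope estimate per subject, condition, bin, and cue (the file name and column names are placeholders; this is not the authors' analysis code):

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Expected columns: subject, condition ("haptic-for-stereo" / "haptic-for-texture"),
    # bin ("first" / "last"), cue ("stereo" / "texture"), slope
    slopes = pd.read_csv("slopes_first_last_bins.csv")  # hypothetical file

    result = AnovaRM(slopes, depvar="slope", subject="subject",
                     within=["condition", "bin", "cue"]).fit()
    print(result)  # F and p values for all main effects and interactions,
                   # including the critical Condition x Bin x Cue interaction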
Figure 4
 
Cue reweighting in Experiment 1. Top panel: Haptic-for-texture condition. Bottom panel: Haptic-for-stereo condition. Error bars are ±1 SEM across subjects. (a, d) Slope parameters estimated by linear regression on maximum grip apertures (MGAs) as a function of depth information in each bin. In Baseline and Washout (black), we computed a single slope with respect to the cue-consistent match depths. For the cue-conflicts presented during Adaptation, we computed independent slopes with respect to the rendered stereo and texture depths in a multiple regression (Equation 1). To evaluate whether the influence of stereo and texture information changed in response to the haptic feedback within the Adaptation phase, we fit a further linear regression on these estimated slopes as a function of bin number (solid red and blue lines). (b, e) Stereo and texture slopes in the first (Bin 1) and last (Bin 11) bins of Adaptation. (c, f) Average MGAs for each of the nine target objects (texture depths indicated on x-axis, stereo depths indicated by line groups) in the first (light gray) and last (dark gray) bins of Adaptation.
The bin-by-bin slope analysis also appears to show a small aftereffect on the MGA slope with cue-consistent depths. During Baseline, the MGA slopes tended to relax toward a value slightly less than 1, as is typical for precision-grip grasping (Smeets & Brenner, 1999). However, in the first bin of Washout, the slopes differed significantly across conditions, t(24) = 2.00, p = 0.028. This is consistent with the fact that the sum of the stereo and texture slopes, which approximately determines the slope with cue-consistent stimuli, was reduced during haptic-for-texture adaptation but increased during haptic-for-stereo adaptation. However, this effect is noisy, and when analyzed in the traditional manner (i.e., the change in slope from Baseline to the first Washout bin), the difference between conditions is not significant, t(24) = 1.45, p = 0.079, so it should be interpreted with caution. Still, since aftereffects are typically considered a hallmark of implicit adaptation processes, the apparent trend provides some converging evidence of cue reweighting. 
Experiment 2
As mentioned in the Introduction, sensorimotor adaptation is a highly efficient process for eliminating movement errors due to the presence of a biased depth cue that uniformly over- or underestimates physical depth. However, when an available depth cue suddenly becomes noisier than it was before, reducing its correlation with physical depth, conflicting movement errors will occur (positive errors for spuriously large values of the noisy cue and negative errors for spuriously low values). Faced with conflicting error signals across the domain of visual inputs, the uniform shifts of the motor output invoked by sensorimotor adaptation would oscillate unhelpfully, as would the adjustments of single-cue estimates induced by cue recalibration. The slope adjustments that occur in cue reweighting are the only way to produce increases in some regions of the visual input space and decreases in other regions, as seen in Experiment 1 (Figures 4c, 4f). 
In Experiment 2, we tested our main hypothesis that cue reweighting is a specific response to the variable errors that occur when a cue becomes less correlated with haptic feedback. This hypothesis predicts that cue reweighting should be observed only during exposure to the uncorrelated stimulus set of Experiment 1 (Figure 2a), and not during exposure to a biased stimulus set where the faulty cue always specifies less (or more) depth than the reinforced cue. This stands in contrast to the more generic claim that a lack of consistency with haptic feedback drives cue reweighting. This alternative hypothesis predicts that cue reweighting should occur during exposure to either set because they both involve inconsistency between the faulty cue and haptic feedback. This would suggest a mechanism that does not distinguish between constant and variable errors, or (assuming a Bayesian cue-combination model) attempt to estimate relative cue reliabilities by using correlations with haptic signals as a proxy. Instead, this alternative hypothesis proposes a simplified approximation, where weights could be adjusted to favor cues that show the smallest mismatches with haptic feedback and inhibit those showing the largest mismatches. 
The biased stimulus sets of Experiment 2 comprised six cue-conflict stimuli in which texture depth and stereo depth differed by 10 mm across all objects; the shallower cue always ranged from 20 to 45 mm and the deeper cue from 30 to 55 mm. We recruited two groups of participants for this experiment. For one group (Adapt+), the reinforced cue was the deeper of the two cues; for the other group (Adapt−), the reinforced cue was the shallower of the two cues. Each participant performed a haptic-for-texture condition and a haptic-for-stereo condition in separate sessions. Participants began each session by creating perceptual matches between cue-consistent paraboloids and the six cue-conflict stimuli in the biased set. They then performed grasping movements through five phases: (1) Baseline grasping of the six perceptually matched cue-consistent stimuli; (2) Pretest grasping of the uncorrelated set from Experiment 1 to estimate cue slopes prior to exposure, with haptic feedback matching the reinforced cue (to maintain consistency with Experiment 1); (3) Adaptation grasping of the relevant biased set (10 bins for the Adapt+ group; five bins for Adapt−); (4) Posttest grasping of the uncorrelated set from Experiment 1 to estimate cue slopes after exposure, with haptic feedback still consistent with the reinforced cue; and (5) Washout grasping of the perceptually matched cue-consistent stimuli.
Figure 5 depicts the main results of the experiment, with one panel for each feedback condition (haptic-for-texture, haptic-for-stereo) of each group (Adapt+, Adapt−). In the middle of each panel, we present the Baseline-centered average MGAs for each bin of the Adaptation phase (right-hand y-axis, open circles). The dashed red and blue lines spanning the Adaptation phase represent the rendered stereo and texture depths in the biased sets, with the constant 10-mm cue-conflict; one of these cues was consistent with haptic feedback. The positions of these dashed lines with respect to the average Baseline MGA (zero, right-hand y-axis) reflect the changes in the rendered texture and stereo depths from Baseline to Adaptation. Notice that the dashed red line is slightly closer to zero; this is because cue-consistent depths were set closer to the stereo depths than to the texture depths of the cue-conflicts during perceptual matching, consistent with the stronger influence of stereo information on perceived depth. 
Figure 5
 
Experiment 2 results. Each panel shows the results of one group-condition pairing. Panels a and b depict the two conditions of the Adapt+ group, while panels c and d show the conditions of the Adapt− group. In the central Adaptation phase, participants grasped objects with a constant cue-conflict: 10-mm separation between texture depth and stereo depth (blue and red dashed lines, respectively). For each bin (six trials) of the Adaptation phase, we depict the Baseline-centered average maximum grip apertures (MGAs; right y-axis); the symbol color corresponds to the reinforced cue. The length of the Adaptation phase for the Adapt− group was shortened by half based on the rapid adaptation observed for the Adapt+ group. Flanking the main Adaptation phase, the bar graphs indicate the slope of the MGA with respect to stereo and texture information (left y-axis) during the Pre-test and Post-test phases, where we presented the uncorrelated set of stimuli (matrix of Figure 2a).
During the Adaptation phase, MGAs increased (Adapt+: t(21) = 4.18, p = 0.00021) or decreased (Adapt−: t(17) = 3.21, p = 0.0026) from their Baseline values to target the reinforced cue. We were surprised, however, to find that the time course of these data did not reflect the exponential learning curve characteristic of adaptation to a constant bias. Even in the very first bin of Adaptation (six trials), grasp planning had already compensated for most or all of the change in the haptic feedback. Originally, we expected to observe a more gradual shift of the MGA, as participants in previous grasp adaptation experiments required approximately 10 trials to fully adapt in response to similar perturbations (Cesanek & Domini, 2017; Cesanek et al., 2019). It is likely that the inclusion of the Pretest phase between Baseline and Adaptation disrupted the typical time course. In any case, the key result of the Adaptation phase is that MGAs were significantly altered from Baseline, appearing to specifically target the reinforced cue by the end of the phase. 
In the Pretest and Posttest phases, we measured the influences of stereo and texture information during 18 grasps toward the uncorrelated set (matrix of Figure 2a). Note that even during these Test trials, haptic feedback remained consistent with the reinforced cue. As in Experiment 1, we estimated a slope parameter for each cue using multiple linear regression (left-hand y-axis, bar graphs). We then performed a mixed-design ANOVA on these slopes with a single between-subjects factor (Group: Adapt+ or Adapt−) and three within-subjects factors (Condition: haptic-for-stereo or haptic-for-texture; Test Phase: Pretest or Posttest; Cue: stereo or texture). This analysis revealed a significant main effect of Cue, F(1, 38) = 106.72, p < 0.001, as well as a three-way interaction of Condition × Test Phase × Cue, F(1, 38) = 9.41, p = 0.0040.
At first glance, these results appear to suggest that, contrary to our predictions, cue reweighting did in fact take place during exposure to the biased set. However, this conclusion overlooks the possibility that these tests captured a gradual accumulation of cue reweighting within the Pretest and the Posttest themselves. Recall that within each Test phase, participants performed two bins of nine grasps toward the stimuli of the uncorrelated set, with haptic feedback continuing to reinforce the reliable cue in each condition. We did this so that the slopes measured during these Test phases would be obtained under conditions identical to those in the Adaptation phase of Experiment 1. Accordingly, we also evaluated the possibility that cue reweighting occurred within the Pretest and the Posttest, during exposure to the uncorrelated set, and not across the central Adaptation phase during exposure to the biased sets.
We fit multiple linear regressions to measure the influences of stereo and texture information in each of the two bins of Pretest and Posttest, so we could compare cue reweighting that occurred within the Test phases (from Bin 1 to Bin 2) with that occurring across the Adaptation phase (from Pretest Bin 2 to Posttest Bin 1). This is appropriate because Bin 2 of the Pretest provides the most up-to-date measure of cue influences on grasping prior to any exposure to the biased set. Recall that we predicted no cue reweighting during exposure to the biased set, so the relative influence of the reinforced cue should not be enhanced across the Adaptation phase, whereas we might expect some degree of cue reweighting within the Test phases. 
First, we used a mixed-design ANOVA as an omnibus test of the slope-change data displayed in Figure 6, with one between-subjects factor, Group (columns of Figure 6), and two within-subjects factors, Condition (rows of Figure 6) and Order (x-axis of Figure 6; changes occurring within the Test phases vs. those occurring across the Adaptation phase). Since cue reweighting is marked by opposing changes in the stereo and texture slopes, we have simply taken the difference between the slope changes, change in reinforced minus change in faulty, as our dependent variable. This analysis revealed a significant interaction of Condition × Order, F(1, 38) = 6.39, p = 0.016, indicating that the within-versus-across difference varied as a function of the feedback condition. Accordingly, we followed up with two specific paired t tests, one for each condition. In the haptic-for-texture condition (Figures 6a, 6c), we found that cue reweighting was significantly greater within the Test phases than across the Adaptation phase, t(39) = 2.92, p = 0.0058. No such difference was found in the haptic-for-stereo condition (p = 0.47)—Figures 6b and 6d reveal mostly negligible slope changes in this condition. An apparent exception can be spotted in Figure 6d (Adapt−, haptic-for-stereo), where it appears that the strength of stereo information increased across the Adaptation phase. However, on closer inspection we found the Pretest of this condition to be somewhat anomalous, with unusually low stereo and texture slopes in Bin 2 of Pretest (0.67 and 0.12, compared with 0.89 and 0.27 in the preceding bin). The low slopes in this bin were accompanied by a very large intercept parameter (39.8 mm, compared with 27.4 mm in the preceding bin), suggesting that participants had adopted a uniformly larger grip aperture and temporarily reduced their normal reliance on depth information. The return to typical stereo slopes in Bin 1 of Posttest should therefore not be taken as evidence of cue reweighting—indeed, a post-hoc test of this subset of the data revealed that the observed increase in stereo slope across Adaptation was not significant (p = 0.075). 
Figure 6
 
Changes in slope parameters observed within the Test phases, as a result of exposure to the uncorrelated set, versus those observed across the Adaptation phase, as a result of exposure to the biased sets. The shading of the background indicates the expected direction of slope change for each cue, if cue reweighting took place. For example, in a haptic-for-texture condition, cue reweighting would be marked by an increase in the slope of the MGA with respect to texture information (blue) and/or a decrease in the slope with respect to stereo (red).
Overall, by breaking down our Pretest and Posttest phases into their constituent bins, we found evidence that the overall cue reweighting from Pretest to Posttest (as seen in Figure 5) actually resulted from cumulative exposure to the uncorrelated set in the two Test phases. These data show that the constant bias introduced during the Adaptation phase was handled by simply increasing or decreasing the grip aperture, with no signs of cue reweighting. Yet, during this phase, participants had plenty of exposure to a systematic mismatch between haptic feedback and the faulty depth cue, so these results run counter to the hypothesis that cue reweighting is driven by the consistency of each cue with haptic feedback.
Discussion
Both of the reported experiments induced reweighting of stereo and texture information as measured by changes in the slope of the MGA with respect to each cue. Consistent with our hypothesis, cue reweighting occurred only in response to the faulty cue's reduced correlation with haptic feedback, and not in response to a constant mismatch with haptic feedback, which instead produced only a shift of the MGA. We suspect that the rapid adjustment of the planned grip aperture in biased cue conditions is mainly the result of sensorimotor adaptation, as opposed to the slower process of cue recalibration (Adams et al., 2001; Zaidel et al., 2011), although further experimentation would be needed to confirm this, perhaps by evaluating performance on single-cue test stimuli. The observed time course of grasping behavior in each experiment demonstrates that cue reweighting occurs considerably more slowly than sensorimotor adaptation, but still quickly enough to be relevant on a situation-by-situation basis, rather than only over prolonged exposure periods of multiple days. 
It should be noted that the cue reweighting within the Test phases of Experiment 2 differs in a few noticeable ways from that observed in Experiment 1. First, in the haptic-for-stereo condition of Experiment 2, we found no evidence of cue reweighting, in contrast to the haptic-for-stereo condition of Experiment 1. However, an asymmetry in cue reweighting between haptic-for-stereo and haptic-for-texture conditions is also evident in the data of Ernst et al. (2000), as well as in our earlier study on this topic (Cesanek et al., 2019). One explanation involves a ceiling effect in haptic-for-stereo conditions: at near viewing distances, some participants initially show heavy reliance on stereo and minimal reliance on texture, so there is little room for additional cue reweighting in favor of stereo. A related explanation of the asymmetry is based on the possibility that cue reweighting is not driven by the experienced correlation of each cue with haptic feedback per se, which is equivalent in the two conditions, but instead by sensory-prediction errors related to the timing and magnitude of contact forces felt during each grasp. If sensory predictions about contact forces are made on the basis of the cue-combined shape estimate, then the resulting errors will necessarily be smaller when haptic feedback reinforces the more influential cue. We elaborate on this model of the cue-reweighting mechanism below. In any case, we would argue that the asymmetrical cue reweighting across conditions in Experiment 2 should be seen as less surprising than the relative symmetry observed in Experiment 1. A second difference is that in the haptic-for-texture condition of Experiment 2, the texture slopes substantially increased, whereas we found no change in the texture slope for either feedback condition of Experiment 1. We have been unable to determine the source of these differences, but in general they appear to be slight variations in the quantitative expression of cue reweighting, rather than qualitative differences in the phenomenon. Most importantly, they do not affect the main finding of Experiment 2: no evidence of cue reweighting was found across the constant-bias Adaptation phase in either condition, for either group. 
Mechanism of cue reweighting
The fact that cue reweighting occurs only in response to altered correlations with haptic feedback helps to constrain the set of possible underlying mechanisms. First, we can rule out a mechanism that adjusts the influence of each cue according to its absolute difference from haptic estimates of 3D shape, since such a mechanism should produce cue reweighting in response to a biased cue, which we did not observe. Having rejected this possibility, we are left with an obvious candidate: a mechanism that continuously estimates the correlation between each depth cue and haptic information and uses these estimates directly to set the influence of each cue. 
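To make this candidate concrete, the following is a minimal simulation sketch (Python/NumPy; the trial counts, noise level, sliding-window estimator, and the mapping from correlation to weight are all illustrative assumptions, not a fitted model). The system tracks the recent correlation of each cue with the haptically specified depth and sets the cue weights in proportion to those correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, window = 300, 60                     # assumed trial count and estimation window

z = rng.uniform(25, 55, n_trials)              # physical object depths (mm), roughly the stimulus range
stereo = z                                     # reinforced cue: consistent with haptic feedback
texture = z + rng.normal(0, 8, n_trials)       # faulty cue: added noise lowers its correlation
haptic = z                                     # haptic feedback specifies the true depth

w_stereo, w_texture = [], []
for t in range(window, n_trials):
    recent = slice(t - window, t)
    r_s = max(np.corrcoef(stereo[recent], haptic[recent])[0, 1], 0.0)
    r_t = max(np.corrcoef(texture[recent], haptic[recent])[0, 1], 0.0)
    w_stereo.append(r_s / (r_s + r_t))          # weights set directly from the correlations
    w_texture.append(r_t / (r_s + r_t))

print(f"late-session weights: stereo ~ {np.mean(w_stereo[-50:]):.2f}, "
      f"texture ~ {np.mean(w_texture[-50:]):.2f}")
```

In this scheme the faulty cue's weight falls below 0.5 as soon as its correlation with the haptic signal drops, precisely because the per-cue correlations themselves are being monitored.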
However, the fact that the experimenter must reduce one cue's correlation with haptic feedback to produce cue reweighting does not necessarily mean that the system tracks these correlations directly. As an alternative, we propose that cue reweighting could also be driven on a trial-by-trial basis by sensory-prediction errors, the same signals believed to drive sensorimotor adaptation. These errors are registered by comparing actual sensory feedback with an internal prediction of the expected feedback, based on the cue-combined 3D shape estimate and the outgoing motor command. Thus, sensory-prediction errors are quite generic: There is not a separate error computed with respect to each available cue, unlike the model described above, in which each cue must be compared with haptic information to monitor its correlation. 
Nonetheless, generic sensory-prediction errors still contain information about the latent correlation of each cue with haptic feedback. This is because added noise in a cue (i.e., a reduced correlation with haptic feedback) inevitably leads to sensory-prediction errors that are positively correlated with that cue. For example, when the faulty cue takes on a spuriously large depth value, you might open your grip much wider than necessary during a grasp, leading to a positive sensory-prediction error, since the time it takes to make contact is longer than expected. But when the faulty cue takes on a spuriously small depth value, you bump into the target sooner than expected, producing a negative error. When an input signal is positively correlated with errors, the influence of that input signal can be reduced gradually through well-known online supervised learning algorithms, such as the delta rule (Widrow & Stearns, 1985). Since reduced correlations with haptic feedback inevitably result in positive correlations with generic error signals, it is possible to explain cue reweighting as the result of simple backpropagation of these errors. This is arguably more parsimonious than positing dedicated sensory mechanisms that compare each available single-cue estimate of 3D shape with concurrent haptic estimates. 
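A minimal sketch of this alternative (again Python/NumPy, with assumed parameter values; the scalar error here stands in for the contact-timing errors described above) trains a single linear unit with the delta rule on the haptically specified depth. No per-cue correlations are ever computed, yet the weight on the noisier cue declines.

```python
import numpy as np

rng = np.random.default_rng(2)
lr, n_trials = 1e-4, 5000                     # assumed learning rate and trial count

w = np.array([0.5, 0.5, 0.0])                 # initial stereo slope, texture slope, intercept

for _ in range(n_trials):
    z = rng.uniform(25, 55)                   # physical object depth (mm)
    stereo = z                                # reinforced cue: consistent with haptic feedback
    texture = z + rng.normal(0, 8)            # faulty cue: added noise, reduced correlation
    x = np.array([stereo, texture, 1.0])

    predicted = w @ x                         # cue-combined estimate used to plan the grip
    error = z - predicted                     # generic sensory-prediction error at contact
    w += lr * error * x                       # delta rule: the update never inspects cues individually

print(f"stereo slope = {w[0]:.2f}, texture slope = {w[1]:.2f}, intercept = {w[2]:.2f} mm")
# Typical outcome: the stereo slope drifts toward 1 and the texture slope toward 0,
# i.e., reweighting emerges from generic errors alone.
```

Because the faulty cue is positively correlated with the errors it produces, the delta rule steadily reduces its influence, which is the reweighting signature measured in the experiments.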
Lastly, with respect to the underlying mechanism of cue reweighting, we must acknowledge that the present study does not directly show whether the observed changes were perceptual in nature or confined to the motor system. However, the near-equivalence of MGAs in the Baseline and early Adaptation phases (Figure 3b) suggests that the same cue-combined depth estimates used in the Matching task were also used for motor planning. So, unless the relative contributions of the two cues can be further modified for motor control after they have been combined for perception, it is reasonable to conclude that the observed changes reflect a perceptual change. Additional support comes from the fact that the changes in MGA shown in this study are comparable in magnitude to the perceptual effects originally reported by Ernst et al. (2000) with a similar style and duration of training. Future studies should aim to establish whether cue reweighting can be elicited independently for motor responses and perceptual judgments. 
Acknowledgments
This work was supported by a National Science Foundation grant to F. D. (BCS-1827550). 
Commercial relationships: none. 
Corresponding author: Evan Cesanek. 
Address: Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA. 
References
Adams, W. J., Banks, M. S., & van Ee, R. (2001). Adaptation to three-dimensional distortions in human vision. Nature Neuroscience, 4 (11), 1063–1064.
Atkins, J. E., Fiser, J., & Jacobs, R. A. (2001). Experience-dependent visual cue integration based on consistencies between visual and haptic percepts. Vision Research, 41, 449–461.
Atkins, J. E., Jacobs, R. A., & Knill, D. C. (2003). Experience-dependent visual cue recalibration based on discrepancies between visual and haptic percepts. Vision Research, 43, 2603–2613.
Cesanek, E., & Domini, F. (2017). Error correction and spatial generalization in human grasp control. Neuropsychologia, 106, 112–122.
Cesanek, E., Taylor, J. A., & Domini, F. (2019). Sensorimotor adaptation compensates for distortions of 3D shape information. bioRxiv, https://doi.org/10.1101/540187.
Ernst, M. O., Banks, M. S., & Bülthoff, H. H. (2000). Touch can change visual slant perception. Nature Neuroscience, 3 (1), 69–73.
Ho, Y. X., Serwe, S., Trommershäuser, J., Maloney, L. T., & Landy, M. S. (2009). The role of visuohaptic experience in visually perceived depth. Journal of Neurophysiology, 101 (6), 2789–2801.
Jeannerod, M. (1981). Intersegmental coordination during reaching at natural visual objects. In Long J. & Baddeley A. (Eds.), Attention and performance IX (pp. 153–168). Hillsdale, NJ: Lawrence Erlbaum Associates.
Johnston, E. B. (1991). Systematic distortions of shape from stereopsis. Vision Research, 31 (7–8), 1351–1360.
Knill, D. C. (2007). Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7 (7): 5, 1–20, https://doi.org/10.1167/7.7.5. [PubMed] [Article]
Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35 (3), 389–412.
Maloney, L. T., & Landy, M. S. (1989). A statistical framework for robust fusion of depth information. In Visual communications and image processing IV (Vol. 1199, pp. 1154–1163). Bellingham, WA: International Society for Optics and Photonics.
Rosenholtz, R., & Malik, J. (1997). Surface orientation from texture: Isotropy or homogeneity (or both)? Vision Research, 37 (16), 2283–2294.
Smeets, J. B., & Brenner, E. (1999). A new view on grasping. Motor Control, 3 (3), 237–271.
Tassinari, H., Domini, F., & Caudek, C. (2008). The intrinsic constraint model for stereo-motion integration. Perception, 37 (1), 79–95.
Todd, J. T. (2004). The visual perception of 3D shape. Trends in Cognitive Sciences, 8 (3), 115–121.
Voudouris, D., Smeets, J. B., Fiehler, K., & Brenner, E. (2018). Gaze when reaching to grasp a glass. Journal of Vision, 18 (8): 16, 1–12, https://doi.org/10.1167/18.8.16. [PubMed] [Article]
Widrow, B., & Stearns, S. D. (1985). Adaptive signal processing. Englewood Cliffs, NJ: Prentice Hall, Inc.
Young, M. J., Landy, M. S., & Maloney, L. T. (1993). A perturbation analysis of depth perception from combinations of texture and motion cues. Vision Research, 33 (18), 2685–2696.
Zaidel, A., Turner, A. H., & Angelaki, D. E. (2011). Multisensory calibration is independent of cue reliability. Journal of Neuroscience, 31 (39), 13949–13962.
Appendix
The equation for the perceptual weights of stereo and texture information in the Matching task of Experiment 1 is derived as follows. Begin by assuming that the absolute perceived depth of the adjustable cue-consistent stimulus \(\hat z_{\rm consistent}\) can be approximated by
\begin{equation}\tag{A1} \hat z_{\rm consistent} = \left( k_S + k_T \right) \times z_{\rm match} \end{equation}
where \(k_S\) and \(k_T\) are non-negative perceptual scaling parameters on the simulated stereo and texture depths (note that these need not sum to one) and \(z_{\rm match}\) is the simulated metric depth specified by both cues in the cue-consistent match stimulus. Similarly, the perceived depth of the target cue-conflict stimulus \(\hat z_{\rm conflict}\) is
\begin{equation}\tag{A2} \hat z_{\rm conflict} = k_S \times z_S + k_T \times z_T \end{equation}
where \(z_S\) and \(z_T\) are the simulated metric depths specified by the conflicting stereo and texture cues. When the perceived depths are matched by the participant in our task, we can state
\begin{equation}\tag{A3} \hat z_{\rm consistent} = \hat z_{\rm conflict} \end{equation}
which is equivalent to
\begin{equation}\tag{A4} z_{\rm match} = \frac{k_S}{k_S + k_T} \times z_S + \frac{k_T}{k_S + k_T} \times z_T \end{equation}

From this, the stereo weight \(w_S\) and texture weight \(w_T\) are defined as
\begin{equation}\tag{A5} w_S = \frac{k_S}{k_S + k_T} \end{equation}

\begin{equation}\tag{A6} w_T = \frac{k_T}{k_S + k_T} \end{equation}
and therefore
\begin{equation}\tag{A7} w_T = 1 - w_S \end{equation}

Thus, by substituting \(w_S\) and \(\left( 1 - w_S \right)\) into the equation for \(z_{\rm match}\) and rearranging, we obtain
\begin{equation}\tag{A8} w_S = \frac{z_{\rm match} - z_T}{z_S - z_T} \end{equation}
 
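As a hypothetical worked example (the values below are chosen purely for illustration and are not drawn from the data), suppose a conflict stimulus specifies \(z_S = 40\) mm and \(z_T = 30\) mm, and a participant sets the cue-consistent comparison to \(z_{\rm match} = 37\) mm. Equation A8 then gives
\begin{equation} w_S = \frac{37 - 30}{40 - 30} = 0.7 \end{equation}
so stereo would account for roughly 70% of this observer's perceived depth, with the remaining 30% attributed to texture (Equation A7).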
Figure 1
 
Photographs of the tabletop virtual reality setup. (A) The observer looks into a slanted mirror while wearing stereoscopic glasses, seeing a compelling 3D object on the far side of the mirror. This visual object is aligned with a motorized physical apparatus in the workspace that provides haptic feedback of different depths. During the experiment, the room was completely dark and a back panel was placed on the mirror. The participant reaches with the right hand to grasp the rendered object so the thumb lands on the tip and the index finger lands on the base. Infrared light-emitting diodes (IREDs) attached to the fingernails provide precise location information about the fingertips, allowing us to compute the in-flight grip aperture. (B) Frontal view of a rendered paraboloid (cyclopean rather than stereoscopic view for visualization). The tip of the paraboloid is perfectly centered on a small rounded nub to provide haptic feedback of the tip. (C) Side view of the physical apparatus for providing haptic feedback. A stepper motor spins a screw in order to slide a large round washer back and forth along the screw. This allowed us to create a physical object of any depth on each trial. The thumb landed on the rounded nub aligned with the tip of the paraboloid, while the index finger pinched down on the rear surface, which could be aligned with either the stereo or texture depth.
Figure 2
 
Experiment 1: Stereo-texture paraboloid stimuli and Matching task results. (a) Nine paraboloid objects were rendered by independently manipulating texture and stereo cues. For ease of viewing, stereo depth is coded by a color gradient. The main diagonal of the matrix corresponds to the normally occurring covariation of stereo and texture information (i.e., cue-consistent stimuli), while the off-diagonal objects are cue-conflicts. Two oblique views of rendered 3D objects are shown on the far right: the dots are circular on the cue-consistent stimulus (bottom right), while the dots appear stretched on the cue-conflict stimulus (top right) such that the frontally viewed projection of the texture specifies a shallower stimulus. (b) At the beginning of each session, participants adjusted the depth of a cue-consistent stimulus (comparison) to create a perceived depth match with each of the cue-conflict stimuli (standards). In the Grasping task, these cue-consistent stimuli were presented in a Baseline phase to calibrate grasping behavior prior to introducing the cue-conflicts, and afterwards in a Washout phase. (c) Average depth setting of the cue-consistent object when adjusted to match the perceived depth of each stereo-texture conflict object. The cue-consistent stimuli (black dots) are plotted as reference points. Error ribbons are ±1 SEM across subjects.
Figure 3
 
(a) Approximately linear scaling of maximum grip apertures (MGAs) across the range of cue-consistent object depths presented in Baseline. (b) Comparison of MGAs during Baseline and in the first bin of the Adaptation phase. Across this transition, the component stereo and texture depths of each stimulus changed from consistent to conflicting, but the perceived depth of each object remained the same due to the Matching procedure (see Figure 2c for average cue-consistent depths). The strong correlation between the MGAs supports the idea that the visuomotor system relies on the same analysis of depth as the perceptual Matching task. Error bars are ±1 SEM across subjects.
Figure 4
 
Cue reweighting in Experiment 1. Top panel: Haptic-for-texture condition. Bottom panel: Haptic-for-stereo condition. Error bars are ±1 SEM across subjects. (a, d) Slope parameters estimated by linear regression on maximum grip apertures (MGAs) as a function of depth information in each bin. In Baseline and Washout (black), we computed a single slope with respect to the cue-consistent match depths. For the cue-conflicts presented during Adaptation, we computed independent slopes with respect to the rendered stereo and texture depths in a multiple regression (Equation 1). To evaluate whether the influence of stereo and texture information changed in response to the haptic feedback within the Adaptation phase, we fit a further linear regression on these estimated slopes as a function of bin number (solid red and blue lines). (b, e) Stereo and texture slopes in the first (Bin 1) and last (Bin 11) bins of Adaptation. (c, f) Average MGAs for each of the nine target objects (texture depths indicated on x-axis, stereo depths indicated by line groups) in the first (light gray) and last (dark gray) bins of Adaptation.
Figure 5
 
Experiment 2 results. Each panel shows the results of one group-condition pairing. Panels a and b depict the two conditions of the Adapt+ group, while panels c and d show the conditions of the Adapt− group. In the central Adaptation phase, participants grasped objects with a constant cue-conflict: 10-mm separation between texture depth and stereo depth (blue and red dashed lines, respectively). For each bin (six trials) of the Adaptation phase, we depict the Baseline-centered average maximum grip apertures (MGAs; right y-axis); the symbol color corresponds to the reinforced cue. The length of the Adaptation phase for the Adapt− group was shortened by half based on the rapid adaptation observed for the Adapt+ group. Flanking the main Adaptation phase, the bar graphs indicate the slope of the MGA with respect to stereo and texture information (left y-axis) during the Pretest and Posttest phases, where we presented the uncorrelated set of stimuli (matrix of Figure 2a).