Article | March 2012
Different types of feedback change decision criterion and sensitivity differently in perceptual learning
Author Affiliations
Journal of Vision March 2012, Vol.12, 3. doi:https://doi.org/10.1167/12.3.3
Kristoffer C. Aberg, Michael H. Herzog; Different types of feedback change decision criterion and sensitivity differently in perceptual learning. Journal of Vision 2012;12(3):3. https://doi.org/10.1167/12.3.3.
Abstract

In (perceptual) learning, performance improves with practice either through changes in sensitivity or through changes in decision criterion. Often, changes in sensitivity are regarded as the appropriate measure of learning, while changes in criterion are considered unavoidable nuisances. Very little is known about the distinguishing characteristics of the two learning types. Here, we show first that block feedback, which affects sensitivity, does not affect the criterion. Second, contrary to changes in sensitivity, changes in decision criterion are limited to the training session and do not transfer overnight. Finally, training with biased trial-wise feedback induces a sensitivity change such that a left offset Vernier may be perceived as a right offset Vernier.

Introduction
In signal detection theory (SDT), it is assumed that neural responses to a stimulus are noisy, i.e., the presentation of the very same stimulus can lead to different responses on different trials. In a Vernier discrimination task, participants discriminate whether a lower line is offset to the left or to the right relative to an upper line (Figure 1A). According to SDT, the neural responses to the two Verniers can be modeled by two Gaussian functions, respectively (Figure 1A). The distance between the means of the Gaussians reflects the sensitivity of the task (the variance of the Gaussians is assumed to be 1.0 for both Verniers). The harder the discrimination task, the closer the means of the two Gaussians are to each other. Because the Gaussians overlap, no unique decision is possible and, hence, a decision criterion is needed. The criterion bisects the decision space such that a "LEFT" response is elicited when the presentation of a Vernier elicits a value to the left of the criterion, while a "RIGHT" response is elicited by values to the right of the criterion (Figure 1A). Thus, according to SDT, performance, determined in terms of the percentage of correct responses, depends on both the sensitivity, i.e., the distance between the means of the Gaussians, and the location of the criterion. The sensitivity calculations derive from the standard definition d′ = z(H) − z(FA), where H and FA are the hit rate and the false alarm rate, respectively, and z is the inverse of the standard normal distribution function. The sensitivity between two Verniers V1 and V2 offset to different sides is calculated as
d′_V1V2 = z(H_V2) + z(H_V1),
and the sensitivity between two Verniers V1 and V2 offset to the same side is calculated as
d′_V1V2 = z(H_V2) − z(H_V1).
The criterion measure, c, is calculated as
c = z(FA_VSL),
where FA_VSL is the false alarm rate for the Vernier with the smallest offset (see General materials and methods section). 
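As an illustrative sketch (our code, not the authors'; z denotes the inverse of the standard normal cumulative distribution function), these measures can be computed directly from response proportions:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # z-transform: inverse standard normal CDF

def d_prime_opposite(hit_v1, hit_v2):
    """Sensitivity between two Verniers offset to different sides."""
    return z(hit_v2) + z(hit_v1)

def d_prime_same(hit_v1, hit_v2):
    """Sensitivity between two Verniers offset to the same side."""
    return z(hit_v2) - z(hit_v1)

def criterion(fa_sl):
    """Criterion from the false alarm rate of the smallest-offset Vernier."""
    return z(fa_sl)

# For example, hit rates of 0.8 for two Verniers on opposite sides:
print(round(d_prime_opposite(0.8, 0.8), 2))  # 1.68
```

Note that an unbiased observer (FA = 0.5) yields c = 0 under this criterion measure, since z(0.5) = 0.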
Figure 1
 
Stimuli and procedure. (A) Presentation of a Vernier V i elicits a neural response that in signal detection theory is modeled by a Gaussian function. The Gaussian reflects the probability that the Vernier elicits a neural activity x. The Gaussian mean reflects the size and direction of the Vernier offset; the variance is assumed to be equal for all Verniers. A response is determined by comparing the neural activity x with a decision criterion c. A value to the left of c evokes a “left” response, while a “right” response is evoked by a value to the right of c. In this example, the decision criterion “badly” bisects the decision space because it is placed close to the mean of the right Gaussian. Discrimination improves if the decision criterion c is shifted to optimally bisect the decision space (red dashed line). Discrimination also improves when the overlap of the Gaussians is decreased, for example, by shifting the left Gaussian further to the left (red dashed Gaussian). (B) We adapted this paradigm to five Verniers. In each trial, one of the five Verniers was presented in the center of the screen. Verniers with a big (B) or a medium (M) offset were offset either to the left (L) or to the right (R). A Vernier with a small (S) offset was offset only to the left. BL, ML, MR, and BR Verniers were presented with a probability of 1/7, while the SL Vernier was presented with a probability of 3/7. Due to the reverse feedback for the SL Vernier (see text), hit rates could improve either by shifting the decision criterion c to the left (red dashed line) or by shifting the Gaussian corresponding to the SL Vernier to the right (red dashed Gaussian). (C) Procedure. On Day 1, baseline performance for Verniers was measured in two blocks of 80 trials. During training, six groups of participants trained with the five Verniers under six different feedback conditions (see text). 
On Day 2, participants performed three blocks with the Verniers without feedback, followed by three blocks with correct feedback.
Sensitivity increases when the means of the Gaussians are moved apart because this reduces the overlap. For example, in Figure 1A, sensitivity increases when the left Gaussian is moved further to the left (to the position of the red dashed Gaussian). Performance also increases when the criterion approaches optimal bisection of the decision space. For example, in Figure 1A, moving the criterion to the left (to the position of the red dashed line) improves discrimination between the two Verniers. 
Changes in sensitivity and changes in decision criterion are usually studied separately, i.e., criterion changes are neglected in studies of sensitivity and vice versa. Models of perceptual learning typically assume an unbiased criterion (Dosher & Lu, 1999; Gold, Bennett, & Sekuler, 1999; Petrov, Dosher, & Lu, 2005). However, recent studies showed that sensitivity and criterion changes can occur simultaneously (Wenger, Copeland, Bittner, & Thomas, 2008; Wenger & Rasche, 2006): training on Gabor detection increased sensitivity but also, unexpectedly, increased false alarm rates. We previously showed a strong interaction between criterion changes and sensitivity changes, namely, sensitivity changes were prevented during criterion changes (Herzog, Ewald, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). In these studies, biased feedback was provided to force participants to change the decision criterion (Herzog et al., 2006; Herzog & Fahle, 1999). Participants discriminated the offset direction of one out of five different Verniers (see Figure 1B), e.g., the lower line could be offset to the right (R) with a big (B) or medium (M) offset size or to the left (L) with a big (B), medium (M), or small (S) offset size. In each trial, one of the Verniers was presented at the center of the screen. An error tone was provided after each erroneous response for all Verniers except the Vernier with a small offset to the left (SL Vernier). For this Vernier, reverse feedback was provided, i.e., an error tone was presented when participants indicated a left offset, while no tone was presented when they indicated a right offset. Following training, the hit rate decreased for all Verniers offset to the left, indicating that the reverse feedback had induced a strong shift of the decision criterion. There were no indications of sensitivity changes (Herzog et al., 2006; Herzog & Fahle, 1999). 
Here, we used this paradigm to further study the different characteristics of sensitivity and decision learning and the role of feedback in perceptual learning. Sensitivity changes in a Vernier task depend on the feedback provided during training (Herzog & Fahle, 1997). Trial-wise feedback, e.g., an error tone, or block-wise feedback, e.g., the percentage of correct responses after a number of trials, improves performance more strongly than training without feedback. In addition, sensitivity changes are maintained between sessions (Aberg, Tartaglia, & Herzog, 2009; Tartaglia, Aberg, & Herzog, 2009), and sometimes sensitivity improves between, rather than within, sessions, often requiring sleep (Karni & Sagi, 1993; Karni, Tanne, Rubenstein, Askenasy, & Sagi, 1994; Mednick et al., 2002; Mednick, Nakayama, & Stickgold, 2003; Stickgold, LaTanya, & Hobson, 2000; Stickgold, Whidbee, Schirmer, Patel, & Hobson, 2000; Yotsumoto et al., 2009). 
Here, we tested whether criterion changes also depend on the type of feedback and whether criterion changes occurred between or within sessions. Furthermore, we tested whether sensitivity and criterion changes occurred simultaneously. 
The results show that changes in decision criterion are different from changes in sensitivity. First, in contrast to sensitivity changes, the decision criterion was not changed by block feedback. Second, whereas sensitivity changes are long lasting and often occur between sessions, criterion changes occurred within a session and dissipated between sessions. Third, reverse feedback changed the sensitivity for the Vernier with a small offset to the left relative to the other Verniers. 
General materials and methods
Participants
Thirty-six naïve participants (23 males) from the Ecole Polytechnique Fédérale de Lausanne (EPFL) joined the experiment after providing informed written consent. All participants had normal or corrected-to-normal vision, as measured with the Freiburg visual acuity test (Bach, 1996) and were paid for their participation (20 CHF/h). 
Apparatus and stimuli
Verniers consisted of two vertical lines of length 10′ (arcmin) separated by a gap of 1′. In each trial, the lower line was offset randomly to the left or to the right relative to the upper line. Verniers were presented on an XY display (Tektronic 608) controlled by a PC via fast 16-bit D/A converters (1-MHz pixel rate). Lines were composed of dots drawn at a dot size of 250–350 μm at a dot rate of 1 MHz. The dot pitch was selected so that dots slightly overlapped, i.e., the dot size (or line width) was of the same magnitude as the dot pitch. Stimuli were refreshed at 200 Hz. Luminance was 80 cd/m2 measured with a Minolta LS-100 luminance meter. The room was dimly illuminated (0.5 lux). Background luminance on the screen was below 1 cd/m2. Verniers were presented foveally for 150 ms from a distance of 2 m. 
Task and procedure
In each trial, one Vernier was presented in the center of the screen and participants indicated whether the lower line was offset to the left or to the right relative to the upper line by pushing one of two buttons. 
The experiment was conducted over 2 days (Figure 1C). On the first day, baseline performance for Vernier discrimination was determined in two blocks of 80 trials each. A threshold was determined in each block by varying the Vernier offset size by an adaptive staircase method (PEST; Taylor & Creelman, 1967). A threshold of 75% correct responses was determined by maximum likelihood estimation of the parameters of the psychometric function. The initial offset size was set to 0.8′. Trial-by-trial correct feedback was provided during the baseline measurements. 
Directly after the baseline measurements, participants trained for ten blocks of 84 trials each. One out of five different Verniers was presented in each trial. Verniers offset to the right (R) could have either a big (B) or a medium (M) offset, while Verniers offset to the left (L) could have a big (B), medium (M), or small (S) offset (Figure 1B). These Verniers are referred to as BR, MR, BL, ML, and SL, respectively. BR, MR, BL, and ML Verniers had a display probability of 1/7 each, while the SL Vernier had a display probability of 3/7. The offset sizes were calculated for each participant based on baseline performance: big (B) offset = 1.1 * baseline threshold, medium (M) offset = 0.66 * baseline threshold, and small (S) offset = 0.33 * baseline threshold. On the second day, participants performed six more blocks. No feedback was provided for the first three blocks, while trial-by-trial correct feedback was provided for the last three blocks. 
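For concreteness, a minimal sketch (our code; the function and key names are ours) of how the three offset sizes follow from a participant's baseline threshold:

```python
def vernier_offsets(baseline_threshold):
    """Offset sizes (arcmin) derived from the 75%-correct baseline threshold."""
    return {
        "big": 1.1 * baseline_threshold,
        "medium": 0.66 * baseline_threshold,
        "small": 0.33 * baseline_threshold,
    }

# Using the initial staircase offset of 0.8 arcmin as an example threshold:
offsets = vernier_offsets(0.8)
print(offsets["small"])  # one-third of the baseline threshold, i.e., subthreshold
```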
Thirty-six participants joined the experiment and were randomly assigned to one of six groups. The groups differed by the type of feedback provided during the training on Day 1. One group received no feedback during the training (noFB; n = 6). A second group received trial-by-trial correct feedback, i.e., an error tone was presented directly following an incorrect response (tt-correctFB; n = 6). A third group received trial-by-trial reverse feedback (tt-reverseFB; n = 6) where an error tone was presented following an incorrect response for all Verniers except the Vernier with a small offset to the left. For this Vernier, an error tone was presented for correct responses, i.e., when the Vernier was indicated as offset to the left and no tone was presented for incorrect responses. A fourth group received correct block feedback after seven trials, i.e., displaying the hit rate on the screen (7tb-correctFB; n = 6). A fifth group received reverse block feedback after seven trials (7tb-reverseFB; n = 6). For this group, the block feedback was calculated as the sum of the hit rates for the BL, ML, BR, and MR Verniers plus the error rate for the SL Vernier. Finally, a sixth group received block reverse feedback after 84 trials (84tb-reverseFB; n = 6). 
Analysis
Performance data were analyzed according to the standard model of SDT. This model assumes Gaussian noise distributions that are constant and of equal variance. Furthermore, a single decision criterion is assumed for discriminating the offset direction of all Verniers. The decision criterion c and sensitivity d′ were calculated according to standard procedures (Macmillan & Creelman, 2005). The decision criterion can be calculated as
c = 0.5(z(H) + z(FA))
as well as
c = z(FA_VSL).
Since the results did not differ between these measures, we used the latter for simplicity (for a comparison with the other measure, see Herzog et al., 2006). The sensitivity between two Verniers V1 and V2 offset to different sides was calculated as
d′_V1V2 = z(H_V2) + z(H_V1),
and the sensitivity between two Verniers offset to the same side was calculated as
d′_V1V2 = z(H_V2) − z(H_V1),
where H_V1 and H_V2 are the hit rates for the two Verniers V1 and V2. Analysis of variance (ANOVA) and permutation tests (Moore & McCabe, 2005) were used for statistical comparisons. 
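The two criterion measures can be sketched as follows (our code, following the equations above; the two measures need not coincide numerically, but in this study their pattern of results did not differ):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF

def criterion_full(hit, fa):
    """c = 0.5(z(H) + z(FA)), computed from hit and false alarm rates."""
    return 0.5 * (z(hit) + z(fa))

def criterion_sl(fa_sl):
    """Simplified measure used here: c = z(FA_VSL)."""
    return z(fa_sl)

# An observer with FA_VSL = 0.5 has criterion 0 under the simplified measure:
print(criterion_sl(0.5))
```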
Results
Hit rates are shown in Figures 2A–2F. There was little or no change in hit rate for the noFB group (Figure 2A). For the trial-by-trial correct feedback group, there were slight increments of hit rates for the left offset Verniers and slight decrements for the right offset Verniers (Figure 2B). In the trial-by-trial reverse feedback condition, and in accordance with previous studies (Herzog et al., 2006; Herzog & Fahle, 1999), hit rates decreased for left offset Verniers while slightly increasing for right offset Verniers (Figure 2C). There was little or no change in hit rates for the groups receiving block feedback (7tb-correctFB: Figure 2D; 7tb-reverseFB: Figure 2E; 84tb-reverseFB: Figure 2F). 
Figure 2
 
Hit rates for the six feedback conditions. In blocks 1–10 (first day of training), different groups received different types of feedback (see below and text). In blocks 11–16 (second day of training), no feedback was provided for blocks 11–13, while correct feedback was provided for blocks 14–16 for all groups. (A) No feedback condition (n = 6). (B) Trial-by-trial correct feedback condition (n = 6). (C) Trial-by-trial reverse feedback condition (n = 6). (D) Seven-trial correct block feedback condition (n = 6). (E) Seven-trial reverse block feedback condition (n = 6). (F) Eighty-four-trial reverse block feedback condition (n = 6). The hit rate decreased for all right offset Verniers in the trial-by-trial correct feedback condition (B) and for all left offset Verniers in the trial-by-trial reverse feedback condition (C). There were only small or no changes in the other conditions (see text). Mean hit rate (%) ± SEM.
Changes in decision criterion
Next, we investigated whether the changes in hit rate could be explained by changes in the decision criterion. The results are shown in Figure 3A. There were large criterion changes for participants receiving trial-by-trial feedback (tt-reverseFB and tt-correctFB) while there were small or no changes for the other groups. To determine whether the criterion changed with training, regression lines were fitted to each participant's data for blocks 1–10. The averaged regression slopes are shown in Figure 3B. The slopes were significantly different from 0.0 for the trial-by-trial feedback groups, indicating that the criterion changed with training [tt-reverseFB: mean criterion change = −0.055 c/block, p < 0.01; tt-correctFB: mean criterion change = 0.055 c/block, p < 0.01]. These results are in line with previous studies showing that performance changes were caused by criterion changes (Herzog et al., 2006; Herzog & Fahle, 1999). There were no significant criterion changes for participants receiving no or block feedback [noFB: mean criterion change = 0.012 c/block, p = 0.57; 7tb-reverseFB: mean criterion change = −0.005 c/block, p > 0.75; 84tb-reverseFB: mean criterion change = −0.016 c/block, p > 0.46; 7tb-correctFB: mean criterion change = 0.001 c/block, p > 0.96]. Hence, criterion changes, as opposed to sensitivity changes (Herzog & Fahle, 1997; Shibata, Yamagishi, Ishii, & Kawato, 2009), require trial-by-trial feedback. 
Figure 3
 
Decision criterion. (A) Decision criterion as a function of training. There is a large change in decision criterion in blocks 1–10 for the two groups receiving trial-by-trial feedback, while there is little or no change for the other groups. There was a break after the tenth block, followed the next day by three blocks without feedback and three blocks with trial-wise correct feedback. Mean c ± SEM. (B) Average regression slopes for the criterion in blocks 1–10. Slopes were significantly different from 0.0 only for the groups receiving trial-by-trial feedback. The slopes are negative for the tt-reverseFB group and positive for the tt-correctFB group because the decision criterion was shifted in different directions. Mean c/block ± SEM. (C) The criterion at different points during training. Only for the two groups receiving trial-by-trial feedback was there a significant difference in criterion between the start of training (blocks 1–2) and the end of training (blocks 9–10). There was no difference in criterion between the start of training on Day 1 (blocks 1–2) and Day 2 (blocks 11–12) for any feedback condition. Hence, changes in decision criterion vanished between the sessions. Mean c ± SEM (*p < 0.05, **p < 0.01).
Overnight changes in decision criterion
Sleep is beneficial for sensitivity learning (Censor, Karni, & Sagi, 2006; Karni et al., 1994; Matarazzo, Frank, Maquet, & Vogels, 2008; Stickgold, LaTanya et al., 2000), and sensitivity changes are maintained over a night (Aberg et al., 2009; Tartaglia et al., 2009). We tested whether criterion learning behaved similarly. Participants performed another session on the following day without feedback. Training without feedback "freezes" the performance level such that there are fewer or slower changes in learning (Herzog & Fahle, 1997). The mean criterion over blocks 1–2 and over blocks 9–10 indexed the decision criterion at the Start and End of training on Day 1, respectively. The mean criterion over blocks 11–12 indexed initial performance on Day 2 (Figure 3C). A two-way repeated measures ANOVA with factors Group (6 groups) and Time (Start, End, and Day 2) revealed no main effect of Time [F(2,60) = 0.23, p > 0.79] but a significant effect of Group [F(5,30) = 4.72, p < 0.01] and an interaction between Group and Time [F(10,60) = 2.04, p < 0.05]. While there were significant differences in criterion between the Start and the End for the trial-by-trial feedback groups [tt-reverseFB: mean criterion difference = 0.47, p < 0.05; tt-correctFB: mean criterion difference = −0.40, p < 0.05], there were no significant differences for any of the other groups [largest mean criterion difference < 0.12, all p > 0.31]. There were no significant differences in criterion between the Start of Day 1 and Day 2 for any of the groups [tt-reverseFB: mean criterion difference = 0.05, p = 0.81; tt-correctFB: mean criterion difference = −0.11, p = 0.63; largest mean criterion difference for the other groups < 0.15, all p > 0.06]. Hence, unlike sensitivity changes (Aberg et al., 2009; Tartaglia et al., 2009), criterion changes are not maintained overnight. 
Changes in sensitivity
Perceptual training of a "normal" Vernier task, with one Vernier offset to the left and one offset to the right, improves sensitivity (Herzog & Fahle, 1997). However, we found no sensitivity changes in a previous study with a design similar to the one used here but with only seven blocks of training (Herzog & Fahle, 1999). Since perceptual learning is rather slow and requires a large number of trials, we increased the number of trials here, as compared to our previous study, to provoke sensitivity changes. 
We investigated whether the different feedback conditions changed the sensitivity for the SL Vernier relative to the other Verniers. First, the sensitivities between the SL Vernier and the other two left offset Verniers and the two right offset Verniers were calculated (Figures 4A–4F). For each block, the sensitivities were averaged within each side, i.e., d′RIGHT-SL = 0.5(d′BR-SL + d′MR-SL) for the right side and d′LEFT-SL = 0.5(d′BL-SL + d′ML-SL) for the left side. To estimate whether the SL Vernier was "shifted" toward one side or the other, the mean sensitivity for the SL Vernier relative to the left offset Verniers was subtracted from the mean sensitivity for the SL Vernier relative to the right offset Verniers, i.e., Δ = d′RIGHT-SL − d′LEFT-SL (Figure 5A). A positive Δ indicates that the SL Vernier was closer to the right side, while a negative Δ indicates that the SL Vernier was closer to the left side. Finally, we determined how Δ changed with training, ΔSHIFT, by calculating the regression slopes over blocks 1–10. The means of the slope values are shown in Figure 5B. A positive ΔSHIFT indicates that training shifted the SL Vernier toward the right side, while a negative ΔSHIFT indicates that training shifted the SL Vernier toward the left side. There was a significant positive shift for the trial-by-trial reverse feedback group (tt-reverseFB: ΔSHIFT = 0.060 d′/block; p < 0.01), and there was a significant negative shift for the trial-by-trial correct feedback group (tt-correctFB: ΔSHIFT = −0.055 d′/block; p < 0.05). There were no significant changes in any other group (largest ΔSHIFT < 0.37 d′/block; all p > 0.25). 
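The shift measure Δ can be sketched as follows (our code with illustrative hit rates; the sign conventions follow the sensitivity formulas in the Analysis section, where opposite-side pairs sum the z-transformed hit rates and same-side pairs take their difference):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF

def delta(hits):
    """Delta = d'RIGHT-SL - d'LEFT-SL for one block of per-Vernier hit rates."""
    d_right_sl = 0.5 * ((z(hits["BR"]) + z(hits["SL"]))
                        + (z(hits["MR"]) + z(hits["SL"])))  # opposite sides: sum
    d_left_sl = 0.5 * ((z(hits["BL"]) - z(hits["SL"]))
                       + (z(hits["ML"]) - z(hits["SL"])))   # same side: difference
    return d_right_sl - d_left_sl

# Illustrative hit rates, symmetric for the big and medium offsets:
block = {"BR": 0.9, "MR": 0.75, "BL": 0.9, "ML": 0.75, "SL": 0.6}
print(round(delta(block), 2))
```

The regression slope of Δ over blocks 1–10 then gives ΔSHIFT, computed in the same way as the criterion slopes.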
Figure 4
 
The relative sensitivities for the six feedback conditions. For each of the BR, MR, ML, and BL Verniers, the sensitivity was calculated relative to the SL Vernier. Thus, a value of 0.0 for a Vernier indicates that this Vernier could not be discriminated from the SL Vernier. Negative values indicate Verniers offset to the right relative to the SL Vernier and positive values indicate Verniers offset to the left relative to the SL Vernier. (A) No feedback condition. (B) Trial-by-trial correct feedback condition. (C) Trial-by-trial reverse feedback condition. (D) Seven-trial correct block feedback condition. (E) Seven-trial reverse block feedback condition. (F) Eighty-four-trial reverse block feedback condition. There were significant changes in the sensitivity for the SL Vernier in the trial-by-trial feedback groups (see text). For the tt-correctFB group, the SL Vernier was shifted toward the left (B), and for the tt-reverseFB group, the SL Vernier was shifted toward the right (C). Hence, the direction of the shift was determined by the direction indicated by the feedback. Mean d′ ± SEM.
Figure 5
 
Sensitivity for the SL Vernier relative to the other Verniers. (A) Parameter Δ is the sensitivity difference between the left offset Verniers relative to the SL Vernier and right offset Verniers relative to the SL Vernier (Δ = d RIGHT-SL′ − d LEFT-SL′). Parameter Δ at block 1 has been subtracted from each block to better illustrate how Δ changes with training. Parameter Δ increases with training for the trial-by-trial reverse feedback condition, suggesting that the SL Vernier shifts to the right. In contrast, Δ decreases with training for the trial-by-trial correct feedback condition, suggesting that the SL Vernier shifts to the left. Mean d′ ± SEM. (B) Average sensitivity shift, ΔSHIFT. Parameter ΔSHIFT was significantly different from 0.0 only for the groups receiving trial-by-trial feedback. The slopes are positive for the tt-reverseFB group and negative for the tt-correctFB group because the SL Vernier was shifted in different directions. Mean d′/block ± SEM. (C) Correlation between decision criterion changes and the sensitivity shift ΔSHIFT for the SL Vernier. There was a significant negative correlation between ΔSHIFT and the criterion changes (*p < 0.05).
Correlation between changes in decision criterion and changes in sensitivity
Finally, we tested the correlation between criterion changes (Figure 3B) and the sensitivity shift ΔSHIFT for the SL Vernier (Figure 5B). The results are shown in Figure 5C. There was a significant negative correlation between criterion changes and ΔSHIFT for the SL Vernier (Pearson r = −0.61, p < 0.001). 
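For readers who wish to reproduce this kind of analysis, the equal-variance SDT estimates and the correlation can be computed in a few lines. The sketch below uses invented per-observer values purely for illustration; they are not the study's data.

```python
import numpy as np
from scipy.stats import norm, pearsonr

def dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA) under the equal-variance SDT model."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision criterion c = -0.5 * [z(H) + z(FA)]; c = 0 means unbiased."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

# Hypothetical per-observer values (NOT the paper's data): criterion
# change per block vs. sensitivity shift Delta_SHIFT for the SL Vernier.
criterion_change = np.array([-0.08, -0.05, 0.01, 0.03, 0.06])
delta_shift = np.array([0.12, 0.07, -0.01, -0.05, -0.09])
r, p = pearsonr(criterion_change, delta_shift)  # r is strongly negative
```

With such data, observers whose criterion drifts leftward show a rightward sensitivity shift of the SL Vernier, mirroring the negative correlation reported above.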
Discussion
Perceptual learning improves perception. For example, sensitivity increases when participants train to discriminate left offset Verniers from right offset Verniers (Herzog & Fahle, 1997). Usually, changes in sensitivity are the primary focus of perceptual learning research, while changes in criterion are considered unavoidable nuisances that obscure changes in sensitivity. Decision and sensitivity learning are usually considered to be independent processes. However, we have shown in previous publications that decision and sensitivity learning strongly interact. Reverse feedback was provided for the SL Vernier with a subthreshold offset, whereas correct feedback was provided for Verniers with larger offsets (Herzog et al., 2006; Herzog & Fahle, 1999). The basic idea behind this setup is that the correct feedback for the large offset Verniers assures observers of the stimulus–response mapping and the correctness of the feedback. It was found that reverse feedback induced criterion changes but not the sensitivity changes (Herzog et al., 2006; Herzog & Fahle, 1999) that otherwise would have occurred (Herzog & Fahle, 1997). Here, we used this paradigm to investigate further the characteristics of sensitivity and decision learning and the role of feedback. 
We found that the decision criterion changed when reverse feedback was provided trial-wise, as in previous studies, but not when feedback was provided block-wise (Figures 3A and 3B). This contrasts with sensitivity learning, where sensitivity improves with block feedback in standard learning paradigms, e.g., when one left and one right offset Vernier with the same offset size are presented (Herzog & Fahle, 1997; see also Shibata et al., 2009). Hence, sensitivity and decision learning differ strongly in the types of feedback they can exploit. Why? The setting of the criterion is under voluntary control, i.e., the ratio of decisions for the left or right response, respectively, can easily be changed. For example, a participant may quickly change the criterion when he/she realizes that right responses are highly rewarded compared to left responses. Hence, criterion learning seems to depend on explicit information. For example, from trial-by-trial feedback, it can easily be inferred that there are more feedback tones for a left response than for a right response. From block feedback, this information cannot be inferred. Changes in sensitivity are not voluntary but implicit, as shown in many previous studies (Cohen & Squire, 1980; Fahle & Daum, 2002; Squire, Knowlton, & Musen, 1993). One tempting proposition is that decision learning can exploit trial-wise, explicit feedback only, whereas sensitivity learning can “use” both types of feedback. These considerations are in accordance with other types of implicit and explicit learning (Maddox, 2002). 
Criterion changes were not maintained overnight (Figure 3C), whereas sensitivity changes are usually maintained overnight (Aberg et al., 2009; Tartaglia et al., 2009) and often even emerge only after sleep (Otto, Ogmen, & Herzog, 2010; Karni et al., 1994; Mednick et al., 2002, 2003; Matarazzo et al., 2008). Why? We suggest that implicit, sensitivity learning with one type of stimulus entails substantial neural changes that affect the processing of other basic visual stimuli (stability–plasticity dilemma). Sleep and other consolidation processes are needed to prevent previously learned tasks and stimuli from being “blindly” overridden. As a speculation, these consolidation processes may occur during REM sleep, where changes induced by one stimulus type are “tested” for their influence on the processing of other stimulus types (Karni et al., 1994; Mednick et al., 2002, 2003; Stickgold, James, & Hobson, 2000). Decision learning is a rather transient type of learning that does not need long-term consolidation, particularly because it can be changed quickly (Herzog et al., 2006). 
In previous publications (Herzog et al., 2006; Herzog & Fahle, 1999), we found no sensitivity changes when reverse feedback was provided. We proposed that decision learning precedes sensitivity learning such that sensitivity learning occurs only when the decision criterion is unbiased, i.e., when there are as many feedback tones for the left as for the right response key. In the previous studies, reverse feedback was provided for seven blocks. Here, we used ten blocks and found significant changes in sensitivity. We speculate that this sensitivity change occurred because the decision criterion was unbiased, possibly from about block seven onward (see Figure 3A). We attribute sensitivity changes to synaptic changes in the representations of the Verniers (dashed Gaussians in Figure 1B). It is debated whether these changes occur in lower (Fahle & Poggio, 2002) or higher (Mollon & Danilova, 1996) areas. Changes in criterion occur at a subsequent level. For example, the decision criterion might change the gating of neural information from the Vernier representations to the motor response (Dosher & Lu, 1999; Herzog & Fahle, 1998). In this speculative scenario, there are no synaptic changes at the Vernier representation level as long as the criterion changes, i.e., as long as gating is ongoing. In this sense, ongoing decision learning suppresses sensitivity learning. In mathematical terms, the learning rate for sensitivity learning is zero when the learning rate for decision learning is non-zero. 
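The gating idea can be stated as a toy update rule. The following sketch is our illustration of the verbal model, with made-up learning rates and thresholds, not a fitted model: per block, the criterion decays toward the unbiased point, and sensitivity grows only once the criterion has effectively stopped changing.

```python
def toy_update(d_prime, c, lr_sens=0.05, lr_crit=0.2, eps=0.01):
    """One block of the speculative two-stage rule (illustrative values).

    Decision learning shifts the criterion c toward 0 (unbiased);
    sensitivity learning is gated off while the criterion still moves.
    """
    dc = -lr_crit * c                        # decision learning step
    dd = lr_sens if abs(dc) < eps else 0.0   # sensitivity gated by |dc|
    return d_prime + dd, c + dc

d, c = 1.0, 1.0      # start with a strongly biased criterion
trace = []
for block in range(30):
    d, c = toy_update(d, c)
    trace.append((d, c))
```

Early blocks show pure criterion change with flat sensitivity; once the criterion settles near zero, sensitivity starts to climb, qualitatively matching the proposed sequence of decision learning before sensitivity learning.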
It remains an open question whether the left SL Vernier really appears offset to the right. Providing reverse feedback for the SL Vernier increased the discriminability of the SL Vernier relative to the other left offset Verniers, while the reverse feedback decreased the discriminability of the SL Vernier relative to the right offset Verniers (Figures 5A and 5B). However, at this point, we cannot tell whether this means that the SL Vernier was actually perceived as offset to the right. Further studies will tell how far manipulated feedback can alter perception. 
We found a strong correlation between criterion and sensitivity changes (Figure 5C), supporting the notion of an interaction between sensitivity and criterion learning (see above and Herzog et al., 2006; Herzog & Fahle, 1999). Similar findings were reported in Gabor detection tasks (Wenger et al., 2008; Wenger & Rasche, 2006). In one task, participants determined whether a Gabor was presented in the first interval, the second interval, or in neither of the two intervals. Training increased sensitivity but, surprisingly, also increased the false alarm rate. Since
d′ = [z(H) − z(FA)]/√2
, the increase in hit rate was larger than the increase in false alarm rate. These results were found for participants training with trial-wise feedback and for participants training without feedback. One difference between Gabor detection and the Vernier discrimination task used here is how the error rate depends on the placement of the decision criterion. In the Gabor detection task, adopting a liberal decision criterion increases the tendency to report Gabors as being present in trials without Gabors, i.e., the false alarm rate and hence the error rate increase. However, a liberal decision criterion also increases the hit rate for trials with subthreshold Gabors, i.e., the error rate decreases. Thus, the effects of increasing the false alarm rate and increasing the hit rate for subthreshold Gabors may cancel each other out and have a negligible effect on the overall error rate reported by the feedback. In addition, strong increases in sensitivity reduce the error rate and may have masked the increased error rate due to adopting a liberal criterion (Wenger et al., 2008; Wenger & Rasche, 2006). In contrast, in our study, the error rate depended directly on the position of the decision criterion, and in addition, sensitivity changes did not occur without criterion changes (Figures 5B and 5C). Therefore, the feedback was crucial for guiding criterion placement to reduce the error rate in the Vernier task, while it may have had negligible effects on criterion changes and error rates in the Gabor detection task. 
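The cancellation argument can be checked numerically under the equal-variance Gaussian model. The yes/no detection sketch below uses illustrative values (unit-variance noise, a subthreshold signal of d′ = 1, equal trial proportions), not the parameters of the Gabor studies: a markedly more liberal criterion raises false alarms and hits by nearly offsetting amounts, so the overall error rate barely moves.

```python
from scipy.stats import norm

def detection_error(c, d_sub=1.0, p_signal=0.5):
    """Overall error rate for a yes/no detection task: false alarms on
    noise trials (mean 0) plus misses on subthreshold-signal trials
    (mean d_sub), both distributions with SD 1."""
    fa = 1.0 - norm.cdf(c)       # noise trial classified "present"
    miss = norm.cdf(c - d_sub)   # signal trial classified "absent"
    return (1.0 - p_signal) * fa + p_signal * miss

neutral = detection_error(0.5)   # criterion midway between the means
liberal = detection_error(0.2)   # markedly more liberal criterion
# liberal - neutral stays under one percentage point
```

Because the error rate is nearly flat around the optimal criterion, feedback carries little pressure on criterion placement in detection, whereas in the Vernier task every criterion shift directly trades hits against false alarms.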
The model of SDT used in the present study assumes that the decision criterion is not noisy. Thus, the same output from the encoding stage always evokes the same response at the decision stage. However, recent studies argue that the decision criterion is also noisy, i.e., that the same output from the encoding stage may evoke different responses (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2007; Rosner & Kochanski, 2009). While it is beyond the scope of this article to review the influence of decision noise on the standard model of SDT (for reviews, see Benjamin et al., 2009; Mueller & Weidemann, 2007; Rosner & Kochanski, 2009), it cannot be ruled out that the sensitivity changes found in the present study are, at least in part, due to a sharpening of the decision process, i.e., a reduction of decision noise (Rosner & Kochanski, 2009). In addition, the standard model of SDT assumes equal variance for all stimuli. Assuming equal variance when the variances are in fact unequal leads to an overestimation of sensitivity. However, an overestimation of sensitivity does not influence the changes in sensitivity, which are the main focus of the present study. 
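The decision-noise point can be illustrated with a quick Monte Carlo sketch (our own illustration, not an analysis from the cited papers): adding trial-to-trial jitter to the criterion lowers the d′ recovered from hit and false alarm rates, so a reduction of that jitter would register as a sensitivity gain.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, d_true = 200_000, 1.5
x_left = rng.normal(-d_true / 2, 1.0, n)   # left offset Vernier trials
x_right = rng.normal(+d_true / 2, 1.0, n)  # right offset Vernier trials

def measured_dprime(crit_sd):
    """d' recovered from response rates when the criterion jitters
    from trial to trial with standard deviation crit_sd."""
    c = rng.normal(0.0, crit_sd, n)
    hit = (x_right > c).mean()   # "right" response to right offset
    fa = (x_left > c).mean()     # "right" response to left offset
    return norm.ppf(hit) - norm.ppf(fa)

d_stable = measured_dprime(0.0)  # close to d_true = 1.5
d_noisy = measured_dprime(0.5)   # noticeably below d_true
```

With criterion noise of SD 0.5, the recovered d′ falls to roughly d_true/√(1 + 0.25), so part of a training-induced d′ increase could in principle reflect a quieter decision stage rather than sharper encoding.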
In summary, sensitivity and decision learning show very different characteristics and strongly interact. Sensitivity learning can exploit both implicit and explicit feedback types but may need consolidation. Consolidation is not needed for decision learning because the decision criterion can be changed quickly and transiently. Both types of learning interact with each other, which makes it important to study both simultaneously, e.g., by monitoring both d′ and c, which is rarely done in studies of perceptual learning. One hypothesis to pursue is that sensitivity learning only occurs once decision learning is accomplished, i.e., when feedback and other factors do not favor one response alternative over the other (Herzog et al., in press). 
Acknowledgments
We would like to thank Marc Repnow for technical support. Kristoffer Aberg was supported by the Swiss National Science Foundation (SNF) through the Sinergia project “State representation in reward based learning—From spiking neuron models to psychophysics.” 
Commercial relationships: none. 
Corresponding author: Kristoffer C. Aberg. 
Email: kc.aberg@gmail.com. 
Address: University Medical Center, CMU, 1 rue Michel-Servet, Geneva 1211, Switzerland. 
Footnotes
1  Here, we used the standard model of SDT. This model assumes identical variance of Gaussian noise for all stimuli and one decision criterion.
References
Aberg K. C. Tartaglia E. M. Herzog M. H. (2009). Perceptual learning with chevrons requires a minimal number of trials, transfers to untrained directions, but does not require sleep. Vision Research, 49, 2087–2094. [CrossRef] [PubMed]
Bach M. (1996). The Freiburg visual acuity test—Automatic measurement of visual acuity. Optometry and Vision Science, 73, 49–53. [CrossRef] [PubMed]
Benjamin A. Diaz M. Wee S. (2009). Signal detection with criterion noise: Applications to recognition memory. Psychological Review, 116, 84–115. [CrossRef] [PubMed]
Censor N. Karni A. Sagi D. (2006). A link between perceptual learning, adaptation and sleep. Vision Research, 46, 4071–4074. [CrossRef] [PubMed]
Cohen N. J. Squire L. (1980). Preserved learning and retention of pattern-analyzing skills in amnesia: Dissociation of knowing how and knowing that. Science, 210, 207–210. [CrossRef] [PubMed]
Dosher B. Lu Z. (1999). Mechanisms of perceptual learning. Vision Research, 39, 3197–3221. [CrossRef] [PubMed]
Fahle M. Daum I. (2002). Perceptual learning in amnesia. Neuropsychologia, 40, 1167–1172. [CrossRef]
Fahle M. Poggio T. (Eds.) (2002). Perceptual learning. Cambridge, MA: MIT Press.
Gold J. Bennett P. Sekuler A. (1999). Signal but not noise changes with perceptual learning. Nature, 402, 176–178. [CrossRef] [PubMed]
Herzog M. Ewald K. Hermens F. Fahle M. (2006). Reverse feedback induces position and orientation specific changes. Vision Research, 46, 3761–3770. [CrossRef] [PubMed]
Herzog M. Fahle M. (1997). The role of feedback in learning a vernier discrimination task. Vision Research, 37, 2133–2141. [CrossRef] [PubMed]
Herzog M. Fahle M. (1998). Modelling perceptual learning: Difficulties and how they can be overcome. Biological Cybernetics, 78, 107–117. [CrossRef] [PubMed]
Herzog M. Fahle M. (1999). Effects of biased feedback on learning and deciding in a vernier discrimination task. Vision Research, 39, 4232–4243. [CrossRef] [PubMed]
Herzog M. H. Aberg K. C. Fremaux N. Gerstner W. Sprekeler H. (in press). Perceptual learning, roving and the unsupervised bias. Vision Research.
Karni A. Sagi D. (1993). The time course of learning a visual skill. Nature, 365, 250–252. [CrossRef] [PubMed]
Karni A. Tanne D. Rubenstein B. S. Askenasy J. J. M. Sagi D. (1994). Dependence on REM sleep of overnight improvement of a perceptual skill. Science, 265, 679–682. [CrossRef] [PubMed]
Macmillan N. A. Creelman C. D. (2005). Detection theory: A user's guide (2nd ed.). Hillsdale, New Jersey, USA: Lawrence Erlbaum Associates.
Maddox T. W. (2002). Toward a unified theory of decision learning in perceptual categorization. Journal of the Experimental Analysis of Behavior, 78, 567–595. [CrossRef] [PubMed]
Matarazzo L. Franko E. Maquet P. Vogels R. (2008). Offline processing of memories induced by perceptual visual learning during subsequent wakefulness and sleep: A behavioral study. Journal of Vision, 8(4):7, 1–9, http://www.journalofvision.org/content/8/4/7, doi:10.1167/8.4.7. [PubMed] [Article] [CrossRef] [PubMed]
Mednick S. C. Nakayama K. Cantero J. L. Atienza M. Levin A. A. Pathak N. Stickgold R. (2002). The restorative effect of naps on perceptual deterioration. Nature Neuroscience, 5, 677–681. [PubMed]
Mednick S. C. Nakayama K. Stickgold R. (2003). Sleep-dependent learning: A nap is as good as a night. Nature Neuroscience, 6, 697–698. [CrossRef] [PubMed]
Mollon J. D. Danilova M. V. (1996). Three remarks on perceptual learning. Spatial Vision, 10, 51–58. [CrossRef] [PubMed]
Moore D. S. McCabe G. (2005). Introduction to the practice of statistics (5th ed.). New York, NY, USA: W H Freeman.
Mueller S. Weidemann C. (2007). Decision noise: An explanation for observed violations of signal detection theory. Psychonomic Bulletin and Review, 15, 465–494. [CrossRef]
Otto T. U. Ogmen H. Herzog M. H. (2010). Perceptual learning in a nonretinotopic frame of reference. Psychological Science, 21, 1058–1063. [CrossRef] [PubMed]
Petrov A. Dosher B. Lu Z. L. (2005). The dynamics of perceptual learning: An incremental reweighting model. Psychological Review, 112, 715–743. [CrossRef] [PubMed]
Rosner B. Kochanski G. (2009). The law of categorical judgment (corrected) and the interpretation of changes in psychophysical performance. Psychological Review, 116, 116–128. [CrossRef] [PubMed]
Shibata K. Yamagishi N. Ishii S. Kawato M. (2009). Boosting perceptual learning by fake feedback. Vision Research, 49, 2574–2585. [CrossRef] [PubMed]
Squire L. Knowlton B. Musen G. (1993). The structure and organization of memory. Annual Review of Psychology, 44, 453–495. [CrossRef] [PubMed]
Stickgold R. James L. Hobson J. A. (2000). Visual discrimination learning requires sleep after training. Nature Neuroscience, 3, 1237–1238. [CrossRef] [PubMed]
Stickgold R. Whidbee D. Schirmer B. Patel V. Hobson J. A. (2000). Visual discrimination task improvement: A multi-step process occurring during sleep. Journal of Cognitive Neuroscience, 12, 246–254. [CrossRef] [PubMed]
Tartaglia E. Aberg K. Herzog M. (2009). Perceptual learning and roving: Stimulus types and overlapping neural populations. Vision Research, 49, 1420–1427. [CrossRef] [PubMed]
Taylor M. M. Creelman C. D. (1967). Pest: Efficient estimates on probability functions. Journal of the Acoustical Society of America, 41, 782–787. [CrossRef]
Wenger M. Copeland A. Bittner J. Thomas R. (2008). Evidence for criterion shifts in visual perceptual learning: Data and implications. Perception & Psychophysics, 70, 1248–1273. [CrossRef] [PubMed]
Wenger M. Rasche C. (2006). Perceptual learning in contrast detection: Presence and cost of shifts in response criteria. Psychonomic Bulletin and Review, 13, 656–661. [CrossRef] [PubMed]
Yotsumoto Y. Sasaki Y. Chan P. Vasios C. E. Bonmassar G. Ito N. Nanez J. E., Sr. Shimojo S. Watanabe T. (2009). Location-specific cortical activation changes during sleep after training for perceptual learning. Current Biology, 19, 1278–1282. [CrossRef] [PubMed]
Figure 1
 
Stimuli and procedure. (A) Presentation of a Vernier V i elicits a neural response that in signal detection theory is modeled by a Gaussian function. The Gaussian reflects the probability that the Vernier elicits a neural activity x. The Gaussian mean reflects the size and direction of the Vernier offset; the variance is assumed to be equal for all Verniers. A response is determined by comparing the neural activity x with a decision criterion c. A value to the left of c evokes a “left” response, while a “right” response is evoked by a value to the right of c. In this example, the decision criterion “badly” bisects the decision space because it is placed close to the mean of the right Gaussian. Discrimination improves if the decision criterion c is shifted to optimally bisect the decision space (red dashed line). Discrimination also improves when the overlap of the Gaussians is decreased, for example, by shifting the left Gaussian further to the left (red dashed Gaussian). (B) We adapted this paradigm to five Verniers. In each trial, one of the five Verniers was presented in the center of the screen. Verniers with a big (B) or a medium (M) offset were offset either to the left (L) or to the right (R). A Vernier with a small (S) offset was offset only to the left. BL, ML, MR, and BR Verniers were presented with a probability of 1/7, while the SL Vernier was presented with a probability of 3/7. Due to the reverse feedback for the SL Vernier (see text), hit rates could improve either by shifting the decision criterion c to the left (red dashed line) or by shifting the Gaussian corresponding to the SL Vernier to the right (red dashed Gaussian). (C) Procedure. On Day 1, baseline performance for Verniers was measured in two blocks of 80 trials. During training, six groups of participants trained with the five Verniers under six different feedback conditions (see text). 
On Day 2, participants performed three blocks with the Verniers without feedback, followed by three blocks with correct feedback.
Figure 2
 
Hit rates for the six feedback conditions. In blocks 1–10 (first day of training), different groups received different types of feedback (see below and text). In blocks 11–16 (second day of training), no feedback was provided for blocks 11–13, while correct feedback was provided for blocks 14–16 for all groups. (A) No feedback condition (n = 6). (B) Trial-by-trial correct feedback condition (n = 6). (C) Trial-by-trial reverse feedback condition (n = 6). (D) Seven-trial correct block feedback condition (n = 6). (E) Seven-trial reverse block feedback condition (n = 6). (F) Eighty-four-trial reverse block feedback condition (n = 6). The hit rate decreased for all right offset Verniers in the trial-by-trial correct feedback condition (B) and for all left offset Verniers in the trial-by-trial reverse feedback condition (C). There were only small or no changes in the other conditions (see text). Mean hit rate (%) ± SEM.
Figure 3
 
Decision criterion. (A) Decision criterion as a function of training. There is a large change in decision criterion in blocks 1–10 for the two groups receiving trial-by-trial feedback, while there is no or little change for the other groups. There was a break after the tenth block, followed the next day by three blocks without feedback and three blocks with trial-wise correct feedback. Mean c ± SEM. (B) Average regression slopes for the criterion in blocks 1–10. Slopes were significantly different from 0.0 only for the groups receiving trial-by-trial feedback. The slopes are negative for the tt-reverseFB group and positive for the tt-correctFB group because the decision criterion was shifted in different directions. Mean c/block ± SEM. (C) The criterion at different points during training. There was only a significant difference in criterion at the start of training (blocks 1–2) and the end of training (blocks 9–10) for the two groups receiving trial-by-trial feedback. There was no difference in criterion at the start of training on Day 1 (blocks 1–2) and Day 2 (blocks 11–12) for any feedback condition. Hence, changes in decision criterion vanished between the sessions. Mean c ± SEM (*p < 0.05, **p < 0.01).
Figure 4
 
The relative sensitivities for the six feedback conditions. For each of the BR, MR, ML, and BL Verniers, the sensitivity was calculated relative to the SL Vernier. Thus, a value of 0.0 for a Vernier indicates that this Vernier could not be discriminated from the SL Vernier. Negative values indicate Verniers offset to the right relative to the SL Vernier and positive values indicate Verniers offset to the left relative to the SL Vernier. (A) No feedback condition. (B) Trial-by-trial correct feedback condition. (C) Trial-by-trial reverse feedback condition. (D) Seven-trial correct block feedback condition. (E) Seven-trial reverse block feedback condition. (F) Eighty-four-trial reverse block feedback condition. There were significant changes in the sensitivity for the SL Vernier in the trial-by-trial feedback groups (see text). For the tt-correctFB group, the SL Vernier was shifted toward the left (B), and for the tt-reverseFB group, the SL Vernier was shifted toward the right (C). Hence, the direction of the shift was determined by the direction indicated by the feedback. Mean d′ ± SEM.