Free Article | November 2013
A synchronous surround increases the motion strength gain of motion
Daniel Linares, Shin'ya Nishida
Journal of Vision November 2013, Vol. 13, 12. doi:https://doi.org/10.1167/13.13.12
Abstract
Coherent motion detection is greatly enhanced by the synchronous presentation of a static surround (Linares, Motoyoshi, & Nishida, 2012). To further understand this contextual enhancement, here we measured the sensitivity to discriminate motion strength for several pedestal strengths with and without a surround. We found that the surround improved discrimination of low and medium motion strengths, but did not improve or even impaired discrimination of high motion strengths. We used motion strength discriminability to estimate the perceptual response function assuming additive noise and found that the surround increased the motion strength gain, rather than the response gain. Given that eye and body movements continuously introduce transients in the retinal image, it is possible that this strength gain occurs in natural vision.

Introduction
The visual system needs to be quick in extracting the relevant information from the retinal image because the image is continuously changing due to eye and body movements. Our recent finding suggests that information about motion might be extracted rapidly thanks to the spatial context (Linares, Motoyoshi, & Nishida, 2012). We found that the sensitivity to detect motion coherence in a briefly presented random-dots display was greatly enhanced by the synchronous presentation of a static surround. This motion facilitation is different from the well-known effect by which the sensitivity to detect slow motion is increased by surrounding static references (e.g., Graham, 1965, Chapter 20) because our effect only occurred for briefly presented motion and vanished when the surround was presented sustainedly from well before until well after the random-dots presentation. These characteristics indicate that the transient signal generated by the surround is important to enhance motion. As transient stimulation is common in the retina due to eye and body movements, it is possible that this type of facilitation occurs in natural vision. 
In our previous study, we measured the sensitivity to detect weak motion. Here, to further understand how the surround affects motion perception, we measured the sensitivity to discriminate motion strength for several pedestal strengths with and without a surround (Experiment 1). Then, we used motion discriminability to estimate how the perceptual response changes as a function of motion strength, making the Fechnerian assumption that pairs of physical intensities that are equally discriminable produce the same difference in the magnitude of the perceptual response (e.g., Gescheider, 1997, Chapter 1). Under the framework of signal detection theory, this effectively corresponds to assuming additive noise. In Experiment 2, we validated the obtained response function with an alternative method consisting of the direct comparison of perceptual motion strengths.
Influenced by the discussion of how attention changes the transduction of contrast (e.g., Reynolds, Pasternak, & Desimone, 2000), we considered that, to a first approximation, the surround could change the perceptual response function to motion strength in two different ways. According to a motion strength gain hypothesis, the surround should have an effect equivalent to increasing the motion strength of the input signal multiplicatively. According to a response gain hypothesis, the surround should increase the response multiplicatively.
General methods
The data and the R code (R Core Team, 2013) to conduct the statistics and create the figures are available at www.dlinares.org. Observers included the authors (DL, SN) and IM, who were aware of the rationale of the experiments; the remaining observers were not. Stimuli were generated using PsychoPy (Peirce, 2007) and displayed on a monitor (GDM-F520, SONY, Tokyo) at a refresh rate of 85 Hz (800 × 600 pixels). Observers viewed the display from a distance of 57 cm in a dimly lit room while fixating a small black ring (0.38° diameter) in the center of the screen. The experiments were approved by the NTT Communication Science Laboratories Research Ethics Committee and were conducted according to the Declaration of Helsinki.
The random-dots display consisted of 100 dots of 0.18° diameter displayed within a circular aperture of 2.8° diameter on a grey background (29 cd/m²). Half of the dots were black (4 cd/m²) and half were white (62 cd/m²). The lifetime of the signal dots was four frames (in every frame, one quarter of the dots died and were reborn at a random location within the circular aperture), and the speed was 7.85°/s. The lifetime of the noise dots was one frame. The surround was a ring of width 1.4° composed of static dots ("infinite" lifetime) with the same density as the random-dots display.
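For readers who want to reproduce the stimulus, the following is a minimal NumPy sketch, not the authors' PsychoPy code, of one common way to update a limited-lifetime random-dot field with a given coherence; the function names, the rule for reassigning signal dots on each frame, and the default values are illustrative assumptions.

```python
import numpy as np

def random_in_circle(n, radius, rng):
    """Uniform random positions inside a circle of the given radius (degrees)."""
    r = radius * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

def update_dots(xy, age, coherence, step_deg, radius=1.4, lifetime=4, rng=None):
    """One frame of a limited-lifetime random-dot display.

    xy: (n, 2) dot positions in degrees; age: (n,) frames each dot has lived.
    A proportion `coherence` of dots takes a coherent step (the signal); the
    remaining noise dots are redrawn at a random location (one-frame lifetime).
    Signal dots are redrawn after `lifetime` frames.
    """
    rng = rng or np.random.default_rng()
    n = len(xy)
    xy, age = xy.copy(), age.copy() + 1
    signal = rng.random(n) < coherence
    xy[signal, 0] += step_deg                        # coherent step, e.g., rightward
    xy[~signal] = random_in_circle((~signal).sum(), radius, rng)
    expired = signal & (age >= lifetime)
    xy[expired] = random_in_circle(expired.sum(), radius, rng)
    age[~signal | expired] = 0
    outside = np.hypot(xy[:, 0], xy[:, 1]) > radius  # keep dots inside the aperture
    xy[outside] = random_in_circle(outside.sum(), radius, rng)
    return xy, age

# e.g., 7.85 deg/s at 85 Hz is roughly a 0.092 deg step per frame
```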
Experiment 1: Motion strength discrimination
Method
Each trial began with the presentation of an arrow pointing to the coherent motion direction of that trial (leftwards or rightwards, selected at random on each trial). After that, two random-dots displays were presented consecutively, and the observer indicated which had contained stronger motion (more coherent signal) in the direction cued by the arrow (Figure 1). The arrow was presented for 588 ms; 706 ms after the arrow, the first random-dots display was presented for 59 ms; 706 ms after the first random-dots display, the second random-dots display was presented for 59 ms. Within a block of trials, one of the displays had its coherence fixed to one of six pedestals: 0%, 10%, 20%, 30%, 40%, or 50%. The coherence of the other display was higher than the pedestal coherence and varied from trial to trial according to the method of constant stimuli (Kingdom & Prins, 2009, Chapter 3). The range of variable coherence levels for each pedestal was determined individually for each observer based on a preliminary short session. The number of measurements for each coherence level ranged from 36 to 324 depending on the time availability of the observer. Overall, observer DL performed 11,592 trials; SN, 5,184; RH, 5,187; MS, 5,184; MT, 2,821; and KN, 828. The display with the pedestal coherence and the display with the variable coherence were presented in a random order. We tested three conditions in different blocks: in the Synchronous surround condition, a surrounding ring of static dots was presented in synchrony with the random-dots displays; in the Sustained surround condition, the surrounding ring was displayed from the onset of the trial and remained on the screen until the observer responded; in the No surround condition, the surrounding ring was not presented.
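As an illustration of the trial bookkeeping, here is a short Python sketch of how such a constant-stimuli block could be assembled; it is not the authors' code, and the dictionary keys and example coherence levels are hypothetical.

```python
import random

def build_block(pedestal, variable_levels, n_repeats, surround):
    """Trials for one block: a fixed pedestal coherence paired with a variable
    coherence, presented in random interval order (method of constant stimuli)."""
    trials = []
    for level in variable_levels:                    # all levels exceed the pedestal
        for _ in range(n_repeats):
            trials.append({
                "surround": surround,                # 'synchronous', 'sustained', or 'none'
                "direction": random.choice(["left", "right"]),
                "pedestal": pedestal,
                "variable": level,
                "variable_first": random.choice([True, False]),
            })
    random.shuffle(trials)
    return trials

# e.g., a 20% pedestal with observer-specific variable levels from a pilot session
block = build_block(0.2, [0.25, 0.3, 0.35, 0.4, 0.45], n_repeats=12, surround="synchronous")
```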
Figure 1
Illustration of the motion discrimination experiment (Experiment 1). Everything depicted in yellow was not displayed in the experiment. In both surround conditions, the dots in the surround were static.
Results
Figure 2A shows, for each observer, the proportion of correct discriminations as a function of the coherence of the display with variable coherence for different pedestal coherences (each row displays a different pedestal) and the three surround conditions (Figure 1). For each observer, pedestal coherence and surround condition, using maximum likelihood estimation (MLE), we fitted a cumulative normal distribution with mean and standard deviation as free parameters. We fixed the lower asymptote to 0.5 (chance level) and the upper to 1 (no lapses). 
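As a concrete illustration of this fitting step, below is a minimal Python/SciPy sketch (the authors' analysis code is in R, available at www.dlinares.org) of an MLE fit of a cumulative normal with the lower asymptote fixed at 0.5 and the upper at 1; the function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(coherence, n_correct, n_total, pedestal):
    """MLE fit of p(x) = 0.5 + 0.5 * Phi((x - mu) / sigma): a cumulative normal
    with lower asymptote 0.5 (chance) and upper asymptote 1 (no lapses).
    Under this parameterization the 75% point is simply mu, so the threshold
    (how much the variable coherence must exceed the pedestal) is mu - pedestal."""
    def neg_log_lik(params):
        mu, log_sigma = params
        p = 0.5 + 0.5 * norm.cdf(coherence, loc=mu, scale=np.exp(log_sigma))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))
    start = [np.mean(coherence), np.log(0.1)]
    fit = minimize(neg_log_lik, start, method="Nelder-Mead")
    mu, sigma = fit.x[0], np.exp(fit.x[1])
    return mu, sigma, mu - pedestal
```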
Figure 2
Motion strength discrimination. (A) Proportion of correct discriminations as a function of the coherence of the display with the variable coherence for each observer (column), pedestal coherence (rows), and surround condition (different colors). The curves correspond to cumulative normal distributions. The horizontal colored segments indicate the 75% coherence thresholds. (B) 75% coherence thresholds as a function of the pedestal coherence. The error bars correspond to the 95% parametric bootstrap confidence intervals using 1,000 samples (Kingdom & Prins, 2009, Chapter 4). The black rims indicate whether the thresholds were significantly different from the thresholds for the No surround condition. To assess whether two thresholds were significantly different, we calculated the difference between 1,000 simulated thresholds using parametric bootstrap and evaluated whether the 95% confidence interval of the difference contained zero. To facilitate the visualization of the results, data points and error bars for different conditions are slightly shifted horizontally, and error bars extending below −0.1 or above 1.2 were not plotted (for some observers these were very large).
We assessed the goodness of fit of each psychometric function using the deviance statistic (Wichmann & Hill, 2001; Zychaluk & Foster, 2009). First, we simulated 1,000 bootstrap samples using the MLE parameters. For each sample, we fitted an MLE cumulative normal distribution and calculated the deviance of the simulated data from the fit. Then, we calculated the deviance of the original data and determined whether it was within the 95% confidence interval of the simulated deviances. Using this procedure, we found that 86 of the 96 fits were adequate.
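The following Python sketch illustrates this parametric bootstrap test of the deviance; it reuses the hypothetical fit_psychometric function from the previous sketch and is an illustration of the procedure described above rather than the authors' R implementation.

```python
import numpy as np
from scipy.stats import norm

def deviance(n_correct, n_total, p_model):
    """Binomial deviance of the data relative to fitted probabilities p_model
    (the saturated model uses the observed proportions)."""
    p_obs = n_correct / n_total
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = np.where(n_correct > 0, n_correct * np.log(p_obs / p_model), 0.0)
        t2 = np.where(n_total - n_correct > 0,
                      (n_total - n_correct) * np.log((1 - p_obs) / (1 - p_model)), 0.0)
    return 2 * np.sum(t1 + t2)

def deviance_interval(coherence, n_total, p_model, pedestal, n_boot=1000, rng=None):
    """Simulate data from the fitted probabilities, refit, and return the 95%
    interval of the simulated deviances; a fit is deemed adequate when the
    observed deviance falls within this interval."""
    rng = rng or np.random.default_rng()
    devs = []
    for _ in range(n_boot):
        sim = rng.binomial(n_total, p_model)
        mu, sigma, _ = fit_psychometric(coherence, sim, n_total, pedestal)
        p_fit = np.clip(0.5 + 0.5 * norm.cdf(coherence, mu, sigma), 1e-6, 1 - 1e-6)
        devs.append(deviance(sim, n_total, p_fit))
    return np.percentile(devs, [2.5, 97.5])
```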
Figure 2B shows the discrimination thresholds for each observer, pedestal coherence, and surround condition, that is, how much larger the coherence of the display with the variable coherence needed to be to reach 75% correct discriminations. We used bootstrap methods to assess whether the thresholds for the Synchronous and Sustained surround conditions were statistically different from the threshold for the No surround condition, and we indicated the significant differences with a black rim (see the legend of Figure 2B for further details). We found that the synchronous presentation of the surround, in comparison with the situation in which the surround was not presented, improved motion discrimination of low and medium coherences (for all observers except MS), but did not improve (observers DL and SN) or impaired (the remaining observers) discrimination of high coherences. The sustained surround had little effect on coherence discrimination.
Estimation of the perceptual response functions
To estimate the perceptual response function μ for the different surround conditions, we followed the Fechnerian assumption that pairs of physical intensities that are equally discriminable produce the same difference in the magnitude of the perceptual response. To implement this assumption according to a more modern approach in which perception is noisy, we used a standard signal detection theory model in which all data points for each surround condition in Figure 2A are fitted by a single μ, under the assumption that the noise in the perceptual response is additive (constant) across intensities and surround conditions (García-Pérez & Alcalá-Quintana, 2007; Kingdom & Prins, 2009; Nachmias & Sansbury, 1974; Wilson, 1980). In the Appendix, we discuss why, given that we measured the full psychometric functions, we prefer this conjoint, one-step fitting procedure (García-Pérez & Alcalá-Quintana, 2007) to a more conventional two-step procedure in which the discrimination threshold is first estimated at each pedestal using an arbitrary shape for the psychometric function, and the thresholds are then integrated across pedestals to estimate the perceptual response function (e.g., Morrone, Denti, & Spinelli, 2002; Motoyoshi & Nishida, 2001; Zenger-Landolt & Heeger, 2003).
First, we chose an analytical expression for the response function. Figure 2B indicates that discrimination performance first improves and then worsens as the pedestal coherence increases, which suggests that μ should include an expansive non-linearity at low response ranges and a compressive non-linearity at high response ranges. For a function having such non-linearities, we chose a Naka-Rushton function to parameterize μ:

μ(c) = M c^N / (c^N + a^N),  (Equation 1)

where c is the motion coherence, M scales the response, N is the exponent, and a is the semi-saturation coherence.
Next, to link this response function with discrimination performance, we assume that the perceptual response elicited by the display containing the pedestal coherence is described by a random variable Rp, normally distributed with mean μ(coherence pedestal) given by Equation 1 and a constant variance σ² (additive noise), and that the perceptual response elicited by the display with the variable coherence is described by Rv, normally distributed with mean μ(coherence variable) and the same variance σ². Then, we consider that the observer will choose the display with the variable coherence as displaying stronger motion when the random variable D, defined as D = Rv − Rp and normally distributed with mean μ(coherence variable) − μ(coherence pedestal) and variance 2σ², is positive. Thus, the probability of a correct response is

p(correct) = ∫_0^∞ N(x; μ(coherence variable) − μ(coherence pedestal), 2σ²) dx,  (Equation 2)

where N is the normal distribution. Using this model and MLE, we estimated, for each observer, the 10 parameters that best fit the proportion of correct discriminations. These 10 parameters are N, M, and a of Equation 1 for each surround condition (3 parameters × 3 conditions) plus a common variance σ². With the estimated parameters, we obtained the perceptual response functions for each surround condition using Equation 1 (Figure 3). The improvement of motion discrimination at low and medium coherences by the surround is visualized in the response function as a steeper slope at these coherences. The lack of improvement, or the impairment, at high coherences is visualized as no change in, or a flattening of, the slope of the response function at high coherences.
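To make the conjoint fit concrete, here is a compact Python/SciPy sketch of the model of Equations 1 and 2 with additive noise and 10 free parameters per observer; the authors fitted the model in R, so the function names, data layout, and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def naka_rushton(c, M, N, a):
    """Equation 1: perceptual response to motion coherence c."""
    return M * c**N / (c**N + a**N)

def p_correct(c_var, c_ped, cond_params, sigma):
    """Equation 2: probability of choosing the variable-coherence display,
    assuming additive Gaussian noise with the same variance in both intervals."""
    d_mu = naka_rushton(c_var, *cond_params) - naka_rushton(c_ped, *cond_params)
    return norm.cdf(d_mu / (np.sqrt(2.0) * sigma))

def neg_log_lik(theta, data):
    """theta packs (M, N, a) for the three surround conditions plus log(sigma):
    10 free parameters. `data` holds tuples of
    (condition_index, variable_coherence, pedestal_coherence, n_correct, n_total)."""
    sigma = np.exp(theta[-1])
    ll = 0.0
    for cond, c_var, c_ped, k, n in data:
        cond_params = theta[3 * cond:3 * cond + 3]
        p = np.clip(p_correct(c_var, c_ped, cond_params, sigma), 1e-6, 1 - 1e-6)
        ll += k * np.log(p) + (n - k) * np.log(1 - p)
    return -ll

# illustrative starting values; several starts would be tried in practice
theta0 = np.array([1.0, 2.0, 0.3] * 3 + [np.log(0.1)])
# fit = minimize(neg_log_lik, theta0, args=(data,), method="Nelder-Mead")
```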
Figure 3
Perceptual response functions. For each observer, we estimated the perceptual response function up to a coherence equal to 0.5 plus the coherence threshold for the No surround condition at the 0.5 pedestal coherence. We restricted the range of coherences because, for coherences larger than that, the estimated response should be unreliable given that we did not measure pedestals larger than 0.5.
The conjoint model also provides psychometric fits for the proportion of correct discriminations (Equation 2). Those fits are shown in Figure 4A. Visual inspection of Figure 4A suggests that the quality of the fits is similar to that obtained by fitting independent psychometric functions (Figure 2A) despite the large decrease in the number of free parameters from 36 to 10: when fitting independent psychometric functions, the number of free parameters for each observer was 36 (6 pedestal coherences × 3 surround conditions × 2 parameters of the cumulative normal function). To quantitatively compare the relative quality of the two models in fitting the proportions of correct discriminations, we calculated for each model the Akaike Information Criterion (AIC; Akaike, 1974; Burnham & Anderson, 2002), defined as AIC = 2k − 2 log(L), where k is the number of free parameters of the model and L is the likelihood of the data given the model. The AIC was smaller for the conjoint model for all observers, indicating that it was better than the model fitting independent psychometric functions. Δ, defined as Δ = AIC_independent-fit − AIC_conjoint, ranged from 11 for observer SN to 54 for DL. The relative probability of the independent-fit model relative to the conjoint model (exp[−Δ/2]) ranged from 10^−3 to 10^−12 (Burnham & Anderson, 2002, Chapter 2). Hence, the perceptual response functions shown in Figure 3 are those that best account for the obtained discrimination data for each observer.
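A small helper, sketched below in Python with placeholder values, shows how the AIC comparison and the relative probability reported above would be computed; the negative log-likelihood values are assumed to come from the fits described earlier.

```python
import numpy as np

def aic(neg_log_lik, k):
    """AIC = 2k - 2 log(L); smaller values indicate a better model."""
    return 2 * k + 2 * neg_log_lik

# for one observer (numbers are placeholders, not values from the paper):
# aic_conjoint = aic(nll_conjoint, k=10)        # 3 x (M, N, a) + common sigma
# aic_independent = aic(nll_independent, k=36)  # 18 psychometric functions x 2 parameters
# delta = aic_independent - aic_conjoint
# relative_prob = np.exp(-delta / 2)            # support for the independent-fit model
```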
Figure 4
Psychometric functions and thresholds fitting all proportions conjointly. (A) Proportion of correct discriminations as a function of the coherence of the display with the variable coherence for each observer, pedestal coherence, and surround condition. The curves were obtained using Equation 2 with the MLE parameters. (B) 75% coherence thresholds as a function of the pedestal coherence. The error bars correspond to the 95% parametric bootstrap confidence intervals using 1,000 samples (Kingdom & Prins, 2009). To facilitate the visualization of the results, data points and error bars for different conditions are slightly shifted horizontally; for observers MT and RH, the threshold for the 0.5 pedestal for the Synchronous surround condition is not plotted because it was very large; for observer KN, the upper limit of the confidence interval for the 0.5 pedestal for the Synchronous surround condition was constrained to be 0.75 (it was 2).
Figure 4B shows the 75% thresholds for the psychometric functions in Figure 4A. The thresholds are qualitatively similar to the thresholds obtained by fitting independent psychometric functions, but the variation with the pedestal is smoother. 
Figure 3 indicates that the response function for the Sustained surround condition is similar to that for the No surround condition, whereas the response function for the Synchronous surround condition runs above the other two. This relationship holds everywhere except where, for some observers, the three response functions converge at high motion coherences due to saturation of the perceptual response in the Synchronous surround condition. Overall, while our previous study suggested that a synchronous surround enhances perceptual motion strength at threshold motion coherences, Figure 3 suggests that the enhancement also occurs for moderate supra-threshold motion coherences.
Experiment 2: Perceived motion strength
In Experiment 1, we derived the perceptual response functions using a performance-based procedure (Kingdom & Prins, 2009, Chapter 3) that assumed that the noise in perceptual responses was constant for all coherences and surround conditions. Under this assumption, the synchronous surround facilitates discrimination for low-medium strengths because it increases their perceived strength. Constant noise, however, is a strong assumption and it is under debate in other contexts (see references in Solomon, 2009). An alternative possibility is that the surround facilitates discrimination, not because it changes perceived strength, but because it makes the perceptual response to motion less variable—reduces the noise (Kingdom & Prins, 2009, Chapter 7). To test the validity of the response functions estimated in Experiment 1 under the constant noise assumption, here we used an appearance-based procedure (Carrasco, Ling, & Read, 2004; Kingdom & Prins, 2009, Chapter 3) to measure how the surround changed perceived motion strength of a suprathreshold motion coherence. Specifically, we measured the perceived motion coherence of a 40% coherence display when a surround was presented in synchrony by comparing its perceived motion strength to that of variable coherences displayed without a surround. 
Method
The procedure was the same as that in Experiment 1 with the following exceptions. One of the displays had its coherence fixed to 40% while the other had a variable coherence. The display with the variable coherence was always presented without a surround. The 40% coherence display was presented with a synchronous, sustained, or no surround in different blocks. The display with the fixed coherence and the display with the variable coherence were presented in a random order, and the observer indicated which display was more coherent. Nine observers participated, including five who had participated in Experiment 1. Each observer performed between 300 and 784 trials.
Results
We fitted cumulative normal distributions to the proportion of responses in which the observers reported the No surround display as less coherent than the 40% coherence display and determined the point of subjective equality as the coherence for which the proportion of responses was 50%. For seven out of nine observers, the point of subjective equality for 40% coherence with the synchronous surround was significantly larger than 40% (red vertical dotted lines in Figure 5A; the grey areas indicate the confidence intervals). This result indicates that the synchronous surround did increase the perceived motion coherence. In addition, for five observers, the point of subjective equality was larger with the synchronous surround than with the sustained surround (blue lines in Figure 5A). As expected, the points of subjective equality for the No surround condition (green lines in Figure 5A) were close to 40%. Only for HM was the point of subjective equality different from 40% (the bootstrap confidence intervals did not contain 40%). Given that at 40% coherence the display with the fixed coherence and the display with the variable coherence are identical (and thus the proportion of responses should be 0.5 at 40% coherence), it is possible that this bias is caused by the cumulative normal distribution not being the most appropriate shape to fit the data (Anton-Erxleben, Abrams, & Carrasco, 2010; Anton-Erxleben, Abrams, & Carrasco, 2011; Schneider, 2011).
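For completeness, here is a minimal Python sketch of the point-of-subjective-equality estimate (again, an illustration rather than the authors' R code); the response proportion is expressed as the proportion of trials on which the variable, no-surround display was judged the more coherent, so the PSE is simply the fitted mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_pse(coherence, n_chose_variable, n_total):
    """MLE cumulative normal (asymptotes 0 and 1) for the appearance judgments.
    The point of subjective equality is the coherence giving a 0.5 proportion,
    which equals the fitted mean."""
    def nll(params):
        mu, log_sigma = params
        p = np.clip(norm.cdf(coherence, mu, np.exp(log_sigma)), 1e-6, 1 - 1e-6)
        return -np.sum(n_chose_variable * np.log(p)
                       + (n_total - n_chose_variable) * np.log(1 - p))
    fit = minimize(nll, [0.4, np.log(0.1)], method="Nelder-Mead")
    return fit.x[0]  # PSE
```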
Figure 5
Perceived motion strength (Experiment 2). (A) Proportion of trials in which a display with No surround was perceived as less coherent than a 0.4 coherence display with a synchronous, sustained, or no surround, as a function of the coherence of the display with No surround. The curves correspond to MLE cumulative normal distributions with lower asymptote 0 and upper asymptote 1. Of the 27 fits, 22 were adequate according to the deviance statistic (using the procedure described in Experiment 1). The dotted lines indicate the points of subjective equality and the grey areas the confidence intervals of the points of subjective equality for the Synchronous surround condition (using 1,000 parametric bootstrap samples). (B) Perceptual response functions from Figure 3 and the points of subjective equality with their confidence intervals for the Synchronous surround condition in Figure 5A. The black lines indicate the estimated perceived coherence of a 0.4 coherence display with a synchronous surround based on the perceptual response functions.
Five of the observers (whose data are shown in the bottom row of Figure 5A) had also participated in Experiment 1, where we measured their ability to discriminate coherence. We could therefore quantitatively compare, via the perceptual response functions in Figure 3, the perceived coherence measured with the appearance matching with the perceived coherence estimated from the discrimination performance. To read out the point of subjective equality for the Synchronous surround condition predicted from the response functions, one just needs to take the perceptual response to a 40% coherence display presented with the synchronous surround and evaluate how much coherence is required in the No surround condition to produce the same perceptual response (the crossing point of the horizontal black line with the green curve in Figure 5B). To facilitate this comparison, in Figure 5B we have replotted the points of subjective equality for the Synchronous surround condition from Figure 5A. For DL and SN, the apparent coherence is somewhat stronger than that predicted from the performance procedure. For MT and MS, it is somewhat weaker. For RH, the two measures agree. In general, the predictions are reasonable, which supports the hypothesis that the noise is the same for all surround conditions and that the synchronous surround increases perceived motion coherence.
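This readout can be written down directly by inverting the Naka-Rushton function of Equation 1; the sketch below assumes the naka_rushton helper and fitted parameters from the earlier conjoint-fit sketch, with the condition-specific parameter names being placeholders.

```python
def naka_rushton_inverse(r, M, N, a):
    """Coherence that produces response r under Equation 1 (valid for 0 <= r < M)."""
    return a * (r / (M - r)) ** (1.0 / N)

# predicted point of subjective equality: the no-surround coherence whose response
# matches the response to a 0.4 coherence display with a synchronous surround
# r = naka_rushton(0.4, M_syn, N_syn, a_syn)
# pse_predicted = naka_rushton_inverse(r, M_no, N_no, a_no)
```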
Two observers (MS and MT) showed little enhancement in apparent coherence by the synchronous surround (Figure 5A). For MS, the lack of enhancement was predicted from his response functions (Figure 5B). The reason MT showed little enhancement is not clear, but we suspect that it might be related to the observer trying to equate the number of "higher coherence" responses across surround conditions (response equalization, Erlebacher & Sekuler, 1971).
The effect of manipulations such as attention or the introduction of a surround on the transduction of the full intensity range of a sensory variable such as contrast is often studied using only performance measures (Foley & Schwarz, 1998; García-Pérez, Alcalá-Quintana, Woods, & Peli, 2011; Huang & Dobkins, 2005; Lee, Itti, Koch, & Braun, 1999; Pestilli, Carrasco, Heeger, & Gardner, 2011; Zenger-Landolt & Heeger, 2003). This experiment suggests that the simultaneous measurement of appearance (Carrasco, Ling, & Read, 2004) is helpful to clarify how the manipulation affects the noise in the response.
Discussion
We found that the synchronous presentation of a static stimulus surrounding a random-dots display improved motion coherence discrimination at low and medium motion coherences (Figures 2 and 4). The improvement for the 0% coherence pedestal corresponds to an improvement in detection, which replicates, using a two-interval forced-choice procedure (Kingdom & Prins, 2009, Chapter 3), our previous finding (Linares, Motoyoshi, & Nishida, 2012) obtained using a symmetric stimulus forced-choice procedure (Kingdom & Prins, 2009, Chapter 3). The lowering of the discrimination threshold by the synchronous surround corresponds to a steeper slope of the perceptual response function at low and medium coherences (Figure 3).
The synchronous surround did not improve (for some observers) or impaired (for others) discrimination of high coherences (Figures 2 and 4), a result that corresponds to no change in, or a reduction of, the slope of the response function at high coherences (Figure 3). The impaired discrimination, however, does not imply a suppression of the response to motion: for all coherences, the perceptual response function for the Synchronous surround condition is higher than that for the No surround condition. Rather, the impairment shown by some observers is related to a compressive non-linearity at high coherences that saturates the response, flattening the slope and making discrimination harder.
When the surround was not presented, discrimination was better for medium coherences than for low ones (Figures 2 and 4). This is a pedestal effect (Solomon, 2009) for motion coherence and is consistent with a previous study that used a less conventional random-dots display including dots moving in two opposite directions (Simpson & Finsten, 1995). The pedestal effect occurs for many sensory variables such as luminance contrast (Nachmias & Sansbury, 1974), orientation contrast (Motoyoshi & Nishida, 2001), chromatic contrast (Morrone, Denti, & Spinelli, 2002), amplitude of tactile vibration (Arabzadeh, Clifford, & Harris, 2008), and time duration (Burr, Silva, Cicchini, Banks, & Morrone, 2009). Somewhat at odds with the pedestal effect for coherence, however, many studies have not found a pedestal effect for speed discrimination (De Bruyn & Orban, 1988; McKee, 1981; McKee & Nakayama, 1984; Orban, De Wolf, & Maes, 1984; but see also Gori, Mazzilli, Sandini, & Burr, 2011). In terms of the perceptual response function (Figure 3), the pedestal effect corresponds to an expansive non-linearity at low coherences: to increase the perceptual response by a fixed amount, the coherence must be increased more at low than at medium coherences. For the highest coherences, we did not find evidence of impaired performance (e.g., Weber's law) as is often found for other intensity variables (Solomon, 2009).
When the surround was presented sustainedly, we found little effect on motion discrimination (Figures 2 and 4), an outcome which is consistent with our previous results for motion detection (Linares, Motoyoshi, & Nishida, 2012) and indicates that transient signals are important when computing motion in the presence of static stimulation. 
The activity of MT neurons has been associated with the perception of motion coherence (e.g., Britten, Shadlen, Newsome, & Movshon, 1992; Britten, Shadlen, Newsome, & Movshon, 1993; Cohen & Newsome, 2009). Most MT neurons respond linearly with coherence (Britten et al., 1993), which is consistent with the linear behavior of the perceptual response function that we found for high coherences in the No surround condition (Figure 3). Some other MT neurons respond with expansive nonlinearities, which is consistent with the pedestal effect, but about the same number respond with compressive nonlinearities (Britten et al., 1993). While it is encouraging to find qualitative agreement of the psychophysical response functions with the physiological responses of single MT neurons, we do not expect a quantitative agreement in the exact non-linear shape of the response functions for several reasons. First, the stimulus conditions are not the same: while we used brief stimuli, the effect of coherence over a large range of coherences on MT neurons has been tested only with long stimuli, with activity averaged over long periods (Britten et al., 1993). Second, while we used the discrimination threshold as the unit of psychophysical response assuming additive noise, this is not necessarily the case for the neural response. For long stimuli, for example, the noise of MT neurons increases with the response (Britten et al., 1993; Snowden, Treue, & Andersen, 1992). It is possible, however, that the noise becomes more additive for brief stimuli (Müller, Metha, Krauskopf, & Lennie, 2001). Third, one needs to consider that perception depends not on the activity of a single neuron, but on the activity of a population of neurons (Cohen & Newsome, 2009; Sanborn & Dayan, 2011) and on how this activity is decoded (Gold & Ding, 2013).
Motion strength gain or response gain?
In the attention literature, it is often considered that, to a first approximation, attention can change the transduction of the intensity variable contrast by changing the response gain or the contrast gain (e.g., Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010; Huang & Dobkins, 2005; Reynolds & Heeger, 2009; Reynolds, Pasternak, & Desimone, 2000; Williford & Maunsell, 2006). Similarly, we considered that the synchronous surround could increase the perceptual response multiplicatively (response gain hypothesis) or have an effect equivalent to increasing the motion strength multiplicatively (motion strength gain hypothesis). The response gain hypothesis predicts that the surround should increase the slope of the perceptual response function and facilitate discrimination for all coherences, which is inconsistent with our estimated perceptual response functions (Figure 6). The motion strength gain hypothesis, however, predicts a leftward shift of the perceptual response function that is consistent with our estimated perceptual responses (Figure 6). Given that a coherence gain change effectively rescales the coherence axis rather than the response axis, the amplitude of the neural response contributing to perception (the decoded activity of a population of neurons) should also change according to a coherence gain change, independently of the mapping between the perceptual and the neural response (a response gain change, in contrast, would be preserved only if the mapping between the perceptual and the neural response is a homogeneous transformation).
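The two hypotheses can be written down compactly; the sketch below, which reuses the naka_rushton helper from the earlier conjoint-fit sketch, shows how a single least-squares gain factor relating the No surround function to the Synchronous surround function would be obtained under each hypothesis (this mirrors the procedure described in the Figure 6 legend, but the function names are ours).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def response_gain(c, g, params_no):
    """Response gain: multiply the No surround response function by a factor g."""
    return g * naka_rushton(c, *params_no)

def coherence_gain(c, g, params_no):
    """Coherence (motion strength) gain: equivalent to scaling the input coherence by g."""
    return naka_rushton(g * c, *params_no)

def best_gain(prediction, c, target, params_no):
    """Least-squares gain factor that best maps the No surround function onto a
    target response function (e.g., the Synchronous surround one) sampled at c."""
    loss = lambda g: np.sum((prediction(c, g, params_no) - target) ** 2)
    return minimize_scalar(loss, bounds=(0.1, 10.0), method="bounded").x
```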
Figure 6
Perceptual response functions from Figure 3 plotted twice, together with the predictions of the response gain and coherence gain hypotheses (dotted lines). To obtain the predicted curve for the response gain hypothesis, we multiplied the perceptual response function for the No surround condition by the factor that best fitted (least squares) the perceptual response function for the Synchronous surround condition (and did the same for the Sustained surround condition). We obtained the predictions for the coherence gain hypothesis similarly, but using a multiplicative factor for the coherence.
To assess how the surround might increase the coherence gain of the perceptual response to motion coherence, we considered the two-stage computational model of MT neurons of Simoncelli and Heeger (S&H; Simoncelli & Heeger, 1998). Consistent with our perceptual response function for the No surround condition, S&H reported that the response of a simulated MT neuron increases slightly expansively with coherence (figure 11B in Simoncelli & Heeger, 1998). This expansive behavior is also consistent with the response to motion coherence of an energy sensor (Adelson & Bergen, 1985) that integrates spectral power in a narrow band of frequency space (Britten et al., 1993). The reported expansive nonlinearity, however, was not pronounced enough to explain our perceptual response functions. The same occurred when, instead of considering the response of just one neuron, we subtracted the activity of two neurons tuned to opposite directions (Britten et al., 1992). Maintaining this simple decoding strategy, we also found that when the semi-saturation level of the second stage was allowed to vary freely, the expansive bend was still not pronounced enough to explain the expansive behavior. In the default version of S&H, the linear velocity-selective responses are squared in the second stage of the model (equation 6 in Simoncelli & Heeger, 1998). But when we allowed both the exponent and the semi-saturation to vary, we could capture the expansive behavior (Figure 7A). The best parameters corresponded to exponents near 4 and small semi-saturation levels (Figure 7B). To mimic the effect of the synchronous surround on the perceptual response function, we tried changing the semi-saturation level in the first stage of the model, but this did not affect the response of the MT neurons to coherence. A reduction of the semi-saturation level in the second stage, however, together with a reduction in the exponent, mimicked the coherence gain effect produced by the synchronous surround on the perceptual response functions (Figure 7). A reduction of the semi-saturation level in the Naka-Rushton function (Equation 1) is what the motion strength (coherence) gain hypothesis predicts (Huang & Dobkins, 2005), but this analysis further suggests that the reduction occurs in the second stage of the S&H model and is accompanied by a change in the expansive non-linearity. The biophysical mechanisms that could produce these changes in the parameters of the descriptive equations of the S&H model are largely unknown (Carandini & Heeger, 2011; Reynolds & Heeger, 2009).
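The fitting procedure described in the Figure 7 legend is essentially a grid search; the sketch below shows that outer loop in Python under the assumption that a placeholder function mt_response(coherence, sigma, n) returns the simulated opponent MT response (in practice this would come from the MATLAB implementation of the S&H model mentioned in the legend, which we do not reproduce here).

```python
import itertools
import numpy as np

def fit_sh_grid(coherences, target, mt_response, fixed_scale=None):
    """Grid search over the second-stage semi-saturation (sigma) and exponent (n)
    to match a perceptual response function sampled at `coherences`. For the No
    surround condition the scale factor is the closed-form least-squares value;
    for the other conditions it can be fixed to the No surround value."""
    best = None
    for sigma, n in itertools.product(np.arange(0.0, 0.31, 0.01),
                                      np.arange(0.1, 4.05, 0.1)):
        r = np.array([mt_response(c, sigma, n) for c in coherences])
        denom = np.sum(r ** 2)
        if denom == 0:
            continue
        scale = np.sum(r * target) / denom if fixed_scale is None else fixed_scale
        err = np.sum((scale * r - target) ** 2)
        if best is None or err < best["error"]:
            best = {"error": err, "sigma": sigma, "n": n, "scale": scale}
    return best
```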
Figure 7
Simulations using the Simoncelli and Heeger model (S&H; Simoncelli & Heeger, 1998). (A) The curves correspond to the perceptual response functions in Figure 3. The points show the results of the simulations using S&H. We used the MATLAB implementation of the model available on-line (www.cns.nyu.edu/∼lcv/MTmodel). First, we obtained the response to coherence of an MT neuron tuned to the direction of the stimulus for several values of the semi-saturation level σ (from 0 to 0.3 in steps of 0.01) and the exponent n (from 0.1 to 4 in steps of 0.1) of the second stage of the model (equation 6 in S&H; note that the default value of the exponent is 2). We did the same for a neuron tuned to the opposite direction of motion. We then subtracted the responses and, by adjusting the scale factor, looked for the σ and n that best fitted (least squares) the perceptual response function for the No surround condition. The values are indicated in the table in (B) (No surround condition). We obtained the values of σ and n for the other two conditions by using the scale factor of the No surround condition and looking for the values of σ and n that minimized the least-squares distance to the corresponding perceptual response functions.
Our previous findings (Linares, Motoyoshi, & Nishida, 2012), the findings reported here, and recent results from other laboratories (Churan, Richard, & Pack, 2009; Wexler, Glennerster, Cavanagh, Ito, & Seno, 2013) suggest that transient signals strongly affect how motion is computed. Given that our eye and body movements continuously introduce transients in the retinal image, it is possible that these findings are related to how motion is computed in natural vision. We speculate, for example, that the effects of transients on motion perception might be related to the post-saccadic enhancement of smooth pursuit (Lisberger, 1998; Wilmer & Nakayama, 2007) or to the ocular following response (Kawano & Miles, 1986).
Acknowledgments
We would like to thank Warrick Roseboom, Isamu Motoyoshi, and Daniel Baker for helpful discussions about the pedestal effect; Takahiro Kawabe and Isamu Motoyoshi for help with the data collection; and Warrick Roseboom for proofreading the manuscript. 
Commercial relationships: none. 
Corresponding author: Daniel Linares. 
Address: Communication Science Laboratories, NTT, Atsugi, Kanagawa, Japan. 
References
Adelson E. H. Bergen J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, Optics and Image Science, 2 (2), 284–299.
Akaike H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19 (6), 716–723.
Anton-Erxleben K. Abrams J. Carrasco M. (2010). Evaluating comparative and equality judgments in contrast perception: Attention alters appearance. Journal of Vision, 10 (11): 6, 1–22, http://www.journalofvision.org/content/10/11/6, doi:10.1167/10.11.6.
Anton-Erxleben K. Abrams J. Carrasco M. (2011). Equality judgments cannot distinguish between attention effects on appearance and criterion: A reply to Schneider (2011). Journal of Vision, 11 (13): 8, 1–8, http://www.journalofvision.org/content/11/13/8, doi:10.1167/11.13.8.
Arabzadeh E. Clifford C. W. G. Harris J. A. (2008). Vision merges with touch in a purely tactile discrimination. Psychological Science, 19 (7), 635–641, doi:10.1111/j.1467-9280.2008.02134.x.
Bird C. M. Henning G. B. Wichmann F. A. (2002). Contrast discrimination with sinusoidal gratings of different spatial frequency. Journal of the Optical Society of America A, 19, 1267–1273.
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. The Journal of Neuroscience, 12 (12), 4745–4765.
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1993). Responses of neurons in macaque MT to stochastic motion signals. Visual Neuroscience, 10 (6), 1157–1169.
Burnham K. P. Anderson D. R. (2002). Model selection and multi-model inference: A practical information-theoretic approach (2nd ed.). New York: Springer.
Burr D. Silva O. Cicchini G. M. Banks M. S. Morrone M. C. (2009). Temporal mechanisms of multimodal binding. Proceedings of the Royal Society B: Biological Sciences, 276 (1663), 1761–1769.
Carandini M. Heeger D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13 (1), 51–62, doi:10.1038/nrn3136.
Carrasco M. Ling S. Read S. (2004). Attention alters appearance. Nature Neuroscience, 7 (3), 308–313.
Churan J. Richard A. G. Pack C. C. (2009). Interaction of spatial and temporal factors in psychophysical estimates of surround suppression. Journal of Vision, 9 (4): 15, 1–15, http://www.journalofvision.org/content/9/4/15, doi:10.1167/9.4.15.
Cohen M. R. Newsome W. T. (2009). Estimates of the contribution of single neurons to perception depend on timescale and noise correlation. The Journal of Neuroscience, 29 (20), 6635–6648.
De Bruyn B. Orban G. A. (1988). Human velocity and direction discrimination measured with random dot patterns. Vision Research, 28 (12), 1323–1335.
Erlebacher A. Sekuler R. (1971). Response frequency equalization: A bias model for psychophysics. Perception and Psychophysics, 9 (3), 315–320.
Foley J. M. Schwarz W. (1998). Spatial attention: Effect of position uncertainty and number of distractor patterns on the threshold-versus-contrast function for contrast discrimination. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 15 (5), 1036–1047.
García-Pérez M. A. Alcalá-Quintana R. (2007). The transducer model for contrast detection and discrimination: Formal relations, implications, and an empirical test. Spatial Vision, 20 (1–2), 5–43.
García-Pérez M. A. Alcalá-Quintana R. Woods R. L. Peli E. (2011). Psychometric functions for detection and discrimination with and without flankers. Attention, Perception & Psychophysics, 73 (3), 829–853.
Gescheider G. A. (1997). Psychophysics: The fundamentals. Mahwah, NJ: Lawrence Erlbaum.
Gold J. I. Ding L. (2013). How mechanisms of perceptual decision-making affect the psychometric function. Progress in Neurobiology, 103, 98–114.
Gori M. Mazzilli G. Sandini G. Burr D. (2011). Cross-sensory facilitation reveals neural interactions between visual and tactile motion in humans. Frontiers in Psychology, 2, 55, doi:10.3389/fpsyg.2011.00055.
Graham C. H. Bartlett N. R. Brown J. L. Hsia Y. Mueller C. G. Riggs L. A. (1965). Vision and visual perception (Chapter 20). New York: John Wiley.
Green D. M. Swets J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Herrmann K. Montaser-Kouhsari L. Carrasco M. Heeger D. J. (2010). When size matters: Attention affects performance by contrast or response gain. Nature Neuroscience, 13 (12), 1554–1559.
Huang L. Dobkins K. R. (2005). Attentional effects on contrast discrimination in humans: Evidence for both contrast gain and response gain. Vision Research, 45 (9), 1201–1212.
Kawano K. Miles F. A. (1986). Short-latency ocular following responses of monkey. II. Dependence on a prior saccadic eye movement. Journal of Neurophysiology, 56 (5), 1355–1380.
Kingdom F. A. Prins N. (2009). Psychophysics: A practical introduction. London: Academic Press.
Lee D. K. Itti L. Koch C. Braun J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neuroscience, 2 (4), 375–381.
Linares D. Motoyoshi I. Nishida S. (2012). Surround facilitation for rapid motion perception. Journal of Vision, 12 (10): 3, 1–10, http://www.journalofvision.org/content/12/10/3, doi:10.1167/12.10.3.
Lisberger S. G. (1998). Postsaccadic enhancement of initiation of smooth pursuit eye movements in monkeys. Journal of Neurophysiology, 79 (4), 1918–1930.
McKee S. P. (1981). A local mechanism for differential velocity detection. Vision Research, 21 (4), 491–500.
McKee S. P. Nakayama K. (1984). The detection of motion in the peripheral visual field. Vision Research, 24 (1), 25–32.
Morrone M. C. Denti V. Spinelli D. (2002). Color and luminance contrasts attract independent attention. Current Biology, 12 (13), 1134–1137.
Motoyoshi I. Nishida S. (2001). Visual response saturation to orientation contrast in the perception of texture boundary. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 18 (9), 2209–2219.
Müller J. R. Metha A. B. Krauskopf J. Lennie P. (2001). Information conveyed by onset transients in responses of striate cortical neurons. Journal of Neuroscience, 21 (17), 6978–6990.
Nachmias J. Sansbury R. V. (1974). Letter: Grating contrast: Discrimination may be better than detection. Vision Research, 14 (10), 1039–1042.
Orban G. A. de Wolf J. Maes H. (1984). Factors influencing velocity coding in the human visual system. Vision Research, 24 (1), 33–39.
Peirce J. W. (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162, 8–13.
Pestilli F. Carrasco M. Heeger D. J. Gardner J. L. (2011). Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron, 72 (5), 832–846.
R Core Team. (2013). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available at http://www.R-project.org/.
Reynolds J. H. Heeger D. J. (2009). The normalization model of attention. Neuron, 61 (2), 168–185.
Reynolds J. H. Pasternak T. Desimone R. (2000). Attention increases sensitivity of V4 neurons. Neuron, 26 (3), 703–714.
Sanborn A. N. Dayan P. (2011). Optimal decisions for contrast discrimination. Journal of Vision, 11 (14): 9, 1–13, http://www.journalofvision.org/content/11/14/9, doi:10.1167/11.14.9.
Schneider K. A. (2011). Attention alters decision criteria but not appearance: A reanalysis of Anton-Erxleben, Abrams, and Carrasco (2010). Journal of Vision, 11 (13): 7, 1–8, http://www.journalofvision.org/content/11/13/7, doi:10.1167/11.13.7.
Simoncelli E. P. Heeger D. J. (1998). A model of neuronal responses in visual area MT. Vision Research, 38 (5), 743–761.
Simpson W. A. Finsten B. A. (1995). Pedestal effect in visual motion discrimination. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 12 (12), 2555–2563.
Snowden R. J. Treue S. Andersen R. A. (1992). The response of neurons in areas V1 and MT of the alert rhesus monkey to moving random dot patterns. Experimental Brain Research, 88 (2), 389–400.
Solomon J. A. (2009). The history of dipper functions. Attention, Perception & Psychophysics, 71 (3), 435–443.
Wexler M. Glennerster A. Cavanagh P. Ito H. Seno T. (2013). Default perception of high-speed motion. Proceedings of the National Academy of Sciences, USA, 110 (17), 7080–7085.
Wichmann F. A. Hill N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception and Psychophysics, 63 (8), 1293–1313.
Williford T. Maunsell J. H. R. (2006). Effects of spatial attention on contrast response functions in macaque area V4. Journal of Neurophysiology, 96 (1), 40–54.
Wilmer J. B. Nakayama K. (2007). Two distinct visual motion mechanisms for smooth pursuit: Evidence from individual differences. Neuron, 54 (6), 987–1000.
Wilson H. R. (1980). A transducer function for threshold and suprathreshold human vision. Biological Cybernetics, 38 (3), 171–178.
Zenger-Landolt B. Heeger D. J. (2003). Response suppression in V1 agrees with psychophysics of surround masking. Journal of Neuroscience, 23 (17), 6884–6893.
Zychaluk K. Foster D. H. (2009). Model-free estimation of the psychometric function. Attention, Perception & Psychophysics, 71 (6), 1414–1425.
Appendix
Here, we estimated the perceptual response functions μ using the thresholds in Figure 2B (e.g., Morrone, Denti, & Spinelli, 2002; Motoyoshi & Nishida, 2001; Zenger-Landolt & Heeger, 2003). We considered that, under the additive noise assumption, two different coherences can be discriminated when the difference in the perceptual response that they cause reaches a certain value, which we arbitrarily fix to 1. Using the same parameterization of μ as in the main text (Equation 1), the specific shape of μ is given by the parameters N, M, and a that minimize (least squares) E0² + E1² + E2² + E3² + E4² + E5², where
E0 = μ(threshold coherence for the 0% pedestal) − μ(0) − 1
E1 = μ(threshold coherence for the 10% pedestal + 0.1) − μ(threshold coherence for the 0% pedestal) − 1
E2 = μ(threshold coherence for the 20% pedestal + 0.2) − μ(threshold coherence for the 10% pedestal + 0.1) − 1
E3 = μ(threshold coherence for the 30% pedestal + 0.3) − μ(threshold coherence for the 20% pedestal + 0.2) − 1
E4 = μ(threshold coherence for the 40% pedestal + 0.4) − μ(threshold coherence for the 30% pedestal + 0.3) − 1
E5 = μ(threshold coherence for the 50% pedestal + 0.5) − μ(threshold coherence for the 40% pedestal + 0.4) − 1
With the best-fitting parameters, we obtained the perceptual response functions using Equation 1. Figure 8 shows that the perceptual response functions obtained using this method (continuous lines) are very similar to the perceptual response functions that we derived in the main text (dotted lines). We think, however, that the method used in the main text makes more sense theoretically; we explain why below.
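A short Python sketch of this least-squares fit follows (an illustration under the assumptions above, not the authors' code; the pedestal and threshold vectors are placeholders).

```python
import numpy as np
from scipy.optimize import minimize

def appendix_fit(pedestals, thresholds):
    """Fit Equation 1 to the thresholds of Figure 2B. Coherences that are just
    discriminable from successive pedestals are assumed to differ by one
    (arbitrary) response unit, so each term E_i should be zero; the sum of
    squared E_i is minimized over M, N, and a."""
    levels = np.asarray(pedestals) + np.asarray(thresholds)  # just-discriminable coherences
    previous = np.concatenate(([0.0], levels[:-1]))          # 0, then the preceding levels
    def loss(params):
        M, N, a = params
        mu = lambda c: M * c**N / (c**N + a**N)
        e = mu(levels) - mu(previous) - 1.0
        return np.sum(e ** 2)
    return minimize(loss, [1.0, 2.0, 0.3], method="Nelder-Mead").x

# e.g., pedestals = [0, .1, .2, .3, .4, .5] with the corresponding thresholds from Figure 2B
```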
Figure 8
Perceptual response functions obtained using the thresholds in Figure 2B (continuous lines, Appendix) and using the proportions in Figure 2A (dotted lines, main text). For better comparison, as the response units are arbitrary, for each observer the perceptual response functions obtained from thresholds are multiplied by a scale factor that minimizes the least-squares distance to the perceptual response function obtained from proportions.
To estimate the perceptual response function in this Appendix, we used the thresholds obtained by fitting cumulative normal distributions to the proportion of correct discriminations. Choosing an arbitrary shape to fit independent psychometric functions for each pedestal is, however, arguable because the shape of the perceptual response and the shape of the psychometric function should be related (García-Pérez & Alcalá-Quintana, 2007; Green & Swets, 1966; Gold & Ding, 2013; Kingdom & Prins, 2009; Nachmias & Sansbury, 1974). Fitting independent psychometric functions might not be problematic in contrast discrimination experiments (e.g., Bird, Henning, & Wichmann, 2002) because in comparison with the range of available contrasts, the range of contrasts used to fit a psychometric function for a given pedestal is small. This small range suggests that within the range of contrasts necessary to fit a psychometric function, the perceptual response function might be well approximated by a linear function, and consequently, a cumulative normal distribution might be a good approximation for the psychometric function (signal detection theory coupled with additive noise predicts a cumulative normal distribution; Kingdom & Prins, 2009, Chapter 4). For motion coherence discrimination, however, the range of coherences that we needed to fit a psychometric function for a given pedestal (Figure 2A) was quite large relative to the range of possible coherences (0%–100%). This large range suggests that the perceptual response is not linear within the range over which we fitted the psychometric function and that a cumulative normal distribution is not the best shape for it. For this reason, we think that the method described in the main text in which all the psychometric functions are fitted conjointly using the shape of a hypothetical perceptual response function or transducer is a better method (García-Pérez & Alcalá-Quintana, 2007). 
Figure 1
 
Illustration of the motion discrimination experiment (Experiment 1). The elements depicted in yellow were not displayed in the experiment. In both surround conditions, the dots in the surround were static.
Figure 2
 
Motion strength discrimination. (A) Proportion of correct discriminations as a function of the coherence of the display with the variable coherence, for each observer (columns), pedestal coherence (rows), and surround condition (colors). The curves correspond to cumulative normal distributions. The horizontal colored segments indicate the 75% coherence thresholds. (B) 75% coherence thresholds as a function of the pedestal coherence. The error bars correspond to 95% parametric bootstrap confidence intervals using 1,000 samples (Kingdom & Prins, 2009, Chapter 4). The black rims indicate whether the thresholds were significantly different from the thresholds for the No surround condition. To assess whether two thresholds were significantly different, we calculated the differences between 1,000 pairs of simulated thresholds obtained by parametric bootstrap and evaluated whether the 95% confidence interval of the differences excluded zero. To facilitate the visualization of the results, data points and error bars for different conditions are slightly shifted horizontally; error bars extending below −0.1 or above 1.2 are not plotted (for some observers they were very large).
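As a rough illustration of the bootstrap comparison described in the Figure 2 caption, the sketch below simulates binomial data from a fitted cumulative-normal psychometric function, refits each simulated data set, and compares the resulting threshold distributions across two conditions. The parameterization, trial counts, and fitted parameters are placeholders, and the fitting routine is a generic least-squares fit rather than the exact procedure used in the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def pf(x, mean, sd):
    # Cumulative normal from 0.5 to 1; 'mean' is the 75%-correct coherence.
    return 0.5 + 0.5 * norm.cdf(x, mean, abs(sd))

def boot_thresholds(x, n_trials, mean, sd, n_boot=1000):
    # Parametric bootstrap: simulate binomial data from the fitted curve and
    # refit each simulated data set to obtain a distribution of thresholds.
    p_true = pf(x, mean, sd)
    thresholds = np.empty(n_boot)
    for i in range(n_boot):
        k = rng.binomial(n_trials, p_true)
        (m, s), _ = curve_fit(pf, x, k / n_trials, p0=[mean, sd])
        thresholds[i] = m
    return thresholds

# Made-up coherences and previously fitted parameters for two conditions.
x = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
t_no_surround = boot_thresholds(x, 40, mean=0.15, sd=0.06)
t_synchronous = boot_thresholds(x, 40, mean=0.10, sd=0.05)

# The difference is deemed significant when the 95% percentile interval of
# the bootstrapped differences excludes zero.
diff = t_synchronous - t_no_surround
ci = np.percentile(diff, [2.5, 97.5])
```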
Figure 3
 
Perceptual response functions. For each observer, we estimated the perceptual response function up to a coherence equal to 0.5 plus the coherence threshold obtained for the No surround condition at the 0.5 pedestal coherence. We restricted the range of coherences because, for larger coherences, the estimated response would be unreliable given that we did not measure pedestals larger than 0.5.
Figure 4
 
Psychometric functions and thresholds fitting all proportions conjointly. (A) Proportion of correct discriminations as a function of the coherence of the display with the variable coherence for each observer, pedestal coherence, and surround condition. The curves were obtained using Equation 2 with the MLE parameters. (B) 75% coherence thresholds as a function of the pedestal coherence. The error bars correspond to the 95% parametric bootstrap confidence intervals using 1,000 samples (Kingdom & Prins, 2009). To facilitate the visualization of the results, data points and error bars for different conditions are slightly shifted horizontally; for observers MT and RH, the threshold for the 0.5 pedestal for the Synchronous surround condition is not plotted because it was very large; for observer KN, the upper limit of the confidence interval for the 0.5 pedestal for the Synchronous surround condition was constrained to be 0.75 (it was 2).
Figure 5
 
Perceived motion strength (Experiment 2). (A) Proportion of trials in which a display with No surround was perceived as less coherent than a 0.4 coherence display with a synchronous, sustained, or no surround, as a function of the coherence of the display with No surround. The curves correspond to MLE cumulative normal distributions with lower asymptote 0 and upper asymptote 1. Of the 27 fits, 22 were adequate according to the deviance statistic (using the procedure described in Experiment 1). The dotted lines indicate the points of subjective equality, and the grey areas the confidence intervals of the points of subjective equality for the Synchronous surround condition (using 1,000 parametric bootstrap samples). (B) Perceptual response functions from Figure 3 and the points of subjective equality with their confidence intervals for the Synchronous surround condition in Figure 5A. The black lines indicate the estimated perceived coherence of a 0.4 coherence display with a Synchronous surround using the perceptual response functions.
Figure 6
 
Perceptual response functions in Figure 3 plotted twice together with the predictions of the response gain and coherence gain hypotheses (dotted lines). To obtain the predicted curve for the response gain hypothesis, we multiplied the perceptual response function for the No surround condition by the factor that best fitted (least squares) the perceptual response function for the Synchronous surround condition (and did the same for the Sustained surround condition). We obtained the predictions for the coherence gain hypothesis in the same way, but applying the multiplicative factor to the coherence.
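The two predictions in the Figure 6 caption amount to fitting a single multiplicative factor, applied either to the response axis (response gain) or to the coherence axis (coherence gain). A minimal sketch follows, with made-up response functions standing in for the estimated ones.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder response functions sampled on a common coherence axis; in the
# actual analysis these would be the estimated perceptual response functions.
coherence = np.linspace(0.0, 0.6, 61)
resp_no_surround = coherence / (coherence + 0.20)
resp_synchronous = coherence / (coherence + 0.10)

# Response gain: a single factor scales the output of the No-surround function.
def sse_response_gain(g):
    return np.sum((g * resp_no_surround - resp_synchronous) ** 2)

# Coherence gain: a single factor scales the input coherence before it is
# passed through the No-surround function (evaluated by interpolation).
def sse_coherence_gain(g):
    pred = np.interp(g * coherence, coherence, resp_no_surround)
    return np.sum((pred - resp_synchronous) ** 2)

g_response  = minimize_scalar(sse_response_gain,  bounds=(0.1, 10), method="bounded").x
g_coherence = minimize_scalar(sse_coherence_gain, bounds=(0.1, 10), method="bounded").x
```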
Figure 7
 
Simulations using the Simoncelli and Heeger model (S&H; Simoncelli & Heeger, 1998). (A) The curves correspond to the perceptual response functions in Figure 3. The points show the results of the simulations using S&H. We used the MATLAB implementation of the model available online (www.cns.nyu.edu/∼lcv/MTmodel). First, we obtained the response as a function of coherence of an MT neuron tuned to the direction of the stimulus for several values of the semi-saturation level σ (from 0 to 0.3 in steps of 0.01) and the exponent n (from 0.1 to 4 in steps of 0.1) of the second stage of the model (equation 6 in S&H; note that the default value of the exponent is 2). We did the same for a neuron tuned to the opposite direction of motion. We then subtracted the responses and, adjusting a scale factor, looked for the values of σ and n that best fitted (least squares) the perceptual response function for the No surround condition. These values are indicated in the table in (B) (No surround condition). We obtained the values of σ and n for the other two conditions by using the scale factor of the No surround condition and looking for the values of σ and n that minimized the least-squares distance to the corresponding perceptual response functions.
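The fitting procedure in the Figure 7 caption is essentially a grid search over σ and n combined with a least-squares scale factor. The sketch below shows that outer loop only; model_response is a placeholder for a call to the actual MTmodel code, which is not reproduced here, and the target curve is made up.

```python
import numpy as np

def model_response(coherence, sigma, n):
    # Placeholder for the opponent MT response (preferred minus opposite
    # direction) that the actual MTmodel code would return; not the real model.
    return coherence**n / (coherence**n + sigma**n + 1e-9)

coherence = np.linspace(0.0, 0.6, 13)
target = coherence / (coherence + 0.15)   # placeholder perceptual response

best = None
for sigma in np.arange(0.0, 0.31, 0.01):      # semi-saturation grid
    for n in np.arange(0.1, 4.01, 0.1):       # exponent grid
        r = model_response(coherence, sigma, n)
        # The best least-squares scale factor has a closed form: k = <r,t>/<r,r>.
        k = np.dot(r, target) / np.dot(r, r)
        sse = np.sum((k * r - target) ** 2)
        if best is None or sse < best[0]:
            best = (sse, sigma, n, k)

sse, sigma_hat, n_hat, scale = best
```

For the two surround conditions, the caption states that the scale factor is held at the value fitted for the No surround condition and only σ and n are searched; that variant simply fixes k in the loop above.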