Open Access
Article  |   August 2017
A new method for mapping perceptual biases across visual space
Author Affiliations
  • Nonie J. Finlayson
    Experimental Psychology, University College London, London, United Kingdom
  • Andriani Papageorgiou
    Experimental Psychology, University College London, London, United Kingdom
  • D. Samuel Schwarzkopf
    Experimental Psychology, University College London, London, United Kingdom
    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
Journal of Vision August 2017, Vol.17, 5. doi:10.1167/17.9.5
Abstract

How we perceive the environment is not stable and seamless. Recent studies found that how a person qualitatively experiences even simple visual stimuli can vary dramatically across different locations in the visual field. Here we use a method we developed recently, multiple alternatives perceptual search (MAPS), for efficiently mapping such perceptual biases across several locations. This procedure reliably quantifies the spatial pattern of perceptual biases as well as uncertainty and choice. We show that these measurements are strongly correlated with those from traditional psychophysical methods and that exogenous attention can skew biases without affecting overall task performance. Taken together, these findings establish MAPS as an efficient method for measuring how an individual's perceptual experience varies across space.

Introduction
Perceptual biases and illusions can reveal the underlying processing of sensory input in the brain because they disentangle the objective physical stimulus from the phenomenological subjective experience. The nature of the discrepancy between these quantities thus informs about what neural computations give rise to perceptual experience. For instance, the magnitude of the tilt illusion depends on eccentricity and thus on the cortical distance between center and surround in the retinotopic representation of the stimulus in the early visual cortex (Mareschal, Morgan, & Solomon, 2010). We can use such relationships to posit and test mechanistic hypotheses about how perceptual processing operates. 
Similarly, the spatial heterogeneity of perceptual biases across the visual field could indicate heterogeneity in the neural representation of such stimuli. Recent research has demonstrated robust individual perceptual biases with which stimuli appear differently depending on where they are located in the visual field (Afraz, Pashkam, & Cavanagh, 2010; Greenwood, Szinte, Sayim, & Cavanagh, 2017; Moutsiana et al., 2016; Schwarzkopf & Rees, 2013; Szinte & Cavanagh, 2011). If the spatial receptive fields of neurons tuned to a particular stimulus attribute preferentially cover one visual field location, this “under-sampling” must result in spatially uneven encoding of that attribute. Both the uncertainty (discrimination sensitivity) and perceptual biases (the subjective appearance of identical stimuli) vary across visual space. Such perceptual heterogeneity has been demonstrated for complex judgments, such as the age or gender of faces; intermediate features, such as shape; and low-level attributes, such as position, orientation, size, and color. 
Studies of spatial heterogeneity therefore require measuring perceptual functions at multiple locations. However, traditional visual psychophysical methods usually present isolated stimuli, either by using spatial or temporal two-alternative, forced choice designs or by simply asking observers to make a perceptual judgment on a single stimulus, such as whether a grating was oriented left or right of vertical. Doing so for numerous stimulus locations is time-intensive. Moreover, it is arguably an unnatural situation. Many perceptual judgments in ecological, everyday circumstances occur in cluttered environments and may require discriminating similar objects presented simultaneously, such as determining which of the lids in Figure 1A fits onto the container. Numerous experiments have shown that the perceptual quality of a stimulus is modulated when it is part of an ensemble compared to when it appears in isolation (Ariely, 2001; Chong & Treisman, 2003; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001; Walker & Vul, 2014). 
Figure 1
 
(A) Idiosyncratic biases in size perception. Visual objects often appear in the presence of similar objects, for example, searching for the correct size of lid for a container. What is the neural basis for this judgment? (B) The MAPS task (Moutsiana et al., 2016). In each trial, observers fixated on the center of the screen and viewed an array of five circles for 200 ms. The central circle was constant in size, and the others varied across trials. Each frame here represents the stimulus from one trial. The arrow denotes the flow of time. Observers judged which of the circles in the four corners appeared most similar in size to the central one. (C) Analysis of behavioral data from the MAPS task. The behavioral responses in each trial were modeled by an array of four “neural detectors” tuned to stimulus size (expressed as the binary logarithm of the ratio between the target and the reference circle diameters). Tuning was modeled as a Gaussian curve. The detector showing the strongest output to the stimulus (indicated by the red arrows) determined the predicted behavioral response in each trial (here, the top right detector would win). Model fitting minimized the prediction error (in this example, the model predicted the actual behavioral choice correctly for 50% of trials) across the experimental run by adapting the mean and dispersion of each detector. (Panels B and C reused from Moutsiana et al., 2016).
The fact that biases are, by definition, subjective further complicates their measurement. Although traditional psychophysics designs with two-alternative choices are suitable for measuring objective performance, such as discrimination thresholds, measurements of subjective biases can easily be skewed by cognitive factors, such as expectation effects or simple response bias (Morgan, Dillenburger, Raphael, & Solomon, 2011; Morgan, Melmoth, & Solomon, 2013). When confronted with two barely discriminable candidate stimuli, observers may have a (possibly subconscious) tendency to choose the one they expect to be "correct," whether because they are aware of the hypothesis being tested or because of cognitive factors entirely unrelated to their actual percept. Indeed, giving observers false feedback about whether their perceptual decision was correct can markedly shift psychometric curves acquired using a two-alternative, forced choice design (Morgan et al., 2011). Several procedures have been developed to counteract such cognitive confounds (Jogan & Stocker, 2014; Morgan et al., 2011; Morgan et al., 2013; Patten & Clifford, 2015). What they all have in common is that, instead of the perceptual judgments used in traditional psychophysics (e.g., which of two stimuli is larger, brighter, more tilted, etc.), the observer's task is to find the candidate stimulus among a set of at least two that is the closest perceptual match to a reference. The reference can be either an explicitly presented stimulus or an implicit one, such as "vertical orientation." In either case, unlike in the method of constant stimuli, the constant reference stimulus is never a valid choice for completing the task. 
We recently developed a new procedure called multiple alternatives perceptual search (MAPS; Moutsiana et al., 2016) with the aims both to satisfy the need for efficiency and to minimize cognitive confounds when mapping perceptual biases across multiple visual field locations when stimuli are presented as an ensemble. In this task, we instructed observers to identify the stimulus among the set of several stimuli that is perceptually most like a reference stimulus. We then fit a multiparameter model that assumes a tuned detector at each stimulus location to predict the observer's choice on every trial. This model estimates the perceptual bias and uncertainty at every stimulus location. Moutsiana et al. (2016) determined that these measurements of an individual's visual field biases were highly reliable, which demonstrates that there is a stable and real bias in size perception at different visual field locations. Here, we compared the parameter estimates for MAPS with those obtained with traditional psychophysical procedures and tested whether exogenous attentional cueing influences the appearance of stimuli to control for differences found between MAPS and traditional methods. Experiments used size judgments of small circle stimuli presented either in isolation or in the presence of contextual illusions. 
Materials and methods
Participants
The authors and several naïve observers participated in these experiments. All participants were healthy and had normal or corrected-to-normal visual acuity. All participants gave written informed consent, and the UCL Research Ethics Committee approved procedures. Eight participants (two female, two authors, one left-handed) participated in the experiment comparing perceptual biases measured with the method of constant stimuli (MCS; comparing tasks). Five participants (two female, two authors, one left-handed, ages 26–38) participated in a conceptual replication of this experiment comparing the Ebbinghaus illusion strength measured with different tasks (comparing Ebbinghaus). Eighteen participants (eight female, three authors, one left-handed, ages 21–42) participated in the attentional cueing experiment (attentional cueing). 
MAPS
To estimate perceptual biases efficiently at four visual field locations, we developed the MAPS task. This is a matching paradigm using analyses related to reverse correlation or classification image approaches (Abbey & Eckstein, 2002; Li, Levi, & Klein, 2004) that seeks to directly estimate the points of subjective equality while also allowing an inference of uncertainty. 
Stimuli
Participants were seated in a dark, noise-shielded room in front of a computer screen (Samsung 2233RZ) using its native resolution of 1,680 × 1,050 pixels and a refresh rate of 120 Hz. Minimum and maximum luminance values were 0.25 and 230 cd/m2. Viewing distance was 48 cm. Participants used both hands to indicate responses by pressing buttons on a keyboard. All stimuli were generated and displayed using MATLAB (The MathWorks Inc., Natick, MA) and the Psychophysics Toolbox, version 3 (Brainard, 1997). 
The stimuli comprised light gray (54 cd/m2) circle outlines presented on a black background. Each stimulus array consisted of five circles (Figure 1B). One, the reference, was presented in the center of the screen and was always constant in size (diameter: 0.98° visual angle). The remaining four, the candidates, varied in size from trial to trial and independently from each other. They were presented at the four diagonal polar angles at 3.92° eccentricity. 
The independent variable (the stimulus dimension used to manipulate each of the candidates) was the binary logarithm of the ratio of diameters for the target relative to the reference circles. The sizes of three of the candidate stimuli were drawn from a Gaussian distribution centered on zero (the size of the reference), and the fourth candidate was the correct target; i.e., it was set to zero. The standard deviation of the Gaussian distribution was 0.3 log units. 
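The sampling scheme described above can be sketched as follows (a Python illustration; the function name and defaults are ours, taken from the values given in the text):

```python
import numpy as np

rng = np.random.default_rng()

def make_trial_sizes(ref_diameter=0.98, sigma=0.3, n_candidates=4):
    """Draw candidate sizes for one MAPS trial (a sketch).

    Sizes are expressed as the binary logarithm of the
    candidate/reference diameter ratio, so 0 means "same size as
    the reference". One randomly chosen candidate is the exact
    physical match; the rest are drawn from a zero-centered
    Gaussian with SD 0.3 log units.
    """
    offsets = rng.normal(0.0, sigma, size=n_candidates)
    target = int(rng.integers(n_candidates))
    offsets[target] = 0.0
    diameters = ref_diameter * 2.0 ** offsets  # back to degrees of visual angle
    return offsets, diameters, target
```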
Procedure
Each trial started with 500 ms during which only a fixation dot (diameter 0.2°) was visible in the middle of the screen. Following this was a presentation of the stimulus array for 200 ms after which the screen returned to the fixation-only screen. We instructed participants to make their response by pressing the F, V, K, or M button on the keyboard corresponding to whichever of the four targets appeared most similar in size to the reference. After their response, a “ripple” effect over the target they had chosen provided feedback about their response. This constituted three 50-ms frames in which a circle increased in diameter from 0.49° in steps of 0.33° and in luminance. 
The color of the fixation dot also changed during these 150 ms to provide feedback about whether the behavioral response was “correct” in that they picked the target stimulus that physically matched the reference stimulus. We only provided feedback on correct trials to reduce the anxiety associated with large numbers of “incorrect” trials that are common in this task: The proportion of trials in which participants chose the “correct” target was typically around 45%–50%, well in excess of chance performance of 25%. 
Experiments were broken up into blocks of 20 trials. After each block, there was a rest break. A message on the screen reminded participants of the task and indicated how many blocks they had already completed. Participants initiated blocks with a button press. In the attentional cueing experiments, participants completed two sessions over two different days. 
Comparing tasks experiment
An MCS task (Laming & Laming, 1992) was conducted to compare perceptual biases measured with MAPS against those from this more traditional method. After participants completed the MAPS task (400 trials total), they completed the MCS task (960 trials total). The stimuli and trial sequence for MCS were similar to those in the MAPS experiment. However, the task was to compare the size of one of the four candidate stimuli in the quadrants with the reference and to indicate by button press which one was larger. In separate blocks, a message on the screen before each block commenced instructed participants which of the quadrants they had to judge. The size of the current target was chosen to be 0, ±0.05, ±0.1, ±0.2, ±0.3, ±0.5, or ±1 log units, with 80 trials for each of these 13 possible sizes. To approximate the conditions in the MAPS experiments, we used a Gaussian distribution with a standard deviation of 0.3 log units to choose the sizes of the three remaining circles (distracters). There were 80 blocks, and the whole range of 13 possible target sizes appeared in each block in a pseudorandomized order. 
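The MCS trial schedule (every target size once per block, in pseudorandom order) can be sketched as follows (a Python illustration; names are ours):

```python
import numpy as np

rng = np.random.default_rng()

# Target size offsets in log2 units, as listed in the text.
TARGET_SIZES = sorted(
    {s * sign for s in (0.05, 0.1, 0.2, 0.3, 0.5, 1.0) for sign in (-1, 1)}
    | {0.0}
)

def mcs_schedule(n_blocks=80):
    """Each block presents all 13 target sizes once, shuffled independently."""
    return [list(rng.permutation(TARGET_SIZES)) for _ in range(n_blocks)]
```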
Comparing Ebbinghaus experiment
The Ebbinghaus MAPS and MCS tasks used the same stimuli and procedure as the main task with a few minor differences. To measure the bias under the Ebbinghaus illusion, smaller blue inducer circles surrounded the reference circle (16 circles, diameter 0.10°, distance from reference 0.7°), and larger blue inducer circles surrounded the four candidate circles in the quadrants (four circles, diameter 1.72°, distance from target circles 2.5°). The candidate circles in this task had an eccentricity of 5.88° visual angle to allow room for the Ebbinghaus inducers. We gave participants the same instructions as with the MAPS (400 trials) and MCS (960 trials) tasks, and we told them to ignore the blue circles and judge the size of the white target circles. They completed all Ebbinghaus tasks on the same day. The MCS task took around three to four times longer (30–42 min) for participants to complete than the MAPS task (10–14 min). 
Attentional cueing experiment
In this experiment, a brief attentional cue preceded presentation of the stimulus array. This comprised two small dots (diameter 0.49°) presented at 120% and 83% of the eccentricity along the radial axis between fixation and the cued candidate location. The duration of the cue was 60 ms followed by 40 ms of fixation. There were five experimental conditions: one for cueing each of the four candidates and a baseline condition during which there was no attentional cue. We randomly interleaved these conditions over the course of the experiment. We recruited participants for two sessions on separate days (1,000 trials each session), and in each session, they performed one experimental run comprising 50 blocks. 
Analyses
Correlations
Both within-subject and between-subject correlations were calculated (as in Moutsiana et al., 2016). Within-subject correlations take into account only within-subject variance, comparing the individual visual field measures within each participant; they describe how similar the spatial pattern of an individual's visual field results is across measures. Between-subject correlations take into account only between-subject variance, averaging across the individual visual field values to measure the correlation between subjects' overall scores on different measures. Because this average collapses across the four separate locations, it can inform us only very generally about each individual's performance. 
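These two correlation measures can be sketched as follows (a Python illustration under our assumed data layout of one row per subject and one column per location; function names are ours):

```python
import numpy as np
from scipy import stats

def within_between_correlations(a, b):
    """Within- and between-subject Pearson correlations (a sketch).

    a, b : (n_subjects, n_locations) arrays of, e.g., bias estimates
    from two tasks.
    Within-subject: demean each subject's locations, then correlate
    the pooled residuals (only within-subject variance remains).
    Between-subject: correlate each subject's mean across locations
    (only between-subject variance remains).
    """
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a_within = (a - a.mean(axis=1, keepdims=True)).ravel()
    b_within = (b - b.mean(axis=1, keepdims=True)).ravel()
    r_within, p_within = stats.pearsonr(a_within, b_within)
    r_between, p_between = stats.pearsonr(a.mean(axis=1), b.mean(axis=1))
    return (r_within, p_within), (r_between, p_between)
```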
Model fitting
To estimate perceptual biases and uncertainties, we fit a model to predict a given participant's behavioral response in each trial (Figure 1C). At each of the four candidate stimulus locations, a Gaussian tuning curve modeled the output of a "neural detector" tuned to stimulus size at that location (expressed as the binary logarithm of the ratio between the candidate and reference circle diameters). The model fitted the peak location (μ) and dispersion (σ) parameters of the Gaussian tuning curves that minimized the prediction error across all trials, where the prediction error is the difference between the behavioral response and the model prediction. The predicted choice was determined by which of the four detectors (tuned to candidate size) produced the strongest output. The peak location thus estimated the perceptual bias: the size at which the candidate at a given location appeared the same as the reference. The dispersion estimated the uncertainty of behavioral responses (the reciprocal of sensitivity). 
Model fitting employed the Nelder–Mead simplex search optimization procedure (Lagarias, Reeds, Wright, & Wright, 1998). For each of the four target locations, we initialized the μ parameter as the mean stimulus value (size offset) whenever that given target location was chosen "incorrectly" (that is, instead of the target). We initialized the σ parameter as the standard deviation across all stimulus values when a given target location was chosen. The final model-fitting procedure, however, always used all trials regardless of whether the participant chose the "correct" target. We also measured response bias, quantified as the percentage of trials in which a particular location was chosen, irrespective of the stimulus at each location. The model of which of the four candidate locations was chosen on each trial j can be expressed as  
\begin{equation}
s_j = \arg\max_i \phi\left(\frac{\log\left[d_j(i)/\mu_i\right]}{\sigma_i}\right)
\end{equation}
where μi and σi are free parameters, controlling the perceptual bias and uncertainty specific to each location i, and ϕ(z) is the probability density function of a standard normal random variable z.  
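A minimal sketch of this model in Python, with candidate sizes and biases both expressed in log units so that the equation's log-ratio argument becomes a simple difference (function names, the zero/pooled seeding, and the log-σ parameterization are our simplifications of the seeding procedure described above):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def predicted_choices(x, mu, sigma):
    """Predicted response per trial: the detector with the strongest output.

    x : (n_trials, 4) candidate sizes in log2(candidate/reference) units
    mu, sigma : (4,) tuning peaks (biases) and dispersions (uncertainties)
    Detector output is the standard normal density of (x_i - mu_i) / sigma_i,
    following the paper's equation.
    """
    z = (x - mu) / sigma
    return np.argmax(norm.pdf(z), axis=1)

def fit_maps(x, choices):
    """Fit the eight-parameter model by minimizing prediction error
    with Nelder-Mead (a sketch; seeds simplified to zero biases and
    a pooled dispersion)."""
    def loss(params):
        mu, log_sigma = params[:4], params[4:]
        return np.mean(predicted_choices(x, mu, np.exp(log_sigma)) != choices)
    x0 = np.concatenate([np.zeros(4), np.full(4, np.log(x.std()))])
    res = minimize(loss, x0, method="Nelder-Mead")
    return res.x[:4], np.exp(res.x[4:])
```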
The MAPS analysis predicted participants' behavioral responses well above chance levels (25%). Model prediction accuracy in the comparing tasks experiment ranged between 40.5% and 70.5%. Importantly, the model-fitting procedure afforded, on average, a subtle improvement in prediction accuracy over a basic model using only the seed parameters for the tuning functions. We also tested a five-parameter model in which we initialized the μ parameters individually as above but initialized the σ parameter as the pooled dispersion across the four candidate locations. Comparing the two models, we found no consistent difference in prediction accuracy, with neither model showing an advantage over the other. Because the eight-parameter model estimates uncertainty at each location, which is of interest here, we use it below; the five-parameter model remains a viable alternative when location-specific uncertainty is irrelevant. 
MCS
For the MCS, we estimated the perceptual bias by calculating the proportion of trials for each target stimulus size at which participants reported seeing the target as larger than the reference. We fit a cumulative Gaussian function with three free parameters to these data: (a) the peak of the Gaussian, μ, to estimate the point of subjective equality (perceptual bias); (b) the standard deviation of the Gaussian, σ, to quantify the uncertainty; and (c) the amplitude of the Gaussian, β, to consider lapse rates. This curve-fitting was done using the same optimization procedure as the MAPS model-fitting procedure (Lagarias et al., 1998) to minimize the squared residuals of the fitted model. 
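This psychometric fit can be sketched as follows (a Python illustration; our parameterization lets the amplitude β scale the curve symmetrically about 0.5 to absorb lapses, which is one of several possible choices, and the function name is ours):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(sizes, p_larger):
    """Fit a cumulative Gaussian to 'target reported larger' proportions.

    Three free parameters: mu (point of subjective equality),
    sigma (uncertainty), and beta (amplitude, absorbing lapses).
    Squared residuals are minimized with Nelder-Mead, as in the
    MAPS model fit.
    """
    sizes = np.asarray(sizes, float)
    p_larger = np.asarray(p_larger, float)
    def sse(params):
        mu, log_sigma, beta = params
        pred = 0.5 + beta * (norm.cdf((sizes - mu) / np.exp(log_sigma)) - 0.5)
        return np.sum((pred - p_larger) ** 2)
    res = minimize(sse, x0=np.array([0.0, np.log(0.3), 1.0]),
                   method="Nelder-Mead")
    mu, log_sigma, beta = res.x
    return mu, np.exp(log_sigma), beta
```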
Results and discussion
We developed a new method for mapping the perceptual biases of visual stimuli presented simultaneously at four visual field locations. This affords an efficient measurement of spatial heterogeneity in perceptual experience that also takes into account that stimuli usually do not appear in isolation but in cluttered environments or as part of an ensemble. 
MAPS reliably detects perceptual biases
Recently published data using this same MAPS method (Moutsiana et al., 2016) explored the spatial heterogeneity of perceptual biases across the visual field, using the methods described above. Peripheral stimuli appeared smaller on average than the central reference, confirming earlier reports (Anstis, 1998; Bedell & Johnson, 1984; von Helmholtz, 1867). Importantly, analyzing the idiosyncratic patterns of perceptual biases within each participant revealed individual biases in size perception across the four tested visual field locations, similar to previous reports (Afraz et al., 2010; Schwarzkopf & Rees, 2013). These bias estimates were reliable across 2 days of testing and even across years between sessions (Moutsiana et al., 2016). 
Results from MAPS are consistent with traditional psychophysics
To test the validity of MAPS and explore if it was consistent with traditional psychophysics, we compared the perceptual biases estimated from MAPS with those measured by the MCS (Figure 2A). A within-subject correlation (calculating the correlation after first subtracting each individual's mean across the four locations) found that the visual field pattern of bias estimates from the MCS correlated strongly with those from MAPS (r = 0.65, p < 0.001). Similar results were found for the between-subjects correlation (taking only the mean across the four locations for each individual) although this was not statistically significant (r = 0.52, p = 0.173). Overall, the magnitude of the perceptual biases was smaller when using MAPS (MMCS = 0.148 vs. MMAPS = 0.028), t(7) = −9.23, p < 0.001. This confirms that the spatial pattern of biases was very similar regardless of the method used but that the magnitude of biases was reduced for MAPS compared to the traditional MCS. 
Figure 2
 
Correlations of individual perceptual bias obtained using MAPS and MCS. Each participant is shown in a different color. The colored lines depict individual correlations between the biases at the four locations. The black line is the between-subjects correlation taking only the mean across the four locations. (A) The original experiment and (B) the Ebbinghaus illusion stimuli. Insets show example stimuli from each experiment.
We further compared the spatial pattern of uncertainties, finding a significant correlation between MAPS and MCS for between-subjects variance (r = 0.82, p = 0.012), but not for within-subject variance (r = 0.12, p = 0.508). This suggests that although overall performance for a given participant was similar across tasks (a participant who performs well on MAPS also tends to perform well on MCS), the spatial pattern of uncertainties was not consistent across tasks. 
Conceptual replication (Ebbinghaus illusion)
We carried out a conceptual replication of this experiment, but instead of judging the size of isolated circles, we measured the strength of the Ebbinghaus illusion (Figure 2B). As without the illusion, the spatial pattern of perceptual biases (illusion strengths) measured with MAPS correlated strongly within subject with the MCS task results (r = 0.56, p = 0.010) but not between subjects (r = 0.10, p = 0.868). The uncertainties were not significantly correlated for either the within-subject variance (r = 0.32, p = 0.167) or the between-subjects variance (r = −0.26, p = 0.670). There was no significant difference in bias magnitudes between MAPS and MCS (MMCS = 0.065 vs. MMAPS = 0.028), t(4) = −0.53, p = 0.621, possibly owing to the smaller number of participants in this experiment. Thus, the results of this experiment were at least qualitatively similar to the first. 
These experiments demonstrate that a MAPS task is an efficient and effective alternative to traditional psychophysical methods for measuring both basic perceptual biases and illusory biases, such as the Ebbinghaus illusion. However, the basic perceptual biases are presumably mechanistically different from the bias induced by perceptual illusions (Moutsiana et al., 2016). Moutsiana et al. (2016) found that although isolated circles and illusion stimuli generally are perceived as smaller relative to the central reference when population receptive fields (pRFs) are large, larger V1 cortical surface area is associated with a weaker illusory modulation of perceived size (Moutsiana et al., 2016; Schwarzkopf & Rees, 2013; Schwarzkopf, Song, & Rees, 2011). 
Spatial attention modulates perceptual biases
A noteworthy result is that the magnitude of biases estimated by MAPS was reduced compared to the MCS. One possibility is that this reflects an actual perceptual effect related to differences in attentional deployment between the two tasks. Specifically, the more conservative biases observed with MAPS may be related to prior observations that judgments of multiple simultaneous items shift biases toward the set average (Ariely, 2001; Chong & Treisman, 2003; Parkes et al., 2001; Walker & Vul, 2014): MAPS requires participants to attend to four candidate locations simultaneously, whereas in MCS attention is always focused on a single target. This could explain the smaller biases found with MAPS than with MCS. We tested this possibility by presenting an attentional cue at a given location briefly before stimulus onset in the MAPS task (Figure 3A) and found results consistent with this interpretation. Perceptual biases (Figure 3B) were enhanced subtly for the cued location, F(3, 51) = 4.62, p = 0.006, with follow-up tests indicating subtly stronger biases for the cued location than for the counterclockwise target location, t(17) = 2.90, p = 0.010, or the clockwise target location, t(17) = 2.20, p = 0.042. However, the uncertainty (Figure 3C) was unaffected by attentional cueing, F(3, 51) = 0.77, p = 0.514. Similarly, the response bias (the frequency with which participants chose a given location; Figure 3D) was also unaltered by cueing, F(3, 51) = 0.06, p = 0.981. Taken together, this suggests that attentional cueing exerted a subtle effect on perceptual biases, enhancing the bias at the cued location and reducing it elsewhere, and that this effect was not due to changes in uncertainty or in the frequency with which participants chose these candidate locations. 
Figure 3
 
(A) The MAPS attention cueing task. In each trial, the stimulus array was preceded by a brief, uninformative cue at one of the four locations. (B) Effect of an attentional cue on perceptual biases across the visual field. Gray dots represent individual participants, with the mean shown by the black diamond. Bias increased at the cued location and subtly reduced at clockwise and counterclockwise locations. (C) Effect of an attentional cue on uncertainty. (D) Effect of an attentional cue on response bias.
Thus attentional deployment does indeed appear to influence perceptual biases. Attention has been shown to enhance relevant information by improving spatial resolution (Anton-Erxleben & Carrasco, 2013), reducing spatial uncertainty (Kay, Weiner, & Grill-Spector, 2015), and mitigating information loss (Sprague, Saproo, & Serences, 2015). Here, however, we show that attention increases perceptual visual field biases, which could arguably be regarded as a reduction in performance. Because stronger perceptual biases for size judgments have been linked to coarser spatial tuning of the visual cortex (Moutsiana et al., 2016), improved spatial resolution at attended locations should, if anything, reduce perceptual biases; instead, we observed more pronounced biases at the cued location. Attention may therefore instead (or additionally) affect the process by which later stages of visual processing read out signals from the early visual cortex. 
Attention could also partly explain the discrepancy in the magnitude of perceptual biases between MAPS and MCS. However, the effect of a brief attentional cue on perceptual biases is substantially weaker than the difference in bias magnitudes between tasks. The attentional effect of focusing on one candidate location for a whole block in MCS may not be the same as that of a brief exogenous cue. Moreover, even when attention was cued to one candidate location in the MAPS task, participants were nevertheless required to divide their attention across all four candidates. Finally, although the attentional cueing effect is presumably involuntary, the sustained attention in MCS is directed and under the participant's control. Nevertheless, it seems improbable that the difference in bias magnitudes between tasks is due solely to attention. 
It is difficult to disentangle perceptual biases from response or decision biases (Morgan et al., 2011). However, the design of the MAPS task makes it unlikely that decision factors skew the measurement of perceptual bias: Participants must choose between equivalent candidates, so they have to rely on their perceptual experience of the stimuli to make a decision. It also seems unlikely that there is a location-specific decision criterion or demand effect in this task: There is no obvious reason a participant should expect a stimulus to be larger or smaller at a particular location. This may explain why the spatial pattern of biases is so consistent between the tasks. Nevertheless, additional effects of decision factors cannot be ruled out, and future research should probe the respective roles of perceptual and decisional biases in the individual spatial heterogeneity of visual perception. 
Conclusion
We present a new method, MAPS, for measuring perceptual biases in individual observers. MAPS is a short psychophysical task that yields results comparable to MCS, with the added benefit of measuring individual visual field biases while taking into account that stimuli usually do not appear in isolation but in cluttered environments or as part of an ensemble. Our experiments thus demonstrate the efficiency of this new method for measuring individual perceptual biases across the visual field. Moreover, our attentional cueing experiment shows that MAPS can probe subtle modulations of perceptual experience that occur in the absence of any change in sensitivity or response bias. Finally, although MAPS was designed for efficiently mapping perceptual biases across multiple locations, the task could be adapted to measure perceptual biases or illusions when location is irrelevant. The nature of this design should help minimize the confounding influence of cognitive and decision-making factors when inferring a person's subjective perceptual experience. 
Data availability
All anonymized data and materials for this study are available at: http://doi.org/10.17605/OSF.IO/GHXZR
Acknowledgments
This work was supported by an ERC Starting Grant (310829) to DSS. We thank Joshua Solomon for helpful discussions. 
Commercial relationships: none. 
Corresponding author: Nonie J. Finlayson. 
Address: Experimental Psychology, University College London, London, UK 
References
Abbey, C. K., & Eckstein, M. P. (2002). Classification image analysis: Estimation and statistical inference for two-alternative forced-choice experiments. Journal of Vision, 2 (1): 5, 66–78, doi:10.1167/2.1.5. [PubMed] [Article]
Afraz, A., Pashkam, M. V., & Cavanagh, P. (2010). Spatial heterogeneity in the perception of face and form attributes. Current Biology: CB, 20 (23), 2112–2116, doi:10.1016/j.cub.2010.11.017.
Anstis, S. (1998). Picturing peripheral acuity. Perception, 27 (7), 817–825.
Anton-Erxleben, K., & Carrasco, M. (2013). Attentional enhancement of spatial resolution: Linking behavioural and neurophysiological evidence. Nature Reviews Neuroscience, 14 (3), 188–200, doi:10.1038/nrn3443.
Ariely, D. (2001). Seeing sets: Representation by statistical properties. Psychological Science, 12 (2), 157–162, doi:10.1111/1467-9280.00327.
Bedell, H., & Johnson, C. A. (1984). The perceived size of targets in the peripheral and central visual fields. Ophthalmic and Physiological Optics, 4 (2), 123–131.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Chong, S. C., & Treisman, A. (2003). Representation of statistical properties. Vision Research, 43 (4), 393–404, doi:10.1016/S0042-6989(02)00596-5.
Greenwood, J. A., Szinte, M., Sayim, B., & Cavanagh, P. (2017). Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proceedings of the National Academy of Sciences, USA, 114 (17), E3573–E3582.
Jogan, M., & Stocker, A. A. (2014). A new two-alternative forced choice method for the unbiased characterization of perceptual bias and discriminability. Journal of Vision, 14 (3): 20, 1–18, doi:10.1167/14.3.20. [PubMed] [Article]
Kay, K. N., Weiner, K. S., & Grill-Spector, K. (2015). Attention reduces spatial uncertainty in human ventral temporal cortex. Current Biology, 25 (5), 595–600, doi:10.1016/j.cub.2014.12.050.
Lagarias, J., Reeds, J., Wright, M., & Wright, P. (1998). Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM Journal on Optimization, 9, 112–147.
Laming, D., & Laming, J. (1992). F. Hegelmaier: On memory for the length of a line. Psychological Research, 54 (4), 233–239.
Li, R. W., Levi, D. M., & Klein, S. A. (2004). Perceptual learning improves efficiency by re-tuning the decision “template” for position discrimination. Nature Neuroscience, 7 (2), 178–183, doi:10.1038/nn1183.
Mareschal, I., Morgan, M. J., & Solomon, J. A. (2010). Cortical distance determines whether flankers cause crowding or the tilt illusion. Journal of Vision, 10 (8): 13, 1–14, doi:10.1167/10.8.13. [PubMed] [Article]
Morgan, M., Dillenburger, B., Raphael, S., & Solomon, J. A. (2011). Observers can voluntarily shift their psychometric functions without losing sensitivity. Attention, Perception, & Psychophysics, 74 (1), 185–193, doi:10.3758/s13414-011-0222-7.
Morgan, M., Melmoth, D., & Solomon, J. A. (2013). Linking hypotheses underlying class A and class B methods. Visual Neuroscience, 30 (5–6), 197–206, doi:10.1017/S095252381300045X.
Moutsiana, C., de Haas, B., Papageorgiou, A., van Dijk, J. A., Balraj, A., Greenwood, J. A., & Schwarzkopf, D. S. (2016). Cortical idiosyncrasies predict the perception of object size. Nature Communications, 7, 12110, doi:10.1038/ncomms12110.
Parkes, L., Lund, J., Angelucci, A., Solomon, J. A., & Morgan, M. (2001). Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4 (7), 739–744, doi:10.1038/89532.
Patten, M. L., & Clifford, C. W. G. (2015). A bias-free measure of the tilt illusion. Journal of Vision, 15 (15): 8, 1–14, doi:10.1167/15.15.8. [PubMed] [Article]
Schwarzkopf, D. S., & Rees, G. (2013). Subjective size perception depends on central visual cortical magnification in human V1. PLoS ONE, 8 (3), e60550, doi:10.1371/journal.pone.0060550.
Schwarzkopf, D. S., Song, C., & Rees, G. (2011). The surface area of human V1 predicts the subjective experience of object size. Nature Neuroscience, 14 (1), 28–30, doi:10.1038/nn.2706.
Sprague, T. C., Saproo, S., & Serences, J. T. (2015). Visual attention mitigates information loss in small- and large-scale neural codes. Trends in Cognitive Sciences, 19 (4), 215–226, doi:10.1016/j.tics.2015.02.005.
Szinte, M., & Cavanagh, P. (2011). Spatiotopic apparent motion reveals local variations in space constancy. Journal of Vision, 11 (2): 4, doi:10.1167/11.2.4. [PubMed] [Article]
von Helmholtz, H. (1867). LXIII: On integrals of the hydrodynamical equations, which express vortex-motion. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 33 (226), 485–512.
Walker, D., & Vul, E. (2014). Hierarchical encoding makes individuals in a group seem more attractive. Psychological Science, 25 (1), 230–235, doi:10.1177/0956797613497969.
Figure 1
 
(A) Idiosyncratic biases in size perception. Visual objects often appear in the presence of similar objects, for example, searching for the correct size of lid for a container. What is the neural basis for this judgment? (B) The MAPS task (Moutsiana et al., 2016). In each trial, observers fixated on the center of the screen and viewed an array of five circles for 200 ms. The central circle was constant in size, and the others varied across trials. Each frame here represents the stimulus from one trial. The arrow denotes the flow of time. Observers judged which of the circles in the four corners appeared most similar in size to the central one. (C) Analysis of behavioral data from the MAPS task. The behavioral responses in each trial were modeled by an array of four “neural detectors” tuned to stimulus size (expressed as the binary logarithm of the ratio between the target and the reference circle diameters). Tuning was modeled as a Gaussian curve. The detector showing the strongest output to the stimulus (indicated by the red arrows) determined the predicted behavioral response in each trial (here, the top right detector would win). Model fitting minimized the prediction error (in this example, the model predicted the actual behavioral choice correctly for 50% of trials) across the experimental run by adapting the mean and dispersion of each detector. (Panels B and C reused from Moutsiana et al., 2016).
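The analysis described in Figure 1C can be sketched in code. The following is an illustrative reconstruction under stated assumptions, not the authors' implementation: four "neural detectors" with Gaussian tuning over the binary logarithm of the target/reference diameter ratio, a winner-take-all rule predicting the choice on each trial, and the detector means (biases) and dispersions (uncertainty) fitted by minimizing the prediction error with the Nelder–Mead simplex algorithm (Lagarias et al., 1998). All function and variable names are hypothetical.

```python
# Illustrative sketch of the MAPS winner-take-all detector model (Figure 1C).
import numpy as np
from scipy.optimize import minimize

def predicted_choices(params, log_ratios):
    """Winner-take-all prediction for each trial.

    params: [mu_0..mu_3, sigma_0..sigma_3]; log_ratios: (n_trials, 4) array
    of log2(target diameter / reference diameter) at the four locations.
    """
    mu, sigma = params[:4], np.abs(params[4:]) + 1e-6
    # Gaussian tuning: response of each detector to the stimulus at its location
    responses = np.exp(-0.5 * ((log_ratios - mu) / sigma) ** 2)
    return np.argmax(responses, axis=1)

def prediction_error(params, log_ratios, choices):
    """Proportion of trials where the model predicts the wrong response."""
    return np.mean(predicted_choices(params, log_ratios) != choices)

def fit_maps(log_ratios, choices):
    """Fit detector means and dispersions by minimizing prediction error."""
    x0 = np.concatenate([np.zeros(4), np.full(4, 0.2)])  # unbiased start
    res = minimize(prediction_error, x0, args=(log_ratios, choices),
                   method="Nelder-Mead")
    return res.x[:4], np.abs(res.x[4:])  # biases (mu), uncertainty (sigma)
```

The fitted means then quantify the perceptual bias at each location, and the dispersions quantify uncertainty, as in the analyses reported above.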
Figure 2
 
Correlations of individual perceptual bias obtained using MAPS and MCS. Each participant is shown in a different color. The colored lines depict individual correlations between the biases at the four locations. The black line is the between-subjects correlation taking only the mean across the four locations. (A) The original experiment and (B) the Ebbinghaus illusion stimuli. Insets show example stimuli from each experiment.