Article | August 2014
Distinguishing bias from sensitivity effects in multialternative detection tasks
Devarajan Sridharan, Nicholas A. Steinmetz, Tirin Moore, Eric I. Knudsen
Journal of Vision, August 2014, Vol. 14(9), 16. https://doi.org/10.1167/14.9.16
Abstract
Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making.

Introduction
Decisions in the real world involve making a categorical judgment or choice based on careful evaluation of noisy sensory evidence. In addition to sensory evidence, behavioral biases contribute importantly to the decision-making process (Gold, Law, Connolly, & Bennur, 2008; Gold & Shadlen, 2007; Macmillan & Creelman, 2005). Biases may reflect an innate preference for a specific choice that manifests, for instance, as an idiosyncratic tendency for selecting one choice among many equally likely alternatives (Gold et al., 2008; Klein, 2001). Conversely, biases may be rapidly and reversibly induced with specific task manipulations. For instance, cueing the location of an upcoming stimulus, either explicitly with a spatial cue or implicitly by temporarily increasing the frequency of presentation at a particular location, can result in the observer (human or animal) developing a bias for selecting that location over other locations in the time span of a few trials (Carpenter & Williams, 1995; Hanks, Mazurek, Kiani, Hopp, & Shadlen, 2011; Mulder, Wagenmakers, Ratcliff, Boekel, & Forstmann, 2012). Systematic biases for specific choices (“choice biases”) confound the ability to evaluate the observer's sensitivity to sensory evidence. Hence, in studies of human and animal behavior, much effort is invested in the careful development of experimental designs and training protocols that minimize or train away biases although this approach may not always be practical. 
Theoretical frameworks provide a complementary approach to accounting for choice bias: They quantify it. Such frameworks are based on a testable model of the decision-making process and permit principled, quantitative estimation of the contribution of choice bias to the observer's responses. Among such theoretical frameworks, signal detection theory (SDT) is a simple, but powerful, decision-making framework that accounts for choice bias in binary choice tasks, such as the two-alternative forced choice (2-AFC) or Yes/No detection tasks (Green & Swets, 1966; Macmillan & Creelman, 2005). 
In binary choice Yes/No tasks, the experimenter seeks to measure an observer's perceptual sensitivity to detect a target stimulus at a particular location or to detect a target stimulus feature in the display. The observer is presented with a series of behavioral trials: The stimulus (or stimulus feature) is presented at a given location on a random subset of these trials and is absent in others. When the observer detects the stimulus, she/he reports it with a “Yes” response; otherwise, she/he reports a “No” response. 
SDT models the observer's perceptual decision in this binary choice (simple) detection task as the outcome of an inherently noisy process. In the SDT framework for the binary choice (Yes/No) task, the observer decides between the two, mutually exclusive events (was the stimulus present or not?) by weighing the relative strength of evidence for each. The decision is based on a latent random variable, the decision variable, whose mean depends on the strength of the stimulus and whose variance arises from the noisiness of the sensory evidence across trials (Green & Swets, 1966). In trials in which the decision variable exceeds a cutoff value, the observer reports having detected the stimulus (“Yes”). 
The cutoff value or “choice criterion” represents the observer's bias for choosing to report detection over no detection. When the observer is highly biased toward the “Yes” choice, she/he adopts a low value for the choice criterion, which manifests as a tendency to report having detected the stimulus even when no stimulus was presented (a high rate of “false alarms”). Conversely, when the observer is highly biased toward the “No” choice, she/he adopts a high criterion, which manifests as a conservative tendency to not report detection even in trials when the stimulus was presented (a high rate of “misses”). Having accounted for bias, the observer's “perceptual sensitivity” to detect the stimulus, an indicator of the strength of the perceived signal, is analytically estimated from the proportion of false alarms and misses based on assumptions about the nature of the decision variable distribution (Green & Swets, 1966). 
Now, consider the following scenario: An experimenter seeks to measure an observer's perceptual sensitivity for detecting a target stimulus at not one but multiple (two or more) locations within a single experimental session (Figure 1A). Such multialternative tasks are widely used in studies investigating the neural basis of perception, attention, or decision-making to determine whether the observer's sensitivity to detect a stimulus differs between a cued (or microstimulated or inactivated) location and other locations (Cavanaugh & Wurtz, 2004; Cohen & Maunsell, 2009; Ray & Maunsell, 2010; Sridharan, Ramamurthy, & Knudsen, 2013; Zenon & Krauzlis, 2012). The task design in such studies extends the conventional binary choice Yes/No detection task by presenting the target stimulus at different locations (cued vs. uncued) across interleaved trials, in addition to incorporating trials in which no target stimulus is presented (“catch” trials). The observer reports the location at which she/he perceived the stimulus, for instance, with a saccadic eye movement to that location (Figure 1A, top sequence). Such a response, termed a “Go” response, is analogous to the “Yes” response in a binary choice detection task except that in the multialternative task the observer is rewarded for making a “Go” response to the specific location at which the stimulus was presented. In case no stimulus was presented (catch trials), the observer is rewarded for not making a Go response to any location (Figure 1A, bottom sequence). The latter response alternative, termed a “NoGo” response, is analogous to the “No” response in the binary choice detection task (Figure 1A, lower). 
Figure 1
 
Multialternative detection task. (A) 2-ADC task. The observer initiates a trial by fixating on a zeroing dot. In some trials (“stimulus” trials, upper sequence), a target stimulus (here, a grating) is briefly presented at one of two potential locations (dashed black circles) on the screen. The observer is rewarded for detecting and indicating the location of the target with a saccade (blue line, “Go” response) to the appropriate response box (dashed yellow circles). In other trials (“catch” trials, lower sequence), no target is presented for a prolonged period following fixation. In these trials, the observer is rewarded for maintaining fixation on the zeroing dot (“NoGo” response) following the appearance of the response boxes. (B) m-ADC task. Following fixation of a central dot, the observer is presented with m (here, m = 4) oriented gratings. At a random time following stimulus onset, the display goes blank briefly (a few hundred milliseconds). Then, the four stimuli reappear. In some proportion of the trials, one of the four gratings changes in orientation (change trials), and in the remaining trials, none of the stimuli changes (catch trials). The observer is rewarded for saccading to the location of the change (change trials) or for maintaining fixation in trials when no change occurred (catch trials).
We refer to tasks that extend the Yes/No detection task to measure detection performance at multiple locations within a single experimental session as “multialternative detection tasks” (Middleton & Meter, 1955). Despite the considerable success of conventional binary choice signal detection models in accounting for choice bias in simple detection (Yes/No) tasks, they cannot be applied to multialternative detection tasks without fundamental modifications (see next section; also DeCarlo, 2012; Macmillan & Creelman, 2005, pp. 250–251).
Here, we propose the first analytical formulation for accounting for bias in multialternative detection tasks. We formulate the model in a multidimensional signal-detection framework and present numerical approaches for estimating perceptual sensitivity and choice bias from measured response probabilities. We demonstrate analytically that the model is identifiable and that the specification of the decision rule in the model is optimal in terms of maximizing success in such detection tasks. Finally, we validate the model empirically by successfully fitting previously published data from detection and discrimination tasks (García-Pérez & Alcalá-Quintana, 2010, 2011a) and identify alternate, or additional, sensory factors that could account for the observers' behavior in these tasks. Our model provides a powerful tool for quantifying the relative contributions of bias and sensitivity for neuroscience studies of attention and decision-making that employ multialternative tasks. 
Results
The multialternative detection task: Motivation for a multidimensional model
We present a multidimensional signal detection model that decouples choice bias from perceptual sensitivity in multialternative detection tasks (with catch trials). To facilitate development of the model, we choose a particular kind of simple detection task, a multiple alternative spatial detection task (Figure 1A), in which the observer must detect and report the location of a briefly flashed target stimulus that can occur at one (or none) of several potential spatial locations. The theory is also applicable to at least two other kinds of task designs: (a) spatial change detection tasks that require the observer to detect and report the location at which a change occurred in a stimulus feature, such as a change in orientation from a standard value (Figure 1B), and (b) feature-based detection tasks that require the observer to detect and identify the occurrence of stimuli with particular features (e.g., colors, directions of motion, tones of a particular pitch). The former task has been commonly employed in studies of visual attention (Cavanaugh & Wurtz, 2004; Cohen & Maunsell, 2009; Ray & Maunsell, 2010). For brevity, we will refer to such tasks as m-ADC tasks (the acronym stands for multialternative detection/change-detection tasks). 
We motivate the development of the multidimensional model for the two-alternative detection/change-detection (2-ADC) task by demonstrating why multiple, independent, one-dimensional binary choice models incorrectly specify or fail to fully specify behavior in this task. First, consider a binary choice (Yes/No) spatial detection task (Figure 2A). In this task, the stimulus is either presented at a location or not at all (catch). Conventional SDT models the binary (Yes/No) decision as a process of selecting one of two hypotheses (N: No stimulus or S: Stimulus present) based on noisy sensory evidence. The decision variable (Ψ) that encodes this sensory evidence is modeled as a Gaussian random variable with unit variance. The mean of the decision variable is specified as zero when no stimulus is presented and takes on a nonzero value, d, when a non-null stimulus is presented (Figure 2A). d, also termed the “perceptual sensitivity,” is determined by, and increases with, the strength of the presented stimulus. In a given trial, the observer chooses S if the decision variable exceeds a particular cutoff value, the “criterion” or c; such a specification permits optimizing a variety of objective functions, including maximizing success (proportion correct) in such tasks (for a detailed discussion, see Green & Swets, 1966, section 1.7). The well-known 2 × 2 stimulus–response contingency table for this type of task is shown in Table S1A (Supplemental Data).
Figure 2
 
Signal detection models for the multialternative detection task. (A) A simple detection (Yes/No) task modeled with a binary choice (one-dimensional) signal detection model. Black Gaussian: decision variable distribution when no stimulus was presented, p(Ψ|N); red Gaussian: decision variable distribution when a stimulus was presented, p(Ψ|S). Red shading: Hit rate; hatched region: False-alarm rate; d: perceptual sensitivity for detection; c: choice criterion for a Yes response. (B) Performance in a 2-ADC task modeled with two one-dimensional binary choice models. Top row: Behavior modeled as a two-stage decision with a binary one-dimensional model for each stage. In the first stage, the observer decides if a stimulus was presented at all (N vs. S1 or S2), based on the value of a decision variable (Ψ) as in the conventional Yes/No task. In the next stage, the observer decides whether the stimulus was presented at location 1 or location 2 based on the value of a different decision variable (Ψ*) as in the conventional 2-AFC task (see text for details). Bottom row: Behavior modeled with two binary choice (Yes/No) one-dimensional models, one at each potential target location. Decisions are based on independent decision variables (Ψ1, Ψ2), sensitivities (d1, d2), and criteria (c1, c2) at each location. This is a mis-specified model for the 2-ADC task (see text for details). Hatched region: False-alarm rate; gray shading: miss rate. (C) Two-dimensional signal-detection model for the 2-ADC task. The decision is based on a bivariate decision variable Ψ whose components (Ψ1 and Ψ2) encode sensory evidence at each stimulus location and are represented along orthogonal axes in a two-dimensional decision space. Decision variable components are independently distributed Gaussians. Black circle: contour of the joint distribution of the decision variable components for no stimulus at either location (noise distribution). Red and blue circles: contour of the joint distribution of the decision variable components for a stimulus at location 1 or location 2, respectively (signal distributions). Linear decision boundaries (thick black lines) demarcate the domains of decision space for each potential response or choice; these belong to the family of optimal decision surfaces for this model (see text for details). The integral of the decision variable distribution within each region represents the probability of the corresponding response: NoGo (Y = 0, gray), Go response to location 1 (Y = 1, red) or to location 2 (Y = 2, blue). Marginal distributions of each decision variable component are shown alongside each axis.
Based on the hit rate (HR)—the proportion of trials in which the observer correctly reported a detection when a stimulus was presented—and the false-alarm rate (FA)—the proportion of trials in which the observer incorrectly reported a detection when no stimulus was presented—SDT provides a simple, one-dimensional formalism for estimating d and c as, respectively, d̂ = Φ⁻¹(HR) − Φ⁻¹(FA) and ĉ = −Φ⁻¹(FA) (where Φ⁻¹ represents the probit function, the inverse cumulative distribution function associated with the standard normal distribution). As mentioned in the Introduction, c is a measure of the observer's bias for reporting a Yes versus a No response.
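For readers who wish to compute these quantities directly, the following is a minimal sketch (our own illustrative code, not from the original study) of the standard Yes/No estimators quoted above, using SciPy's inverse normal CDF (probit); the function name is ours.

```python
# Minimal sketch of the Yes/No estimators above (illustrative code only).
from scipy.stats import norm

def yes_no_sdt(hit_rate, fa_rate):
    """Return (d_hat, c_hat) for a binary Yes/No detection task."""
    d_hat = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # sensitivity estimate
    c_hat = -norm.ppf(fa_rate)                      # criterion (bias) estimate
    return d_hat, c_hat

# Example: HR = 0.85 and FA = 0.10 give d_hat ~= 2.32 and c_hat ~= 1.28.
print(yes_no_sdt(0.85, 0.10))
```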
Consider next the 2-ADC task in which the stimulus can be presented at one of two locations in addition to not being presented at all (Figure 1A). The 3 × 3 contingency table for this task is shown in Table S1B (Supplemental Data). For this task, the decision must be made among three hypotheses: S1, stimulus at location 1; S2, stimulus at location 2; or N, no stimulus at either location. 
Let us propose that the observer adopts the following two-stage strategy in performing this task. In the first stage, the observer decides whether a stimulus is presented at all (at either location) or not, i.e., the observer chooses between giving a Go (Yes) or NoGo (No) response based on the relative strengths of evidence for the hypotheses N versus S1 or S2 (Figure 2B, top left) as in a conventional Yes/No task. In the second stage, the observer decides whether a stimulus was presented at location 1 or location 2, i.e., between giving a Go response to location 1 versus 2 based on the relative strengths of evidence for the hypotheses S1 versus S2 (Figure 2B, top right) as in a conventional 2-AFC task. The two binary-choice one-dimensional models that capture this decision process are shown in Figure 2B (top). 
Does such a model fully specify all stimulus–response contingencies for the 2-ADC task? During catch (N) trials, the observer is free to give Go responses (false alarms) to either location 1 or location 2 (Table S1B, last row). However, this model does not specify these response contingencies individually; rather, it only specifies the aggregate of the observer's false alarms (Go responses) to both locations during catch trials (Figure 2B, top left, hatched). Conversely, the observer may give different proportions of “miss” (NoGo) responses when stimuli are presented at location 1 versus at location 2 (Table S1B, last column). Again, this model does not specify responses to these contingencies individually but only specifies the aggregate of the observer's miss rates to stimuli presented at either location (Figure 2B, top left, gray shading). 
It is tempting, then, to conceive of a model in which the observer solves the task as independent Yes/No tasks with an independent binary-choice model (independent decision variable distributions and independent d and c) at each location (Figure 2B, bottom row; Yeshurun, Carrasco, & Maloney, 2008). This model does indeed specify, separately, the FAs (during catch trials) for each location (Figure 2B, bottom row, hatched) as well as individual miss rates for each stimulus event (Figure 2B, bottom row, gray shading). However, such a model is not sufficient to model behavior in this task. For example, the model specifies that, in each trial, the observer gives a Go response to a location at which the decision variable (Ψ1 or Ψ2) exceeds the criterion. But what if the decision variables (Ψ1 and Ψ2) were to exceed their respective criteria (c1 and c2) at both locations in a particular trial? Go responses cannot be made to more than one location in a given trial. It is possible that, under these conditions, observers respond with a random “guess” at one of the two locations. Whatever the case, this model is insufficient, and a more elaborate framework is required, for instance, to model the observer's guessing strategy. 
These examples illustrate why independent, one-dimensional binary choice signal detection models are insufficient for modeling behavior in m-ADC tasks.
A two-dimensional signal detection model for the 2-ADC task
We develop a multidimensional model, first, for a two-alternative (change) detection task (Figure 1) and, in the next section, generalize the model to a task with several alternatives (m > 2). We illustrate the model with a stimulus-detection task, such as the one shown in Figure 1A. However, the model is applicable, with a simple translation of the origin of the coordinate axes (see next), to change detection tasks, such as the one shown in Figure 1B. We describe the model verbally below and then provide an analytical formulation. 
Our two-dimensional signal detection model specifies a bivariate decision variable Ψ, whose components encode sensory evidence at each location, k, along orthogonal decision variable axes (also called “perceptual dimensions”) in a two-dimensional decision space (also called a “perceptual space”; Figure 2C). When no stimulus is presented (catch trials), the distribution of Ψ, given by the joint distribution of the two decision variable components, is centered at the origin with equal variance along each axis (Figure 2C, black; noise distribution). A stimulus presented at a particular location results in a “signal” distribution whose mean lies along the decision axis for that location (Figure 2C, red or blue). The value of this mean of the signal distribution at each location, k, determined by the strength of the stimulus at that location and measured in units of noise standard deviation along the corresponding dimension, is defined as the perceptual sensitivity (dk). 
The model posits that, while choosing a response, the observer employs an independent choice criterion (ck) for each location: In each trial, a response is made to the location at which the decision variable component exceeds the (respective) choice criterion. A difference in criteria between the two locations gives rise to a choice bias (relative preference) for one location over the other. If decision variable components at both locations exceed their respective criteria, the response is made to the location at which the difference between the decision variable component and the corresponding choice criterion was the greatest. If no decision variable component exceeds its respective criterion, the observer gives a NoGo response (Figure 2C, gray shaded region). In this model, the response probability for each stimulus event is the proportion (integral) of the corresponding joint distribution within the respective region. Thus, this two-dimensional 2-ADC model fully specifies each stimulus-response contingency for the 2-ADC task (Table S1B, Supplemental Data). 
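To make the verbal description above concrete, here is a small Monte Carlo sketch (our own illustrative code; all names are hypothetical) that generates responses according to the 2-ADC decision rule, assuming the independent, unit-variance Gaussian decision variable components and fixed per-location criteria described above.

```python
# Monte Carlo sketch of the 2-ADC decision rule (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_2adc(d, c, stimulus_location, n_trials=10000):
    """d, c: length-2 sensitivities and criteria.
    stimulus_location: 0 for a catch trial, 1 or 2 for a stimulus trial.
    Returns observed proportions of [NoGo, Go-to-1, Go-to-2] responses."""
    mean = np.zeros(2)
    if stimulus_location in (1, 2):
        mean[stimulus_location - 1] = d[stimulus_location - 1]
    psi = rng.normal(loc=mean, scale=1.0, size=(n_trials, 2))  # decision variables
    margin = psi - np.asarray(c, dtype=float)                  # Psi_i - c_i
    go = margin.max(axis=1) > 0               # does any component exceed its criterion?
    choice = np.where(go, margin.argmax(axis=1) + 1, 0)        # largest margin wins
    return np.bincount(choice, minlength=3) / n_trials

# Example: equal sensitivities, but a lower criterion (greater bias) toward location 1.
print(simulate_2adc(d=[1.5, 1.5], c=[0.5, 1.0], stimulus_location=2))
```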
2-ADC model formulation
We formulate a model for the 2-ADC task, first, considering the case in which a stimulus of a single strength is presented at each location. In the next section, we extend this formulation to a multialternative task in which, additionally, the stimulus at each location is free to vary in strength. We build upon a recently developed latent variable formulation (DeCarlo, 2012) that involves specifying a structural model of the observer's perceptual sensitivity for detecting the presented stimulus and a decision rule that models the effect of choice bias on the observer's response. In the Discussion, we analyze the assumptions inherent in this formulation and discuss potential extensions. 
We denote the observer's response with the variable Y: Y = i indicates that the observer chose to respond at location i (Go response) whereas Y = 0 indicates that the observer gave a NoGo response. Similarly, we denote the stimulus event with the variable X whose components Xi denote where the event occurred: Xi = 1 indicates that a stimulus was presented at location i. We further stipulate that no more than one stimulus be presented in a given trial, a common practice in psychophysics tasks of perception and attention (see Discussion). Thus, ||X||1 = Σi Xi = 1 (stimulus trial) or 0 (catch trial).
The structural model for the 2-ADC task posits independently distributed decision variables Ψi for each of the two locations and specifies how these distributions change with each stimulus event: Ψi = di Xi + εi, i ∈ {1, 2} (Equation 1), where Ψi denotes the decision variable that encodes sensory evidence at location i, εi is a random variable that represents the distribution of Ψi when Xi = 0, and di is the perceptual sensitivity, an indicator of the strength of the perceived signal when a stimulus was presented at location i (elaborated below).
The joint distribution of Ψ1 and Ψ2 when a stimulus was presented (stimulus trials, ||X||1 = 1) is termed a “signal” distribution whereas the joint distribution of the Ψi when no stimulus was presented (catch trials, ||X||1 = 0) is termed the “noise” distribution; the latter distribution is identical with the joint distribution of the εi. In line with conventional SDT for a binary choice stimulus-detection task, we assume that the noise distribution along each dimension is unit normal, i.e., εi ~ 𝒩(0,1). We note that the assumption of Gaussian distributions is not necessary for the model and the results presented here (except for the demonstration of model optimality).
di represents the change in the expected value of Ψi when a stimulus is presented at location i versus when no stimulus is presented; in other words, di = E(Ψi|Xi = 1) − E(Ψi|Xi = 0). di, measured in the units of noise standard deviation (unity in conventional SDT), determines the amount of the overlap (or lack thereof) between the “signal” distribution when a stimulus was present at location i (Xi = 1), and the noise distribution. Hence, di is termed the perceptual sensitivity associated with detecting a stimulus at location i and is determined by the strength of the stimulus at that location.
In line with conventional SDT, the 2-ADC structural model posits that a stimulus alters the mean of each Ψi (additively) without altering its variance or higher moments. Thus, the distribution of each Ψi is Gaussian with unit variance. If the Ψi distributions have unequal variances across the different locations, the 2-ADC structural model with unit normal distributions (Equation 1) can be readily recovered by scaling each Ψi by its respective standard deviation. 
In a more compact, but entirely equivalent, formulation, each Ψi can be considered a component of a bivariate random variable (Ψ) represented in a two-dimensional, Cartesian decision space (such as that shown in Figure 2C). Henceforth, in describing the model we will interchangeably refer to the Ψi as “decision variables” or “decision variable components” with the understanding that in either case these represent the univariate (scalar) component variables that constitute the bivariate (vector) decision variable (Ψ). 
The 2-ADC decision rule extends the one-dimensional SDT decision rule by specifying two choice criteria, one for each location: Y = 1 if Ψ1 > c1 and Ψ1 − c1 > Ψ2 − c2; Y = 2 if Ψ2 > c2 and Ψ2 − c2 > Ψ1 − c1; Y = 0 if Ψ1 ≤ c1 and Ψ2 ≤ c2 (Equation 2).
Thus, the observer makes a response at location i when the value of the decision variable Ψi exceeds choice criterion ci. If the values of Ψi exceed the criterion at both locations, then the observer responds to the location with the larger difference between the decision variable and the (respective) criterion value (larger Ψi − ci). On the other hand, if Ψi values fall below the choice criterion at every location, then the observer makes a NoGo response.
The decision rule is depicted in Figure 2C (thick black lines). In a later section, we demonstrate how this rule can be derived from optimal decision theory (for the more general m-alternative case). These choice criteria ci (Figure 2C) constitute an SDT measure of bias. The relative value of the criteria between locations indicates the magnitude of the bias: A lower choice criterion at a location corresponds to a greater choice bias for that location. The analytical formulation of the 2-ADC decision rule is, arguably, more complex than that of related models that incorporate NoGo responses (Ashby & Townsend, 1986; García-Pérez & Alcalá-Quintana, 2010). Consequently, the partitioning of decision space and the analytical treatment of bias represent fundamentally novel aspects of the 2-ADC model. 
In order to measure the contribution of bias to behavioral responses, an analytical relationship must be formulated between the criteria, sensitivities, and response probabilities. The structural model and decision rule permit establishing such a relationship. 
Here, we summarize the dependence of response probabilities on sensitivities and criteria; the detailed derivation is provided in the Methods.
The following system of equations constitutes the 2-ADC model:
p(Y = 1|X) = ∫_{c1}^{∞} ϕ(Ψ1 − d1X1) Φ(Ψ1 − c1 + c2 − d2X2) dΨ1
p(Y = 2|X) = ∫_{c2}^{∞} ϕ(Ψ2 − d2X2) Φ(Ψ2 − c2 + c1 − d1X1) dΨ2
p(Y = 0|X) = Φ(c1 − d1X1) Φ(c2 − d2X2) (Equation 3)
where p(Y = i|X) represents the conditional probability of a Go response to location i (i ∈ {1, 2}) for each stimulus event X; p(Y = 0|X) represents the conditional probability of a NoGo response for each stimulus event; and ϕ and Φ represent, respectively, the probability density and the cumulative distribution functions of the unit normal distribution.
These equations represent the response probabilities for the nine stimulus–response contingencies shown in Table S1B (Supplemental Data). Only six of these probabilities (two in each row of the table) are independent. The three other response probabilities (one in each row) are not free to vary as all responses (Go and NoGo) are mutually exclusive and exhaustive. Thus, in the 2-ADC model, there is an excess of independent observations (six) relative to the number of parameters (four: {d1, d2, c1, c2}) with two degrees of freedom to test the goodness of fit of the model. 
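The Equation 3 probabilities can also be evaluated numerically. The sketch below is our own illustration (function and variable names are ours), obtained by integrating over the decision variable at the responded location under the unit-variance Gaussian assumptions above.

```python
# Our own sketch of evaluating the Equation 3 response probabilities numerically.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def p_response_2adc(d, c, X):
    """d, c: sensitivities and criteria (length 2); X: stimulus event, e.g. (1, 0).
    Returns [p(Y=0|X), p(Y=1|X), p(Y=2|X)]."""
    d, c, X = (np.asarray(v, dtype=float) for v in (d, c, X))
    mu = d * X                                 # mean of each decision variable component
    p_nogo = norm.cdf(c[0] - mu[0]) * norm.cdf(c[1] - mu[1])
    def p_go(i, j):
        # integrate over Psi_i from c_i upward, requiring Psi_i - c_i > Psi_j - c_j
        integrand = lambda psi: norm.pdf(psi - mu[i]) * norm.cdf(psi - c[i] + c[j] - mu[j])
        return quad(integrand, c[i], np.inf)[0]
    return [p_nogo, p_go(0, 1), p_go(1, 0)]

# Example: stimulus at location 1 with a bias (lower criterion) toward location 1;
# the three probabilities sum to 1.
probs = p_response_2adc(d=[2.0, 2.0], c=[0.5, 1.0], X=[1, 0])
print(probs, sum(probs))
```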
Consider the effect of varying sensitivities and criteria on response probabilities in the 2-ADC model. Each of the nine response probabilities in the model (Equation 3) is a function of the four parameters: the criterion and sensitivity at each of the two locations. Thus, each response probability constitutes a surface in four-dimensional parameter space ({di, ci}, i ∈ {1, 2}). To facilitate representation, we examined a pair of two-dimensional subspaces by varying the criteria holding the sensitivities constant (parameter values in Table S2A, Supplemental Data) and vice versa. In line with conventional SDT, noise was assumed to be normally distributed with zero-mean and unit variance. The task specification requires that no more than one stimulus be presented in a given trial. This permits us to employ the following notational shorthand for the response probabilities: p(Y = i|Xj = 1) = p^i_j, where the superscript denotes the response location and the subscript denotes the stimulus location.
Figure S1A (Supplemental Data) illustrates the effect of varying criterion ci at each location (i ∈ {1, 2}) on the response probabilities at a particular location, say, location 1. The following general trends are apparent from the figure: A higher choice criterion at a location i (lower bias toward location i) reduces the probability of response at that location (p^i_k) and enhances the probability of response at the opposite location (p^j_k, j ≠ i) regardless of where the stimulus was presented (i, j, k ∈ {1, 2}). Also apparent is the effect of sensitivity (di) on response probabilities: Greater sensitivity to a stimulus at a location enhances the HR at that location (Figure S1B, red) and reduces the probability of a false alarm (incorrect response) at the opposite location (Figure S1B, blue).
A formulation identical to Equation 3 suffices to model behavior in change-detection tasks, such as the one shown in Figure 1B. Whereas in a stimulus-detection task E(Ψi|Xi = 0) = 0, so that di is simply equal to E(Ψi|Xi = 1), in a change-detection task E(Ψi|Xi = 0) equals the mean of the decision variable distribution for the standard stimulus at location i (denote it μi), so that di = E(Ψi|Xi = 1) − μi. In other words, for the change-detection task, the di and ci must be measured with the origin of the coordinates at the center of the decision variable distribution for the standard stimulus (Figure 2C, black distribution). For simplicity of illustration, the formulation henceforth will be based solely on the stimulus-detection task (e.g., Figure 1A) with the understanding that the same logic and analogous equations are readily applied to change-detection tasks with an appropriate translation of the origin of the coordinate axes.
Generalization to the m-ADC task
The m-ADC task permits more than two Go response alternatives along with the NoGo response alternative. We formulate the model for this task incorporating, in addition, the potential for stimuli of various strengths to be presented at each location. 
m-ADC model formulation
The formulation of the m-ADC model is conceptually similar to that of the 2-ADC model. The key difference is that perceptual sensitivity d is now defined as a function of stimulus strength: Stronger, more salient stimuli are more reliably detected because the respective signal distribution is further removed from the noise distribution (higher d), resulting in less overlap between the signal and noise distributions. The psychophysical function describes the variation of perceptual sensitivity, d, with stimulus strength. Here, we relate the psychophysical function to the psychometric function, which describes the variation in the observer's response proportions p with stimulus strength in the m-ADC model. 
In order to account for the variation of perceptual sensitivity (d) with stimulus strength, we specify the m-ADC structural model as follows: Ψi = di(ξi) + εi, i ∈ {1, 2, … m} (Equation 4), where ξi represents the stimulus strength at location i (e.g., contrast in Figure 1A or orientation change magnitude in Figure 1B), the psychophysical function di(ξi) describes variation of sensitivity at location i with the stimulus strength at that location, and εi ~ 𝒩(0,1). For ease of illustration, we choose ξi to represent the contrast of the stimulus. In this exemplar case, our theory relates response probabilities to the well-known psychophysical function of stimulus contrast.
As with the 2-ADC model, each Ψi in the m-ADC model can be considered an independent component of a multivariate (random) decision variable (Ψ) represented in a multidimensional decision space. In addition, the assumption of orthogonality (independence) among the Ψi implies that the covariance matrix of this decision variable is a diagonal matrix. 
The m-ADC decision rule is defined as follows: Y = i if Ψi > ci and Ψi − ci = max_j (Ψj − cj); Y = 0 if Ψj ≤ cj for all j (Equation 5).
Thus, the observer gives a Go response to the location at which the value of the decision variable exceeds the choice criterion, and at which the difference between the decision variable value and the corresponding choice criterion is maximal. If the value of the decision variable does not exceed the choice criterion at any location, the observer gives a NoGo response. 
We posit that the observer employs a fixed criterion, ci, at each location i that is independent of (does not vary with) stimulus strength. Such an assumption is plausible for task designs in which stimulus strength is varied pseudorandomly across trials so that the observer cannot adjust her/his criterion systematically based on foreknowledge of stimulus strength. 
As before, the structural model and decision rule permit establishing the relationship between sensitivity, criteria, and m-ADC response probabilities (derived in the Methods):
p(Y = i|ξ) = ∫_{ci}^{∞} ϕ(Ψi − di(ξi)) ∏_{j≠i} Φ(Ψi − ci + cj − dj(ξj)) dΨi, i ∈ {1, 2, … m}
p(Y = 0|ξ) = ∏_j Φ(cj − dj(ξj)) (Equation 6)
where ξ = (ξ1, ξ2, … ξm) denotes a stimulus event with its ith component representing the contrast of the stimulus presented at location i, p(Y = i|ξ) represents the psychometric function, the conditional probability of Go responses to location i for each stimulus event (ξ), and p(Y = 0|ξ) represents the psychometric function of a NoGo response for each stimulus event. For the m-ADC task, as for the 2-ADC task, we specify that the stimulus is presented at no more than one location in a given trial so that, at most, one ξi is nonzero.
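As an illustration of how Equation 6 generalizes to an arbitrary number of locations, the following sketch (our own code, not the authors'; the linear d(ξ) used in the example is purely hypothetical) computes the full set of m-ADC response probabilities for a given psychophysical function, criteria, and stimulus event.

```python
# Our own sketch of the general m-ADC response probabilities (Equation 6).
# d_of_xi is any psychophysical function with d_of_xi(0) = 0.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def p_response_madc(d_of_xi, c, xi):
    """c: criteria (length m); xi: stimulus strengths (length m, at most one nonzero).
    Returns [p(Y=0|xi), p(Y=1|xi), ..., p(Y=m|xi)]."""
    c = np.asarray(c, dtype=float)
    mu = np.array([d_of_xi(x) for x in xi])   # mean of each decision variable component
    m = len(c)
    p_nogo = float(np.prod(norm.cdf(c - mu)))
    def p_go(i):
        others = [j for j in range(m) if j != i]
        integrand = lambda psi: norm.pdf(psi - mu[i]) * np.prod(
            [norm.cdf(psi - c[i] + c[j] - mu[j]) for j in others])
        return quad(integrand, c[i], np.inf)[0]
    return [p_nogo] + [p_go(i) for i in range(m)]

# Example: m = 4 locations, a stimulus of strength 1.5 at location 3, linear d(xi) = xi.
print(p_response_madc(lambda x: x, c=[1.0, 1.0, 0.8, 1.0], xi=[0, 0, 1.5, 0]))
```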
The m-ADC model contains nSm² + m independent observations (assuming nS stimulus levels at each location) from which the m + nSm parameters, corresponding to the m criteria and the m sensitivities for each of the nS stimulus strengths, must be estimated. Even in the case of only a single stimulus level at each location (nS = 1), there are at least m² − m degrees of freedom to evaluate goodness of fit for all m ≥ 2.
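As a concrete check of this counting: with m = 4 locations and a single stimulus level at each (nS = 1), the contingency table provides nSm² + m = 20 independent response probabilities, the model has m + nSm = 8 parameters, and the difference leaves 12 = m² − m degrees of freedom.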
It is often of interest to understand how an experimental manipulation, such as cueing a particular location for attention, affects the underlying psychophysical function (d(ξ)): Does the manipulation scale, shift, or change the slope of the psychophysical function (Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010; Lee & Maunsell, 2009; Reynolds & Heeger, 2009)? A parametric form of the psychophysical function, which provides an analytical relationship between sensitivity d, and stimulus contrast, ξ, facilitates such an analysis. Sigmoidal functions, such as the hyperbolic ratio function, as well as linear or power functions are all candidate psychophysical functions. 
For illustration, we choose the three-parameter hyperbolic ratio (or Naka-Rushton) function: d(ξ) = dmax ξⁿ / (ξⁿ + (ξ50)ⁿ). The parameters of this function, dmax, ξ50, and n (which we call psychophysical parameters) correspond to the asymptotic value, contrast at 50% of asymptotic value, and slope of the psychophysical function, respectively. Altering each parameter in turn scales (dmax), shifts (ξ50), or changes the slope (n) of the psychophysical function.
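A direct transcription of this parameterization (our own minimal sketch; the parameter values in the usage comment are arbitrary) is:

```python
# Minimal sketch of the hyperbolic ratio (Naka-Rushton) psychophysical function
# discussed above; parameter names follow the text, example values are arbitrary.
def naka_rushton(xi, d_max, xi_50, n):
    """Sensitivity d as a function of stimulus strength xi; d(0) = 0, d -> d_max."""
    return d_max * xi**n / (xi**n + xi_50**n)

# This parameterized d(xi) can be supplied to the m-ADC probability sketch above, e.g.
# p_response_madc(lambda x: naka_rushton(x, d_max=3.0, xi_50=0.2, n=2.0), c, xi).
```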
With the psychophysical function thus parameterized, the number of parameters reduces to 4m, corresponding to the three psychophysical parameters and one criterion at each of the m locations. Thus, the number of degrees of freedom is nSm² − npm, where np is the number of parameters characterizing the psychophysical function (np = 3 for the hyperbolic ratio function).
Figure S2 (Supplemental Data) depicts the effect of varying each psychophysical parameter and choice criterion on the psychometric functions at location 1, p(Y = i|ξ1) (parameter values in Table S3A, Supplemental Data). For values of the parameters that do not saturate the response probabilities, the effect of varying the psychophysical parameters dmax, ξ50, and n on the psychometric functions (Figure S2A through C) is similar to the effect of the respective parameter on the psychophysical function, d(ξ), viz. scaling, shift, and slope change (Figure S2A through C, insets). On the other hand, altering each response criterion (ci), which, by definition, has no impact on the psychophysical function, alters the psychometric function in complex ways: The effects include apparent scaling, shifting, and/or slope changes (Figure S2D through E). However, the proportion of responses increases (across all ξ) with decreasing criterion at that location and with increasing criterion at the opposite location, consistent with the monotonic trends noted before (Figure S1A, Supplemental Data). 
Parameter estimation, identifiability, and optimality
In the 2-AFC task, perceptual sensitivity and choice criteria are readily estimated analytically as these quantities occur as linear terms of the argument of an invertible probit function (Green & Swets, 1966). Moreover, the specification of a criterion (or cutoff value) in the 2-AFC model is Bayes optimal in terms of maximizing reward or the proportion of correct responses (Luce, 1963). 
On the other hand, the potential for a stimulus event at one of multiple locations (m > 2) and catch trials renders the m-ADC model multidimensional and raises several challenges. First, in this multidimensional SDT model, the system of Equation 6 is not readily inverted (analytically) to yield model parameters. Thus, given a set of experimentally observed m-ADC response probabilities (e.g., contingency table, Table S1B), is it possible to obtain estimates of the underlying perceptual sensitivities and choice criteria that generate these response probabilities? Second, can one guarantee model identifiability so that a given set of response probabilities can be produced by only one set of parameters in the model? Finally, can one show that the specification of independent criteria at each location (linear, intersecting decision surfaces; Figure 2C) constitutes an optimal decision rule? We addressed the first of these challenges (parameter estimation) by developing and extending numerical approaches noted in a recent study (DeCarlo, 2012), described next. We addressed the remaining two challenges (demonstrating model identifiability and optimality) with analytical approaches, described subsequently. 
Parameter estimation
We employed numerical (maximum likelihood and Bayesian) methods to estimate model parameters (sensitivities and criteria) from the response probabilities. The additional degrees of freedom in the m-ADC model permit the possibility that no single set of parameters satisfies all of the equations, rendering it necessary to employ optimization approaches. 
We demonstrate parameter recovery with the 2-ADC model by providing simulated data as input to the numerical algorithms in lieu of experimental data; the procedure can be readily extended to the m-ADC case. 
First, we verified that parameters could be estimated in a simulated task with a single stimulus level at each location. Simulated response counts (N = 4,000 trials from 20 experimental blocks) were generated, based on probabilities computed from Equation 3 with a prespecified set of criteria and sensitivities (Table S2A, Supplemental Data). The stimulus–response contingency table for these response counts is shown in Table S2B (details provided in the Methods). 
With these simulated data, we attempted to recover the underlying sensitivities and criteria based on a maximum likelihood estimation (MLE) approach. For various initial guesses (Figure 3A through B, colored diamonds), the MLE algorithm recovered identical estimates of the four parameters (Figure 3A through B, black circle) that closely matched the original parameters used in the simulation (Table S2C, Supplemental Data). Similar results were obtained with a Bayesian estimation (Markov-Chain Monte Carlo, MCMC) approach (Figure S3, Supplemental Data; details in Methods). Thus, the sensitivities and criteria could be readily estimated from simulated responses in the 2-ADC model based on numerical approaches. In addition, the algorithms reliably converged onto an identical set of sensitivities and criteria in parameter space (Figures 3A through B and S3), suggesting that the model is identifiable.
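The following sketch illustrates one way such a maximum likelihood fit could be set up. This is our own illustrative code: it reuses the p_response_2adc sketch above, the contingency counts are hypothetical, and it uses a generic simplex optimizer rather than the specific line-search routine described by the authors.

```python
# Our own sketch of ML recovery of 2-ADC parameters from a 3 x 3 contingency table.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, counts):
    """params = [d1, d2, c1, c2]; counts[s][r] = number of trials with stimulus
    event s (0 = catch, 1 = stimulus at 1, 2 = stimulus at 2) and response r
    (0 = NoGo, 1 = Go to 1, 2 = Go to 2)."""
    d, c = params[:2], params[2:]
    stimulus_events = [(0, 0), (1, 0), (0, 1)]
    ll = 0.0
    for s, X in enumerate(stimulus_events):
        p = np.clip(p_response_2adc(d, c, X), 1e-12, 1.0)
        ll += np.sum(np.asarray(counts[s]) * np.log(p))
    return -ll

counts = [[300, 60, 40],    # catch trials:        NoGo, Go-to-1, Go-to-2 (hypothetical)
          [80, 280, 40],    # stimulus at loc. 1:  miss, hit, misidentification
          [60, 30, 310]]    # stimulus at loc. 2:  miss, misidentification, hit
fit = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.5, 0.5],
               args=(counts,), method="Nelder-Mead")
d1_hat, d2_hat, c1_hat, c2_hat = fit.x
print(fit.x)
```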
Figure 3
 
Estimating sensitivities and criteria from simulated responses. (A–B) Maximum likelihood estimation (MLE) of the perceptual sensitivity (A) and choice criterion (B) at each location from simulated response counts for a two-alternative detection task (Table S2B, Supplemental Data). Beginning with an initial guess for each parameter, the algorithm uses a line-search method to identify the sensitivities and criteria that maximize the likelihood of the simulated response counts. For various initial guesses (colored diamonds), the MLE algorithm converged reliably onto identical sensitivity and criterion values at each location (black circles/dashed gray lines). (C) Psychometric functions of the probability of response at location 1 as a function of the contrast of a stimulus presented at location 1 (red circles) or at location 2 (blue circles). Error bars: Standard deviation across simulated runs (N = 100). Solid curves: Psychometric functions based on fitting a model that incorporated bias. Dashed curves: Fits with a model that did not incorporate bias. (D) Same as in (C) but for the response probability at location 2. (E) Same as in (C) but with data and fits pooled across locations as “correct” (hit, black) and “incorrect” (misidentification, green) responses.
Next, we verified that these parameters could be recovered for a simulated task with many stimulus levels at each location. Specifically, we sought to test if the parameters of the psychophysical function could be reliably recovered from the psychometric function in a one-shot estimation procedure. Details on generating the simulated psychometric functions are provided in the Methods, and the parameters used in these simulations are presented in Table S3A (Supplemental Data). Psychophysical functions at the two locations were assumed to be identical whereas a greater choice bias was assigned to location 1 (c1 < c2). 
Psychophysical parameters and criteria were reliably recovered with the MLE procedure (Table S3B, Supplemental Data). Psychometric functions computed from the recovered parameters fit the data with virtually no error (Figure 3C and D, solid curves). 
Fitting the data with a model that excluded bias revealed important insights. Following MLE with such a model (Figure S4A, Supplemental Data; similar to the model described in Macmillan & Creelman, 2005, p. 258), which enforces a symmetric decision boundary (uniform ci = c), we plotted the reconstructed psychometric functions both separately for each location as well as with the responses pooled (as “correct” vs. “incorrect”) across locations. When data were plotted separately for each location, the reconstructed psychometric functions (Figure 3C and D, dashed curves) deviated systematically from the original data (Figure 3C and D, circles). In addition, the model systematically overestimated the psychophysical function at location 1, the location of greater bias (Figure S4B, dashed red curve), and systematically underestimated it at the other location (Figure S4B, dashed blue curve). On the other hand, when data were pooled across locations as the proportion of correct (hit) and incorrect (misidentification) responses, the reconstructed psychometric functions closely fit the data (Figure 3E). The significance of these observations is discussed later (see Discussion). 
Identifiability of the m-ADC model
Is the m-ADC model identifiable so that, for a given set of parameters θ (sensitivities and criteria) that produce a set of response probabilities, there is no other parameter set θ* that also produces the same probabilities? In the previous section, we demonstrated that, for various initial values of parameter guesses, numerical approaches reliably recover an identical set of 2-ADC model parameters, suggesting that the 2-ADC model is identifiable. However, the model contains nonlinear integral equations, and we must entertain the possibility that multiple parameter configurations may be consistent with a given set of response probabilities, especially in the m-alternative case (m > 2).
Establishing the concavity of the likelihood function is a powerful approach for demonstrating model identifiability. Figure 4A through D shows the likelihood function of the 2-ADC model, a function of four parameters (2 di and 2 ci). We depict this four-dimensional likelihood function in a pair of two-dimensional subspaces by holding ci constant and varying di or vice versa (Figure 4A and B, parameter values in Table S2A, Supplemental Data). In the domain of parameter values shown in the figure, the likelihood function appears to be concave (Figure 4A through D), indicating a single, global maximum corresponding to a unique set of underlying parameters. However, demonstrating, analytically, the concavity of the likelihood function appears to be intractable even for the 2-ADC model (see Appendix C, Supplemental Data).
Figure 4
 
2-ADC model identifiability. (A) Contour plot of the 2-ADC multinomial log-likelihood as a function of the sensitivities (d1, d2) at the two locations. (B) Contour plot of the 2-ADC multinomial log-likelihood as a function of the criteria (c1, c2). The concavity of the function is apparent throughout the domain of parameters shown. (C) The variation of log-likelihood with sensitivity at each location for fixed values of the other parameters (sensitivity at the other location and the two criteria, cross section through the dashed white lines of panels A–B). Dashed gray lines: values of the parameters that maximize the log-likelihood function; red data: location 1; blue data: location 2. (D) Same as (C) but variation with the criterion at each location for fixed values of the other parameters (criterion at the other location and the two sensitivities). (E) Probability of response during catch trials to location 1 (left), location 2 (middle), or NoGo (right) as a function of the choice criterion at each location. Colored lines: The contour traversing all possible pairs of criteria consistent with a specific value of each response probability; red: probability of a Go response to location 1; blue: probability of a Go response to location 2; green: probability of a NoGo response. (F) The three contours (red, blue, green) intersect at a single point indicating that exactly one set of criteria is consistent with a given set of response probabilities. Arrows: Specific values of NoGo and Go response probabilities at each location and the unique pair of criteria that is consistent with this specific set of response probabilities.
Here we demonstrate model identifiability with logical reasoning (for the two-alternative case) and with mathematical induction (for the m-alternative case). A sketch of the proof is provided below, and the detailed formulation is provided in Appendices A and B (Supplemental Data). We illustrate the identifiability of the 2-ADC model by reasoning in two steps. First, we demonstrate that a given set of response probabilities during catch trials (the probabilities of a Go response to location 1 or 2 or of a NoGo response, indexed i ∈ {0, 1, 2}, with i = 0 denoting NoGo) is produced by exactly one pair of criterion values (c1, c2). Next, we demonstrate that a given set of response probabilities during stimulus trials (responses i ∈ {1, 2} to stimuli at locations j ∈ {1, 2}) is produced by no more than one set of sensitivity values (d1, d2). We develop a geometric intuition by varying the criterion and sensitivity parameters and examining the effects on the probabilities of each response (Figure 5A through C).
Figure 5
 
Model identifiability and optimality. (A–C): Identifiability of the 2-ADC model. (A) Two-dimensional decision space for the 2-ADC model during catch trials, partitioned into three decision regions—NoGo response (gray) or Go response to location 1 (red) or location 2 (blue)—by one set of criteria (c1, c2). Dashed circle: Contour of the noise distribution. Thick solid lines: Decision boundaries. Other conventions are as in Figure 2C. (B) 2-ADC decision space during catch trials partitioned with an alternate set of criteria (c′1, c′2). These criterion values were chosen to keep the NoGo response probability the same as in (A). Thick, dashed lines: The decision boundaries associated with criteria (c1, c2) in (A). Other conventions are as in (A). (C) 2-ADC decision space with increasing perceptual sensitivity to a stimulus at location 1 (increasing d1). Red circles: Contours of the signal distribution. Gray circle: Contour of the noise distribution. Response probabilities in each decision region vary monotonically with increasing perceptual sensitivity along either dimension. Other conventions are as in (A). (D) Optimal decision surfaces in the 2-ADC decision space. Dashed circles: Contours of the decision variable distributions. Thick dashed lines: Optimal decision boundaries when prior probabilities of all stimulus events are equal. Solid circles: Contours of the posterior; Thick solid lines: Optimal decision boundaries when the prior probability of a stimulus presentation at location 1 is higher than the probability of a catch trial. The marginal distributions of the signal and noise distributions along dimension 1 are shown below (same conventions); horizontal green line: the value for the contours shown in the top panel (see text for details on the various probability notations).
First, consider a set of 2-ADC response probabilities to each location produced by a set of criteria (c1, c2) (Figure 5A, solid lines) during catch trials (note that in catch trials, d1 = d2 = 0, by definition). These response probabilities correspond to the area under the joint distribution of Ψ1 and Ψ2 in each of the three (shaded) regions (Figure 5A). Let us assume that another, distinct set of criteria (c′1, c′2) (Figure 5B, solid lines) also produces the same set of response probabilities. We need to show that such an alternate set of criteria does not exist; in other words, that c′1 = c1 and c′2 = c2.
Without loss of generality, let c′1 be less than c1. As is apparent from Figure 5A and B, the smaller criterion at location 1 would result in a smaller NoGo response probability (Figure 5B, gray region) unless the criterion at location 2, c′2, were greater than c2. However, a smaller criterion at location 1 and a larger criterion at location 2 would result in an increase in the response probability to location 1 (expansion of the red region, Figure 5A vs. 5B) and a decrease in the response probability to location 2 (shrinking of the blue region, Figure 5A vs. 5B). The alternate possibility, c′1 > c1, would result in the opposite scenario: to maintain the probability of a NoGo response, c′2 < c2, resulting in a decrease in the response probability to location 1 and an increase to location 2. Thus, the only way the alternate set of criteria could produce the same response probabilities is if the two sets of criteria were identical, i.e., c′1 = c1 and c′2 = c2.
To further illustrate this graphically, consider the response probabilities in the 2-ADC parameter space of criteria during catch trials (d1 = d2 = 0). The sets of all possible pairs of choice criteria that could produce a given probability of each type of response during catch trials (the locus of variation of c1 and c2 for a specific value of each response probability) are shown in Figure 4E and F (colored contours: i = 1, red; i = 2, blue; i = 0, green). Note that the three contours intersect at exactly one point in the c1–c2 plane (Figure 4F), indicating that exactly one pair of criteria is consistent with these response probabilities.
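The intersection argument of Figure 4E and F can also be checked numerically: given catch-trial response probabilities, solving the corresponding pair of equations returns a single pair of criteria. The sketch below uses the same unit-variance Gaussian formulation as the earlier sketch, with made-up target probabilities.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import fsolve

def catch_trial_probs(c1, c2):
    """On catch trials (d1 = d2 = 0): P(Go-1) and P(NoGo); P(Go-2) is the remainder."""
    p_go1 = quad(lambda x: norm.pdf(x) * norm.cdf(x - c1 + c2), c1, np.inf)[0]
    p_nogo = norm.cdf(c1) * norm.cdf(c2)
    return p_go1, p_nogo

# Hypothetical catch-trial response probabilities (illustrative values only).
target_go1, target_nogo = 0.12, 0.75

# Two response probabilities pin down both criteria (the third is 1 minus their sum).
solve_for = lambda c: np.subtract(catch_trial_probs(*c), (target_go1, target_nogo))
c1_hat, c2_hat = fsolve(solve_for, x0=[1.0, 1.0])
print(f"c1 = {c1_hat:.3f}, c2 = {c2_hat:.3f}")
```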
Next, once the criteria are identified, we examine the effect of varying the sensitivity to a stimulus at each location (say, location 1; Figure 5C) on the set of response probabilities. This relationship is monotonic for each response probability. For example, gradually increasing the sensitivity to detect a stimulus at location 1 (Figure 5C, dashed circles) increases the response probability to that location and decreases the probabilities of the other two responses (Figure S1B, Supplemental Data). The monotonicity of this relationship implies that a given set of response probabilities can be produced by exactly one set of sensitivity values (d1, d2).
The analytical proof, presented in Appendix A.1 (Supplemental Data), formally demonstrates the identifiability of the 2-ADC model. Demonstrating identifiability for the m-ADC model, for general m, is more involved; the proof, based on mathematical induction, is presented in Appendix A.2 (Supplemental Data).
Optimality of m-ADC decision boundaries
We have demonstrated that the m-ADC model is identifiable and that its parameters can be reliably estimated. We have, however, specified the model and its decision rule ad hoc, without justifying why an observer in the real world would adopt such a rule. In this section, we show that the m-ADC decision boundaries (Equation 5) belong to a family of optimal decision surfaces for maximizing success rates in multialternative detection tasks. As before, we provide a geometric intuition for the result with the 2-ADC model and derive the result formally for the m-ADC model in the Methods and Appendix D (Supplemental Data).
Consider the 2-ADC model in Figure 5D with three possible events: two stimulus events, one at each location, and a no-stimulus (or catch) event. We first discuss the case in which each of these events is equally likely to occur. Consider one stimulus event (a stimulus at location 1) and the no-stimulus (catch) event. In order to maximize success at distinguishing between these two events, what would an ideal observer do? Figure 5D shows the decision variable distributions for each of these events (dashed circles) in the two-dimensional decision space, and the corresponding marginal distributions (dashed Gaussians) are shown along dimension 1 (the Ψ1-axis). Note that the two distributions cross over at the point Ψ1 = c1 (dashed vertical line). In a given trial, let the decision variable take some value Ψt. In order to maximize success, it is intuitively clear that the optimal (Bayesian) strategy is to report the event corresponding to the distribution that is most likely (greatest likelihood) to have produced this value Ψt. The ideal observer would choose to report a stimulus at location 1 (Go response) if the component of Ψt along dimension 1 exceeds the criterion c1, because the marginal probability for the stimulus event at location 1, p(Ψ1 | Stim 1), exceeds that for the catch event, p(Ψ1 | Catch), for all Ψ1 > c1. Similarly, the observer would choose to report a catch event (NoGo response) if this component is less than c1. Thus, the linear decision boundary Ψ1 = c1 (dashed, thick vertical line) forms an optimal decision surface for distinguishing between these two events.
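For unit-variance Gaussian distributions, this likelihood-ratio comparison can be written out explicitly (a short derivation added here for clarity):

$$\frac{p(\Psi_1 \mid \mathrm{Stim}\,1)}{p(\Psi_1 \mid \mathrm{Catch})} = \frac{\exp\!\left[-(\Psi_1 - d_1)^2/2\right]}{\exp\!\left[-\Psi_1^2/2\right]} = \exp\!\left(d_1\Psi_1 - \tfrac{1}{2}d_1^2\right) > 1 \iff \Psi_1 > \frac{d_1}{2},$$

so, with equal priors, the crossover point (and hence the optimal criterion c1 in Figure 5D) lies midway between the means of the two marginal distributions.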
A parallel argument can be made for optimal boundaries for deciding between the other two pairs of events: a stimulus at location 2 versus no stimulus (dashed, thick horizontal line, Ψ2 = c2) and a stimulus at location 1 versus at location 2 (dashed, thick oblique line, Ψ1 – Ψ2 = C), where C is a constant. The value of C can be determined by noting that the three decision boundaries intersect at a point (see Appendix D.2 for proof). Thus, C = c1c2, and the decision boundary is given by Ψ1 – Ψ2 = c1c2 or (with a slight rearrangement) Ψ1c1 = Ψ2c2
In summary, the ideal observer decides to report one of the three events by comparing the value of Ψ1 to c1 (dashed, thick vertical line), Ψ2 to c2 (dashed, thick horizontal line), and Ψ1c1 to Ψ2c2 (dashed, thick oblique line). These are identical with the decision boundaries specified in the 2-ADC model. 
Next, we consider the case in which the three events are not equally likely to occur. In this case, it is intuitively clear that the ideal observer must take into account not only the different decision variable distributions for each event, but also how likely it is for each event to occur during the experiment. For example, if it is known that one event (e.g., stimulus at location 2) never occurs, it must be discounted when making a response. 
In our model, for ease of illustration, let the prior probability of a stimulus event at location 1, p(Stim 1), be greater than the prior probability of a catch trial, p(Catch), with the prior probability of a stimulus event at location 2 unaltered. In this case, the ideal observer decides between the two events after weighting (multiplying) each decision variable distribution by its respective prior probability, i.e., based on the posterior (solid circles and Gaussians). As is apparent from the figure, the optimal decision boundary along dimension 1 after incorporating priors (solid vertical line) is shifted toward the origin, indicating a greater bias toward reporting a stimulus event at location 1 and a lower bias for reporting a catch event. Thus, in order to maximize success, the ideal observer compares the relative values of the posterior (posterior odds ratio) for each event based on linear decision boundaries, as specified in the 2-ADC model. These arguments may be extended to an m-ADC model (with more than two alternatives) to show that, in general, hyperplanes in m dimensions constitute optimal decision boundaries for maximizing success (Methods).
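The size of this shift follows directly from weighting the two marginals by their priors (again, a derivation added for illustration): the boundary moves to the point where p(Stim 1) p(Ψ1 | Stim 1) = p(Catch) p(Ψ1 | Catch), namely

$$\Psi_1 = \frac{d_1}{2} + \frac{1}{d_1}\,\ln\frac{p(\mathrm{Catch})}{p(\mathrm{Stim}\,1)},$$

which lies closer to the origin than d1/2 whenever p(Stim 1) > p(Catch), consistent with the shift of the solid boundary in Figure 5D.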
Finally, it is possible that the benefits of a successful report are not the same for each stimulus event. Thus, an ideal observer must also take into account the relative benefits when making her/his final decision. The m-ADC decision rule models optimal decision-making in this more general scenario as well. Specifically, the rule maximizes average benefit (or utility) and minimizes average cost (or risk) when the cost of an erroneous response is identical across all stimulus events (Methods). The formal proofs are provided in the Methods and Appendix D (Supplemental Data).
Empirical validation
Can the m-ADC model explain data from human multialternative behavioral tasks that include NoGo responses? And if so, how does its performance compare with that of existing models that incorporate such responses? The indecision model is a ternary choice model that has been widely applied to explain behaviors in two-alternative nonforced choice (2-ANFC) tasks that incorporate NoGo (or “undecided”) responses and, optionally, catch trials (García-Pérez & Alcalá-Quintana, 2010, 2013). 
A schematic of the indecision model is shown in Figure 6A. The indecision model and the 2-ADC model (Figure 2C) differ in how each partitions decision space. The partitioning scheme reveals a key distinction between the decision strategies that an observer would adopt in each model: With the indecision model, the observer would be expected to give a NoGo response when unsure about which location or interval contained the target stimulus (to report "uncertainty"), whereas with the 2-ADC model, the observer would be expected to give a NoGo response when neither location nor interval appeared to contain the target stimulus (to report "absence"). In addition, the partitioning scheme renders the indecision model one-dimensional following a linear transformation of decision variables (Figure 8C), whereas the 2-ADC model is necessarily two-dimensional (Figure 8A). Other key distinctions are highlighted in the Discussion section.
Figure 6
 
Empirical validation and model comparison: Target detection task. (A) Schematic of the “indecision model” for a two-interval (or two-alternative) nonforced choice task. The indecision model partitions decision space differently from the 2-ADC model and specifies that the observer provides a NoGo response when “uncertain,” i.e., when sensory evidence is equivocal for a target stimulus in either interval (gray diagonal band). μ and δ are sensitivity and criterion parameters in this model; δ defines the extent of the NoGo response region. Other conventions are as in Figure 2C. (B) Estimates of sensitivity for the target-detection task from the indecision model (x-axis) and the 2-ADC model (y-axis). Error bars: parameter standard errors based on MLE. Dashed oblique line: Line of identical sensitivities. Data points represent individual observers (N = 17). (C) Schematic of the indecision model with bias, with different criteria (δ1δ2) for a Go response to each interval. (D) Estimates of bias (difference between the values of the criterion for interval 1 and the criterion for interval 2) from the indecision model (x-axis) and the 2-ADC model (y-axis). Dashed lines: Lines of zero bias. Other conventions are as in panel B. (E) Estimates of sensitivity from the 2-ADC (white circles) and indecision (gray circles) models that include (x-axis) or exclude (y-axis) a finger-error term. Other conventions are as in panel B. (F) Estimates of bias from the 2-ADC (white squares) and indecision (gray squares) models that include (x-axis) or exclude (y-axis) a finger-error term. Other conventions are as in panel D. (Inset) Distribution of differences in BIC values between the two models (indecision − 2-ADC); values to the left of the dashed vertical line indicate a lower BIC value for the indecision model and values to the right a lower BIC value for the 2-ADC model.
In order to test the empirical validity of the 2-ADC model, we fit the model to data from two published experiments: a detection task and a discrimination task. The data from each task were previously shown to be well fit by the indecision model (García-Pérez & Alcalá-Quintana, 2010, 2011a, 2013). In each case, fitting the 2-ADC model produced alternative interpretations of the data, offering insights into alternative mechanisms of perceptual decision-making that might underlie each task.
Experiment 1. Target detection: Sensory versus nonsensory origins of misidentified responses
We first fit the 2-ADC model to data from a target-detection task that was based on a temporal, two-interval nonforced choice paradigm. In this experiment, a target stimulus (Gabor patch) was presented in one of two approximately half-second temporal intervals, and in a fraction of trials, no target was presented in either interval (catch trials). Observers indicated the interval in which they detected the target by pressing one of two keys. In addition, they could give a NoGo response (or a "guess" response, according to the authors' terminology) by pressing a third key if they could not tell in which interval the target had occurred. Thus, each observer's NoGo response could indicate "absence" (not able to detect the target, as in a 2-ADC model), "uncertainty" (not sure in which interval the target had occurred, as in an indecision model), or both. Eighteen observers each performed 600 trials of this task: 200 trials with a target in each of the two intervals and 200 catch trials. Other details regarding the stimuli and acquisition protocols can be found in García-Pérez and Alcalá-Quintana (2010).
Comparing the 2-ADC and indecision models
We fit the data from the original study (García-Pérez & Alcalá-Quintana, 2010; their table 1) with a three-parameter 2-ADC model with two criteria (c1, c2), one for detection in each temporal interval, and one sensitivity parameter (d). We assumed that detection sensitivities for the two intervals were identical. We also fit the data with the indecision model (which also assumes equal detection sensitivities, as in the original study) with the following three parameters (Figure 6A): the sensitivity (μ) of detection during either target interval, a criterion (δ) that delineates an indifference zone such that the observer indicates a guess response if –δ ≤ Ψ2 – Ψ1 ≤ δ, and a finger-error term (λ). The finger-error term models unintentional response (motor) errors ("hitting an unintended response key by mistake," García-Pérez & Alcalá-Quintana, 2010, p. 880). The indecision model in the original study assumed a noise standard deviation of √2 for the decision variable (difference of sensory effects) distribution (García-Pérez & Alcalá-Quintana, 2010, pp. 878–879). Because sensitivity and criteria in the 2-ADC model are measured in units of the noise standard deviation, all parameter estimates of the 2-ADC model were scaled by √2 to permit comparison with indecision model parameter estimates. Model fitting and parameter estimation were performed with the MLE approach (Methods).
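As an illustration of this fitting procedure (not the authors' estimation code), the sketch below recovers the three 2-ADC parameters (d, c1, c2) from hypothetical response counts by minimizing the negative multinomial log-likelihood, and then applies the √2 scaling described above.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import minimize

def p_go(mu_this, mu_other, c_this, c_other):
    # P(Psi_this > c_this and Psi_this - c_this > Psi_other - c_other), unit variance
    f = lambda x: norm.pdf(x - mu_this) * norm.cdf(x - c_this + c_other - mu_other)
    return quad(f, c_this, np.inf)[0]

def neg_log_lik(params, counts):
    d, c1, c2 = params                      # single sensitivity, two criteria
    nll = 0.0
    for stim, n in counts.items():          # stim: 0 = catch, 1/2 = target interval
        mu1 = d if stim == 1 else 0.0
        mu2 = d if stim == 2 else 0.0
        probs = [p_go(mu1, mu2, c1, c2),
                 p_go(mu2, mu1, c2, c1),
                 norm.cdf(c1 - mu1) * norm.cdf(c2 - mu2)]
        nll -= np.dot(n, np.log(np.clip(probs, 1e-12, 1.0)))
    return nll

# Hypothetical counts of (interval-1, interval-2, NoGo) responses per trial type.
counts = {1: np.array([149, 17, 34]),
          2: np.array([21, 138, 41]),
          0: np.array([25, 28, 147])}

fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(counts,), method="Nelder-Mead")
d_hat, c1_hat, c2_hat = fit.x * np.sqrt(2)   # rescale to match indecision-model units
print(f"d = {d_hat:.2f}, c1 = {c1_hat:.2f}, c2 = {c2_hat:.2f}")
```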
The 2-ADC model generally outperformed the indecision model in fitting the data. The 2-ADC model successfully fit performance for 16 of the 18 observers at the 0.05 level (median p value = 0.60, randomization test; the model failed for observers #4 and #15). On the other hand, the three-parameter indecision model fit performance for 14 of the 18 observers (the model failed for observers #4, #6, #11, and #13), replicating the findings of the original study. The goodness-of-fit G-statistic distribution across observers was not significantly different between the two models (median ± std: 1.67 ± 1.58 for the 2-ADC model and 1.54 ± 1.53 for the indecision model; p = 0.31, Wilcoxon signed rank test, n = 17, excluding observer #4's data, which were not well fit by either model). Finally, the estimates of sensitivity, and its standard error, derived from the 2-ADC model were similar to those derived from the indecision model for each observer (Figure 6B; p = 0.80, paired Wilcoxon signed rank test). 
In summary, the three-parameter 2-ADC model fit the data for a greater proportion of the observers while yielding sensitivity estimates and goodness-of-fit scores that were similar to those from the three-parameter indecision model. 
Comparison with an indecision model that incorporates bias
We surmised that the failure of the indecision model to fit the data for four of the 18 observers arose because the model did not take into account choice bias (unequal criteria) for detecting stimuli in the first versus the second interval. The data support this hypothesis: For these observers, the rate of "undecided" responses was markedly different when the target was presented in the first versus the second interval (columns 7 versus 10 of their table 1; García-Pérez & Alcalá-Quintana, 2010). With equal detection sensitivities (μ) for the two intervals, the criteria for the two intervals must have been unequal in order to explain this differential pattern of guess (or NoGo) responses. Hence, we extended the indecision model to incorporate different criteria, δ1 and δ2, for each interval (Figure 6C); a similar extension has been proposed recently (García-Pérez & Alcalá-Quintana, 2013). Such an indecision model "with bias" is described by four parameters: μ, δ1, δ2, and λ.
Table 1
 
Comparison of the 2-ADC, 2-ADCX, and indecision models in the length-discrimination task.
                          Observer #1                      Observer #2
Model                     2-ADC     2-ADCX    Indecision   2-ADC     2-ADCX    Indecision
# parameters              4         5         5            4         5         5
βs (horizontal)           0.482     0.399     0.554        0.348     0.305     0.388
βt (vertical)             0.489     0.404     0.561        0.366     0.321     0.409
PSE (pixels)              102.62    102.64    102.61       98.74     98.82     98.71
cA or δA                  1.307     1.218     1.272        1.198     1.146     1.189
cB or δB                  0.961     0.866     0.608        0.671     0.611     0.155
α                         n/a       −0.385    n/a          n/a       −0.227    n/a
λ                         n/a       n/a       0.007        n/a       n/a       0.012
AICc                      10,676    10,649    10,646       11,010    11,001    11,002
BIC                       10,697    10,676    10,672       11,031    11,028    11,029
ΔAICc (vs. indecision)    30        3         0            8         −1        0
ΔBIC (vs. indecision)     25        4         0            2         −1        0
Once choice bias was incorporated, the indecision model was able to successfully fit the data from all observers but one (observer #4) at the 0.05 level. The values of the two criterion estimates were different for many of the observers, including for the four aforementioned observers. As a result, the goodness-of-fit G-statistic improved overall (median ± std: 0.96 ± 0.79), indicating that incorporating bias is beneficial when modeling behavior in this task. The 2-ADC model already incorporates bias by allowing for unequal criteria for the two intervals. Estimates of bias (measured as the difference between criterion values across the two intervals) in the indecision model (δ1δ2) closely correlated with 2-ADC model estimates (c1c2) for each observer (Figure 6D; correlation R2 = 0.89, p < 0.001; as before, data from observer #4 were excluded from this, and subsequent, analyses). 
The indecision model with bias has one more parameter than the 2-ADC model. Hence, we compared model quality with the corrected Akaike Information Criterion and the Bayesian Information Criterion (AICc, BIC; Burnham & Anderson, 2002), which take into account the tradeoff between model complexity and goodness of fit: The model with the smaller AICc or BIC value is favored. Because the AICc (or BIC) value is based on the logarithm of the maximum likelihood, an AICc (or BIC) value that is smaller by even a few (K) units for one model represents an exponential increase in the relative likelihood, e^(K/2), of that model.
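For reference, these model-comparison quantities follow the standard formulas; the maximized log-likelihood values below are invented solely to show the computation.

```python
import numpy as np

def aicc(log_lik, k, n):
    """Corrected Akaike Information Criterion for k parameters and n observations."""
    return -2 * log_lik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian Information Criterion."""
    return -2 * log_lik + k * np.log(n)

# Hypothetical maximized log-likelihoods for two models fit to 600 trials.
ll_2adc, k_2adc = -512.3, 3
ll_indecision, k_indecision = -514.1, 4

delta_bic = bic(ll_2adc, k_2adc, 600) - bic(ll_indecision, k_indecision, 600)
# A difference of K units in AICc/BIC corresponds to a relative likelihood of exp(K/2).
print(delta_bic, np.exp(abs(delta_bic) / 2))
```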
AICc scores were comparable across models, and the median difference in AICc scores across observers (ΔAICc2-ADC−indecision = 1) was not significantly different from zero (p = 0.18, paired test). On the other hand, BIC scores were significantly lower for the 2-ADC model compared to the indecision model (ΔBIC2-ADC−indecision = −3, p = 0.016, paired test), favoring the 2-ADC model over the indecision model for explaining these data. 
Comparison with an indecision model that excludes finger errors
An AICc or BIC score favoring one model over another could result from fewer parameters, a better fit, or both. Although the 2-ADC and indecision models have corresponding sensitivity and criterion parameters (d, c1, c2 vs. μ, δ1, δ2), the indecision model contains an extra parameter, λ, to model finger errors. λ has been referred to as a "non-sensory" parameter (García-Pérez & Alcalá-Quintana, 2010; their supporting information, p. 4) that, as mentioned previously, is thought to reflect inadvertent motor errors when reporting responses. The inclusion of this parameter is usually justified not by its relevance for explaining perceptual confusion, but by the need to control for motor/finger errors in order to obtain more accurate estimates of the sensitivity and criterion parameters (García-Pérez & Alcalá-Quintana, 2010, 2013).
We fit the data with the indecision model excluding the finger-error parameter and compared the fit of this three-parameter indecision model (μ, δ1, δ2) with that of the three-parameter 2-ADC model (d, c1, c2). Because the number of parameters was identical in the two models, any differences in AICc or BIC scores must reflect, specifically, differences in goodness of fit. As finger errors were estimated to be less than ∼5% for most of the observers (García-Pérez & Alcalá-Quintana, 2010, their table 1), we expected to obtain marginally poorer fits and perhaps slightly different values for the sensitivities and criteria when this parameter was excluded from the model.
Surprisingly, with the finger-error parameter excluded, the indecision model failed to fit behavioral data (at the 0.05 level) for two thirds (12/18) of the observers (median p value = 0.001, randomization test). Sensitivity and bias parameter estimates from the indecision model differed significantly depending on whether finger errors were or were not included in the fit (Figure 6E and F, gray data; p < 0.001, Wilcoxon signed rank test). In particular, sensitivity was systematically underestimated across subjects when the finger-error parameter was excluded. In contrast, parameter estimates from the 2-ADC model were virtually identical regardless of whether the finger-error parameter was included or not (Figure 6E and F, white data; p > 0.9). 
In addition, comparing the AICc and BIC scores for the two models revealed evidence overwhelmingly in favor of the 2-ADC model (Figure 6F, inset): median AICc and BIC values were significantly lower for the 2-ADC model compared with the indecision model (ΔAICc2-ADC−indecision = −13; ΔBIC2-ADC−indecision = −13; p < 0.001, paired Wilcoxon signed rank test). Conversely, incorporating a finger-error parameter into the 2-ADC model did not improve the goodness of fit (increase less than 0.05%): λ estimates were vanishingly small (less than 0.1%) across nearly all (16/18) observers and were less than 2% for the other two observers. 
These results indicate that the finger-error parameter is not simply a desirable component of the indecision model, but rather a necessary one for fitting these data. On the other hand, the 2-ADC model successfully explained the behavior of nearly all (16/18) observers without incorporating a parameter for finger errors. This finding highlights key caveats with attributing all misidentified responses to a "non-sensory" parameter such as λ (Discussion).
Experiment 2. Length discrimination: Modeling competitive interactions
Next, we fit the m-ADC model to data from a length-discrimination task that was based on a 2-ANFC paradigm. In this experiment, observers were presented with two orthogonal lines, one vertical and one horizontal, in an L or a Γ (inverted L) configuration. Observers indicated whether the vertical line was perceived as longer or shorter than the horizontal line by pressing different keys. In addition, observers could press a third key to express their "indecision" (NoGo). Psychometric functions of performance were obtained with the length of the horizontal line (standard stimulus) fixed at a reference value (104 pixels) while the length of the vertical line (test stimulus) was varied pseudorandomly across eight values (94–110 pixels). Two observers performed 1,600 trials each (100 trials at each of eight stimulus levels for each of the two configurations). Other details regarding the task and stimulus protocols can be found in García-Pérez and Alcalá-Quintana (2011a).
In the following analyses, we adhere to the notation that was developed for each model by its respective authors. Thus, the sensitivity and criterion parameters in the 2-ADC model are referred to as di and ci, respectively, whereas the analogous parameters in the indecision model are referred to as μi and δi, respectively. 
We modified the 2-ADC model to explain behavior in this discrimination task as schematized in Figure 7A (described in detail in Appendix E, Supplemental Data). As shown, the key conceptual difference was to limit the domain of the NoGo response to a bounded region defined by the criteria cA and cB (in the conventional 2-ADC model, the NoGo response domain is unbounded on two sides). Such a modification was necessary for this task because the observer must not only indicate whether the test stimulus differs in length from the standard, but must also indicate whether it is longer or shorter. Thus, the decision rule for the NoGo response is –cB ≤ ΨA ≤ cA and –cA ≤ ΨB ≤ cB, where ΨA and ΨB denote the decision variables for the stimuli above and below, respectively (Figure 7A). Note that this decision rule differs from that of the conventional detection model (which would be ΨA ≤ cA and ΨB ≤ cB) and permits modeling data from two-alternative discrimination tasks that incorporate a NoGo response. The model equations relating sensitivity and criteria to response probabilities are derived in Appendix E (Supplemental Data).
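A minimal sketch of the NoGo rule just stated is given below; the partition of the remaining decision space into the two Go responses follows Appendix E and is not reproduced here, and the sample values are arbitrary.

```python
def is_nogo(psi_A, psi_B, c_A, c_B):
    """Bounded NoGo (equal-perceived-length) region of the discrimination variant:
    -c_B <= Psi_A <= c_A and -c_A <= Psi_B <= c_B, as stated in the text."""
    return (-c_B <= psi_A <= c_A) and (-c_A <= psi_B <= c_B)

print(is_nogo(0.4, -0.2, c_A=1.3, c_B=0.9))   # True: falls inside the bounded region
print(is_nogo(2.1, 0.3, c_A=1.3, c_B=0.9))    # False: a Go response would be given
```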
Figure 7
 
Empirical validation and model comparison: Length discrimination task. (A) Schematic of a 2-ADC model for a discrimination task (in this case, a length-discrimination task). Two criteria, cA and cB, partition the decision space into three response regions: stimulus above longer (Above > Below, red), stimulus above shorter (Above < Below, blue), and equal perceived length (NoGo or unsure, gray). The key difference with the standard 2-ADC model is that the NoGo decision region is bounded from all sides. X-axis: increasing lengths of the stimulus above. Y-axis: increasing lengths of stimulus below. Origin: point of subjective equality (PSE) of the test and standard stimuli. Other conventions are as in Figure 2C. (B) Fit of the 2-ADC model (solid lines) for each of the two observers in the length-discrimination task. Closed circles and thick lines: proportion of vertical (test) > horizontal (standard) responses and model fits. Open circles and thin lines: proportion of NoGo (unsure) responses and model fits. Dashed lines: Fits of the indecision model. X-axis: length of the vertical stimulus. Arrow: Point of objective equality (104 pixels). Dashed vertical line: PSE. (C) Same as in (A) but schematic of the 2-ADC model that incorporates an interaction term (α) among the decision variables (2-ADCX model, see text for details). Dot-dashed lines: Trajectories of the mean of the decision variable distributions with mutual (competitive) interactions. (D) Same as in (B) but fit of the 2-ADCX model.
In line with empirical observations, psychophysical functions of sensitivity (perceived length) were defined to be linearly related to the physical length of the stimulus. Thus, dz(x) = βzx for the 2-ADC model, or μz(x) = βzx for the indecision model (z ∈ {s, t}), where s and t refer to the standard (horizontal) and test (vertical) stimuli, respectively (García-Pérez & Alcalá-Quintana, 2013). Due to the linearity of the psychophysical function for both test and standard stimuli, the point of subjective equality (PSE) of the test to the standard was calculated as PSE = βs xs/βt, where xs is the length of the standard stimulus (104 pixels). Thus, the 2-ADC model incorporated four parameters (βs, βt, cA, cB) whereas the indecision model incorporated five parameters (βs, βt, δA, δB, λ), λ being the finger-error term discussed previously.
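Given the linear psychophysical functions, the PSE computation is a one-line calculation; the β values below are those reported for observer #1 under the 2-ADC model in Table 1 (small discrepancies from the tabulated PSE reflect rounding of β).

```python
beta_s, beta_t = 0.482, 0.489   # Table 1, observer #1, 2-ADC fit
x_standard = 104                # length of the horizontal standard (pixels)

# Test length at which the test's sensory effect matches the standard's:
# PSE = beta_s * x_s / beta_t
pse = beta_s * x_standard / beta_t
print(f"PSE = {pse:.2f} pixels")   # ~102.5, close to the tabulated 102.62
```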
Fits of the 2-ADC model (based on MLE) to each observer's psychometric function are shown in Figure 7B; fits of the indecision model are overlaid as dashed lines (García-Pérez & Alcalá-Quintana, 2013, their figure 5). The estimated parameters of each model are shown in Table 1 (columns headed 2-ADC and Indecision). The 2-ADC model fits indicated that both observers perceived vertical lines to be longer than horizontal lines of the same physical length (βt > βs, the vertical-horizontal illusion), in line with results from the indecision model. Although the estimated values of βs and βt were different between models, their relative magnitudes (ratios) and, hence, the PSE for each observer were nearly identical across models. In addition, both models indicated that the observers exhibited a bias for reporting the vertical line as longer when it was below compared to when it was above the horizontal line (cB < cA; δB < δA). Despite these similarities, the indecision model fared substantially better than the 2-ADC model in fitting data for both observers (Figure 7B, dashed lines vs. solid lines). In addition, AICc and BIC scores were markedly lower for the indecision model; the differences in AICc or BIC scores relative to the indecision model were much larger for observer #1 compared to observer #2 (Table 1). What might explain the poorer fits of the 2-ADC model to these data?
We tested whether incorporating a finger-error parameter into the 2-ADC model would improve its fits to the data. This was not the case: Finger-error terms were uniformly estimated to be vanishingly small (≪ 0.001%) for both observers as was the case in the previous experiment. In contrast, the finger-error parameter was critical for fitting these data with the indecision model: Removing this parameter from the indecision model caused fits to be substantially poorer (AICc and BIC scores were worse by ∼8–10). 
Examining the deviation of the 2-ADC model fits from the data revealed a systematic pattern: The proportion of vertical test stimuli judged longer than the horizontal standard was systematically overestimated by the model when the test stimuli were shorter than the PSE (Figure 7B, dashed vertical line) and systematically underestimated when the test stimuli were longer than the PSE. These trends were most apparent in the data from observer #1 (Figure 7B, left).
We hypothesized that these patterns of performance could be explained by competitive interactions between the vertical and horizontal percepts. The justification for modeling such interactions in this task is the following: In this task, observers were not asked to simply report the perceived length of a vertical line but rather to compare its length against the length of a horizontal line presented simultaneously on the display and to report the longer stimulus. It is plausible that, in order to make this judgment of relative length, mutually competitive interactions occurred between the perceptual (neural) representations that encoded horizontal versus vertical orientations. 
We modeled these putative interactions with an additional parameter α as shown in Figure 7C (model equations derived in Appendix E, Supplemental Data). In the estimation process, we did not specify the sign of α so that a positive value would indicate a facilitatory interaction whereas a negative value would indicate a competitive interaction between test and standard stimuli. We refer to this 2-ADC model, which incorporates an interaction term, as the 2-ADCX model. This model, like the indecision model, incorporates five parameters (βs, βt, cA, cB, α). 
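As a rough illustration of what such an interaction term does, the sketch below shows one plausible way α could couple the mean sensory effects of the two stimuli; this parameterization is an assumption for illustration only and is not the formulation given in Appendix E.

```python
def coupled_means(d_standard, d_test, alpha):
    """Illustrative coupling only (the exact parameterization is in Appendix E):
    each mean is shifted by alpha times the other stimulus's sensitivity, so a
    negative alpha produces mutual suppression between the two percepts."""
    return d_standard + alpha * d_test, d_test + alpha * d_standard

# Purely illustrative values: a negative alpha pulls both means toward the origin,
# mimicking the competitive interaction inferred for both observers (alpha < 0).
print(coupled_means(d_standard=1.0, d_test=1.2, alpha=-0.3))
```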
MLE estimation with the 2-ADCX model revealed a substantial improvement in the quality of the fits to responses from both observers (Figure 7D; Table 1, AICc and BIC values). Moreover, fits based on the 2-ADCX model and the indecision model were virtually indistinguishable (Figure 7D, solid lines vs. dashed lines). Overall, the 2-ADCX model fit observer #2's responses somewhat better than the indecision model and was only slightly poorer at fitting observer #1's responses (Table 1). The parameter estimates from the 2-ADCX model provided evidence for competitive interactions between standard (horizontal) and test (vertical) stimuli for both observers (α < 0) in this task. Such interactions could not be readily identified with the indecision model (Discussion). In the Discussion, we elaborate on the relative advantages of each model for analyzing behavior in nonforced choice detection and discrimination tasks. 
Discussion
With the growing use of multialternative tasks for investigating the neural basis of perceptual and cognitive phenomena, the need for developing new analytical models and theoretical frameworks for analyzing such tasks is being increasingly recognized (Churchland & Ditterich, 2012; Niwa & Ditterich, 2008). In this study, we have developed a theoretical model that decouples the effects of choice bias from those of perceptual sensitivity in multialternative detection tasks. We demonstrated an optimal, one-to-one mapping of the model parameters to the response probabilities and presented numerical methods for estimating model parameters reliably. Finally, we have demonstrated ways in which the model may be readily extended to discrimination tasks that permit NoGo responses. 
Our model is able to decouple bias from sensitivity effects in m-ADC tasks, first, by treating responses at each possible location separately and independently as opposed to the conventional practice of aggregating data across locations by simply classifying responses as “correct” or “incorrect.” Psychometric functions based on such aggregated responses could incorrectly suggest that behavioral data are adequately fit with a bias-free model even when substantial choice bias exists in the data (Figure 3E). Second, the model allows for a distinct response category for NoGo responses, which permits decoupling uncertainty-associated response biases from other perceptual or decisional biases (discussed subsequently). 
Our model is particularly relevant for tackling a central question in neuroscience studies of perception and attention: Does improved performance at the cued location (or for the cued feature) arise from higher perceptual sensitivity at that location (or for that feature) or from a greater choice bias favoring the cued location (or feature) (Cohen & Maunsell, 2009; McPeek & Keller, 2004; Ray & Maunsell, 2010; Zenon & Krauzlis, 2012)? Neuroscience studies of spatial attention and decision-making commonly employ multialternative detection tasks based on the method of constant stimuli (constant-stimulus design): Neural responses can be highly variable, and in order to obtain reliable estimates of the neurometric function, the same stimulus must be repeated many (tens to hundreds of) times. Hence, neural (and consequently, behavioral) responses are measured at a fixed set of stimulus strengths determined a priori. The m-ADC model (Equation 6) provides a powerful tool for estimating an animal's perceptual sensitivity while accounting for choice bias in such multialternative constant-stimulus designs. 
In any behavioral model, demonstrating model identifiability is necessary to interpret the behavioral significance of absolute (or relative) parameter values (Brunton, Botvinick, & Brody, 2013). The m-ADC model is among the most parsimonious class of analytical models for multialternative detection as a result of several key assumptions (discussed next). This parsimony permitted us to analytically demonstrate the one-to-one mapping of the sensitivity and bias parameters to response probabilities in multialternative tasks with any number of alternatives (models of arbitrarily high dimensions). Such an analytical demonstration is considerably more challenging, and often never accomplished, for more complex models. 
Assumptions and extensions of the m-ADC model
The m-ADC model is founded on several assumptions: (a) decision variables are independent and represented along orthogonal dimensions, (b) signal and noise distributions have equal variance, (c) decision boundaries are linear (or planar), and (d) decision boundaries (criteria) do not vary over time (or trials). In the following, we discuss which of these assumptions are reasonably justified and which can be addressed by extending the model. 
The assumption of independent decision variable distributions represented along orthogonal dimensions (independent channels) has been tested in the two-dimensional case and found valid for stimulus attributes that are perceptually very different (for instance, stimuli that are widely separated in space or frequency) (Tanner, 1956). However, it is possible that the Ψi are not independent and that perceptual sensitivities do not vary along orthogonal dimensions: Signal covariation may arise from facilitative or competitive interactions that operate across locations. Thus, decision variable distributions at different locations could be correlated or, equivalently, decision variable axes could be separated by angles different from 90°. In this case, the covariance matrix of Ψ is no longer diagonal. The 2-ADCX model (Figure 7C) incorporates such interactions for the two-alternative task. This model could be extended to the multialternative case as well.
Equal variance for the signal and noise distributions is a fundamental assumption of conventional signal-detection models. Such an assumption permits defining a monotonic relationship between the likelihood ratio and the sensory evidence that simplifies the optimal decision rule (a single cut point or criterion, Macmillan & Creelman, 2005, pp. 67–69). In addition, previous studies have demonstrated, both analytically and empirically, that models in which the mean and variance of the decision variable distribution change with the stimulus level (e.g., Poisson decision variables) are essentially unidentifiable (Katkov, Tsodyks, & Sagi, 2006). 
We have demonstrated that, for additive Gaussian signal and noise distributions, planar hypersurfaces (hyperplanes), as defined by the choice criteria in the model, constitute a family of optimal decision surfaces. A subset of decision surfaces in the current model (of the form Ψi – Ψj = ci – cj) is optimal only if the values of sensitivity are identical across locations (di = dj, Figure 8A, left). In certain experiments, such as when a particular spatial location is cued for attention, it is possible that the sensitivities at different locations (e.g., cued vs. uncued) could be significantly different. The model may then be extended with a modified decision rule to capture optimal decision-making in this more general scenario of unequal sensitivities (Figure 8A, right).
Figure 8
 
Relationship to previous models. (A) Schematic of a 2-ADC model. There are three potential stimulus events—stimulus at location 1, at location 2, or no stimulus (catch)—with their associated decision variable distributions (red, blue, and black contours, respectively). The decision rule partitions decision space into three response regions, including a NoGo response. Thick lines: decision boundaries. Left and right panels: Decision variable distributions and optimal decision surfaces for equal (left) or unequal (right) sensitivities for the different stimulus events. (B) Schematic of a 2-AFC model. The decision rule partitions decision space into two response regions. Lower inset: The model is readily reducible to an equivalent, one-dimensional formulation by a linear transformation of decision variables (difference, Ψs = Ψ1 – Ψ2). Other conventions are as in (A). (C) Schematic of an indecision model. The decision rule partitions decision space into three response regions, including a NoGo response. Lower inset: This model is also readily reducible to an equivalent, one-dimensional formulation by a linear transformation of decision variables (difference, Ψs = Ψ1 – Ψ2). Other conventions are as in (A). (D) Schematic of a GRT model. In addition to the three stimulus events (as with the other models), a fourth compound stimulus event (purple circle) occurs in trials in which target stimuli (or changes) occur at both locations in a given trial. The decision rule partitions decision space into four response regions, including NoGo (or neither) and “Both” responses (2 × 2, complete identification design). For the configuration shown, the GRT model is reducible to two independent one-dimensional models (shown alongside each axis).
In its present form, our model does not take into account criterion variation over time. For commonly used 2-AFC tasks, changes in criteria have an effect equivalent to increasing the variance of the decision variable distribution, failing to account for which results in an underestimation of sensitivity (Benjamin, Diaz, & Wee, 2009; DeCarlo, 2010). The effect of such criterion variation can be modeled by incorporating the distribution of criteria into the latent variable formulation, in future extensions of the model. 
Our model does not take into account nonzero lapse rates. Such lapse rates may arise from a variety of factors, including lapses of attention or motor errors. A finger-error parameter of this kind was superfluous when fitting behavioral data with the 2-ADC model (discussed subsequently). In general, however, lapse rates can be incorporated into this framework by specifying a two-stage model, with the first stage capturing lapse-free performance with the m-ADC model and the second stage accounting for lapses in performance.
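One common way to implement such a two-stage correction (an assumed form, not one prescribed in this study) is to mix the lapse-free m-ADC response probabilities with uniform guessing:

$$p_{\lambda}(Y = i \mid X = j) = (1 - \lambda)\,p_{\text{m-ADC}}(Y = i \mid X = j) + \frac{\lambda}{m + 1},$$

where λ is the lapse rate and m + 1 is the number of response alternatives (m Go responses plus the NoGo response).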
Finally, although not an assumption of the model, our task specification requires that no more than one stimulus be presented in a given trial. A particular advantage of this task specification is that potential second- and higher-order interaction terms (of the form Xi Xj, Xi Xj Xk, …) in the structural model vanish automatically (as at least one Xi = 0). Tasks that violate this requirement and incorporate compound stimuli (e.g., stimuli presented at more than one location or more than one stimulus feature presented in a given trial) fall under the purview of the General Recognition Theory (GRT, Ashby, 1992) framework (discussed next). 
Relationship to previous signal detection models
Relationship to forced choice models
Our model represents a theoretical approach to account for bias effects in multialternative detection tasks. An alternate, and equally important, approach involves developing behavioral paradigms and measurement protocols that minimize bias. One such paradigm—the two-interval forced choice (2-IFC) or two-alternative forced choice (2-AFC) task—has been a popular choice for psychophysics measurements across a variety of fields (Yeshurun et al., 2008). In a 2-IFC task, such as the target-detection task described previously, the stimulus (or change) can be presented in one of two nonoverlapping temporal intervals, and the observer is rewarded for reporting the interval (first vs. second) in which the stimulus (or change) occurred. It is commonly held that such tasks provide "unbiased" estimates of sensitivity because there is no inherent difference in the cost of making an error between the intervals. The same argument has been applied to the 2-AFC task involving two stimulus alternatives, in which there is no difference in the cost of making an error between the two alternatives.
However, recent research demonstrates that the problem of choice bias is highly relevant for two- (and multi-) interval forced choice designs as well (Macmillan & Creelman, 2005; Yeshurun et al., 2008). Yeshurun et al. (2008), after reanalyzing 17 past experiments of the 2-IFC design, conclude categorically that there is "little evidence supporting the claims that 2-IFC is unbiased" (p. 1837). Moreover, Macmillan and Creelman (2005), in their discussion of bias in interval identification designs, cite, among other examples, a study by Johnson, Watson, and Kelly (1984) who "observed that p(c) [percent correct] was higher for the third interval of a [three-interval] design than for the first. Such a result could arise from either bias or sensitivity changes across intervals" (Macmillan & Creelman, 2005, p. 251). Thus, empirical evidence demonstrates that accounting for bias is fundamental to models of multi-interval or multialternative forced choice designs as well.
A recent study developed a signal detection model with bias for multialternative (and multi-interval) forced choice (m-AFC) tasks (DeCarlo, 2012). How does the m-ADC model differ from the m-AFC model that incorporates bias? 
As illustrated in Figure 8A and B, for the two-alternative case, the key distinction is in the way each model partitions decision space. The 2-ADC model permits one more response category, the NoGo response, than the 2-AFC model. As the figure demonstrates, the 2-AFC model can be reduced to an equivalent, one-dimensional model by a linear transformation of the decision variables (differencing), and, in fact, previous studies have almost always employed such a one-dimensional formulation. On the other hand, no such transformation can reduce the 2-ADC model to an equivalent one-dimensional model because the 2-ADC decision rule is fundamentally two-dimensional. The m-ADC model is a more general version of the m-AFC model with bias; this relationship is formally derived in Appendix F (Supplemental Data). Hence, the analytical results proved above for the m-ADC model (identifiability, optimality) are valid for the m-AFC model with bias as well.
What is the advantage of incorporating a NoGo response in multialternative designs? The m-ADC decision rule specifies that the observer exercises the NoGo response option when there is insufficient evidence for the target stimulus at any location. Thus, NoGo responses in a multialternative detection task provide the natural analog of “No” responses in a simple detection (Yes/No) task. When catch trials are included in a multialternative task design, we have shown, analytically, that employing the m-ADC decision rule with NoGo responses maximizes success. Previous studies have shown that response biases can be induced when observers are required to guess based on equivocal sensory evidence in a forced choice task: Incentivizing a NoGo response when the observer is uncertain of the correct (Go) choice prevents conflating such uncertainty-associated response biases with existing perceptual or decisional biases (García-Pérez & Alcalá-Quintana, 2013; see also next section on “Relationship to the indecision model”). On the other hand, care must be taken not to overincentivize the NoGo response so that the observer does not adopt a strategy of never giving a Go response when weak stimuli are presented; such a strategy could render it difficult to reliably estimate perceptual sensitivity for such stimuli. Thus, the appropriate incentive (reward) for a NoGo response depends, crucially, on a balance of these two factors. 
What is the advantage of incorporating catch trials in multialternative designs? It has been commonly proposed that choice biases occur in trials that are difficult for the observer when sensory evidence is weak or equivocal, for example, at low target stimulus contrasts (Fechner, 1860/1966; Jogan & Stocker, 2014). Our results (Figure 3C and D) demonstrate that, even when choice bias occurs for stimuli of all strengths, the effects of choice bias are simply most apparent at the weaker stimulus strengths (deviation between data and bias-free model for low ξ). At higher strengths, the psychometric functions of models with and without bias approach each other closely (e.g., Figure 3C and D solid vs. dashed curves at ξ > 0.5). Thus, it may be difficult, in practice, to identify the occurrence of choice bias with stimuli of high strengths alone. Catch trials, which are essentially zero stimulus strength trials, provide an elegant and efficient means to identify and account for choice bias in such multialternative tasks. In addition, the inclusion of catch trials reduces the standard error of parameter estimates (García-Pérez & Alcalá-Quintana, 2011b). 
Finally, there is an important advantage to incorporating catch trials and NoGo responses in studies that seek to measure differences in perceptual sensitivity at two different (e.g., cued vs. uncued) locations with a target stimulus of a fixed strength (e.g., at a threshold value identified in preliminary experiments): With a single, non-null, stimulus level at each location and a binary forced choice response, the sensitivities at the two locations become unidentifiable. The geometric intuition for this is as follows: Because there are an infinite number of orthogonal axes (whose origins lie on a Thales circle) that pass through the centers of the signal distributions, the value of perceptual sensitivity at each location cannot be identified; such an identification becomes possible if the origin is fixed based on a referent distribution provided by the catch trials. The relevance for attention tasks is that, although it is possible to measure the animal's accuracy at discriminating between target stimuli at the cued versus uncued locations, it is impossible to determine if the sensitivity for detecting targets (or changes) is different between the cued and uncued locations based on a single value of stimulus strength (or change magnitude) at each location. 
Relationship to the indecision model
The indecision model (García-Pérez & Alcalá-Quintana, 2011b), formerly known as the “difference model with guessing” (García-Pérez & Alcalá-Quintana, 2010), is suitable for analyzing data from ternary choice tasks that permit a NoGo response. Tasks that incorporate such NoGo (or “Don't Know”) responses are often referred to as unforced choice or nonforced choice tasks (Fechner, 1860/1966; García-Pérez & Alcalá-Quintana, 2010; Kaernbach, 2001; Watson, Kellogg, Kawanishi, & Lucas, 1973). The indecision model has been formulated (and applied) to model behaviors in 2-ANFC tasks (García-Pérez & Alcalá-Quintana, 2010, 2011b, 2013). 
We have depicted the indecision model in a two-dimensional decision space to facilitate comparison with the 2-ADC model, specifically to highlight how the two models differ in the way they partition decision space. Nevertheless, the indecision model (along with its decision rule) is essentially one-dimensional and can be readily rendered as such with an appropriate linear transformation of decision variables (ΨS = Ψ1 – Ψ2, Figure 8C). Indeed, previous studies have exclusively employed such a one-dimensional formulation of the indecision model (e.g., García-Pérez & Alcalá-Quintana, 2010, 2011b, 2013), and the same one-dimensional formulation was employed in our analyses that replicated the results of previous studies (Figures 6 and 7). Our m-ADC model is the first and only model, to our knowledge, that can be applied to data from unforced choice tasks with any number (three or more) of alternatives, based on a multidimensional formulation. 
The difference in the two partitioning schemes (Figure 8A vs. C) also highlights an important distinction between the NoGo response alternative in the indecision versus 2-ADC models. In the indecision model, the decision rule specifies that a NoGo response is made when sensory evidence is equivocal about the presence of the target stimulus at either location (i.e., when the observer is uncertain). On the other hand, in the 2-ADC model, the decision rule specifies that a NoGo response is made when sensory evidence is sufficiently strong for no target stimulus at either location. In future experiments, it may be instructive to ask observers to separately report each type of NoGo response. Nevertheless, these differences demonstrate that the m-ADC decision rule is a more natural choice for modeling behavior in multialternative detection tasks in which NoGo responses should reflect the observer's decision that no stimulus was presented rather than her/his uncertainty about where a stimulus was presented. 
To illustrate this point further, consider an observer who performs a 2-ADC task and is aware that the target stimulus is presented at no more than one location in a given trial. If the magnitude of sensory evidence is high at both locations in a given trial, the observer would infer the presence of a stimulus somewhere on the display although she/he may not be able to localize it accurately. In this case, it is reasonable to propose that the observer would report the location at which the signal was the strongest or exceeded the criterion by the greatest magnitude (e.g., Figure 8A, left, blue/Go response region near the upper right corner of decision space): This is the 2-ADC decision rule. In contrast, the indecision model specifies that, in this case, when sensory evidence is comparable across the two locations, the observer would give a NoGo response (e.g., Figure 8C, gray/NoGo response region near the upper right corner of decision space). 
We compared the performance of our 2-ADC model to that of the indecision model, using previously published data in 2-ANFC tasks (García-Pérez & Alcalá-Quintana, 2010, 2011a). Our results revealed that behavior in these tasks could be fit with the 2-ADC model (or its extension, the 2-ADCX model) with comparable goodness of fit to the indecision model (Table 1, Figure 7D). In addition to establishing the empirical validity of the 2-ADC model, these results suggest that the observers' behavioral strategies (2-ADC vs. indecision), by and large, could not be distinguished with these tasks. In many cases, the 2-ADC model outperformed the indecision model based on information criterion scores (AICc or BIC; e.g., Figure 6F, inset, or Table 1); however, in a few cases, such as that of observer #15 in the target-detection experiment, the 2-ADC model fared relatively poorly. Under what conditions might each model fare better or worse in fitting behavior for individual observers? 
As mentioned previously, in the task of García-Pérez and Alcalá-Quintana (2010), observers were instructed to press the NoGo response key when they could not tell in which interval the target had occurred. Despite these clear instructions, observers could have construed the instructions in at least one of two ways: to give a NoGo response when they were unsure in which interval the target occurred (indecision model) or to give a NoGo response when they were certain that no interval contained the target (2-ADC model). In this task, no targets were presented in a third of all trials: This high proportion of catch trials increases the chances that observers followed a 2-ADC decision rule to maximize success (Methods, Appendix D). In such ternary response tasks, it is also possible that observers follow a “hybrid” model, giving NoGo responses both when uncertain of the interval and when certain of not having detected a stimulus in either interval. In this case, performance would be more in accordance with the 2-ADC model at low stimulus strengths when the decision variable distributions approach the “absence” NoGo response region (Figure 8A, gray shading). On the other hand, performance would follow the indecision model at suprathreshold strengths, with NoGo responses indicating “uncertainty.” 
In general, the suitability of each model would also be determined by the payoff matrix (relative costs of errors and benefits for correct responses) for each task. Thus, the indecision model is likely to be suited to fitting data from tasks in which there is a high cost for misidentification (reporting the wrong stimulus location or interval; Table S1B, Supplemental Data) and a relatively low cost for misses. In such a circumstance, a rational observer would refrain from “guessing” when sensory evidence is equivocal for the two stimulus events and would rather give a NoGo response because a miss is less severely penalized. On the other hand, the 2-ADC model is better suited to model the vast majority of detection tasks used in neuroscience studies of attention and decision-making. In these tasks, the cost of a misidentification (Go response to the wrong location in a Go trial) and the cost of a miss (NoGo response in a Go trial) are essentially the same (usually withheld reward). As we have demonstrated, the 2-ADC model is optimal for fitting data from multialternative detection tasks under these conditions. 
Fitting each model to the data raised important issues regarding the differential contribution of sensory versus motor factors to misidentified responses, particularly regarding the validity of interpreting the “finger-error” parameter λ as “nonsensory.” λ has been described as being “irrelevant” to the core indecision model (García-Pérez & Alcalá-Quintana, 2010, their supporting information, p. 4) but is useful for obtaining more accurate estimates of the other model parameters (García-Pérez & Alcalá-Quintana, 2010). Yet we found that behavioral performance for more than two thirds of the observers could not be adequately fit without including this parameter in the indecision model. In contrast, a finger-error term was not necessary for the 2-ADC model: Incorporating this parameter did not improve the model fits, and its estimated value was vanishingly small for most observers in both tasks. 
Finger errors provide a necessary and convenient motoric explanation for misidentified responses in the indecision model. On the other hand, such misidentified responses readily arise from sensory factors in the 2-ADC model. The reason for this key difference is the following: In the indecision model, the two Go response domains do not share a common decision boundary—they are separated by the “indecision” (or indifference) zone (Figure 8C, gray)—whereas in the 2-ADC model, the two Go response domains do share a common boundary (Figure 8A), and misidentified responses can readily arise from sensory noise fluctuations. Our results highlight the potential for sensory confusion to be misinterpreted as motor errors, a possibility that has been overlooked previously. 
Did misidentified responses arise predominantly from sensory factors (according to the 2-ADC model) or motor factors (according to the indecision model)? Because the 2-ADC model without finger errors fit the data as well as or, often, even marginally better than the indecision model with finger errors, either explanation is possible, and the explanations are not mutually exclusive. To resolve this issue, it would be worthwhile to obtain an independent measure of finger errors from each observer (e.g., with a postexperiment questionnaire) in future experiments. Additional independent evidence (e.g., with neural recordings) could also help resolve which factor contributed predominantly to these erroneous responses. 
In fitting the observers' behavior in Experiment #2 (the length-discrimination task), we have demonstrated two key ways in which the 2-ADC model can be extended. First, we showed that by modifying the decision rule, the model could be readily adapted to model behavior in discrimination tasks. Next, we showed that by introducing an additional parameter, we could readily model interactions (competitive or facilitative) among the decision variable components that encode the strengths of the different percepts. This extended 2-ADCX model fit each observer's data as well as did the indecision model (Figure 7D) of equal complexity (five parameters in each model). 
The 2-ADCX model provided evidence for competitive interactions between horizontal and vertical percepts. Such interactions can be modeled only with a two-dimensional formulation (Figure 7C). The indecision model, in its current one-dimensional formulation, cannot model such interactions although future extensions of the model to two (and higher) dimensions may permit such modeling. In the one-dimensional indecision model, competitive interactions would manifest as an inflated value of βt (vice versa for facilitative interactions) as can be inferred from Figure 7C. Indeed we noticed that estimates of βt were consistently higher for the indecision model compared to the 2-ADC or 2-ADCX models (Table 1). This result highlights a key advantage of the two-dimensional formulation of the 2-ADC model: It readily enables modeling interactions that occur among the decision variable components. 
Relationship to general recognition theory and choice theory
Our m-ADC model follows a rich literature on multidimensional (or multichannel) signal detection models within the framework of GRT (Ashby, 1992). In the psychoacoustic and vision literature, GRT models have been widely applied in tasks involving the detection of multiple signals in noise (Ashby, 1992; Ashby & Townsend, 1986). These models are relevant for tasks that implement a feature complete identification design (Macmillan & Creelman, 2005, p. 260). This task design involves discriminating four potential stimulus events (Figure 8D): noise alone (black), each stimulus alone (red or blue), or the compound stimulus (purple, “Both”; Yeshurun et al., 2008). Such a four-way (2 × 2) discrimination simplifies the optimal decision rule (orthogonal pairs of lines) for Gaussian signals and noise (Figure 8D, thick black lines) (Ashby & Townsend, 1986). 
In m-ADC tasks, the stimulus (or change) occurs at no more than one location in a given trial; the last stimulus event (compound, “Both”) of a GRT design is never presented. Thus, the GRT model and decision rule do not apply to m-ADC tasks. In addition, in the absence of interactions among the decision variables, the GRT model can be reduced to two (or multiple) independent one-dimensional formulations (Figure 8D, shown alongside each margin). As mentioned previously, the m-ADC model cannot be thus reduced, essentially, because the decision manifold is irreducibly multidimensional. 
A variety of models for dealing with bias in multialternative tasks have been formulated within the framework of Luce's choice theory (Luce, 1963), and standard methods in textbooks of behavioral analysis account for bias with a choice theory model (Macmillan & Creelman, 2005, p. 250). However, previous studies have favored SDT over choice theory for explaining behavioral data: Although choice theory constrains decision variables to follow a double exponential distribution (or logistic distribution for the binary choice case), SDT provides a more general framework in which the decision variable distribution can be specified based on empirical observations (Treisman & Faulkner, 1985). Moreover, evidence from behavioral data favor the normal distribution associated with the conventional signal detection model (Treisman & Faulkner, 1985). 
Surprisingly, few attempts have been made to deal with bias in multialternative detection tasks within the signal detection framework. Early attempts at two-dimensional “detection and recognition” (or “detection and identification”) models (Swets & Birdsall, 1956; Tanner, 1956), although conceptually similar to the m-ADC model, were geometrically formulated. Later studies attempted to develop a mathematical formalism for these models by treating the decision variable as a random vector (Thomas & Olzak, 1992), akin to the multivariate decision variable in the m-ADC model. These models were formulated for double judgment (detection and identification) tasks. The importance of accounting for bias to avoid spurious conclusions in multidimensional models for such double judgment tasks has been discussed by others (Klein, 1985). However, these early formulations were based on 2 × 2 complete identification designs, i.e., those that incorporate the compound stimulus (Figure 8D) (Olzak & Thomas, 1981; Thomas & Olzak, 1992). 
On the other hand, psychophysical tasks of detection and attention, such as those presented in this study (Figure 1) and elsewhere (Cavanaugh & Wurtz, 2004; Cohen & Maunsell, 2009), do not fit the conventional GRT framework. Hence, although the mathematical formalism in previous multidimensional GRT models resembles the m-ADC model, the decision rule is fundamentally different. The m-ADC model solves the important open problem of accounting for bias in multialternative detection tasks by incorporating a novel, asymmetric decision rule (unequal criteria) in a multidimensional signal detection framework. 
Conclusion
Behavior emerges from a combination of various factors: perceptual, motivational, decisional, and the like. Parsing the respective contributions of each factor to behavior is currently best accomplished by recourse to theoretical frameworks, such as SDT (Carandini & Churchland, 2013). The m-ADC model developed in this study provides a rigorous framework for distinguishing aspects of behavior that arise from changes in perceptual sensitivity from those that arise from changes in choice bias in multialternative detection tasks. Future work will involve extending this model to incorporate the influence of executive and cognitive processes, such as attention, on sensitivity and bias as well as validating and refining the model to describe behavior in other tasks of perceptual decision-making. 
Methods
Linking sensitivities and criteria to 2-ADC response probabilities
In the 2-ADC model, the probability of response at each location (Y = i) for each stimulus event (X) can be derived from the structural model (Equation 1) and decision rule (Equation 2). We illustrate the case for p(Y = 1|X). The other cases may be similarly derived. 
The probability of response at location 1 is the combined probability that the decision variable at location 1 exceeds the choice criterion at that location and that its magnitude (relative to its choice criterion) is the larger of the two locations:
\[ p(Y=1\,|\,X) = p\big(\Psi_1 - c_1 > \Psi_2 - c_2 \ \text{and}\ \Psi_1 > c_1\big), \]
which, upon substitution of the structural model, gives
\[ p(Y=1\,|\,X) = p\big(d_1 X_1 + \varepsilon_1 - c_1 > d_2 X_2 + \varepsilon_2 - c_2 \ \text{and}\ d_1 X_1 + \varepsilon_1 > c_1\big). \]
We condition the above probability on a given value of ε1 = e1:
\[ p(Y=1\,|\,X, \varepsilon_1 = e_1) = \mathcal{H}\big(d_1 X_1 + e_1 - c_1\big)\, F_2\big(d_1 X_1 + e_1 - c_1 + c_2 - d_2 X_2\big), \]
where 𝓗(x) is the Heaviside function, and F2 represents the cumulative distribution function (CDF) of ε2. 
The conditional probability for a response at location 1 is found by integrating over the distribution of e1:
\[ p(Y=1\,|\,X) = \int_{-\infty}^{\infty} \mathcal{H}\big(d_1 X_1 + e_1 - c_1\big)\, F_2\big(d_1 X_1 + e_1 - c_1 + c_2 - d_2 X_2\big)\, f_1(e_1)\, de_1, \]
where f1 represents the probability density function of ε1. 
The Heaviside function may be dropped from the integrand by defining the lower bound of the integral at c1 − d1X1. In other words,
\[ p(Y=1\,|\,X) = \int_{c_1 - d_1 X_1}^{\infty} F_2\big(d_1 X_1 + e_1 - c_1 + c_2 - d_2 X_2\big)\, f_1(e_1)\, de_1. \]
Similarly, the conditional probability of a response at location 2 is given by
\[ p(Y=2\,|\,X) = \int_{c_2 - d_2 X_2}^{\infty} F_1\big(d_2 X_2 + e_2 - c_2 + c_1 - d_1 X_1\big)\, f_2(e_2)\, de_2. \]
In conventional SDT, the noise distribution is assumed to be a unit variance Gaussian (unit normal) distribution. Thus f1 = f2 = ϕ and F1 = F2 = Φ where ϕ and Φ are respectively the probability density and cumulative distribution functions of the unit normal distribution. 
Finally, the conditional probability of a NoGo response, p(Y = 0|X), can be calculated by observing that the NoGo decision region in Figure 2C is simply a quadrant of two-dimensional decision space. Because Ψ1 and Ψ2 are independent, this can be readily shown to be
\[ p(Y=0\,|\,X) = \Phi\big(c_1 - d_1 X_1\big)\, \Phi\big(c_2 - d_2 X_2\big). \]
It can be easily verified that p(Y = 0|X) = 1 – p(Y = 1|X) – p(Y = 2|X). 
These equations together constitute the 2-ADC model system (reproduced in the results as Equation system 3):
\[ p(Y=1\,|\,X) = \int_{c_1 - d_1 X_1}^{\infty} \Phi\big(e + d_1 X_1 - c_1 + c_2 - d_2 X_2\big)\, \phi(e)\, de, \]
\[ p(Y=2\,|\,X) = \int_{c_2 - d_2 X_2}^{\infty} \Phi\big(e + d_2 X_2 - c_2 + c_1 - d_1 X_1\big)\, \phi(e)\, de, \]
\[ p(Y=0\,|\,X) = \Phi\big(c_1 - d_1 X_1\big)\, \Phi\big(c_2 - d_2 X_2\big), \]
where we have replaced e1 and e2 with the variable e because e1 and e2 are simply dummy variables of integration. 
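To make these expressions concrete, the following minimal Python/SciPy sketch evaluates Equation system 3 by numerical integration. (The paper's own implementation is in Matlab; the function name and the parameter values in the example call are ours and purely illustrative.)

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def adc2_response_probabilities(d, c, X):
    """Return (p_go1, p_go2, p_nogo) for a stimulus event X = (X1, X2), with Xi in {0, 1}."""
    d1, d2 = d
    c1, c2 = c
    X1, X2 = X
    # p(Y=1|X): integrate over the noise value e at location 1; the lower bound
    # c1 - d1*X1 plays the role of the Heaviside term in the derivation above.
    p1, _ = quad(lambda e: norm.cdf(e + d1*X1 - c1 + c2 - d2*X2) * norm.pdf(e),
                 c1 - d1*X1, np.inf)
    # p(Y=2|X): the same expression with the roles of the two locations exchanged.
    p2, _ = quad(lambda e: norm.cdf(e + d2*X2 - c2 + c1 - d1*X1) * norm.pdf(e),
                 c2 - d2*X2, np.inf)
    # p(Y=0|X): both decision variable components fall below their criteria.
    p0 = norm.cdf(c1 - d1*X1) * norm.cdf(c2 - d2*X2)
    return p1, p2, p0

# Example (illustrative values): equal sensitivities, unequal criteria (bias toward location 1).
print(adc2_response_probabilities(d=(2.0, 2.0), c=(0.5, 1.0), X=(1, 0)))

The three probabilities returned should sum to one for any stimulus event, which provides a quick internal check of the implementation. 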
Linking sensitivities and criteria to m-ADC response probabilities
We derive the psychometric function at location i (probabilities of response at location Y = i, as a function of stimulus strength ξk at each location k), based on the structural model (Equation 4) and decision rule (Equation 5) in the m-ADC model:
\[ p(Y=i\,|\,\boldsymbol{\xi}) = p\big(\Psi_i - c_i > \Psi_k - c_k \ \forall\, k \neq i \ \text{and}\ \Psi_i > c_i\big). \]
Upon substitution of the structural model, this gives:
\[ p(Y=i\,|\,\boldsymbol{\xi}) = p\big(d_i(\xi_i) + \varepsilon_i - c_i > d_k(\xi_k) + \varepsilon_k - c_k \ \forall\, k \neq i \ \text{and}\ d_i(\xi_i) + \varepsilon_i > c_i\big). \]
Similar to the 2-ADC case, we condition the above probability on a given value of εi = ei:
\[ p(Y=i\,|\,\boldsymbol{\xi}, \varepsilon_i = e_i) = \mathcal{H}\big(d_i(\xi_i) + e_i - c_i\big) \prod_{k \neq i} F_k\big(d_i(\xi_i) + e_i - c_i + c_k - d_k(\xi_k)\big), \]
where 𝓗(x) is the Heaviside function, and Fk represents the cumulative distribution function of εk, the noise associated with the decision variable at location k, Ψk. In deriving this expression, we have used the fact that the Ψk distributions are mutually independent, such that their joint probability density factors into the product of the individual densities. 
The probability of a response at location i is then found by integrating over the probability density of ei (dropping the Heaviside function, as before, by setting the lower bound of the integral at ci − di(ξi)):
\[ p(Y=i\,|\,\boldsymbol{\xi}) = \int_{c_i - d_i(\xi_i)}^{\infty} \prod_{k \neq i} F_k\big(d_i(\xi_i) + e_i - c_i + c_k - d_k(\xi_k)\big)\, f_i(e_i)\, de_i. \]
Analogous to the two-dimensional case, the conditional probability of a NoGo response, p(Y = 0|ξ), can be calculated by observing that the NoGo decision region in the m-ADC model is simply an orthant (one of the 2^m "quadrants") of m-dimensional decision space. Again, based on the independence of the Ψi-s, this can be shown to be
\[ p(Y=0\,|\,\boldsymbol{\xi}) = \prod_{k=1}^{m} F_k\big(c_k - d_k(\xi_k)\big). \]
Again, in line with conventional SDT, we assume unit normal noise distributions. Thus, fi = ϕ and Fi = Φ. Replacing the dummy variables of integration ei with e, we have
\[ p(Y=i\,|\,\boldsymbol{\xi}) = \int_{c_i - d_i(\xi_i)}^{\infty} \prod_{k \neq i} \Phi\big(e + d_i(\xi_i) - c_i + c_k - d_k(\xi_k)\big)\, \phi(e)\, de, \]
\[ p(Y=0\,|\,\boldsymbol{\xi}) = \prod_{k=1}^{m} \Phi\big(c_k - d_k(\xi_k)\big). \]
This constitutes the m-ADC model system of equations relating the psychometric function of each response (Go or NoGo) to the psychophysical function (dj(ξj)) and criterion cj at each location, j (reproduced in the results as Equation system 6). 
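As a hedged illustration of Equation system 6 for arbitrary m (again in Python; the function and variable names are ours, and unit-variance Gaussian noise is assumed, as in the text):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def madc_response_probabilities(d_xi, c):
    """d_xi[k] = d_k(xi_k), the sensitivity to the stimulus at location k (0 on catch trials);
    c[k] is the criterion at location k. Returns [p(Y=1), ..., p(Y=m), p(Y=0)]."""
    d_xi = np.asarray(d_xi, dtype=float)
    c = np.asarray(c, dtype=float)
    m = len(c)
    probs = []
    for i in range(m):
        others = np.array([k for k in range(m) if k != i], dtype=int)
        def integrand(e, i=i, others=others):
            # Product over k != i of Phi(e + d_i(xi_i) - c_i + c_k - d_k(xi_k)), times phi(e).
            return np.prod(norm.cdf(e + d_xi[i] - c[i] + c[others] - d_xi[others])) * norm.pdf(e)
        p_i, _ = quad(integrand, c[i] - d_xi[i], np.inf)
        probs.append(p_i)
    probs.append(float(np.prod(norm.cdf(c - d_xi))))  # NoGo: every component below its criterion
    return probs

# Example (illustrative values): a 4-ADC catch trial with a lower criterion (bias) at location 1.
print(madc_response_probabilities(d_xi=[0, 0, 0, 0], c=[0.5, 1.0, 1.0, 1.0]))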
MLE and MCMC estimation of sensitivities and criteria in the m-ADC model
Simulations and parameter estimation
2-ADC responses were simulated as follows: Response probabilities were computed from Equation system 3 based on the set of criteria and sensitivities specified in Table S2A (Supplemental Data). We denote the probability of a response to location r for a stimulus event at location s as p_rs. Response counts for each stimulus–response contingency were generated with random sampling from a multinomial distribution defined by the p_rs. This procedure was repeated for 20 simulated experimental blocks (or “runs”) with 100 trials for each of the two stimulus events and 200 catch trials per run (a total of N = 4,000 trials in 20 blocks). The resulting total response counts, n_rs (Table S2B, Supplemental Data), were provided as input to numerical optimization algorithms for parameter recovery. 
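For illustration, this simulation step might be written as follows (Python; it reuses the adc2_response_probabilities sketch given earlier, and the generative parameter values below are placeholders rather than the values of Table S2A):

import numpy as np

rng = np.random.default_rng(0)            # seeded generator for reproducibility
n_blocks = 20
d_true, c_true = (1.5, 1.0), (0.6, 1.2)   # hypothetical generative sensitivities and criteria

# Stimulus events per block: stimulus at location 1, at location 2, or catch (no stimulus).
events = {(1, 0): 100, (0, 1): 100, (0, 0): 200}

counts = {}  # stimulus event -> summed response counts [n_go1, n_go2, n_nogo] across blocks
for X, n_trials in events.items():
    p = adc2_response_probabilities(d_true, c_true, X)
    counts[X] = sum(rng.multinomial(n_trials, p) for _ in range(n_blocks))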
We employed two approaches: (a) maximum likelihood estimation (MLE) with a line search (ML-LS) algorithm or (b) Bayesian estimation based on a Markov Chain Monte Carlo (MCMC) approach with the Metropolis algorithm (Methods). The ML-LS algorithm is an efficient approach for MLE but could converge onto a local extremum of the objective function. The MCMC algorithm, although comparatively slower, has a component of stochastic sampling (Methods) and, hence, a better chance of finding global minima. In addition, the MCMC approach provides a full posterior distribution over parameter values that is useful for testing for significant differences across experimental conditions. 
Both the ML-LS (Figure 3A and B) and MCMC algorithms (Figure S3A and B, Supplemental Data) converged reliably onto identical values of the four parameters ({di, ci}, i ∈ {1, 2}) for various initial guesses (Table S2C, Supplemental Data). In these figures, the search trajectory in four-dimensional parameter space is depicted as two two-dimensional trajectories, one for each pair of criterion and sensitivity parameters. 
The MCMC algorithm required an initial burn-in period (about 500 iterations, Figure S3C) to converge to a stable parameter set; the chi-square error value reduced and the log-likelihood value increased systematically over successive iterations (Figure S3D). The posterior distribution was generated with the parameter values from the last 1,000 iterations, well after the burn-in period of the MCMC algorithm (Figure S3E, Methods). Error estimates of the parameters were also highly similar between the two estimation approaches (Table S2C). 
2-ADC psychometric functions (Figure 3C and D) were simulated as follows: Response probabilities were computed from Equation system 6 with a hyperbolic ratio psychophysical function based on the set of parameters specified in Table S3A (Supplemental Data). The simulated psychometric function was sampled at six equally spaced values of contrast (ξk ∈ [0, 100]) with 50% catch trials and 25% stimulus trials at each of the two locations; this process was repeated for 100 simulated experimental blocks (1,000 trials per contrast value for each simulation). As before, we denote these by p_rs(ξk) (Figure 3C and D, circles; error bars denote standard deviations across simulation blocks), corresponding to the probability of response at location r when a stimulus is presented at location s with contrast ξk (k = 1–6). 
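The sketch below assumes the standard hyperbolic ratio form for the psychophysical function, with the parameters (dmax, n, ξ50) named later in this section; the specific parameter values, and the reuse of madc_response_probabilities from the earlier sketch (with m = 2), are purely illustrative:

import numpy as np

def hyperbolic_ratio(xi, d_max, n, xi50):
    """Psychophysical function d(xi): sensitivity as a function of stimulus strength."""
    return d_max * xi**n / (xi**n + xi50**n)

# Sample a 2-ADC psychometric function at six equally spaced contrasts (stimulus at location 1).
for xi in np.linspace(0, 100, 6):
    d1 = hyperbolic_ratio(xi, d_max=3.0, n=2.0, xi50=30.0)
    p = madc_response_probabilities(d_xi=[d1, 0.0], c=[0.8, 1.0])
    print(f"contrast {xi:5.1f}: p(Y=1) = {p[0]:.3f}, p(Y=0) = {p[2]:.3f}")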
In each case, we evaluated response probabilities (Equations 3 and 6) with numerical integration. Because the normal distribution has infinite support, the integrands on the right-hand sides of these equations must be integrated up to an upper limit of plus infinity, which standard finite-interval quadrature cannot handle directly. We therefore used adaptive Gauss-Kronrod quadrature (the quadgk function in Matlab), which supports infinite integration limits, to evaluate these integrals. 
Algorithm implementation
The ML-LS algorithm was implemented by minimizing the negative of the log-likelihood function with an unconstrained minimization algorithm (fminunc, in Matlab's Optimization Toolbox). The optimization algorithm also yields a numerical approximation to the Hessian matrix. Standard errors based on ML-LS estimation were derived as the square root of the diagonal elements of the inverse of this Hessian matrix. 
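A Python analogue of this estimation step (using scipy.optimize in place of fminunc; it reuses counts and adc2_response_probabilities from the earlier sketches, and the starting guess is arbitrary):

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, counts):
    """Negative multinomial log-likelihood (up to an additive constant) for the 2-ADC model.
    theta = [d1, d2, c1, c2]; counts maps each stimulus event to [n_go1, n_go2, n_nogo]."""
    d, c = (theta[0], theta[1]), (theta[2], theta[3])
    nll = 0.0
    for X, n_rs in counts.items():
        p = np.clip(adc2_response_probabilities(d, c, X), 1e-12, 1.0)  # guard against log(0)
        nll -= np.dot(n_rs, np.log(p))
    return nll

fit = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0, 0.5, 0.5]),
               args=(counts,), method="BFGS")
standard_errors = np.sqrt(np.diag(fit.hess_inv))  # from the (approximate) inverse Hessian
print(fit.x, standard_errors)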
Our algorithm for MLE differs from the previously published algorithm for the related m-AFC task (DeCarlo, 2012), in which each response variable was modeled with an independent Bernoulli distribution. In contrast, we model the responses to each stimulus event as arising from a trinomial distribution (for the 2-ADC model) or, in general, a multinomial distribution (for the m-ADC model); parameter estimates are based on maximizing the trinomial/multinomial likelihood function. 
The MCMC algorithm (Metropolis sampling) was custom implemented in Matlab for estimating sensitivity and criteria from simulated response counts n_rs (denoting the number of responses to location r for a stimulus at location s). In the following, Ns denotes the total number of trials for each stimulus event s, and the symbol di is used as a general notation either for the sensitivity di when estimation was performed at a single value of stimulus strength or for the collection of psychophysical parameters (dmax, n, ξ50)i when estimation was performed with the entire psychometric function. 
The MCMC algorithm proceeds with the following steps: (a) Generate an initial guess for the parameters (d_i^r, c_i^r), where the superscript r denotes the reference set; designate this as the reference parameter set. Determine response probabilities from Equation system 3 based on this set; we denote these probabilities by p_rs^r. (b) Compute the likelihood value 𝓛r, assuming that the responses n_rs follow a multinomial distribution with parameters Ns and p_rs^r. (c) Generate a new guess for the parameters (d_i^n, c_i^n) based on a transition probability distribution for the parameters. (d) Determine response probabilities and the associated likelihood value 𝓛n based on the new guess. (e) Compute a likelihood ratio based on the older and newer guesses: 𝓛R = 𝓛n/𝓛r. (f) Accept the new guess for the parameters with a probability a that depends on the magnitude of the likelihood ratio, a = min(𝓛R, 1). Once accepted, the new set of parameters becomes the reference set, and the likelihood value based on the last set of accepted parameters is used as the reference value (𝓛r). (g) Repeat steps (c) through (f) until convergence. 
We used Metropolis sampling of parameter space based on a symmetric, multivariate transition probability distribution (Gaussian with standard deviation σ = 0.02 in each dimension). The MCMC simulation proceeded until the algorithm converged on a specific set of parameters di, ci, i ∈ {1, 2} in four-dimensional space. The algorithm was determined to have converged when the value of 𝓛 and the chi-square error function changed by less than 2% over at least 100 consecutive iterations. The burn-in period was generally achieved within about 500 iterations (e.g., Figure S3C and D). Posterior distributions were computed based on parameter values between iterations 1,000 and 2,000. Standard errors for the parameters and 95% credible intervals reported (Table S2C, Supplemental Data) were based on the standard deviation and the [2.5–97.5] percentile of the posterior distributions. 
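A compact Python rendering of steps (a) through (g), with the proposal width and iteration counts described above (it reuses neg_log_likelihood and counts from the preceding sketches and assumes a flat prior, so the acceptance probability reduces to the likelihood ratio):

import numpy as np

def metropolis_2adc(counts, n_iter=2000, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    theta_ref = np.array([1.0, 1.0, 0.5, 0.5])            # initial guess for (d1, d2, c1, c2)
    loglik_ref = -neg_log_likelihood(theta_ref, counts)    # reference log-likelihood
    samples = []
    for _ in range(n_iter):
        # Symmetric Gaussian proposal with standard deviation sigma in each dimension.
        theta_new = theta_ref + rng.normal(0.0, sigma, size=theta_ref.size)
        loglik_new = -neg_log_likelihood(theta_new, counts)
        # Accept with probability min(1, likelihood ratio), evaluated in log space.
        if np.log(rng.uniform()) < loglik_new - loglik_ref:
            theta_ref, loglik_ref = theta_new, loglik_new
        samples.append(theta_ref.copy())
    return np.array(samples)

posterior = metropolis_2adc(counts)[1000:]   # discard burn-in; keep iterations 1,000-2,000
print(posterior.mean(axis=0), posterior.std(axis=0))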
In the numerical estimation, the parameters {di, ci} were permitted to take both positive and negative values (unconstrained optimization); no constraint was placed on their sign or magnitude. However, negative values of sensitivity parameters (di) lack physical meaning. We repeated the estimation by constraining sensitivity parameters to take only positive values (with the constrained optimization function fmincon in Matlab or with a custom-implemented MCMC Metropolis-Hastings algorithm); this analysis yielded sensitivity estimates that matched those obtained with the unconstrained optimization approaches. 
Although a detailed analysis of the accuracy with which parameters can be recovered is pending, parameters were reliably estimated with both simulated and real behavioral data; parameter values and standard errors were comparable to those estimated with the indecision model (Figures 6B and D). Matlab code for m-ADC model parameter estimation (MLE and MCMC algorithms) can be downloaded at the following location: http://purl.stanford.edu/mc140xy0456
Optimal decision surfaces in the m-ADC model are hyperplanes in m-dimensional decision space
For maximizing success or, more generally, when the benefit or cost of making an erroneous response is the same for all stimulus–response contingencies, optimal decision surfaces for additive signals and noise are isosurfaces of the posterior odds ratio (surfaces of constant posterior odds ratio). We demonstrate this result in Appendix D.1 (Supplemental Data). Here, we derive the equations for isosurfaces of the (log) posterior odds ratio and show that these are identical to the decision boundaries in the m-ADC model. 
For the m-ADC model, the decision variable (signal and noise) distributions at each location (Equation 4) can be expressed as components of a multivariate (m-dimensional) Gaussian random variable Ψ = [Ψ1, Ψ2, …, Ψm] with a diagonal (identity) covariance matrix. The density of such a multivariate Gaussian variable Ψ with mean d(ξ) = [d(ξ1), d(ξ2), …, d(ξm)] and covariance matrix Σ (Σii = 1, Σij = 0, i, j ∈ {1, …, m}, i ≠ j) can be written as
\[ \mathcal{N}_m\big(\boldsymbol{\Psi};\, \mathbf{d}(\boldsymbol{\xi}),\, \Sigma\big) = A \exp\Big(-\tfrac{1}{2} \sum_{k=1}^{m} \big(\Psi_k - d(\xi_k)\big)^2\Big), \]
where 𝒩m is the m-dimensional Gaussian density function, and A is a normalization constant in order for 𝒩m to be a probability density (A = 1/(2π)^{m/2}). Here, for simplicity of notation, we drop the subscript from di and posit that the psychophysical function is the same at all locations, although the results hold even without this assumption. 
During catch trials, when no stimulus is presented (‖ξ‖₁ = 0), the decision variable (noise) distribution is given by
\[ p\big(\boldsymbol{\Psi} \,\big|\, \|\boldsymbol{\xi}\|_1 = 0\big) = A \exp\Big(-\tfrac{1}{2} \sum_{k=1}^{m} \Psi_k^2\Big). \]
During stimulus trials, when a stimulus is presented at location j with strength ξj, the decision variable (signal) distribution is given by
\[ p\big(\boldsymbol{\Psi} \,\big|\, \xi_j\big) = A \exp\Big(-\tfrac{1}{2}\Big[\big(\Psi_j - d(\xi_j)\big)^2 + \sum_{k \neq j} \Psi_k^2\Big]\Big). \]
Thus, the log-likelihood ratio of a stimulus at location j (with strength ξj) versus no stimulus anywhere (catch) is given by
\[ \log \frac{p\big(\boldsymbol{\Psi} \,\big|\, \xi_j\big)}{p\big(\boldsymbol{\Psi} \,\big|\, \|\boldsymbol{\xi}\|_1 = 0\big)} = \Psi_j\, d(\xi_j) - \tfrac{1}{2}\, d(\xi_j)^2. \]
The posterior odds ratio is obtained by multiplying the prior odds ratio with the likelihood ratio. The prior odds of a stimulus presentation at location j with strength ξj versus no stimulus is denoted by pξj/p0 = p(ξj, ξk = 0 ∀ k ≠ j)/p(‖ξ‖₁ = 0). Thus, the log-posterior odds ratio is obtained by adding log(pξj/p0) to the log-likelihood ratio:
\[ \Lambda_{j0} = \Psi_j\, d(\xi_j) - \tfrac{1}{2}\, d(\xi_j)^2 + \log\big(p_{\xi_j}/p_0\big). \]
Optimal decision surfaces for reporting a stimulus of strength ξj at location j versus no stimulus are surfaces of constant Λj0 (see Appendix D.1, Supplemental Data):
\[ \Psi_j\, d(\xi_j) - \tfrac{1}{2}\, d(\xi_j)^2 + \log\big(p_{\xi_j}/p_0\big) = \text{constant}, \]
that is, surfaces on which
\[ \Psi_j = c_j, \qquad \text{(Equation 7)} \]
where cj collects the constant terms. 
Thus, these optimal surfaces are hyperplanes of constant Ψj. The specification of a cutoff criterion at Ψj = cj, as in the m-ADC model, corresponds to the observer employing a decision boundary from among this family of optimal decision surfaces. The precise choice of cj would depend on the cost/utility of choosing each alternative (βj0, see Appendix D.1) and the prior odds ratio as well as the perceptual sensitivity to that stimulus (d(ξj)). Specifically, when the prior odds, relative costs, and stimulus strength at each location remain constant across trials, the optimal value of cj also remains constant across trials. 
Next, we calculate the log-likelihood ratio for a stimulus of strength ξi at location i versus a stimulus of strength ξj at location j. This is given by
\[ \log \frac{p\big(\boldsymbol{\Psi} \,\big|\, \xi_i\big)}{p\big(\boldsymbol{\Psi} \,\big|\, \xi_j\big)} = \Psi_i\, d(\xi_i) - \Psi_j\, d(\xi_j) - \tfrac{1}{2}\big(d(\xi_i)^2 - d(\xi_j)^2\big). \]
As before, the log-posterior odds ratio is given by
\[ \Lambda_{ij} = \Psi_i\, d(\xi_i) - \Psi_j\, d(\xi_j) - \tfrac{1}{2}\big(d(\xi_i)^2 - d(\xi_j)^2\big) + \log\big(p_{\xi_i}/p_{\xi_j}\big). \]
Optimal decision surfaces for reporting a stimulus at location i versus a stimulus at location j are surfaces of constant Λij (see Appendix D.1, Supplemental Data):
\[ \Psi_i\, d(\xi_i) - \Psi_j\, d(\xi_j) = B_{ij}, \qquad \text{(Equation 8)} \]
where Bij collects the constant terms. 
Thus, these optimal decision surfaces are hyperplanes of constant Ψi d(ξi) – Ψj d(ξj) = Bij
To determine the value of this constant, we demonstrate the following result: Optimal decision surfaces defined by Equations 7 and 8 intersect at a point (proved in Appendix D.2, Supplemental Data). Even without a formal demonstration, it is apparent that if these surfaces did not intersect at a point, the decision space could contain domains in which the optimal decision is not uniquely specified. 
Given this, each of the decision surfaces defined by Equation 8 must pass through the point of intersection of the optimal decision surfaces defined in Equation 7, given by (Ψi, Ψj) = (ci, cj). Hence, the constant Bij = ci d(ξi) – cj d(ξj), and the optimal decision hyperplane is given by Ψi d(ξi) – Ψj d(ξj) = ci d(ξi) – cj d(ξj). 
Specifically, when d(ξi) = d(ξj) = d, i.e., the perceptual sensitivities at the two locations are equal (and constant), these decision surfaces are planes of constant Ψi – Ψj = ci – cj. Thus, in this case, the decision surfaces in the m-ADC model (constant Ψi – Ψj) belong to the family of optimal decision surfaces for detecting a stimulus at location i versus at location j. 
We summarize below the equations for each optimal decision boundary under the conditions described above:
\[ \Psi_j = c_j \qquad \text{(stimulus at location } j \text{ vs. no stimulus)}, \]
\[ \Psi_i - \Psi_j = c_i - c_j \qquad \text{(stimulus at location } i \text{ vs. stimulus at location } j\text{, for equal sensitivities)}. \]
These are hyperplanes in m-dimensional decision space as specified by the m-ADC decision rule (Equation 5). 
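As an informal check on this summary, the short Python sketch below verifies numerically that, for equal sensitivities at all locations and criteria set from the prior odds (the maximizing-success case, in which the constant in Equation 7 is zero), the m-ADC rule reproduces the maximum-posterior decision at randomly drawn points in decision space. All numerical values are illustrative.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, d = 3, 1.5                                      # three locations, equal sensitivity d
priors = np.array([0.25, 0.30, 0.20, 0.25])        # [p0 (catch), p1, ..., pm]; sums to 1
c = d / 2.0 - np.log(priors[1:] / priors[0]) / d   # criteria that zero the log posterior odds

for _ in range(2000):
    psi = rng.normal(size=m)                       # a random point in m-dimensional decision space
    # Log posterior (up to a shared constant): log prior plus Gaussian log-likelihood.
    log_post = [np.log(priors[0]) + norm.logpdf(psi).sum()]
    for j in range(m):
        mean = np.zeros(m)
        mean[j] = d
        log_post.append(np.log(priors[j + 1]) + norm.logpdf(psi - mean).sum())
    bayes_choice = int(np.argmax(log_post))         # 0 = NoGo, j = Go to location j
    # m-ADC rule: choose the location with the largest margin Psi_j - c_j, if that margin is positive.
    margins = psi - c
    madc_choice = int(np.argmax(margins)) + 1 if margins.max() > 0 else 0
    assert bayes_choice == madc_choice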
Supplementary Materials
Acknowledgments
We would like to thank Stanley Klein, Lynn Olzak, and Lawrence DeCarlo for useful pointers and Alireza Soltani, Peiran Gao, and Marc Zirnsak for helpful discussions. We also thank Miguel García-Pérez and an anonymous reviewer for their careful reading and detailed comments on previous versions of this manuscript. This research was supported by a Stanford School of Medicine Dean's Postdoctoral Fellowship (DS), Stanford MBC IGERT Fellowship, NSF Graduate Research Fellowship (NAS), NIH Grant EY014924 (TM), and NIH Grant EY024243 (EIK). 
Commercial relationships: none. 
Corresponding author: Devarajan Sridharan. 
Email: dsridhar@stanford.edu. 
Address: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA. 
References
Ashby, F. G. (1992). Multidimensional models of perception and cognition. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual independence. Psychological Review, 93(2), 154–179.
Benjamin, A. S., Diaz, M., & Wee, S. (2009). Signal detection with criterion noise: Applications to recognition memory. Psychological Review, 116(1), 84–115.
Brunton, B. W., Botvinick, M. M., & Brody, C. D. (2013). Rats and humans can optimally accumulate evidence for decision-making. Science, 340(6128), 95–98.
Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. New York: Springer-Verlag.
Carandini, M., & Churchland, A. K. (2013). Probing perceptual decisions in rodents. Nature Neuroscience, 16(7), 824–831.
Carpenter, R. H., & Williams, M. L. (1995). Neural computation of log likelihood in control of saccadic eye movements. Nature, 377(6544), 59–62.
Cavanaugh, J., & Wurtz, R. H. (2004). Subcortical modulation of attention counters change blindness. Journal of Neuroscience, 24(50), 11236–11243.
Churchland, A. K., & Ditterich, J. (2012). New advances in understanding decisions among multiple alternatives. Current Opinion in Neurobiology, 22(6), 920–926.
Cohen, M. R., & Maunsell, J. H. (2009). Attention improves performance primarily by reducing interneuronal correlations. Nature Neuroscience, 12(12), 1594–1600.
DeCarlo, L. T. (2012). On a signal detection approach to m-alternative forced choice with bias, with maximum likelihood and Bayesian approaches to estimation. Journal of Mathematical Psychology, 56(3), 196–207.
DeCarlo, L. T. (2010). On the statistical and theoretical basis of signal detection theory and extensions: Unequal variance, random coefficient, and mixture models. Journal of Mathematical Psychology, 54, 304–313.
Fechner, G. (1966). Elements of psychophysics. New York: Holt, Rinehart and Winston. (Original work published 1860)
García-Pérez, M. A., & Alcalá-Quintana, R. (2010). The difference model with guessing explains interval bias in two-alternative forced-choice detection procedures. Journal of Sensory Studies, 25(6), 876–898.
García-Pérez, M. A., & Alcalá-Quintana, R. (2011a). Improving the estimation of psychometric functions in 2AFC discrimination tasks. Frontiers in Psychology, 2(96), 1–9.
García-Pérez, M. A., & Alcalá-Quintana, R. (2011b). Interval bias in 2AFC detection tasks: Sorting out the artifacts. Attention, Perception, & Psychophysics, 73(7), 2332–2352.
García-Pérez, M. A., & Alcalá-Quintana, R. (2013). Shifts of the psychometric function: Distinguishing bias from perceptual effects. The Quarterly Journal of Experimental Psychology, 66(2), 319–337.
Gold, J. I., Law, C. T., Connolly, P., & Bennur, S. (2008). The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. Journal of Neurophysiology, 100(5), 2653–2668.
Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: John Wiley and Sons.
Hanks, T. D., Mazurek, M. E., Kiani, R., Hopp, E., & Shadlen, M. N. (2011). Elapsed decision time affects the weighting of prior probability in a perceptual decision task. Journal of Neuroscience, 31(17), 6339–6352.
Herrmann, K., Montaser-Kouhsari, L., Carrasco, M., & Heeger, D. J. (2010). When size matters: Attention affects performance by contrast or response gain. Nature Neuroscience, 13(12), 1554–1559.
Jogan, M., & Stocker, A. A. (2014). A new two-alternative forced choice method for the unbiased characterization of perceptual bias and discriminability. Journal of Vision, 14(3):20, 1–18, http://www.journalofvision.org/content/14/3/20, doi:10.1167/14.3.20.
Kaernbach, C. (2001). Adaptive threshold estimation with unforced-choice tasks. Perception & Psychophysics, 63(8), 1377–1388.
Katkov, M., Tsodyks, M., & Sagi, D. (2006). Singularities in the inverse modeling of 2AFC contrast discrimination data. Vision Research, 46(1–2), 259–266.
Klein, S. A. (1985). Double-judgment psychophysics: Problems and solutions. Journal of the Optical Society of America A, 2(9), 1560–1585.
Klein, S. A. (2001). Measuring, estimating, and understanding the psychometric function: A commentary. Perception and Psychophysics, 63(8), 1421–1455.
Lee, J., & Maunsell, J. H. (2009). A normalization model of attentional modulation of single unit responses. PLoS One, 4(2), e4651.
Luce, R. D. (1963). Detection and recognition. In Luce, R. D., Bush, R. R., & Galanter, E. (Eds.), Handbook of mathematical psychology, Vol. 1 (pp. 103–189). New York: Wiley.
Macmillan, N. A., & Creelman, D. C. (2005). Detection theory: A user's guide. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
McPeek, R. M., & Keller, E. L. (2004). Deficits in saccade target selection after inactivation of superior colliculus. Nature Neuroscience, 7(7), 757–763.
Middleton, D., & Meter, D. (1955). On optimum multiple-alternative detection of signals in noise. IRE Transactions on Information Theory, 1(2), 1–9.
Mulder, M. J., Wagenmakers, E. J., Ratcliff, R., Boekel, W., & Forstmann, B. U. (2012). Bias in the brain: A diffusion model analysis of prior probability and potential payoff. Journal of Neuroscience, 32(7), 2335–2343.
Niwa, M., & Ditterich, J. (2008). Perceptual decisions between multiple directions of visual motion. Journal of Neuroscience, 28(17), 4435–4445.
Olzak, L. A., & Thomas, J. P. (1981). Gratings: Why frequency discrimination is sometimes better than detection. Journal of the Optical Society of America A, 71(1), 64–70.
Ray, S., & Maunsell, J. H. (2010). Differences in gamma frequencies across visual cortex restrict their possible use in computation. Neuron, 67(5), 885–896.
Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61(2), 168–185.
Sridharan, D., Ramamurthy, D. L., & Knudsen, E. I. (2013). Spatial probability dynamically modulates visual target detection in chickens. PLoS One, 8(5), e64136.
Swets, J., & Birdsall, T. G. (1956). The human use of information III: Decision-making in signal detection and recognition situations involving multiple alternatives. IRE Transactions on Information Theory, 2(3), 138–165.
Tanner, J. W. P. (1956). Theory of recognition. Journal of the Acoustical Society of America, 28, 882–888.
Thomas, J. P., & Olzak, L. A. (1992). Simultaneous detection and identification. In Ashby, F. G. (Ed.), Multidimensional models of perception and cognition (pp. 253–278). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Treisman, M., & Faulkner, A. (1985). On the choice between choice theory and signal detection theory. The Quarterly Journal of Experimental Psychology, 37A, 387–405.
Watson, C., Kellogg, S., Kawanishi, D., & Lucas, P. (1973). The uncertain response in detection-oriented psychophysics. Journal of Experimental Psychology, 99(2), 180–185.
Yeshurun, Y., Carrasco, M., & Maloney, L. T. (2008). Bias and sensitivity in two-interval forced choice procedures: Tests of the difference model. Vision Research, 48(17), 1837–1851.
Zenon, A., & Krauzlis, R. J. (2012). Attention deficits without cortical neuronal deficits. Nature, 489(7416), 434–437.
Figure 1
 
Multialternative detection task. (A) 2-ADC task. The observer initiates a trial by fixating on a zeroing dot. In some trials (“stimulus” trials, upper sequence), a target stimulus (here, a grating) is briefly presented at one of two potential locations (dashed black circles) on the screen. The observer is rewarded for detecting and indicating the location of the target with a saccade (blue line, “Go” response) to the appropriate response box (dashed yellow circles). In other trials (“catch” trials, lower sequence), no target is presented for a prolonged period following fixation. In these trials, the observer is rewarded for maintaining fixation on the zeroing dot (“NoGo” response) following the appearance of the response boxes. (B) m-ADC task. Following fixation of a central dot, the observer is presented with m (here, m = 4) oriented gratings. At a random time following stimulus onset, the display goes blank briefly (a few hundred milliseconds). Then, the four stimuli reappear. In some proportion of the trials, one of the four gratings changes in orientation (change trials), and in the remaining trials, none of the stimuli changes (catch trials). The observer is rewarded for saccading to the location of the change (change trials) or for maintaining fixation in trials when no change occurred (catch trials).
Figure 2
 
Signal detection models for the multialternative detection task. (A) A simple detection (Yes/No) task modeled with a binary choice (one-dimensional) signal detection model. Black Gaussian: decision variable distribution when no stimulus was presented, p(Ψ|N); red Gaussian: decision variable distribution when a stimulus was presented, p(Ψ|S). Red shading: Hit rate; hatched region: False-alarm rate; d: perceptual sensitivity for detection; c: choice criterion for a Yes response. (B) Performance in a 2-ADC task modeled with two one-dimensional binary choice models. Top row: Behavior modeled as a two-stage decision with a binary one-dimensional model for each stage. In the first stage, the observer decides if a stimulus was presented at all (N vs. S1 or S2), based on the value of a decision variable (Ψ) as in the conventional Yes/No task. In the next stage, the observer decides whether the stimulus was presented at location 1 or location 2 based on the value of a different decision variable (Ψ*) as in the conventional 2-AFC task (see text for details). Bottom row: Behavior modeled with two binary choice (Yes/No) one-dimensional models, one at each potential target location. Decisions are based on independent decision variables (Ψ1, Ψ2), sensitivities (d1, d2), and criteria (c1, c2) at each location. This is a mis-specified model for the 2-ADC task (see text for details). Hatched region: False-alarm rate; gray shading: miss rate. (C) Two-dimensional signal-detection model for the 2-ADC task. The decision is based on a bivariate decision variable Ψ whose components (Ψ1 and Ψ2) encode sensory evidence at each stimulus location and are represented along orthogonal axes in a two-dimensional decision space. Decision variable components are independently distributed Gaussians. Black circle: contour of the joint distribution of the decision variable components for no stimulus at either location (noise distribution). Red and blue circles: contour of the joint distribution of the decision variable components for a stimulus at location 1 or location 2, respectively (signal distributions). Linear decision boundaries (thick black lines) demarcate the domains of decision space for each potential response or choice; these belong to the family of optimal decision surfaces for this model (see text for details). The integral of the decision variable distribution within each region represents the probability of the corresponding response: NoGo (Y = 0, gray), Go response to location 1 (Y = 1, red) or to location 2 (Y = 2, blue). Marginal distributions of each decision variable component are shown alongside each axis.
Figure 3
 
Estimating sensitivities and criteria from simulated responses. (A–B) Maximum likelihood estimation (MLE) of the perceptual sensitivity (A) and choice criterion (B) at each location from simulated response counts for a two-alternative detection task (Table S2B, Supplemental Data). Beginning with an initial guess for each parameter, the algorithm uses a line-search method to identify the sensitivities and criteria that maximize the likelihood of the simulated response counts. For various initial guesses (colored diamonds), the MLE algorithm converged reliably onto identical sensitivity and criterion values at each location (black circles/dashed gray lines). (C) Psychometric functions of the probability of response at location 1 as a function of the contrast of a stimulus presented at location 1 (red circles) or at location 2 (blue circles). Error bars: Standard deviation across simulated runs (N = 100). Solid curves: Psychometric functions based on fitting a model that incorporated bias. Dashed curves: Fits with a model that did not incorporate bias. (D) Same as in (C) but for the response probability at location 2. (E) Same as in (C) but with data and fits pooled across locations as “correct” (hit, black) and “incorrect” (misidentification, green) responses.
Figure 4
 
2-ADC model identifiability. (A) Contour plot of the 2-ADC multinomial log-likelihood as a function of the sensitivities (d1, d2) at the two locations. (B) Contour plot of the 2-ADC multinomial log-likelihood as a function of the criteria (c1, c2). The concavity of the function is apparent throughout the domain of parameters shown. (C) The variation of log-likelihood with sensitivity at each location for fixed values of the other parameters (sensitivity at the other location and the two criteria, cross section through the dashed white lines of panels A–B). Dashed gray lines: values of the parameters that maximize the log-likelihood function; red data: location 1; blue data: location 2. (D) Same as (C) but variation with the criterion at each location for fixed values of the other parameters (criterion at the other location and the two sensitivities). (E) Probability of response during catch trials to location 1 (left), location 2 (middle), or NoGo (right) as a function of the choice criterion at each location. Colored lines: The contour traversing all possible pairs of criteria consistent with a specific value of each response probability; red: probability of a Go response to location 1; blue: probability of a Go response to location 2; green: probability of a NoGo response. (F) The three contours (red, blue, green) intersect at a single point indicating that exactly one set of criteria is consistent with a given set of response probabilities. Arrows: Specific values of NoGo and Go response probabilities at each location and the unique pair of criteria that is consistent with this specific set of response probabilities.
Figure 5
 
Model identifiability and optimality. (A–C): Identifiability of the 2-ADC model. (A) Two-dimensional decision space for the 2-ADC model during catch trials, partitioned into three decision regions—NoGo response (gray) or Go response to location 1 (red) or location 2 (blue)—by one set of criteria (c1, c2). Dashed circle: Contour of the noise distribution. Thick solid lines: Decision boundaries. Other conventions are as in Figure 2C. (B) 2-ADC decision space during catch trials partitioned with an alternate set of criteria (c1′, c2′). These criterion values were chosen to keep the NoGo response probability the same as in (A). Thick dashed lines: The decision boundaries associated with the criteria (c1, c2) in (A). Other conventions are as in (A). (C) 2-ADC decision space with increasing perceptual sensitivity to a stimulus at location 1 (increasing d1). Red circles: Contours of the signal distribution. Gray circle: Contour of the noise distribution. Response probabilities in each decision region vary monotonically with increasing perceptual sensitivity along either dimension. Other conventions are as in (A). (D) Optimal decision surfaces in the 2-ADC decision space. Dashed circles: Contours of the decision variable distributions. Thick dashed lines: Optimal decision boundaries when the prior probabilities of all stimulus events are equal. Solid circles: Contours of the posterior distributions. Thick solid lines: Optimal decision boundaries when the prior probability of a stimulus presentation at location 1 is higher than the probability of a catch trial. The marginal distributions of the signal and noise distributions along dimension 1 are shown below (same conventions); horizontal green line: the value corresponding to the contours shown in the top panel (see text for details on the various probability notations).
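The sketch below illustrates the maximum-a-posteriori logic behind panel D: each point of the decision space is assigned to the stimulus event with the highest posterior, and raising the prior on one stimulus event enlarges its decision region, just as lowering the corresponding criterion would. The sensitivities, priors, and grid are illustrative assumptions.

```python
# Sketch of the optimal (MAP) decision rule: assign each point in the
# (Psi1, Psi2) decision space to the stimulus event (0 = catch, 1 = stimulus at
# location 1, 2 = stimulus at location 2) with the highest posterior.
import numpy as np
from scipy.stats import norm

def map_regions(d1, d2, priors, grid):
    """MAP event label at every point of a square grid over the decision space."""
    P1, P2 = np.meshgrid(grid, grid, indexing="ij")
    log_post = np.stack([
        np.log(priors[0]) + norm.logpdf(P1) + norm.logpdf(P2),        # catch
        np.log(priors[1]) + norm.logpdf(P1 - d1) + norm.logpdf(P2),   # stim at 1
        np.log(priors[2]) + norm.logpdf(P1) + norm.logpdf(P2 - d2),   # stim at 2
    ])
    return np.argmax(log_post, axis=0)

grid = np.linspace(-3, 5, 201)
equal = map_regions(2.0, 2.0, priors=[1/3, 1/3, 1/3], grid=grid)
biased = map_regions(2.0, 2.0, priors=[0.2, 0.6, 0.2], grid=grid)

# A higher prior on "stimulus at location 1" expands its decision region.
print("fraction of grid assigned to location 1:",
      (equal == 1).mean(), "->", (biased == 1).mean())
```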
Figure 6
 
Empirical validation and model comparison: Target detection task. (A) Schematic of the “indecision model” for a two-interval (or two-alternative) nonforced-choice task. The indecision model partitions decision space differently from the 2-ADC model and specifies that the observer provides a NoGo response when “uncertain,” i.e., when sensory evidence is equivocal for a target stimulus in either interval (gray diagonal band). μ and δ are the sensitivity and criterion parameters in this model; δ defines the extent of the NoGo response region. Other conventions are as in Figure 2C. (B) Estimates of sensitivity for the target-detection task from the indecision model (x-axis) and the 2-ADC model (y-axis). Error bars: Parameter standard errors based on MLE. Dashed oblique line: Line of identical sensitivities. Data points represent individual observers (N = 17). (C) Schematic of the indecision model with bias, with different criteria (δ1, δ2) for a Go response to each interval. (D) Estimates of bias (the difference between the criterion for interval 1 and the criterion for interval 2) from the indecision model (x-axis) and the 2-ADC model (y-axis). Dashed lines: Lines of zero bias. Other conventions are as in panel B. (E) Estimates of sensitivity from the 2-ADC (white circles) and indecision (gray circles) models that include (x-axis) or exclude (y-axis) a finger-error term. Other conventions are as in panel B. (F) Estimates of bias from the 2-ADC (white squares) and indecision (gray squares) models that include (x-axis) or exclude (y-axis) a finger-error term. Other conventions are as in panel D. (Inset) Distribution of differences in BIC values between the two models (indecision − 2-ADC); values to the left of the dashed vertical line indicate a lower BIC value for the indecision model, and values to the right a lower BIC value for the 2-ADC model.
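A brief simulation contrasting the two decision rules compared in this figure. It assumes, as the gray diagonal band suggests, that the (unbiased) indecision model issues a NoGo whenever the two decision variables differ by less than δ; the parameter values below are illustrative, not fitted.

```python
# Sketch contrasting an indecision-style rule with the 2-ADC rule on the same
# decision variables.  The NoGo band |Psi1 - Psi2| < delta is an assumed form
# of the indecision rule; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, delta = 1.5, 0.8          # indecision-model sensitivity and NoGo half-width
c1 = c2 = 1.0                 # 2-ADC criteria

# Decision variables on trials with a target in interval 1.
psi1 = rng.normal(mu, 1.0, 100_000)
psi2 = rng.normal(0.0, 1.0, 100_000)

# Indecision rule: NoGo inside the diagonal band, otherwise pick the larger variable.
diff = psi1 - psi2
indecision = np.where(np.abs(diff) < delta, 0, np.where(diff > 0, 1, 2))

# 2-ADC rule: pick the larger criterion-adjusted variable, provided it exceeds zero.
adj = np.stack([psi1 - c1, psi2 - c2])
adc = np.where(adj.max(axis=0) < 0, 0, adj.argmax(axis=0) + 1)

for name, resp in [("indecision", indecision), ("2-ADC", adc)]:
    print(name, "P(Go 1), P(Go 2), P(NoGo) =",
          [np.round((resp == k).mean(), 3) for k in (1, 2, 0)])
```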
Figure 7
 
Empirical validation and model comparison: Length discrimination task. (A) Schematic of a 2-ADC model for a discrimination task (in this case, a length-discrimination task). Two criteria, cA and cB, partition the decision space into three response regions: stimulus above longer (Above > Below, red), stimulus above shorter (Above < Below, blue), and equal perceived length (NoGo or unsure, gray). The key difference from the standard 2-ADC model is that the NoGo decision region is bounded on all sides. X-axis: increasing lengths of the stimulus above. Y-axis: increasing lengths of the stimulus below. Origin: point of subjective equality (PSE) of the test and standard stimuli. Other conventions are as in Figure 2C. (B) Fit of the 2-ADC model (solid lines) for each of the two observers in the length-discrimination task. Closed circles and thick lines: Proportion of vertical (test) > horizontal (standard) responses and model fits. Open circles and thin lines: Proportion of NoGo (unsure) responses and model fits. Dashed lines: Fits of the indecision model. X-axis: length of the vertical stimulus. Arrow: Point of objective equality (104 pixels). Dashed vertical line: PSE. (C) Same as in (A), but a schematic of the 2-ADC model that incorporates an interaction term (α) among the decision variables (2-ADCX model; see text for details). Dot-dashed lines: Trajectories of the means of the decision variable distributions with mutual (competitive) interactions. (D) Same as in (B), but showing the fit of the 2-ADCX model.
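For intuition about the interaction term, the sketch below shows one plausible way a competitive (α < 0) coupling could enter the decision-variable means, with each stimulus's drive suppressed in proportion to the other's. This linear cross-coupling and the parameter values are illustrative assumptions chosen for the sketch; see the text for the model's actual definition of the 2-ADCX interaction.

```python
# Hypothetical linear form of a competitive interaction between decision-variable
# means: mean_i = beta_i * len_i + alpha * beta_j * len_j, with alpha < 0.
import numpy as np

def decision_variable_means(len_above, len_below, beta_t, beta_s, alpha):
    """Means of the two decision variables, with lengths measured relative to the PSE."""
    drive_above = beta_t * len_above
    drive_below = beta_s * len_below
    mean_above = drive_above + alpha * drive_below   # suppressed by the competitor
    mean_below = drive_below + alpha * drive_above
    return mean_above, mean_below

# Illustrative values, loosely in the range reported in Table 1.
beta_t, beta_s, alpha = 0.40, 0.40, -0.3
for len_above in (-2.0, 0.0, 2.0):                   # test length relative to the PSE
    means = decision_variable_means(len_above, 0.0, beta_t, beta_s, alpha)
    print(len_above, np.round(means, 2))
```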
Figure 8
 
Relationship to previous models. (A) Schematic of a 2-ADC model. There are three potential stimulus events—stimulus at location 1, at location 2, or no stimulus (catch)—with their associated decision variable distributions (red, blue, and black contours, respectively). The decision rule partitions decision space into three response regions, including a NoGo response. Thick lines: Decision boundaries. Left and right panels: Decision variable distributions and optimal decision surfaces for equal (left) or unequal (right) sensitivities for the different stimulus events. (B) Schematic of a 2-AFC model. The decision rule partitions decision space into two response regions. Lower inset: The model is readily reducible to an equivalent, one-dimensional formulation by a linear transformation of the decision variables (difference, Ψs = Ψ1 − Ψ2). Other conventions are as in (A). (C) Schematic of an indecision model. The decision rule partitions decision space into three response regions, including a NoGo response. Lower inset: This model is also readily reducible to an equivalent, one-dimensional formulation by the same linear transformation (difference, Ψs = Ψ1 − Ψ2). Other conventions are as in (A). (D) Schematic of a GRT model. In addition to the three stimulus events (as in the other models), a fourth, compound stimulus event (purple circle) occurs in trials in which target stimuli (or changes) occur at both locations. The decision rule partitions decision space into four response regions, including NoGo (or neither) and “Both” responses (2 × 2, complete identification design). For the configuration shown, the GRT model is reducible to two independent one-dimensional models (shown alongside each axis).
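The reduction noted in panels B and C can be checked directly: the two-dimensional rule "choose the interval with the larger criterion-adjusted decision variable" is algebraically equivalent to thresholding the difference variable Ψs = Ψ1 − Ψ2 at c1 − c2. The short simulation below, with illustrative parameter values, demonstrates that equivalence.

```python
# Sketch verifying the 2-AFC reduction to a one-dimensional difference variable.
import numpy as np

rng = np.random.default_rng(1)
d, c1, c2 = 1.2, 0.6, 0.2
psi1 = rng.normal(d, 1.0, 50_000)       # target in interval 1
psi2 = rng.normal(0.0, 1.0, 50_000)

choice_2d = (psi1 - c1 > psi2 - c2)     # two-dimensional decision rule
choice_1d = (psi1 - psi2 > c1 - c2)     # equivalent one-dimensional rule

print("agreement between rules:", np.mean(choice_2d == choice_1d))
print("P(choose interval 1) =", choice_2d.mean())
```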
Table 1
 
Comparison of the 2-ADC, 2-ADCX, and indecision models in the length-discrimination task.
                          Observer #1                      Observer #2
Model                     2-ADC     2-ADCX    Indecision   2-ADC     2-ADCX    Indecision
# parameters              4         5         5            4         5         5
βs (horizontal)           0.482     0.399     0.554        0.348     0.305     0.388
βt (vertical)             0.489     0.404     0.561        0.366     0.321     0.409
PSE (mm)                  102.62    102.64    102.61       98.74     98.82     98.71
cA or δA                  1.307     1.218     1.272        1.198     1.146     1.189
cB or δB                  0.961     0.866     0.608        0.671     0.611     0.155
α                         n/a       −0.385    n/a          n/a       −0.227    n/a
λ                         n/a       n/a       0.007        n/a       n/a       0.012
AICc                      10,676    10,649    10,646       11,010    11,001    11,002
BIC                       10,697    10,676    10,672       11,031    11,028    11,029
ΔAICc (vs. indecision)    30        3         0            8         −1        0
ΔBIC (vs. indecision)     25        4         0            2         −1        0
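For reference, the sketch below shows how the information criteria in the table are computed from a model's maximized log-likelihood, its number of free parameters k, and the number of trials n, with the Δ rows following as differences relative to the indecision model. The log-likelihood values and trial count used here are placeholders, not values from the experiment.

```python
# Sketch of the AICc and BIC computations underlying the Delta rows of the table.
import numpy as np

def aicc(log_lik, k, n):
    """Corrected Akaike information criterion."""
    return 2 * k - 2 * log_lik + (2 * k * (k + 1)) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian information criterion."""
    return k * np.log(n) - 2 * log_lik

n = 5000                                   # hypothetical trial count
models = {"2-ADC": (-5334.0, 4), "2-ADCX": (-5319.5, 5), "indecision": (-5318.0, 5)}

scores = {name: (aicc(ll, k, n), bic(ll, k, n)) for name, (ll, k) in models.items()}
ref_aicc, ref_bic = scores["indecision"]
for name, (a, b) in scores.items():
    print(f"{name:10s}  dAICc = {a - ref_aicc:6.1f}   dBIC = {b - ref_bic:6.1f}")
```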