Research Article | March 2009
Using graphical models to infer multiple visual classification features
Michael G. Ross, Andrew L. Cohen
Journal of Vision March 2009, Vol. 9, 23. https://doi.org/10.1167/9.3.23
Abstract

This paper describes a new model for human visual classification that enables the recovery of image features that explain performance on different visual classification tasks. Unlike some common methods, this algorithm does not explain performance with a single linear classifier operating on raw image pixels. Instead, it models classification as the result of combining the output of multiple feature detectors. This approach extracts more information about human visual classification than has been previously possible with other methods and provides a foundation for further exploration.

Introduction
The classification image algorithm (Ahumada, 2002) is one of the most successful tools for determining the information observers use to make visual classifications. In a typical classification image experiment, participants are presented with numerous noise-corrupted images from two categories and are asked to classify each one. The noise ensures that the image samples cover a large volume of the stimulus space. The classification image algorithm finds the linear classifier that best partitions the classified images. The linear classifier is defined by a set of weights which, when displayed visually, is called a “classification image.” Analyzing the classification image reveals the extent to which each image region is correlated with an observer's classifications. 
The classification process, however, may be more structured than the classification image approach suggests. For example, it may be that classification is the result of combining the detection of parts or features, rather than applying a single linear template. Consider the simple example of an observer attempting to distinguish the letter ‘P’ from ‘Q’ in the presence of visual noise. Given an image to classify, the observer may determine the presence of a ‘P’ not by applying a single linear classifier, but by applying a set of independent feature detectors and combining their output. Detecting a vertical line or an upper right-facing curve would favor a ‘P’ response, while discerning a circle or a low diagonal line would favor a ‘Q’ response. Although a classification image would indicate the importance of all the features’ component pixels to the observer's responses, it would not indicate their division into four separately detected features. 
Recent evidence suggests that human observers utilize multi-feature models in some image classification tasks. For example, Pelli, Farell, and Moore (2003) have convincingly demonstrated that humans recognize noisy word images by parts even though better performance can be achieved by integrating information from the entire word. Similarly, Gold, Cohen, and Shiffrin (2006) verified that participants employed feature-based classification strategies for some simple image classes. 
Cohen, Shiffrin, Gold, Ross, and Ross (2007) used a Gaussian Mixture Model (GMM) to recover the multiple features that may be used in a classification experiment. A GMM is a technique that clusters data into a fixed number of groups, each of which is modeled by a multivariate Gaussian distribution. If an observer employs a non-linear, multi-feature strategy, as in the ‘P’ and ‘Q’ example, the GMM algorithm can, under a reasonable set of assumptions, associate a cluster with each feature, providing more information about the observer's visual processing than the classification image approach. 
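For readers unfamiliar with the approach, the clustering step can be illustrated with an off-the-shelf mixture-model fit. The sketch below is not the procedure of Cohen et al. (2007), just a minimal stand-in: the stimulus matrix, the diagonal covariances, and the four-component count are all hypothetical choices.

```python
# Minimal sketch of the GMM feature-recovery idea, not the published procedure.
# Assumes stimuli have been flattened to vectors and filtered to the
# high-confidence trials of one response class; all shapes are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(1000, 64))   # 1,000 trials of a 64-pixel stimulus

gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(stimuli)
candidate_features = gmm.means_         # each component mean ~ a feature template
```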
Despite its success, the GMM has several limitations: 
  1. The GMM describes the data, but does not provide an explicit model of the classification process.
  2. The classification image technique is not well described as a special case of the GMM; it is not equivalent to a GMM with a single feature.
  3. Using the GMM requires experimenters to collect the participants' confidence ratings on each classification decision, and then discard all but the high-confidence trials. This procedure wastes data and, because traditional classification image experiments do not measure confidence, impedes applying the algorithm to previously collected data.
  4. The recovered features are biased by the pixel values of the uncorrupted images composing each class, preventing a clear distinction between the structure of the experiment and the participants' internal visual mechanisms.
This paper describes and applies GRIFT (GRaphical model for Inferring Feature Templates), a model of human image classification that addresses all of these limitations. GRIFT models the image classification process using a Bayesian network, a probabilistic model that represents the causal relationships between variables (Pearl, 1988). GRIFT posits that human image classification is a non-linear process that results from combining the outputs of several independently computed feature detectors. Just as with the GMM, GRIFT can be applied to classification data to recover the features used to discriminate between two classes. Unlike the GMM, GRIFT provides an explicit classification model, can be used as a replacement for classification images in the single-feature case, avoids stimulus bias, and can be used on experimental data that lack confidence ratings. Ross and Cohen (2008) first presented GRIFT in the proceedings of the Neural Information Processing Systems conference. The goal of this paper is to provide a more detailed explanation of the GRIFT algorithm, to present more extensive experimental and simulation results, to extend the GRIFT model to ratings data, and to demonstrate the model's applicability to detection experiments. 
The remainder of this paper describes the GRIFT model and the algorithm for fitting it to experimental data. We then demonstrate the efficacy of the model on simulated data sets and on data sets previously analyzed in Cohen et al. (2007), Gold et al. (2006), and Gold, Murray, Bennett, and Sekuler (2000). Then a series of newer experiments are described along with their GRIFT results. The paper concludes with a discussion of proposed extensions to GRIFT and possible future experiments. 
The GRIFT model
To clarify the presentation, it is important to define the terms “pixel,” “image,” and “feature.” A pixel is a square region of a stimulus that has a homogeneous brightness value. In an experiment, each stimulus pixel may be displayed using one or more computer monitor pixels depending on the spatial scaling applied to the stimulus—all references to “pixels” will refer to units of the stimulus, rather than those of the display hardware. An image is a collection of grayscale pixels arranged in a two-dimensional matrix. Images can be indexed by row and column. The two-dimensional location of particular pixels, however, is often unimportant. In this work, it is sometimes simpler to use a single pixel index. For example, the 20th pixel in a stimulus will be referred to as S 20.
The definition of the final term, “feature,” is undoubtedly the most controversial. Researchers have defined feature in many different ways, but in this paper a feature is a pixel pattern detected by a linear classifier. A linear classifier is a set of weights assigned to each image pixel along with a threshold value. If the weighted sum of an image's pixels is less than the threshold, the feature is considered present in the image, otherwise the feature is absent. Linear classifiers can define many useful types of features, including those based on the contrast between different image regions. Because the human visual system is contrast-sensitive (Palmer, 1999) and contrast features can indicate the presence or absence of specific pixel patterns, such linear features are particularly informative. Extensions of GRIFT to different types of features are discussed below. Under this definition of feature, the classification image model, which models classification as the result of a single linear classifier, is a single-feature model, while GRIFT is a multi-feature model. 
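Under this definition, a feature detector is fully specified by a weight vector and a threshold. The following minimal sketch (names and values are ours) uses the sign convention from the text, in which a weighted pixel sum below the threshold signals that the feature is present:

```python
import numpy as np

def feature_present(weights: np.ndarray, threshold: float, image: np.ndarray) -> bool:
    """Deterministic linear feature detector: the feature is considered present
    when the weighted sum of the image's pixels is below the threshold."""
    return float(weights @ image.ravel()) < threshold
```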
GRIFT represents the classification process as a Bayesian network (Pearl, 1988), which is a type of graphical model. Physicists, statisticians, and computer scientists developed graphical models to describe the probabilistic interactions between variables. Graphical models are fundamental to modern research in artificial intelligence (e.g., Bishop, 2006) and computer vision (e.g., Forsyth & Ponce, 2003) and are playing a larger role in psychology (e.g., Rehder, 2003). The adjective “graphical” refers to the fact that these models are commonly represented by a graph in which nodes indicate variables and edges connecting the nodes indicate the influence of one variable on another. Bishop (2006) contains full details on all the types of graphical models discussed in this paper. 
A Bayesian network (typically shortened to “Bayes net”) is a causal graphical model. It represents each variable as a node, and connects them with directed edges (lines with arrows at one end) that point from a cause to an effect. The diagram in Figure 1 represents the causal relationships between the variables of the GRIFT model. The GRIFT model describes each classification, C, as the result of a stimulus, S, which is processed by a set of N feature detectors, F = { F 1, F 2, …, F N}, that are instantiated as linear classifiers. Because there is an arrow from the stimulus S to each F i, S is a parent of each F i, and S directly influences the probability distribution of their values. Similarly, each F i directly influences C. No other variables directly influence each other. In particular, S only influences C through the F i variables, and the F i variables do not directly influence each other. Variables that do not directly influence each other are called conditionally independent. Bayes nets efficiently model the interaction of many variables by encoding assumptions about their conditional independence. 
Figure 1
 
The GRIFT model is a Bayes net that describes classification as the result of combining N feature detectors. The direction of the arrows indicates causal influence—the stimulus S influences each feature detector F i, and they in turn influence the classification decision C.
Because an image can be represented as a vector of pixel values, each image is a point in a high-dimensional space. The linear classifier associated with each feature describes a boundary in this image space that separates images that have that feature from images that do not. That is, each feature detector is sensitive to a particular type of input—for example, feature detector F 1 might respond best to the vertical line of a ‘P’, while feature detector F 2 might respond best to the circle in a ‘Q’. The F i variables are binary and indicate if a feature has been detected ( F i = 1) or not ( F i = 0). Whereas a typical linear classifier is deterministic, the feature detectors in GRIFT are probabilistic. An image that falls near the boundary will have an approximately 50% chance of activating the feature detector, an image far to one side of the boundary will produce nearly a 100% probability of activation, and an image far to the other side will produce nearly a 0% probability of activation. Therefore, a clear, low noise, high contrast image of a ‘P’ will almost certainly cause F 1 = 1, but a noisy, low-contrast image of a ‘Q’ might only have a small chance of causing F 2 = 1. 
The feature activations influence the classification, C, of a stimulus. In most of this paper, we assume two response classes, but an extension to ratings is described below and other generalizations are possible. The presence of some features increases the probability of choosing Class 1 ( C = 1) and the presence of others increases the probability of choosing Class 2 ( C = 2). Returning to the ‘P’ and ‘Q’ example, activation of the vertical line feature would increase the probability of responding ‘P’. Activation of a circle feature would increase the probability of responding ‘Q’. 
Mathematically, the Bayes net representation is useful because it is an efficient representation for the joint probability distribution of all the model variables. For a Bayes net with variables V = { V 1, V 2, …, V M}, the joint probability distribution is  
$$P(V) = \prod_{i=1}^{M} P(V_i \mid \mathrm{parents}(V_i)),\tag{1}$$
where P( V i∣parents( V i)) is the conditional probability distribution of V i given its parents in the graph. Therefore, in the GRIFT model,  
$$P(S, F, C) = P(S)\, P(C \mid F) \prod_{i=1}^{N} P(F_i \mid S).\tag{2}$$
This factorization generally provides a much more efficient representation than would be possible if the joint distribution were represented without any assumptions about causality or conditional independence. 
Now that the structure of the model is established, the next task is to specify its component conditional probability distributions: P(S), P(C∣F), and P(F i∣S). The distribution of the stimuli, P(S), is under the control of the experimenter. Fitting the GRIFT model to experimental data only relies on the assumption that, across trials, the stimuli sampled from this distribution are independent and identically distributed. 
The conditional distribution of each feature detector's value, P(F i∣S), is modeled as a logistic regression function on the pixel values of S. Because the feature detectors are assumed to be linear classifiers, each feature distribution is governed by two parameters, a weight vector ω i and a threshold −β i, such that  
$$P(F_i = 1 \mid S, \omega_i, \beta_i) = \left(1 + \exp\left(\beta_i + \sum_{j=1}^{|S|} \omega_{ij} S_j\right)\right)^{-1}\tag{3}$$
and  
$$P(F_i = 0 \mid S, \omega_i, \beta_i) = 1 - P(F_i = 1 \mid S, \omega_i, \beta_i),\tag{4}$$
where ∣S∣ is the number of pixels in a stimulus and ω ij is the j th element of vector ω i. The logistic regression function satisfies the probabilistic classification properties outlined above: Stimuli near the boundary are classified less deterministically than those far from the boundary. In image pixel space, the weights define the orientation of the boundary that determines the presence or absence of the feature. They also determine the degree of probabilistic behavior that the detector will exhibit: weights with larger absolute values lead to more deterministic output. The weights and threshold jointly determine the probability that a feature is detected for a particular image. There is a 50% probability that a feature will be detected when $\sum_{j=1}^{|S|} \omega_{ij} S_j = -\beta_i$. When $\sum_{j=1}^{|S|} \omega_{ij} S_j > -\beta_i$, i.e., the image is on the “absent” side of the linear boundary, there is less than a 50% chance that the feature will be detected. Likewise, when $\sum_{j=1}^{|S|} \omega_{ij} S_j < -\beta_i$, i.e., the image is on the “present” side of the linear boundary, there is a greater than 50% chance that the feature will be detected. 
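For concreteness, Equations 3 and 4 and the boundary behavior just described translate directly into code. This is a sketch with hypothetical parameter values, not the authors' implementation:

```python
import numpy as np

def p_feature_active(omega_i: np.ndarray, beta_i: float, s: np.ndarray) -> float:
    """P(F_i = 1 | S) from Equation 3. Larger |omega_i| values push the
    detector toward deterministic behavior."""
    return 1.0 / (1.0 + np.exp(beta_i + omega_i @ s))

# A stimulus exactly on the boundary (sum_j omega_ij * s_j == -beta_i) gives 0.5:
omega = np.array([1.0, -1.0])          # hypothetical contrast weights
s_on_boundary = np.array([0.5, 0.5])
assert abs(p_feature_active(omega, 0.0, s_on_boundary) - 0.5) < 1e-12
```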
The conditional distribution of C is represented by a logistic regression function on the feature outputs. Therefore, the conditional distribution of C is determined by a weight vector λ and a threshold − γ that determine the impact of feature activations on the probability of a particular response, such that  
$$P(C = 1 \mid F, \lambda, \gamma) = \left(1 + \exp\left(\gamma + \sum_{i=1}^{N} \lambda_i F_i\right)\right)^{-1}\tag{5}$$

and

$$P(C = 2 \mid F, \lambda, \gamma) = 1 - P(C = 1 \mid F, \lambda, \gamma).\tag{6}$$
Detecting a feature with negative λ i increases the probability that the observer will respond “Class 1” and detecting a feature with positive λ i increases the probability that the observer will respond “Class 2.” Note that γ serves the same role as β i in Equation 3. 
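Taken together, Equations 3–6 define a generative process for a single trial: sample each hidden feature activation given the stimulus, then sample the response given the activations. The sketch below simulates that process; all parameter values and array shapes are hypothetical.

```python
import numpy as np

def simulate_trial(s, omegas, betas, lam, gamma, rng):
    """Sample feature activations F and a response C for stimulus s.
    omegas: (N, |S|) weight matrix; betas, lam: length-N vectors."""
    p_f = 1.0 / (1.0 + np.exp(betas + omegas @ s))   # Equation 3, one value per feature
    f = (rng.random(p_f.shape) < p_f).astype(float)  # sample each F_i independently
    p_c1 = 1.0 / (1.0 + np.exp(gamma + lam @ f))     # Equation 5
    return f, (1 if rng.random() < p_c1 else 2)

rng = np.random.default_rng(1)
omegas = rng.normal(size=(2, 4))                     # hypothetical 2-feature model
f, c = simulate_trial(rng.normal(size=4), omegas, np.zeros(2),
                      np.array([-2.0, 2.0]), 0.0, rng)
```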
Equations 5 and 6 can be generalized to represent a conditional probability distribution over ratings rather than classifications. Instead of only allowing two responses based on the feature detector values, ratings allow an observer to respond with an integer between 1 and R, where 1 indicates “definitely class 1,” R indicates “definitely class 2,” and the values in between indicate intermediate degrees of belief. Because they are ordered, rating probabilities can be represented with ordinal logistic regression (Agresti, 2002). The probability of responding with a rating less than or equal to r, where 1 ≤ r ≤ R − 1, is given by

$$P(C \le r \mid F, \lambda, \gamma) = \left(1 + \exp\left(\gamma_r + \sum_{i=1}^{N} \lambda_i F_i\right)\right)^{-1}\tag{7}$$

in which γ is a vector with R − 1 elements for which γ r ≤ γ r−1. 1 The probability of rating R is therefore

$$P(C = R \mid F, \lambda, \gamma) = 1 - P(C \le R - 1 \mid F, \lambda, \gamma),\tag{8}$$

the probability of rating 1 is

$$P(C = 1 \mid F, \lambda, \gamma) = P(C \le 1 \mid F, \lambda, \gamma),\tag{9}$$

and, for any other rating,

$$P(C = r \mid F, \lambda, \gamma) = P(C \le r \mid F, \lambda, \gamma) - P(C \le r - 1 \mid F, \lambda, \gamma).\tag{10}$$

If R = 2, these equations correspond exactly to the binary classification probabilities described in Equations 5 and 6. 
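The relationship between the cumulative probabilities of Equation 7 and the per-rating probabilities of Equations 8–10 is easy to verify numerically. This sketch uses hypothetical γ and λ values for a six-point rating scale; the only structural requirement is that the γ r be non-increasing.

```python
import numpy as np

def rating_pmf(gammas: np.ndarray, lam: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Probabilities of ratings 1..R from Equations 7-10, where gammas holds
    the R-1 non-increasing cut points gamma_1 >= ... >= gamma_{R-1}."""
    cum = 1.0 / (1.0 + np.exp(gammas + lam @ f))  # P(C <= r) for r = 1..R-1
    cum = np.append(cum, 1.0)                     # P(C <= R) = 1
    return np.diff(cum, prepend=0.0)              # P(C = r), Equations 8-10

# Hypothetical six-point scale with one of two features active:
pmf = rating_pmf(np.array([2.0, 0.5, -1.5, -4.0, -6.0]),
                 np.array([-2.0, 2.0]), np.array([1.0, 0.0]))
assert abs(pmf.sum() - 1.0) < 1e-12 and (pmf >= 0).all()
```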
Figure 2 shows the full GRIFT model, including the parameters. To avoid clutter, the figure uses plate notation, in which duplicated model structures are drawn once and enclosed in a box. The ‘N’ in the lower right corner indicates that all the variables in the box and their connections to other variables are duplicated once for each of the features in the model. Note that in Figure 2 the parameters are represented as parents of the previously described GRIFT variables. In accordance with the techniques of Bayesian statistics (Gelman, Carlin, Stern, & Rubin, 2004), the parameters are themselves treated as random variables. 
Figure 2
 
The GRIFT model with its parameters. The model is Bayesian, therefore parameters are also random variables. The plate, the rounded box with N in the lower right corner, represents model structures that are duplicated for each of the N features.
Given data from an observer, i.e., a set of trials, each represented by a stimulus, S, and a response, C, the goal is to find the GRIFT parameter values that best account for the data. This parameter search is computationally complex. Even for small images and few features, this model has many parameters: Each ω i is a vector with as many dimensions as there are pixels in S and each feature also contributes a β i and λ i parameter. The fact that the F i variables are hidden, i.e., not directly measurable, also substantially increases the challenge of fitting this model to data. The primary advantage of the Bayesian approach is that it provides a principled way to place constraints on the parameters that make model fitting practical given a reasonable amount of data. 
In Bayesian models, constraints on parameters are represented by prior probability distributions. The priors represent our beliefs about the parameters before any experimental evidence is gathered. After data are gathered, knowledge about each parameter is represented by a posterior distribution, which is conditioned on all the observed data. The posterior describes the combined influence of the prior assumptions and the model likelihood given the observed data. As more data are gathered, the influence of the priors decreases. 
The prior on each λ i parameter reflects the assumption that each feature should have a significant impact on the classification, but no single feature should make the classification deterministic. In particular, the prior is a mixture of two normal distributions with means at ±2,  
$$P(\lambda_i) = \frac{1}{2\sqrt{2\pi}}\left(\exp\left(-\frac{(\lambda_i + 2)^2}{2}\right) + \exp\left(-\frac{(\lambda_i - 2)^2}{2}\right)\right).\tag{11}$$
This prior has a number of desirable characteristics. First, if the density of a prior is too concentrated, its influence on the results will be very strong unless there are a lot of data. Each component normal distribution in Equation 11 has unit variance, which makes the distributions broad enough that the data collected in our experiments will largely determine the parameter estimates. Second, if λ i ≈ 0, F i's output will not significantly influence C. In contrast, if any λ i is too far from zero, a single active feature can make the response nearly deterministic. To avoid these extremes, most of the mass of the prior should be significantly far from zero, but not concentrated at large positive or negative values. The means of the component normals, −2 and 2, determined empirically, satisfy these constraints. 2 
Because the best γ is largely determined by the λ is and the distributions of F and S, γ has a non-informative prior,  
$$P(\gamma) = 1.\tag{12}$$
This constant prior indicates no preference for any particular value. Although this function does not integrate to 1 as γ ranges from negative to positive infinity, and is therefore not a true probability density, it can be used as an improper prior (Gelman et al., 2004). Improper priors are an accepted Bayesian statistical technique so long as they produce normalized posterior distributions. In GRIFT, P(γ) has no effect on the posterior distributions of the parameters, as demonstrated in Appendix 1, and therefore it is an acceptable improper prior. Analogously, P(β i) = 1 for all i. 
Because each ω i vector has dimensionality equal to the number of pixels in a stimulus, these parameters present the biggest inferential challenge. As mentioned previously, human visual processing is sensitive to contrasts between image regions. If one image region is assigned positive ω ijs and another is assigned negative ω ijs, the feature detector will be sensitive to the contrast between them. This contrast between regions requires all the pixels within each region to share similar ω ij values. To encourage this local structure and reduce the difficulty of recovering the ω is, the prior distribution was designed to favor assigning similar weights to neighboring pixels. Each ω i parameter has a prior distribution given by  
$$P(\omega_i) \propto \left[\prod_j \left(\exp\left(-\frac{(\omega_{ij} - 1)^2}{2}\right) + \exp\left(-\frac{(\omega_{ij} + 1)^2}{2}\right)\right)\right]\left[\prod_{(j,k) \in A} \exp\left(-\frac{(\omega_{ij} - \omega_{ik})^2}{2}\right)\right],\tag{13}$$
where A is the set of neighboring pixel locations in the stimulus. This density function has two elements. The first term is a mixture of two normal distributions. The components have modes at 1 and −1, respectively, and each has unit variance. The combination assigns roughly equal probability to ω ij values between −1 and 1, but unlike a uniform prior, it places some probability mass at every value and therefore allows ω ij values to lie outside that range. The second component increases as the weights assigned to neighboring pixels become more similar and decreases as they become more different. This type of probability function is known as a Markov random field (MRF; Besag, 1974; see also Bishop, 2006), a class of graphical model frequently used in computer vision and physics. Geman and Geman (1984) pioneered the use of MRFs in computer vision as a model for reconstructing noisy images. For the purpose of fitting the model, there is no need to normalize this distribution because the normalization is constant with respect to ω i. 
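Both priors can be evaluated up to their normalizing constants. The sketch below does so for a one-dimensional strip of pixels, in which the neighbor set A simply links adjacent pixels; this 1-D neighborhood is an illustrative simplification of the true two-dimensional one.

```python
import numpy as np

def log_prior_lambda(lam_i: float) -> float:
    """Log of Equation 11: an equal mixture of unit-variance normals at +/-2."""
    return float(np.log(np.exp(-(lam_i + 2.0) ** 2 / 2.0) +
                        np.exp(-(lam_i - 2.0) ** 2 / 2.0)) -
                 np.log(2.0 * np.sqrt(2.0 * np.pi)))

def log_prior_omega(omega_i: np.ndarray) -> float:
    """Unnormalized log of Equation 13 for a 1-D pixel strip, where A contains
    each pair of adjacent pixels (an illustrative simplification)."""
    mixture = np.log(np.exp(-(omega_i - 1.0) ** 2 / 2.0) +
                     np.exp(-(omega_i + 1.0) ** 2 / 2.0)).sum()
    smoothness = -((omega_i[1:] - omega_i[:-1]) ** 2 / 2.0).sum()
    return float(mixture + smoothness)

# The MRF term favors smooth weight vectors over jagged ones:
assert log_prior_omega(np.ones(5)) > log_prior_omega(np.array([1., -1., 1., -1., 1.]))
```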
Using Equation 1 to combine all of the parameters and variables into a single probability distribution, the model is described by  
$$P(S, F, C, \omega, \beta, \lambda, \gamma) = P(S)\, P(C \mid F, \lambda, \gamma)\, P(\gamma) \prod_{i=1}^{N} P(F_i \mid S, \omega_i, \beta_i)\, P(\omega_i)\, P(\beta_i)\, P(\lambda_i).\tag{14}$$
 
Fitting GRIFT to data
Given experimental data (observed classifications for a set of stimuli), the GRIFT model, and priors on the model parameters, the next step is to determine the GRIFT parameter values that best satisfy the prior distributions and best account for the (S, C) sample pairs gathered from a human observer. The method used to find these parameter values (provided in Appendix 1) is an instance of the expectation-maximization (EM) algorithm, a powerful technique for fitting models with unobserved variables such as GRIFT's feature detectors (Dempster, Laird, & Rubin, 1977). EM chooses an initial value for all the parameters and uses it to compute better parameter values from an estimate of the hidden variables' values. The better estimates then replace the previous parameter values and the algorithm repeats until convergence. EM guarantees the discovery of locally optimal parameter values. By running EM many times with randomized initial parameter values, we can increase the chance that the best parameter values discovered are the globally optimal parameters for the data. 
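Because the F i are hidden, the E-step must compute a posterior over all 2^N feature configurations for each trial. The sketch below shows that computation for the two-class case; it is our schematic reading of the model equations, not the authors' published code, and an M-step (not shown) would then reweight the logistic regressions by these posteriors.

```python
import itertools
import numpy as np

def e_step_posterior(s, c, omegas, betas, lam, gamma):
    """Posterior P(F | S=s, C=c) over all 2^N feature configurations,
    proportional to P(C=c | F) * prod_i P(F_i | S=s) (Equations 2-6)."""
    n = len(betas)
    p_f1 = 1.0 / (1.0 + np.exp(betas + omegas @ s))           # P(F_i = 1 | s)
    configs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    p_f = np.prod(np.where(configs == 1, p_f1, 1.0 - p_f1), axis=1)
    p_c1 = 1.0 / (1.0 + np.exp(gamma + configs @ lam))        # P(C = 1 | F)
    post = p_f * (p_c1 if c == 1 else 1.0 - p_c1)
    return configs, post / post.sum()

rng = np.random.default_rng(0)
configs, post = e_step_posterior(rng.normal(size=4), 1, rng.normal(size=(2, 4)),
                                 np.zeros(2), np.array([-2.0, 2.0]), 0.0)
```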
Fitting GRIFT to data requires choosing a value for N, the number of feature detectors. Determining the optimal N with a high degree of certainty is difficult. Increasing the number of model parameters, in this case, increasing N, almost always improves the ability of the model to fit the data. At the same time, however, increasing the number of parameters also generally increases the chance that the model will overfit the data, i.e., explain noise in the data rather than produce an accurate representation of the classification process. Similar difficulties exist, for example, in determining the correct dimensionality for a multidimensional scaling solution (Borg & Groenen, 1997) or choosing the maximum degree to use in polynomial regression (Bishop, 2006). Because there is no generally accepted solution for determining the correct number of features, we recommend that any application of GRIFT proceed in three steps. The first two steps are outlined in this paper and involve using GRIFT to recover a potential set of features for a range of N and then evaluating each N based on a number of quantitative and qualitative measures discussed below. The final step involves performing additional experiments to verify the features recovered by GRIFT. The experimental results discussed below include the GRIFT models fit with a reasonable range of N values. We present supplementary statistics that, in most cases, either indicate the correct N or provide a strong indication of which values are likely. 
In some cases, it is obvious when N is too large, for example, when the model-fitting algorithm produces feature detectors that either never fire or always fire. Detectors that never fire cannot influence the classification output. If detector F i always fires, regardless of the stimulus presented, a mathematically equivalent model can be constructed by removing that feature and adding λ i to γ. In the Results and Discussion section, we will present the probability of each feature detector firing conditioned on the observers' responses, which can indicate when these useless feature detectors are present in a GRIFT model. The appearance of either type of feature indicates that N is set too high. 3 
Another option is to compute a statistic that has been shown to be helpful in indicating model over-fitting. One commonly used statistic is the Akaike Information Criterion (AIC), which is 2k − 2 ln L, where k is the number of free parameters in the model and L is the likelihood of the parameter values given the data (Akaike, 1974). For our model, the statistic equals 2(N(∣S∣ + 2) + (R − 1)) − 2 ln(P(S, C∣θ)), where S and C are the observations from all the trials. Lower AIC scores indicate better models. Using this statistic penalizes increases in model complexity, in this case, increases in N, that do not result in substantial increases in model likelihood. 4 
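The AIC computation specializes to GRIFT's parameter count as stated; a small helper (names are ours) makes the bookkeeping explicit:

```python
def grift_aic(n_features: int, n_pixels: int, n_ratings: int, log_lik: float) -> float:
    """AIC = 2k - 2 ln L with k = N(|S| + 2) + (R - 1) free parameters;
    n_ratings (R) is 2 for a binary classification task."""
    k = n_features * (n_pixels + 2) + (n_ratings - 1)
    return 2.0 * k - 2.0 * log_lik
```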
Experiments
Five experiments were analyzed to validate the GRIFT model and discover potential multi-feature, non-linear classification strategies. All experiments used variations of the traditional classification image experimental design in which participants are asked to classify a series of noise-corrupted images (e.g., Gold et al., 2006). The first four experiments (light-dark, faces, four-square, and Kanizsa) were classification experiments in which participants categorized stimuli into one of two classes. Each class contained one or more target images. The experiments differed in the number and type of targets in each class. To show that GRIFT may also be adapted to other experimental paradigms, the fifth experiment (square-detection) was a detection experiment in which participants were asked to determine if a single target was present and to respond with a confidence rating. 
The light-dark and faces experiments were first described in Ross and Cohen (2008) and are reanalyzed here. The four-square and Kanizsa data were first presented in Gold et al. (2006) and Gold et al. (2000), respectively. GRIFT was initially applied to the four-square data in Ross and Cohen (2008), but has not been previously applied to the Kanizsa data. New GRIFT model fits and additional analyses for both the four-square and Kanizsa data sets are presented here. The square-detection data have not been previously published. 
General method
Classification experiments
Design details of the previously published classification experiments are given in the papers cited above. On each trial, a participant saw a stimulus (a sample from P( S)) that consisted of a randomly selected target with independent, identically distributed noise added to each pixel. In particular, a stimulus ( S) was produced by randomly selecting one of the available targets ( T), multiplying it by a contrast level ( b), and adding random independent truncated Gaussian noise 5 ( G) at every pixel: S = bT + G. The participant's task was to choose the class of the underlying target. Feedback was provided after each trial. In the Kanizsa experiment, participants completed between 9,512 and 9,814 trials. In the other three experiments, participants completed between 4,000 and 4,102 trials. 
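The stimulus rule S = bT + G is simple to reproduce. In the sketch below the noise standard deviation and truncation bounds are placeholders, since the text does not specify them, and clipping is only a crude stand-in for true truncated-Gaussian sampling:

```python
import numpy as np

def make_stimulus(target, contrast, noise_sd, lo=-1.0, hi=1.0, rng=None):
    """S = b*T + G: scale the target by the contrast level and add independent
    pixel noise. noise_sd, lo, and hi are illustrative placeholders."""
    rng = rng or np.random.default_rng()
    noise = np.clip(rng.normal(0.0, noise_sd, size=target.shape), lo, hi)
    return contrast * target + noise
```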
The light-dark and faces experiments were broken into two sessions. Each session consisted of 2,000 trials and lasted approximately 90 minutes or less. For the first 100 trials, the participant was reminded of the two target classes by a noise-free, high-contrast display of all the targets along with their class labels after every 20 trials. After the first 100 trials, the reminder displays appeared after every 100 trials. On each trial, the stimulus remained on the display until the participant responded. The participants were not instructed to answer as quickly as possible, but were told to trust their initial impression rather than spending many seconds or minutes studying each stimulus. Auditory feedback after each trial indicated whether the answer given was correct or incorrect. The experiments were implemented in MATLAB using the Psychtoolbox software (Brainard, 1997). Stimuli were presented on an Apple eMac computer positioned 1 m away from the participants and their head positions were controlled with a chin rest. There was no ambient light and the monitor was calibrated so stimuli could be presented at known brightness values. Observer responses were recorded using the computer keyboard. 
In the light-dark and faces experiments, the contrast level was initialized to a high value and adjusted over the first 101 trials in each session using the stair-casing algorithm (Macmillan & Creelman, 2005). Stair-casing was used to increase or decrease the target contrast level to keep the participant's performance near the 71% correct level. This level was chosen as a good balance between the need to explore responses to a large volume of the image stimulus space and the need to keep the participants engaged in the task. To make the trials statistically independent of one another, the contrast level for the remainder of the experiment was fixed at the mean of the contrasts of the final 20 stair-cased trials. The initial stair-casing trials in each session were discarded when fitting GRIFT models to the data. 
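The exact stair-casing rule is not given here; one common choice from Macmillan and Creelman (2005) that converges near the 71% correct level is the two-down/one-up rule, sketched below as a plausible reading (the step size is a placeholder):

```python
def staircase_update(contrast: float, correct: bool, state: dict,
                     step: float = 0.05) -> float:
    """Two-down/one-up staircase: lower the contrast after two consecutive
    correct responses, raise it after any error. Converges near 70.7% correct."""
    if not correct:
        state["streak"] = 0
        return contrast + step
    state["streak"] = state.get("streak", 0) + 1
    if state["streak"] == 2:
        state["streak"] = 0
        return max(contrast - step, 0.0)
    return contrast
```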
The stimulus generation and experimental procedures for the four-square and Kanizsa data sets were similar to those described above (see Gold et al., 2006, 2000, respectively). The one significant exception is that stair-casing adjustments to the signal contrast levels occurred throughout these experiments to ensure that participants' accuracy did not deviate significantly. Because the response on a trial can influence the contrast level of its successors, this method violates the assumption that trials are independent from one another. This dependence was ignored, however, when fitting GRIFT models to the results. In practice, after an initial adjustment period, participants' contrast levels stay nearly constant through an experimental session, therefore the trials can be reasonably treated as independent. 
Detection experiment
The square-detection experiments used a similar stimulus generation procedure as the four-square experiment, but there was only one potential target in each condition and the observers' task was to determine if this target was present or if the image was purely composed of noise. Instead of responding with a binary classification, participants gave a rating from 1 (definitely absent) to 6 (definitely present). 
Stimuli
Figure 3 shows the classes, targets, and a sample stimulus (target plus noise) from each response class or condition for each of the five experiments. 
Figure 3
 
Targets and sample stimuli from the five experiments.
The light-dark experiment asked participants to distinguish between three strips that each had two light and one dark blob and three strips that each had one light and two dark blobs. Observers could successfully distinguish between the two groups either by relying on overall brightness (Class 1 stimuli were brighter than Class 2 stimuli) or by searching for individual light-dark patterns. 
The faces task asked participants to distinguish between stimuli produced from two target faces (from Gold, Bennett, & Sekuler, 1999, unfiltered and down-sampled to 128 × 128 pixels). Classifying faces is a more natural visual task than classifying abstract patterns and faces can be distinguished at a relatively low resolution, which keeps the total number of parameters tractable. We wanted to investigate whether participants would process the faces holistically (Sergent, 1984), i.e., using a single classifier, or through the detection of multiple parts. 
In the four-square experiment, participants were asked to distinguish between two stimulus classes, one in which there were bright squares in the upper-left or upper-right corners and one in which there were bright squares in the lower-left or lower-right corners. These classes can be linearly discriminated by comparing the overall brightness of the top and bottom pixels in each image, but observers may also pursue a multi-feature, non-linear strategy and attempt to detect the four possible bright corners independently. Previous analyses (Cohen et al., 2007; Gold et al., 2006) provide evidence that observers used multi-feature strategies in this experiment. 
The Kanizsa-square experiment required observers to differentiate between two figures that produce slightly different illusory contours, i.e., perceived contours that are not actually present in the stimulus. The corners of the Class 1 and Class 2 targets are tilted to produce illusory vertical contours that are bowed outwards and inwards, respectively. Although the pixels of the illusory contours are identically distributed in each stimulus class and therefore cannot provide useful discriminative information, Gold et al.'s (2000) classification-image analysis indicated that participants used the pixels of the illusory contours to classify the stimuli. The participants focused mainly on the vertical contours. These contours, however, are separated in space. GRIFT was applied to these data to determine if the two illusory contours comprise a single feature or two separate features. 
Finally, the square-detection experiment had three conditions: full, incomplete, and incomplete-rotated. In each condition, participants judged whether a single target was present or absent and responded with a confidence rating. In the full condition, the target was a square; in the incomplete condition, the target was the corners of the full square; and in the incomplete-rotated condition, the target was four corners rotated to a common orientation in order to disrupt illusory contour effects. 
Participants
The light-dark and faces experiments were each run on three observers. These participants were University of Massachusetts Amherst graduate students and the spouse of a UMass postdoctoral fellow. Participants were paid $11 per hour, with a $10 bonus for being the most accurate classifier for a particular experiment. The participants were naive to the underlying model and the fact that we were interested in finding multiple independent feature detectors. 
The square-detection data were collected from two Indiana University undergraduates who were both naive to the purpose of the experiment. Four observers participated in the four-square experiment, two of whom (EA and RS) were naive to the purpose of discovering multiple independent detectors. Three observers participated in the Kanizsa experiment. 
In addition to the human participants, three simulated observers were created to validate GRIFT's ability to recover feature detectors on the four-square task data. The top-vs.-bottom observer classified images by comparing the brightness of the top and bottom pixels. 6 Bright pixels on the top indicated Class 1. The corners observer classified images using four features, each sensitive to a particular corner brightness pattern. Bright pixels in the top-left or top-right corners indicated Class 1, bright pixels in the bottom-left or bottom-right corners indicated Class 2. The combo observer used all the features from both the top-vs.-bottom and corners observers. These three simulated observers were given examples at a fixed noise level and their parameters were adjusted so they classified the stimuli with accuracy similar to the human observers. Because the parameters were fixed throughout the experiments, no stair-casing of the stimuli was employed. 
Results and Discussion
Figures 4–8 and Tables 1–13 display the results of fitting GRIFT models to the data from the previously described experiments. The most informative parameter values are the ω i vectors. Keep in mind that the ω i are not image pixel values that the features are attempting to match. Rather, they represent the weights of the linear classifiers that compose each recovered feature detector. The figures present these weights graphically for each value of N for each data set. By examining the pattern of positive and negative weights it is possible to determine what average brightnesses and contrasts are computed by each feature detector. Although the difference between the weights is informative, the sign of the weights is usually not significant: given a fixed number of features, there are typically several sets of features with identical log likelihoods that differ from each other only in the signs of their ω terms and the associated λ and β values. 
Figure 4
 
The most probable ω parameters found from the simulated and human observers in the four-square task. Red indicates very positive ω components, blue indicates very negative components, green indicates zero.
Figure 5
 
The most probable ω parameters found for human participants in the light-dark experiment. Red indicates very positive ω components, blue indicates very negative components, green indicates zero.
Figure 6
 
The most probable ω parameters found for human participants in the faces experiment. Red indicates very positive ω components, blue indicates very negative components, green indicates zero.
Figure 7
 
The most probable ω parameters found for human participants in the Kanizsa experiment. Red indicates very positive ω components, blue indicates very negative components, green indicates zero.
Figure 8
 
The most probable ω parameters found for human participants in the square-detection experiment. Red indicates very positive ω components, blue indicates very negative components, green indicates zero.
Table 1
 
The γ and λ i values for each GRIFT model fit to data from the top-vs.-bottom, corners, and combo simulated observers from the four-square experiment. The P i 1 and P i 2 values indicate the probability that feature detector F i will be active when the observer responds “Class 1,” P( F i = 1∣ C = 1), and “Class 2,” P( F i = 1∣ C = 2), respectively.
Four-square Top vs. bottom Corners Combo
λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2
N = 1 γ 2.1 −2.8 −2.9
F 1 −4.0 0.78∣0.23 5.2 0.35∣0.67 5.7 0.24∣0.75
N = 2 γ 1.1 4.7 −5.4
F 1 4.9 0.23∣0.78 −3.7 0.76∣0.49 4.2 0.31∣0.79
F 2 −3.8 0.93∣0.96 −4.0 0.75∣0.45 4.4 0.40∣0.84
N = 3 γ −5.8 5.6 2.0
F 1 3.8 0.26∣0.76 −3.5 0.73∣0.46 −4.5 0.75∣0.28
F 2 4.3 0.26∣0.80 −3.8 0.43∣0.20 −4.7 0.72∣0.39
F 3 4.4 0.43∣0.24 −4.2 0.67∣0.47 4.9 0.47∣0.74
N = 4 γ 0.8 0.2 −1.8
F 1 −4.5 0.74∣0.21 −3.6 0.75∣0.52 −4.0 0.69∣0.31
F 2 4.5 0.22∣0.76 −3.6 0.47∣0.23 −4.1 0.76∣0.40
F 3 −3.5 0.59∣0.63 3.3 0.53∣0.68 4.8 0.52∣0.82
F 4 4.5 0.32∣0.19 3.8 0.23∣0.47 4.0 0.57∣0.83
N = 5 γ −3.4 −2.6 2.6
F 1 5.0 0.17∣0.68 −4.0 0.48∣0.24 −4.5 0.39∣0.20
F 2 5.6 0.24∣0.81 4.0 0.22∣0.47 −4.5 0.53∣0.34
F 3 3.9 0.71∣0.68 4.1 0.41∣0.64 −4.2 0.72∣0.45
F 4 −4.0 0.36∣0.41 4.1 0.22∣0.45 3.9 0.18∣0.39
F 5 −3.8 0.66∣0.77 −3.7 0.19∣0.28 4.1 0.22∣0.76
N = 6 γ 3.2 2.0 2.4
F 1 −6.4 0.78∣0.21 4.6 0.26∣0.50 −3.7 0.46∣0.15
F 2 −3.4 0.61∣0.54 −4.1 0.76∣0.53 −4.7 0.52∣0.35
F 3 4.2 0.29∣0.15 −4.4 0.61∣0.39 −4.5 0.34∣0.20
F 4 −4.0 0.55∣0.22 4.2 0.55∣0.80 4.0 0.26∣0.78
F 5 3.6 0.54∣0.52 −4.0 0.17∣0.24 4.3 0.32∣0.59
F 6 4.3 0.10∣0.20 −3.9 0.27∣0.20 −3.8 0.69∣0.47
Table 2
 
AIC scores and data log likelihoods for GRIFT models of the data from simulated observers in the four-square experiment. Bold numbers indicate the minimum AIC.
Four-square Simulated observer Fit N
1 2 3 4 5 6
Top vs. bottom AIC 3,758 3,813 3,812 3,845 3,864 3,995
LnL −1,812 −1,774 −1,707 −1,657 −1,601 −1,581
Corners AIC 4,466 4,438 4,397 4,398 4,398 4,413
LnL −2,166 −2,086 −2,000 −1,934 −1,868 −1,810
Combo AIC 3,534 3,549 3,510 3,553 3,585 3,623
LnL −1,700 −1,642 −1,556 −1,511 −1,461 −1,415
Table 3
 
The γ and λ i values for each GRIFT model fit to data from the human observers in the four-square experiment. The P i 1 and P i 2 values indicate the probability that feature detector F i will be active when the observer responds “Class 1,” P( F i = 1∣ C = 1), and “Class 2,” P( F i = 1∣ C = 2), respectively.
Four-square AC EA JG RS
λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2
N = 1 γ 2.5 2.1 2.9 −2.6
F 1 −5.9 0.78∣0.27 −4.2 0.72∣0.28 −5.7 0.75∣0.27 4.6 0.32∣0.77
N = 2 γ 3.3 −2.8 −1.6 2.9
F 1 −5.0 0.65∣0.19 3.2 0.25∣0.62 −6.0 0.55∣0.16 −4.6 0.50∣0.16
F 2 −5.3 0.55∣0.15 3.4 0.24∣0.66 5.2 0.47∣0.85 −4.8 0.45∣0.14
N = 3 γ −3.2 −5.3 −2.6 −2.8
F 1 −5.5 0.65∣0.26 4.1 0.34∣0.75 5.8 0.26∣0.69 −4.7 0.36∣0.11
F 2 4.3 0.13∣0.57 4.1 0.35∣0.70 5.5 0.59∣0.89 5.1 0.37∣0.77
F 3 5.1 0.57∣0.83 4.2 0.21∣0.27 −5.4 0.90∣0.69 5.0 0.12∣0.26
N = 4 γ 4.2 3.7 0.6 −0.9
F 1 −5.0 0.56∣0.18 −4.6 0.78∣0.56 −4.5 0.92∣0.65 −5.5 0.88∣0.78
F 2 −5.0 0.59∣0.14 −3.9 0.62∣0.23 −5.2 0.70∣0.27 −4.4 0.64∣0.20
F 3 −6.2 0.89∣0.70 4.1 0.24∣0.59 4.8 0.09∣0.30 5.0 0.67∣0.89
F 4 5.0 0.56∣0.83 −5.2 0.10∣0.08 6.1 0.58∣0.86 4.1 0.65∣0.90
N = 5 γ 1.6 2.7 −1.7 2.3
F 1 4.9 0.11∣0.33 −4.8 0.72∣0.30 5.0 0.29∣0.78 −5.0 0.87∣0.77
F 2 −5.0 0.72∣0.25 −4.7 0.21∣0.16 −5.4 0.90∣0.70 −4.6 0.43∣0.18
F 3 5.1 0.65∣0.88 4.5 0.39∣0.72 −5.2 0.43∣0.16 −4.5 0.70∣0.43
F 4 −4.7 0.89∣0.61 4.7 0.21∣0.37 4.9 0.14∣0.34 4.5 0.28∣0.71
F 5 −5.0 0.42∣0.17 −4.3 0.74∣0.64 4.9 0.62∣0.86 4.4 0.71∣0.88
N = 6 γ −7.4 1.4 2.9 −6.7
F 1 4.8 0.44∣0.78 −5.0 0.52∣0.25 −3.8 0.80∣0.35 −4.9 0.68∣0.25
F 2 −4.4 0.54∣0.14 −4.2 0.79∣0.74 −4.9 0.86∣0.62 5.1 0.81∣0.92
F 3 4.8 0.51∣0.81 −4.2 0.81∣0.65 −3.9 0.29∣0.12 −4.7 0.73∣0.54
F 4 −4.8 0.87∣0.69 4.8 0.29∣0.69 −3.7 0.72∣0.25 4.9 0.65∣0.85
F 5 5.3 0.67∣0.89 4.8 0.67∣0.74 6.0 0.62∣0.87 4.0 0.59∣0.81
F 6 5.4 0.15∣0.40 4.2 0.17∣0.38 5.0 0.11∣0.25 5.2 0.14∣0.30
Table 4
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the four-square experiment. Bold numbers indicate the minimum AIC.
Four-square Participant Fit N
1 2 3 4 5 6
AC AIC 3,493 3,349 3,250 3,173 3,080 3,143
LnL −1,680 −1,542 −1,426 −1,322 −1,209 −1,174
EA AIC 4,150 4,068 4,017 3,969 3,926 3,958
LnL −2,008 −1,901 −1,810 −1,720 −1,632 −1,582
JG AIC 3,742 3,547 3,330 3,291 3,225 3,266
LnL −1,804 −1,640 −1,466 −1,381 −1,282 −1,236
RS AIC 4,017 3,843 3,707 3,664 3,594 3,628
LnL −1,942 −1,788 −1,655 −1,567 −1,466 −1,417
Table 5
 
The γ and λ i values for each GRIFT model fit to data from the human observers in the light-dark experiment. The P i 1 and P i 2 values indicate the probability that feature detector F i will be active when the observer responds “Class 1,” P( F i = 1∣ C = 1), and “Class 2,” P( F i = 1∣ C = 2), respectively.
Light-dark PL1 PL2 PL3
λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2
N = 1 γ 2.0 2.3 −2.0
F 1 −4.7 0.77∣0.23 −5.4 0.68∣0.34 3.2 0.42∣0.52
N = 2 γ −0.5 1.3 −1.1
F 1 −6.0 0.78∣0.24 4.5 0.29∣0.63 −3.7 0.30∣0.22
F 2 3.9 0.95∣0.89 −4.5 0.87∣0.69 2.8 0.47∣0.54
N = 3 γ 2.8 −3.0 −2.0
F 1 −6.7 0.78∣0.23 −5.7 0.53∣0.40 −3.6 0.62∣0.58
F 2 3.7 0.95∣0.89 6.1 0.32∣0.57 3.5 0.20∣0.24
F 3 −3.3 0.93∣0.97 5.5 0.41∣0.54 3.9 0.69∣0.77
N = 4 γ −2.2 −1.9 −1.8
F 1 6.7 0.22∣0.77 −2.0 0.62∣0.29 −3.4 0.18∣0.12
F 2 −3.7 0.05∣0.11 −5.3 0.53∣0.40 −3.6 0.14∣0.15
F 3 −3.3 0.93∣0.97 5.4 0.33∣0.57 3.4 0.22∣0.22
F 4 2.0 1.00∣1.00 5.2 0.42∣0.53 3.7 0.35∣0.47
N = 5 γ 1.2 0.0 −1.5
F 1 −6.7 0.78∣0.23 −1.9 0.62∣0.29 −3.6 0.86∣0.80
F 2 −3.7 0.05∣0.11 5.3 0.47∣0.60 3.3 0.18∣0.20
F 3 3.3 0.07∣0.03 5.4 0.33∣0.57 −3.5 0.17∣0.11
F 4 −2.0 0.00∣0.00 −5.2 0.58∣0.47 3.4 0.83∣0.87
F 5 2.0 1.00∣1.00 −2.0 1.00∣1.00 3.2 0.25∣0.27
Table 6
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the light-dark experiment. Bold numbers indicate the minimum AIC.
Light-dark Participant Fit N
1 2 3 4 5
PL1 AIC 3,427 3,452 3,524 3,624 3,724
LnL −1,662 −1,625 −1,611 −1,611 −1,611
PL2 AIC 4,088 4,147 3,999 4,095 4,195
LnL −1,993 −1,973 −1,849 −1,847 −1,847
PL3 AIC 5,029 5,081 5,131 5,214 5,288
LnL −2,464 −2,439 −2,414 −2,406 −2,393
Table 7
 
The γ and λ i values for each GRIFT model fit to data from the human observers in the faces experiment. The P i 1 and P i 2 values indicate the probability that feature detector F i will be active when the observer responds “Class 1,” P( F i = 1∣ C = 1), and “Class 2,” P( F i = 1∣ C = 2), respectively.
Faces PF1 PF2 PF3
λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2
N = 1 γ −3.3 −2.8 3.1
F 1 6.8 0.38∣0.64 5.4 0.48∣0.51 −5.8 0.52∣0.47
N = 2 γ −1.5 −0.9 2.9
F 1 6.9 0.38∣0.64 5.4 0.48∣0.51 −5.6 0.76∣0.73
F 2 −1.9 1.00∣1.00 −1.9 1.00∣1.00 5.7 0.28∣0.34
N = 3 γ 0.6 −0.8 6.6
F 1 6.9 0.38∣0.64 5.4 0.48∣0.51 −5.6 0.76∣0.73
F 2 −1.9 1.00∣1.00 −2.0 1.00∣1.00 −5.7 0.72∣0.66
F 3 −2.0 1.00∣1.00 −1.9 0.00∣0.00 1.9 1.00∣1.00
Table 8
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the faces experiment. Bold numbers indicate the minimum AIC.
Faces Participant Fit N
1 2 3
PF1 AIC 5,081 5,968 6,857
LnL −2,095 −2,095 −2,095
PF2 AIC 5,992 6,880 7,768
LnL −2,551 −2,551 −2,551
PF3 AIC 5,908 6,643 7,528
LnL −2,509 −2,432 −2,431
Table 9
 
The γ and λ i values for each GRIFT model fit to data from the human observers in the Kanizsa experiment. The P i 1 and P i 2 values indicate the probability that feature detector F i will be active when the observer responds “Class 1,” P( F i = 1∣ C = 1), and “Class 2,” P( F i = 1∣ C = 2), respectively.
Kanizsa AJR AMC JMG
λ i  P i1∣P i2  λ i  P i1∣P i2  λ i  P i1∣P i2
N = 1 γ 4.5 −3.6 4.1
F 1 −8.8 0.55∣0.40 7.6 0.43∣0.61 −7.8 0.56∣0.37
N = 2 γ −2.3 3.8 4.0
F 1 8.8 0.45∣0.60 −7.6 0.57∣0.39 −7.8 0.56∣0.37
F 2 −2.0 1.00∣1.00 0.5 0.44∣0.45 0.6 0.18∣0.19
N = 3 γ −2.9 −3.7 2.2
F 1 6.0 0.47∣0.57 7.6 0.43∣0.61 −7.9 0.56∣0.37
F 2 6.0 0.47∣0.57 −2.0 0.00∣0.00 0.4 0.87∣0.87
F 3 −6.0 0.54∣0.43 −2.0 0.00∣0.00 1.6 1.00∣1.00
Table 10
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the Kanizsa experiment. Bold numbers indicate the minimum AIC.
Kanizsa Participant Fit N
1 2 3
AJR AIC 12,765 14,019 15,379
LnL −5,755 −5,755 −5,807
AMC AIC 12,469 13,723 14,977
LnL −5,606 −5,606 −5,606
JMG AIC 12,538 13,793 15,046
LnL −5,641 −5,642 −5,641
Table 11
 
The γ and λ i values for each GRIFT model fit to data from human observer PS1 in the square-detection experiment. The P i 1 and P i 6 values indicate the probability that feature detector F i will be active when the observer responds “1” (target definitely absent), P( F i = 1∣ C = 1), and “6” (target definitely present), P( F i = 1∣ C = 6), respectively.
Square-detection PS1-full PS1-inc PS1-incrot
λ i  P i1∣P i6  λ i  P i1∣P i6  λ i  P i1∣P i6
N = 1 γ 1 0.8 1.0 3.4
γ 2 0.3 0.1 −0.1
γ 3 −1.9 −2.3 −2.9
γ 4 −5.3 −6.0 −5.6
γ 5 −6.1 −7.5 −9.3
F 1 4.9 0.26∣0.70 5.6 0.34∣0.76 5.5 0.48∣0.79
N = 2 γ 1 0.1 0.1 1.5
γ 2 −0.5 −1.3 −3.6
γ 3 −3.6 −4.4 −5.6
γ 4 −7.9 −9.1 −9.9
γ 5 −9.0 −11.2 −14.6
F 1 6.3 0.28∣0.69 3.2 0.46∣0.45 4.1 0.79∣0.55
F 2 2.7 0.37∣0.43 7.1 0.33∣0.76 7.3 0.40∣0.78
N = 3 γ 1 5.4 −0.6 9.5
γ 2 4.3 −3.5 4.5
γ 3 0.3 −7.4 2.1
γ 4 −4.7 −13.6 −2.5
γ 5 −6.3 −16.9 −7.3
F 1 −4.6 0.52∣0.55 2.8 0.14∣0.46 −7.7 0.59∣0.22
F 2 7.3 0.27∣0.69 5.9 0.51∣0.44 −2.6 0.26∣0.11
F 3 −2.0 0.65∣0.37 8.8 0.33∣0.76 4.2 0.84∣0.57
N = 4 γ 1 5.7 5.0 1.1
γ 2 4.5 2.1 −5.3
γ 3 0.6 −2.1 −7.7
γ 4 −4.6 −8.2 −12.9
γ 5 −6.2 −11.7 −18.4
F 1 −1.3 0.20∣0.07 2.6 0.12∣0.44 −1.7 0.84∣0.64
F 2 −4.6 0.50∣0.53 −6.2 0.49∣0.57 2.3 0.76∣0.91
F 3 −1.9 0.72∣0.46 1.2 0.60∣0.80 5.6 0.80∣0.53
F 4 7.3 0.27∣0.69 8.8 0.33∣0.76 8.5 0.40∣0.77
Table 12
 
The γ and λ i values for each GRIFT model fit to data from human observer PS2 in the square-detection experiment. The P i 1 and P i 6 values indicate the probability that feature detector F i will be active when the observer responds “1” (target definitely absent), P( F i = 1∣ C = 1), and “6” (target definitely present), P( F i = 1∣ C = 6), respectively.
Square-detection PS2-full PS2-inc PS2-incrot
λ i  P i1∣P i6  λ i  P i1∣P i6  λ i  P i1∣P i6
N = 1 γ 1 2.4 2.7 2.5
γ 2 0.3 0.5 0.2
γ 3 −1.7 −1.5 −1.4
γ 4 −4.6 −4.8 −4.3
γ 5 −7.7 −7.6 −7.0
F 1 5.6 0.21∣0.79 5.8 0.19∣0.80 5.8 0.25∣0.75
N = 2 γ 1 2.2 6.9 2.3
γ 2 −0.1 4.5 −0.1
γ 3 −3.1 1.7 −2.1
γ 4 −6.3 −1.6 −6.1
γ 5 −12.9 −7.4 −10.5
F 1 4.5 0.21∣0.65 −4.4 0.79∣0.31 4.0 0.14∣0.45
F 2 7.4 0.16∣0.70 6.8 0.14∣0.68 7.4 0.23∣0.71
N = 3 γ 1 9.4 2.2 3.8
γ 2 7.1 −0.4 1.0
γ 3 3.9 −3.3 −1.2
γ 4 0.9 −7.1 −5.2
γ 5 −6.0 −12.3 −9.6
F 1 4.3 0.23∣0.68 4.9 0.05∣0.36 −2.1 0.51∣0.30
F 2 −7.2 0.84∣0.31 6.6 0.21∣0.75 7.4 0.25∣0.72
F 3 4.1 0.02∣0.24 2.6 0.31∣0.68 4.3 0.09∣0.37
N = 4 γ 1 7.8 15.7 13.2
γ 2 4.3 12.4 9.6
γ 3 1.1 9.3 7.4
γ 4 −1.8 5.7 3.1
γ 5 −8.9 −0.5 −1.6
F 1 −4.6 0.14∣0.08 −4.7 0.94∣0.64 −1.8 0.45∣0.23
F 2 −4.2 0.73∣0.26 −6.7 0.82∣0.31 −8.0 0.75∣0.28
F 3 7.2 0.16∣0.68 −3.9 0.80∣0.38 −4.3 0.91∣0.62
F 4 4.2 0.02∣0.25 3.2 0.80∣0.95 3.9 0.85∣0.90
Table 13
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the square-detection experiment. Bold numbers indicate the minimum AIC.
Square-detection Participant Fit N
1 2 3 4
PS1-full AIC 11,930 12,002 12,092 12,221
LnL −5,894 −5,869 −5,843 −5,842
PS1-inc AIC 12,232 12,277 12,302 12,422
LnL −6,045 −6,001 −5,948 −5,942
PS1-incrot AIC 10,978 10,830 10,910 11,014
LnL −5,418 −5,278 −5,252 −5,238
PS2-full AIC 11,612 11,510 11,568 11,662
LnL −5,735 −5,618 −5,581 −5,562
PS2-inc AIC 11,582 11,474 11,517 11,610
LnL −5,720 −5,600 −5,556 −5,536
PS2-incrot AIC 12,027 11,903 11,982 12,072
LnL −5,942 −5,814 −5,788 −5,767
The tables contain each model's γ value and the λ i values associated with each feature detector. Large λ i values indicate that a feature's detection will greatly influence the classification decision. Some feature detectors associated with large λ i values, however, might have very little influence. For example, a feature detector's ω i and β i values might be such that the feature detector is never activated by any of the stimuli, rendering the detector useless no matter what λ i is associated with it. Therefore, the tables also list estimates for P(F i = 1∣C = 1) and P(F i = 1∣C = 2), the probabilities of feature detector F i firing given a particular classification response. If both of these values are near 0, the stimuli almost never activate this feature detector. If they are both near 1, the feature detector is always active and therefore acts as an additional threshold term in the linear function of P(C∣F). Feature detectors whose firing probabilities differ depending on the observer's classification decision are the most useful for modeling those decisions. The β i values are not reported because they are usually not informative without knowledge of the exact ω i values, which are presented graphically. The tables also present the AIC and data log likelihood values for each GRIFT model and data set. 
Four-square
We present the results on the four-square data first because they are easy to understand and because the simulated observers provide an important validation of the GRIFT approach. The results of using GRIFT to recover the feature detectors from the four-square experimental data are given in Figure 4 and Tables 1–4. GRIFT models with 1–6 features were fit to the data from each of the three simulated and four human observers. 
Note that the constant gray regions between the stimulus corners (see Figure 3) were discarded before the experimental data were analyzed by GRIFT. Because these stimulus regions are always constant, it is reasonable to assume that they are not used in the classification process. These areas, however, do provide significant visual separation between the corners. To incorporate this visual separation into GRIFT, the neighborhood functions for the P(ω_i) distributions (Equation 9) were adjusted so that they would not penalize the assignment of very different weights across corner boundaries. 
Across all observers, the best one-feature model (left column of Figure 4) was based on the contrast between the top and bottom of the image. In the figure, positive and negative weights are represented by red and blue colors, respectively. Recall that, within a feature, the signs of the weights are generally not meaningful, so that, after an appropriate shift of other parameters, a positive top and negative bottom feature is equivalent to a negative top and positive bottom feature. The important factor is that one area has a large weight and the other has a large weight of the opposite polarity, indicating a contrast feature that is sensitive to the presence or absence of relative brightness in those regions. It is interesting to note that this result is extremely similar to the result produced by classification images of the data, reinforcing the strong similarity between one-feature GRIFT and that approach. 
The results of fitting GRIFT to the three simulated observers, top-vs.-bottom, corners, and combo, demonstrate that when GRIFT generates the data, the correct features can be reliably recovered. For data generated by the top-vs.-bottom observer, GRIFT correctly recovered, for all values of N, one or more feature detectors sensitive to the contrast between the top and bottom of the stimulus. It is important to note that, even though the stimuli were generated from images with corners, GRIFT did not hypothesize the existence of any corner-sensitive feature detectors, even for large N. That is, GRIFT recovered the feature detectors used to generate the responses, not the features used to generate the stimuli. The minimum AIC value is for N = 1. 
GRIFT also recovers the appropriate feature detectors for data generated by the corners observer. As N increases, corner-sensitive features appear. When N = 4, each of the four feature detectors is sensitive to the presence or absence of a different corner of the stimulus, matching the strategy of the corners observer. For N > 4, the GRIFT models find four corner feature detectors and fill the remaining slots with uninterpretable, noisy feature detectors. Examining the AIC values in Table 2 indicates that the adjusted fit values are virtually identical for 3 ≤ N ≤ 5. The AIC value for N = 3 is the lowest by a very small margin, probably because, with the appropriate γ, it is difficult to distinguish between three- and four-corner strategies in these data. That is, if an observer uses three corner detectors, and they all fail, he or she can default to assuming that the fourth would have succeeded without actually computing it (for a similar issue regarding top-vs.-bottom, see Footnote 6). The feature activation probabilities for N = 5 indicate that, compared to the activity frequencies of the four corner features, the extraneous fifth feature is seldom active for either response class. 
Analysis of the data generated by the combo observer appropriately reveals the presence of both corner and top-vs.-bottom detectors, especially for N = 5 and 6. This recovery is significant because, although the features used in the two strategies spatially overlap, GRIFT was still able to separate them out from the classification data. The minimum AIC was for N = 3, possibly indicating that three features provide a more compact representation of the combo strategy, or that AIC penalizes complexity too harshly in some cases, further highlighting the view that features recovered by GRIFT should be used as a starting point for further experiments. 
GRIFT revealed that all four human observers applied multi-feature strategies (Footnote 7). The minimum AIC values (Table 4) were all for N = 5. Looking at the GRIFT parameters, however, reveals important differences in strategies. Of all the human observers, JG's detector weights (Figure 4) show the clearest corner patterns, and this participant also had the largest improvement in data log likelihood as N increased. On the other end of the spectrum, the corner detection patterns are least visible in EA's data, and this participant also exhibited the smallest improvement in data log likelihood as N increased. AC and RS are between these two extremes on both the visual corner pattern and log likelihood spectra. Interestingly, this GRIFT analysis suggests that at least some of the human observers (e.g., AC and JG) used a hybrid strategy, i.e., both corner features and an overall difference in brightness between the top and bottom halves of the stimuli. It is potentially noteworthy that one of the non-naive observers, JG, exhibited the strongest indications of a corner detection strategy, while one of the naive observers, EA, exhibited the weakest indications of this strategy. 
Light-dark
Three participants, PL1, PL2, and PL3, were run with the light-dark stimuli. Although PL1 and PL2 performed near the expected accuracy level (82% and 73%, respectively), PL3 performed near chance (55%). Because the noise levels were fixed after the first 101 trials, a participant with good luck at the end of that calibration period could experience very high noise levels for the remainder of the experiment, leading to poor performance (Footnote 8). Regardless, all three participants appear to have used different classification methods, providing a very informative contrast, and so the data from all three participants are reported. The results of fitting the GRIFT model to the participants' data are given in Figure 5 and Tables 5 and 6. 
The AIC values and feature detection probabilities indicate that PL1 used a one-feature strategy, linearly classifying the stimuli by measuring their overall brightness. Although we knew that the targets allowed for successful classification using this method, this result was surprising because it implies that the observer was able to maintain a roughly constant brightness threshold across the stimulus and across time. It was expected that such a strategy would be more challenging than the within-image comparisons that enabled a linear strategy on the four-square stimuli. 
PL2, on the other hand, clearly employed a non-linear, multi-feature strategy. For N = 1 and N = 2, the most interpretable feature detector is sensitive to overall stimulus brightness. This brightness detector disappears when N = 3, and the best-fit model consists of three detectors, each sensitive to one of the three positions at which a bright or dark spot can appear. The detectors of the N = 3 model outperform the overall brightness detector only if they are all present: they are jointly, but not singly, informative. When N = 4, the overall brightness detector reappears, added to the three pattern detectors. Increasing to N = 5 adds a useless fifth feature detector. The AIC scores indicate that the N = 3 model is the best fit to the data, further confirming that this observer used a multi-feature strategy. 
The GRIFT models of participant PL3 had minimum AIC for N = 1 and mostly recovered noisy weight patterns and detectors that exhibit a small difference in activation probabilities between the two classes. The one-feature model is probably the best fit, and because performance was extremely low, it can be assumed that the participant was reduced to near random guessing much of the time. 
The clear distinction between the GRIFT fits for the three observers demonstrates the effectiveness of GRIFT in distinguishing between different classification strategies. 
Faces
The faces experiment presented the largest computational challenge. After the experiment, the stimuli were down-sampled further to 32 × 32 pixels, and the background surrounding the faces was removed by cropping, reducing the stimuli to 26 × 17 pixels. These steps were necessary to make the EM algorithm computationally feasible and to reduce the number of model parameters so that they would be sufficiently constrained by the samples. 
The results for three participants are given in Figure 6 and Tables 7 and 8. Participants PF1 and PF2's data were clearly best fit by one-feature GRIFT models. Increasing the number of features simply caused the algorithm to add detectors that were never or always active. As explained previously, such feature detectors are superfluous because they can be eliminated or absorbed into the γ term. PF1's one-feature model clearly places significant weight near the eyebrows, nose, and other facial features. PF2's one-feature weights are much noisier and harder to interpret. This might be related to PF2's poor performance on the task—only 53% accuracy compared to PF1's 72% accuracy. Perhaps the noise level was too high and PF2 was guessing rather than using image information much of the time. PF1's detector was active for 38% of Class 1 responses and 64% of Class 2 responses, a relatively large difference in activation frequency indicating a very predictive feature. PF2's detector was active for 48% of Class 1 responses and 51% of Class 2 responses, a very small difference indicating that this feature is not very predictive of PF2's responses. 
Participant PF3's data produced a genuine two-feature GRIFT model, albeit one that is difficult to interpret. The weights in the two-feature model are very different from those in the one-feature model, and the weight patterns in the two detectors are subtly different from one another. The Class 1 face has a left eyebrow that is darker than its right eyebrow, and both feature detectors compute a brightness contrast between the left and right eye regions of an input stimulus. Both detectors also place large weights near the nose and around parts of the mouth, and the second feature detector places weights that correspond to the left boundary between the face and the gray surrounding pixels in the noise-free targets. The faces differ in nose and mouth structure, as well as in the brightness of the forehead, cheek, and chin regions, and these weights may indicate sensitivity to those differences. Regardless, neither of PF3's N = 2 detectors had large differences between their Class 1 and Class 2 activation frequencies and, as with PF1 and PF2, PF3's minimum AIC score was for the one-feature model. 
Overall the results on faces support the hypothesis that face classification is generally holistic and configural, rather than the result of individual part classification, especially when detection of individual features is difficult, as was the case in this experiment (Sergent, 1984). 
Kanizsa
The GRIFT models fit to the Kanizsa experimental data confirm many of the results in Gold et al. (2000). These results can be seen in Figure 7 and Tables 9 and 10. The stimuli were downsampled to 25 × 25 pixels to make the model-fitting algorithm computationally tractable. According to GRIFT, the observers mainly relied on pixels from the vertical, but not the horizontal, illusory contours when classifying the stimulus. According to the best-fit GRIFT models, AJR strongly used both contours, JMG relied only on the left contour, and AMC appeared to make less overall use of the contour pixels. Gold et al. reached the same conclusion using a classification image analysis. Our results deviate from the previous work in assigning substantial weight to the horizontal lips of the four three-quarter circles, while the classification images of Gold et al. indicated that these pixels were not used in classification. It is possible that this difference arises from our decision to use a Markov random field prior probability distribution to smooth the weights during the parameter-fitting process, while Gold et al. applied a smoothing filter after calculating the classification image. This difference warrants further investigation to determine which model is more accurate. 
We had speculated that the two contours might be detected separately and independently, but the GRIFT analysis does not support that hypothesis. GRIFT did not produce a multi-feature model for any of the observers, suggesting that, when present, both illusory contours were processed as a single feature. For all participants, AIC was lowest for N = 1. When N > 1, GRIFT generated models with only one useful feature, except for observer AJR with N = 3. However, examining Table 10 reveals that this model has worse AIC and log likelihood values than AJR's N = 1 model. Therefore, adding two feature detectors with λ_2 = λ_3 = 0 to the N = 1 model would result in a three-feature model with better AIC and log likelihood than the N = 3 model discovered by the EM optimization algorithm. The N = 1 model, however, would still be preferable because it uses fewer parameters to achieve the same likelihood, producing a better AIC score. Therefore, AJR's N = 3 result is a clear case of the EM procedure not finding the globally optimal parameter values, but it serves as a demonstration of the utility of the AIC and log likelihood values in detecting such problems. 
Square detection
The results from fitting GRIFT to the square-detection data are reported in Figure 8 and Tables 11–13. The previous experiments were all classification experiments in which observers gave only one of two responses on every trial. The square-detection experiment required participants to provide a rating from 1 (target definitely absent) to 6 (target definitely present). Therefore, there are six P(F_i = 1 ∣ C = r) values for each feature. However, because these values tend to increase, decrease, or stay constant as r increases, we summarize them by reporting only P(F_i = 1 ∣ C = 1) and P(F_i = 1 ∣ C = 6) in Tables 11 and 12. 
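The following sketch shows one way the rating responses can be generated from the feature detectors. The cumulative-logistic link used here, P(C > r ∣ F) = σ(γ_r + Σ_i λ_i F_i), is our assumption (the defining equations appear earlier in the article), but it is consistent with the decreasing γ_1 ≥ … ≥ γ_5 values reported in Tables 11 and 12.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rating_probabilities(f, lam, gammas):
    """P(C = r | F = f) for ratings r = 1..R, assuming the cumulative link
    P(C > r | F) = sigmoid(gamma_r + sum_i lambda_i * F_i). The ordered
    thresholds gamma_1 >= ... >= gamma_{R-1} then yield a proper
    probability distribution over the R ratings."""
    s = float(np.dot(lam, f))
    exceed = sigmoid(np.asarray(gammas, dtype=float) + s)  # P(C > r), r = 1..R-1
    upper = np.concatenate(([1.0], exceed))                # P(C > r - 1)
    lower = np.concatenate((exceed, [0.0]))                # P(C > r)
    return upper - lower                                   # P(C = r), r = 1..R

# Hypothetical example using the PS2-full, N = 1 values from Table 12:
p = rating_probabilities(f=np.array([1.0]), lam=np.array([5.6]),
                         gammas=[2.4, 0.3, -1.7, -4.6, -7.7])
```

Under this link, an active detector with positive λ pushes probability mass toward high ratings, matching the pattern P_1^6 > P_1^1 in Table 12.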
The AIC values indicate that participant PS1 used one feature detector in the full and incomplete conditions, but used two feature detectors for the incomplete-rotated condition. The AIC values for participant PS2 indicated two detectors for all the conditions. In the full and incomplete conditions, both participants' feature detectors for their AIC-minimizing models were sensitive to the contours (real or illusory) connecting the corners. This corresponds well to the illusory contour sensitivity demonstrated in the Kanizsa data analysis. Participant PS2's two-feature models in these conditions consisted of detectors that were sensitive to different regions of the square, but neither contained detectors sensitive to particular corners. 
For both participants, the models for the incomplete-rotated condition were qualitatively different from those observed for the other two conditions. Participant PS1's data were best fit by a two-feature model in which the largest weights were on the corners, although both features still placed some weight on the (supposedly disrupted) illusory contour regions. Participant PS2's best-fit model had one detector focused on detecting the presence of the upper-left corner and one feature sensitive to the tops of both upper corners. These results lead to the conclusion that rotating the corner elements significantly disrupted the illusory contours and greatly reduced their effect on stimulus detection. The striking qualitative differences between this condition and the full and incomplete conditions indicate that the illusory contour influence discovered by Gold et al. (2000) is also present in detection tasks. In this experiment, the participants were sensitive to different pieces of the illusory contour; notably, the horizontal contours were influential here, although they were not in the Kanizsa classification data. It is also interesting that participant PS1 only showed evidence of a multi-feature strategy in the incomplete-rotated case. This is a type of qualitative strategy change that would be invisible in a classification image analysis. 
General discussion
This article has described the GRIFT model for determining potential features used in human image classification. GRIFT is a Bayesian network that describes classification as the combination of multiple, independently detected features. GRIFT provides a generative, probabilistic model of classification that can incorporate prior knowledge and assumptions about these features and account for human data. 
GRIFT models classification as a two-stage process in which the output of a set of independent feature detectors are pooled to produce a classification. Such a two-stage organization is not unique to GRIFT and has been used in many other psychophysical and neurological models. For example, Pelli et al. (2003) created a two-stage model for word recognition in which the outputs of independent letter detectors are combined to create the perception of a word. Rust, Mante, Simoncelli, and Movshon (2006) developed a model that represented the response of MT neurons to motion as the result of combining the outputs of multiple V1 neurons. This model structure is analogous to GRIFT's assumption of multiple feature detectors that mediate between the raw visual input and the classification decision. Similarly, Anzai, Peng, and Van Essen (2007) model the receptive fields of V2 neurons as the result of different methods of combining V1 neuron outputs. 
The experimental data used by GRIFT are compatible with the original classification-image method. In fact, the four-square and Kanizsa human participant data were originally analyzed using that algorithm. One of the advantages of GRIFT is that it allows the reanalysis of old data to reveal new information; fitting multi-feature GRIFT models can reveal previously hidden non-linear classification strategies. 
As mentioned, a one-feature GRIFT model is very similar to the classic classification-image model of classification. In both cases, a linear combination of pixel values is compared to a threshold. There are, however, a number of differences between the two models. In the classification image model, the threshold is a normally distributed random variable, which accounts for human classification inconsistency. In GRIFT, the threshold is not random, but is wrapped, along with the weighted pixel sum, in a logistic regression function (Equations 3 and 4), which accounts for randomness in feature detection. The feature detector output is passed to a second logistic regression function that determines classification (Equations 5 and 6); this is a second source of randomness with no equivalent in the single-step classification process modeled by a classification image. 
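As a concrete illustration of the two sources of randomness just described, here is a minimal generative sketch of a binary-response GRIFT trial. The logistic forms follow the description above, but because Equations 3–6 appear earlier in the article, the exact expressions here should be read as assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grift_trial(stimulus, omegas, betas, lam, gamma):
    """Sample one two-stage GRIFT classification.

    stimulus: (n_pixels,) vector S
    omegas:   (N, n_pixels) per-detector weight vectors omega_i
    betas:    (N,) detector thresholds beta_i
    lam:      (N,) combination weights lambda_i
    gamma:    scalar response threshold
    """
    # Stage 1: each detector fires stochastically (randomness in detection).
    p_fire = sigmoid(omegas @ stimulus + betas)
    f = (rng.random(p_fire.shape) < p_fire).astype(float)
    # Stage 2: the pooled detector outputs drive a stochastic response.
    p_class2 = sigmoid(float(lam @ f) + gamma)
    return (2 if rng.random() < p_class2 else 1), f
```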
Another contrast between GRIFT and classification image analysis is that GRIFT parameters are fit using the full stimuli displayed to the participants, while the classification image algorithm only operates on the noise field present in the stimuli. Using the full stimuli is convenient because it removes the requirement of storing target-free noise fields, or the data necessary to construct them, during an experiment. The classification image algorithm also requires the true class label of each stimulus, while GRIFT only relies on the participants' responses. These advantages result from GRIFT's use of Bayesian networks and the EM optimization algorithm. It is possible to construct a Bayesian network describing the classification image model that could also be optimized using full stimuli and without requiring the true class labels. Further theoretical and empirical work would be necessary to determine if this style of optimization produces results equivalent to the traditional Ahumada (2002) method. 
Perhaps the most salient difference in the one-feature case is the use of prior probabilities on the parameters. While the classification image algorithm aims to maximize the likelihood of the data, GRIFT, as described above, also factors in prior beliefs about the classification process. Such priors can be advantageous, particularly when the stimulus images have many pixels. In these cases, simply maximizing the likelihood might not sufficiently recover the true parameters, either because noise in the data will have too great an influence or because there are many possible solutions with nearly equivalent likelihoods. Gold et al. (2000) dealt with this problem by smoothing their classification images to eliminate noise. GRIFT achieves a similar result by applying the aforementioned Markov random field prior to the ω_i parameters. Whereas the practice of smoothing classification images requires some manual estimation of the correct amount of smoothing in each instance, in the prior probability approach the influence of the prior automatically declines as more data are gathered. Despite these technical differences, in practice we have found the result of fitting a one-feature GRIFT model to be extremely similar to the result of fitting a classification image. 
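The sketch below illustrates the flavor of such a smoothness prior. The Gaussian pairwise potential over 4-connected neighbors is our assumption; the actual potential is defined by Equation 9 earlier in the article (and, as noted in the four-square results, its neighborhood function was relaxed across corner boundaries for those stimuli).

```python
import numpy as np

def mrf_log_prior(omega_image, strength=1.0):
    """Unnormalized log P(omega_i) for one detector's weights arranged as an
    image: a Markov random field that penalizes squared differences between
    4-connected neighboring pixel weights. The normalization constant is
    dropped; during optimization it is only an additive constant."""
    dx = np.diff(omega_image, axis=1)  # horizontal neighbor differences
    dy = np.diff(omega_image, axis=0)  # vertical neighbor differences
    return -strength * (np.sum(dx ** 2) + np.sum(dy ** 2))

# Smoother weight maps receive higher prior probability:
smooth = np.outer(np.linspace(-1, 1, 8), np.ones(8))
noisy = np.random.default_rng(0).normal(size=(8, 8))
assert mrf_log_prior(smooth) > mrf_log_prior(noisy)
```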
One of the strengths of the Bayesian approach is that it allows researchers to alter the model to reflect their assumptions. The prior distributions on the parameters can easily be changed to reflect knowledge gained in previous experiments. Furthermore, extending the feature detector model to include simple non-linearities (squaring the weighted sum of the pixels, for example) or to use alternative probability distributions could be combined with the appropriate priors to encourage the formation of edge detectors, Gabor filters, or other biologically motivated features. 
The graphical model approach also allows new versions of GRIFT based on different feature parameterizations that may be useful in various situations. In the current implementation, the number of parameters scales linearly with the size of the stimulus images. Fitting the model to the classification of very large stimuli might require an impractical number of sample classifications or extraordinary computational resources. A possible solution to such problems is to adopt new feature parameterizations. One possibility would be to replace the per-pixel weights with a few parameters designating image regions in which all pixels should receive an identical weight. For example, in the four-square task, a feature sensitive to bright top corners might simply describe the height, width, and location of a rectangular region and assign all the pixels in this region a weight of −1 and all the pixels outside this region a weight of 1. This type of parameterization is highly compatible with the assumptions that neighboring feature weights are similar. For our four-square stimuli, this parameterization would replace 64 independent parameters with 6; for larger stimuli, the savings are even greater. These changes in parameterization simply require changing the conditional probability functions of the features to use the new parameters, and calculating a few related derivatives so the optimization code functions correctly. Describing feature weights geometrically would also allow us to encode prior distributions on the feature positions and allow those positions to vary from trial to trial, which could imbue the features with greater and lesser degrees of translational and rotational invariance. 
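Here is a sketch of the kind of geometric parameterization described above, with hypothetical parameter names; one plausible way to arrive at six parameters is four rectangle coordinates plus the two weight values.

```python
import numpy as np

def rect_feature_weights(shape, top, left, height, width,
                         inside=-1.0, outside=1.0):
    """Build a per-pixel weight map from a geometric description: every
    pixel inside the rectangle shares one weight and every pixel outside
    shares another. For an 8 x 8 stimulus this replaces 64 free weights
    with a handful of parameters."""
    w = np.full(shape, outside)
    w[top:top + height, left:left + width] = inside
    return w

# e.g., a hypothetical detector for a bright upper-left quadrant:
w = rect_feature_weights((8, 8), top=0, left=0, height=4, width=4)
```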
GRIFT's success on traditional classification image data also leaves open the question of analyzing other types of experiments. The Bubbles method (Gosselin & Schyns, 2001), for example, uses a very different noise model that may, in some cases, be more natural than adding Gaussian noise to every pixel. Although the Bubbles technique has been criticized for lacking the theoretical rigor of classification images with white Gaussian pixel noise (Murray & Gold, 2004), the GRIFT model, which provides a mathematically clear model of classification and which does not assume that the noise is white and Gaussian, might provide a useful framework for analyzing Bubbles experiments. 
GRIFT, like the classification image method, assumes that observer responses are the result of a consistent strategy. It is more likely, however, that participants continue to refine the features they use as an experiment progresses. Although such changes are invisible to a single-feature model, a multi-feature GRIFT model can indicate their presence. For example, in the square-detection experiment, GRIFT recovered a two-feature-detector model for participant PS1 in the incomplete-rotated condition. One of these features consisted of weights on all four corners but was also sensitive to pixel values between the corners. The second feature was more exclusively focused on the corner pixels. Hypothesizing that these two detectors, which detect similar image structures, might be the result of PS1 pursuing different strategies at different times, we split the data chronologically in half and fit one-feature GRIFT models to each part, as sketched below. As demonstrated in Figure 9, the feature detector weights for the two halves strongly resemble the two features recovered from the full data set. This result suggests that the participant's search for evidence became more localized as the experiment progressed, and GRIFT successfully detected evidence of this transition. Other data that were best fit by multi-feature GRIFT models were examined in the same way, but they lacked a clear correspondence between full-data and half-data feature detectors. This indicates that in some cases multiple feature detectors are used simultaneously, while in others they reflect shifts in strategy. 
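The half-split analysis just described is easy to automate. In the sketch below, `fit_one_feature` is a stand-in for the Appendix A fitting procedure, not a real function from this article:

```python
import numpy as np

def strategy_drift_check(stimuli, responses, fit_one_feature):
    """Fit one-feature GRIFT models to each chronological half of the data
    and compare the recovered weight maps; a low correlation between the
    halves (or a strong match to different detectors from a full-data
    multi-feature fit) suggests a mid-experiment strategy change."""
    n = len(responses)
    omega_first = fit_one_feature(stimuli[: n // 2], responses[: n // 2])
    omega_second = fit_one_feature(stimuli[n // 2:], responses[n // 2:])
    r = np.corrcoef(np.ravel(omega_first), np.ravel(omega_second))[0, 1]
    return omega_first, omega_second, r
```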
Figure 9
Left: The most probable ω parameters for square-detection participant PS1 in the incomplete-rotated condition with two feature detectors. Center and right: The most probable ω parameters recovered when one-detector models were independently fit to the first and second halves of PS1's incomplete-rotated data. The similarity between these detectors and the detectors recovered from the full data set indicates that PS1 shifted strategies over the course of the experiment.
Explicit modeling of this change in classification strategy over time is a very promising direction for future research. One potential approach is to alter the model so that the feature detector weights and other parameters are allowed to change, subject to some reasonable constraints, between trials. By relaxing the assumption that observers employ a constant classification strategy across time, a dynamic model would provide a more realistic representation of the processes used in the task and could provide better explanations of many data sets. The success of this first version of GRIFT on human data provides a firm foundation for such future developments and we are optimistic about the model's future utility. 
Appendix A
The GRIFT algorithm
The goal of the algorithm is to find the parameter values that best satisfy the prior distributions and best account for the (S, C) samples gathered from a human observer. Mathematically, this corresponds to finding the mode of P(ω, β, λ, γ ∣ S, C), where S and C refer to all of the observed samples. The algorithm is derived from the expectation-maximization (EM) method, a widely used optimization technique for dealing with hidden variables (Dempster, Laird, & Rubin, 1977; also see Bishop, 2006, or Gelman et al., 2004), in this case F, the feature detector outputs for all the trials. To maximize P(θ ∣ S, C), where θ = (ω, β, λ, γ), observe that 
$$P(\theta \mid S, C) = \frac{P(F, \theta \mid S, C)}{P(F \mid S, C, \theta)}, \tag{A1}$$

which in turn implies that 

$$\log P(\theta \mid S, C) = \log P(F, \theta \mid S, C) - \log P(F \mid S, C, \theta). \tag{A2}$$
Assume that there is a prior estimate for the parameters, θ*, which implies a distribution P(F ∣ S, C, θ*) that can be calculated from the GRIFT model using Equation 14. If we compute the expectation of Equation A2 with respect to this distribution, the left-hand side is unaffected because it does not depend on F. On the right-hand side, E(log P(F ∣ S, C, θ)) is maximal for θ = θ*, so any choice of θ that increases E(log P(F, θ ∣ S, C)) will increase log P(θ ∣ S, C). 
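The claim that E(log P(F ∣ S, C, θ)) is maximized at θ = θ* is Gibbs' inequality, i.e., the nonnegativity of a Kullback–Leibler divergence:

$$\mathbb{E}_{F \sim P(F \mid S, C, \theta^{*})}\!\left[\log P(F \mid S, C, \theta^{*}) - \log P(F \mid S, C, \theta)\right] = \mathrm{KL}\!\left(P(F \mid S, C, \theta^{*}) \,\Vert\, P(F \mid S, C, \theta)\right) \ge 0.$$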
Therefore, the EM algorithm for the GRIFT model consists of choosing random initial parameters θ* = (ω*, β*, λ*, γ*) and then finding the θ that maximizes 

$$Q(\theta, \theta^{*}) = \sum_{F} P(F \mid S, C, \theta^{*}) \log P(C, F, S \mid \theta) + \log P(\theta), \tag{A3}$$

which equals E(log P(F, θ ∣ S, C)) up to an additive constant because log P(F, θ ∣ S, C) = log P(C, F, S ∣ θ) + log P(θ) − log P(S, C), and log P(S, C) does not depend on θ or F. 
The θ that maximizes Q then becomes θ* for the next iteration, and the process is repeated until convergence. The presence of both the P(C, F, S ∣ θ) and P(θ) terms encourages the algorithm to find parameters that explain the data and match the assumptions encoded in the parameters' prior distributions. As the amount of available data increases, the relative influence of the priors decreases, so it is possible to discover feature detectors that violate prior beliefs given enough evidence. 
Using the joint probability distribution of the GRIFT model, 

$$Q(\theta, \theta^{*}) = \sum_{F} P(F \mid S, C, \theta^{*}) \left( \log P(C \mid F, \lambda) + \sum_{i=1}^{N} \log P(F_i \mid S, \omega_i, \beta_i) \right) + \sum_{i=1}^{N} \bigl( \log P(\omega_i) + \log P(\lambda_i) \bigr), \tag{A4}$$

after dropping the P(S) term, which is independent of the parameters, and the log P(β_i) and log P(γ) terms, which are 0 because P(γ) = P(β_i) = 1. As mentioned before, the normalization constants for the log P(ω_i) elements can be ignored during optimization: the log makes them additive constants to Q. The functional form of every additive term is described in the GRIFT model section, and P(F ∣ S, C, θ*) can be calculated using the model's joint probability function. 
Each iteration of EM requires maximizing Q, but it is not possible to compute the maximizing θ in closed form. Fortunately, it is relatively easy to search for the best θ. Because Q separates into many additive components, it is possible to efficiently compute its gradient with respect to each of the elements of θ and to use this information to find a locally maximal θ assignment with the scaled conjugate gradient algorithm (Bishop, 1995). Even a locally maximal value of θ usually provides good EM results: P(ω, β, λ, γ ∣ S, C) is still guaranteed to improve after every iteration. 
The result of any EM procedure is only guaranteed to be a locally optimal answer, and finding the globally optimal θ is made more challenging by the large number of parameters. GRIFT adopts the standard solution of running EM many times, each instance starting with a random θ*, and then accepting the final θ from the instance that produced the most probable parameters. For this model and the data presented in the paper, 20–30 random restarts were sufficient. 
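Structurally, the whole procedure is a standard EM loop wrapped in random restarts. The sketch below is ours, with the problem-specific pieces abstracted into stand-in callables; nothing here names the authors' actual implementation.

```python
import numpy as np

def fit_grift_em(init_theta, e_step, m_step, log_posterior,
                 n_restarts=25, max_iters=200, tol=1e-6):
    """EM with random restarts.

    Stand-in callables for the Appendix A quantities:
      init_theta(rng)      -> random starting parameters theta*
      e_step(theta)        -> P(F | S, C, theta) over the 2^N feature settings
      m_step(posterior_f)  -> theta maximizing Q(theta, theta*), e.g., found
                              with scaled conjugate gradients on Q's terms
      log_posterior(theta) -> log P(theta | S, C) up to an additive constant
    """
    best_theta, best_score = None, -np.inf
    for restart in range(n_restarts):
        rng = np.random.default_rng(restart)
        theta = init_theta(rng)
        score = -np.inf
        for _ in range(max_iters):
            posterior_f = e_step(theta)       # E-step
            theta = m_step(posterior_f)       # M-step (local maximum of Q)
            new_score = log_posterior(theta)  # never decreases across iterations
            if new_score - score < tol:
                score = new_score
                break
            score = new_score
        if score > best_score:                # keep the best restart
            best_theta, best_score = theta, score
    return best_theta
```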
The results of the GRIFT-fitting algorithm are relatively insensitive to the procedure for randomly initializing θ*; some methods merely encouraged faster EM convergence or led to a greater percentage of successful restarts, depending on the stimulus. The λ* parameters were initialized with random samples from a normal distribution, and half were then negated so that the features would tend to start evenly assigned to the two classes; γ* was initialized to 0. In the four-square, light-dark, Kanizsa, and square-detection experiments, the ω* parameters were initialized from a uniform distribution. In the faces experiments, the ω* parameters were initialized by adding normal noise to the optimal linear classifier separating the two targets. Because of the large number of pixels in the faces stimuli, the other initialization procedures frequently produced initial assignments with extremely low probabilities, which led to slow EM convergence and an excess of local maxima. The β* values were set to the optimal threshold for distinguishing the classes using the initial ω* as a linear classifier (except when they were accidentally set to the negation of this value, which did not appear to cause any problems). Altering the initialization procedure did not appear to change the results; in most cases, it only affected the speed of EM or the number of restarts required to reliably discover the best parameter values. 
The GRIFT model is non-convex; it is therefore theoretically possible that there exist multiple near-optimal sets of parameters for a given data set, where each set is represented by a different local maximum of the posterior distribution. This has not proven to be a practical problem: in our experience, when a set of restarts leads to multiple solutions with very similar posterior probabilities, those solutions are qualitatively similar. Typically, their parameters differ only by trivial amounts or by their signs. As described in the Results and Discussion section, parameter sets that are identical except for differences in sign are functionally equivalent to one another. 
Acknowledgments
This research was supported by NSF Grant SES-0631602 to A. L. Cohen. M. G. Ross was supported by NIMH grant MH16745. 
The authors thank Florin Cutzu, Arnab Dhua, Jason Gold, Michelle Greene, Tom Griffiths, Erik Learned-Miller, Richard Murray, Adam Sanborn, Richard Shiffrin, Mark Steyvers, and Chen Yu for helpful discussions, information, ideas, and insights. We especially thank Jason Gold for co-designing, conducting, and providing the data for the square-detection experiment. 
The authors also thank the editor and anonymous reviewers for many helpful suggestions that improved the article. 
Portions of this research were previously published as “GRIFT: A graphical model for inferring visual classification features from human data” in Advances in Neural Information Processing Systems 20 (2008), the proceedings of the 2007 Neural Information Processing Systems (NIPS) conference. 
Commercial relationships: none. 
Corresponding author: Michael G. Ross. 
Address: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA. 
Footnotes
1. Maintaining the requirement that γ_r ≤ γ_{r−1} during model fitting is inconvenient. An alternative parameterization is $\gamma_r = \gamma_1 - \sum_{z=2}^{r} \alpha_z^2$ for all r ≥ 2. Any choice of γ_1 and (α_2, α_3, …, α_{R−1}) will be equivalent to γ_r parameters with the desired ordering.
2. The results reported below are not specific to this particular choice of prior. We also investigated a version of GRIFT with a unimodal prior on λ. In particular, Equation 11 was replaced with a normal distribution, $P(\lambda_i) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\lambda_i^2}{2\sigma^2}\right)$, with the standard deviation, σ, set to both 1 and 2. In both cases, GRIFT produced results (on the data from the corners simulated observer in the four-square experiment discussed below) that were qualitatively and quantitatively similar to GRIFT implemented with the prior defined by Equation 11.
3. If a zero-mean normal prior is applied to the λ parameters (see Footnote 2), always-active or never-active features do not tend to appear. Instead, GRIFT produces features with λ_i = 0 (the value that maximizes the λ prior). This result conveys the same information: that the model has too many feature detectors.
4. In addition to AIC, several alternative model-selection approaches were tried, and each was judged on its ability to indicate the correct number of features for the four-square simulated data. Prior work (Ross & Cohen, 2008) measured the mutual information between the feature detectors and the classifications, but, unlike AIC, this approach required a subjective judgment of the model size at which the mutual information curve appeared to level off. The Bayesian information criterion (BIC) (Schwarz, 1978) penalized model complexity too heavily. Four-fold and five-times-repeated two-fold (Dietterich, 1998) cross-validation were unreliable given the size of the data set. Leave-one-out cross-validation might have been successful but was not computationally tractable given the current implementation. Because GRIFT uses improper priors (discussed previously), the Bayesian marginal likelihood approach (see Bishop, 2006) was not available.
5. The truncation ensured that the stimulus pixel values remained within the display's output range.
6. Our simulation employed two features, one that fires for Class 1 patterns (top brighter than bottom) and one that fires for Class 2 patterns (bottom brighter than top). It turns out, however, that these two features are logically equivalent to a GRIFT model that contains only one top-bottom contrast-sensitive feature and an appropriate γ.
7. Initials, rather than participant numbers, are used to facilitate comparison with past work.
8. Although the results of a poorly performing observer provide an informative contrast in this instance, we suggest that, in most cases, researchers should avoid this issue by continuing to staircase stimulus contrast levels throughout an experiment.
References
Agresti, A. (2002). Categorical data analysis. New York: Wiley-Interscience.
Ahumada, A. J., Jr. (2002). Classification image weights and internal noise level estimation. Journal of Vision, 2(1):8, 121–131, http://journalofvision.org/2/1/8/, doi:10.1167/2.1.8.
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716–723.
Anzai, A., Peng, X., & Van Essen, D. C. (2007). Neurons in monkey visual area V2 encode combinations of orientations. Nature Neuroscience, 10, 1313–1321.
Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36, 192–236.
Bishop, C. M. (1995). Neural networks for pattern recognition. New York: Oxford University Press.
Bishop, C. M. (2006). Pattern recognition and machine learning. New York: Springer.
Borg, I., & Groenen, P. (1997). Modern multidimensional scaling: Theory and applications. New York: Springer.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Cohen, A. L., Shiffrin, R. M., Gold, J. M., Ross, D. A., & Ross, M. G. (2007). Inducing features from visual noise. Journal of Vision, 7(8):15, 1–14, http://journalofvision.org/7/8/15/, doi:10.1167/7.8.15.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39, 1–38.
Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10, 1895–1923.
Forsyth, D. A., & Ponce, J. (2003). Computer vision: A modern approach. Upper Saddle River: Prentice Hall.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian data analysis. Boca Raton: Chapman & Hall/CRC.
Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999). Identification of band-pass filtered letters and faces by human and ideal observers. Vision Research, 39, 3537–3560.
Gold, J. M., Cohen, A. L., & Shiffrin, R. (2006). Visual noise reveals category representations. Psychonomic Bulletin & Review, 13, 649–655.
Gold, J. M., Murray, R. F., Bennett, P. J., & Sekuler, A. B. (2000). Deriving behavioural receptive fields for visually completed contours. Current Biology, 10, 663–666.
Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271.
Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide. Philadelphia: Lawrence Erlbaum Associates.
Murray, R. F., & Gold, J. M. (2004). Troubles with bubbles. Vision Research, 44, 461–470.
Palmer, S. E. (1999). Vision science: Photons to phenomenology. Cambridge: The MIT Press.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.
Pelli, D. G., Farell, B., & Moore, D. C. (2003). The remarkable inefficiency of word recognition. Nature, 423, 752–756.
Rehder, B. (2003). Categorization as causal reasoning. Cognitive Science, 27, 709–748.
Ross, M. G., & Cohen, A. L. (2008). GRIFT: A graphical model for inferring visual classification features from human data. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems (Vol. 20, pp. 1217–1224). Cambridge: The MIT Press.
Rust, N. C., Mante, V., Simoncelli, E. P., & Movshon, J. A. (2006). How MT cells analyze the motion of visual patterns. Nature Neuroscience, 9, 1421–1431.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Sergent, J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242.
Figure 1
The GRIFT model is a Bayes net that describes classification as the result of combining N feature detectors. The direction of the arrows indicates causal influence: the stimulus S influences each feature detector F_i, and they in turn influence the classification decision C.
Figure 2
The GRIFT model with its parameters. The model is Bayesian, so parameters are also random variables. The plate, the rounded box with N in the lower right corner, represents model structures that are duplicated for each of the N features.
Figure 3
Targets and sample stimuli from the five experiments.
Figure 4
The most probable ω parameters found from the simulated and human observers in the four-square task. Red indicates very positive ω components, blue indicates very negative components, and green indicates zero.
Figure 5
The most probable ω parameters found for human participants in the light-dark experiment. Red indicates very positive ω components, blue indicates very negative components, and green indicates zero.
Figure 6
The most probable ω parameters found for human participants in the faces experiment. Red indicates very positive ω components, blue indicates very negative components, and green indicates zero.
Figure 7
The most probable ω parameters found for human participants in the Kanizsa experiment. Red indicates very positive ω components, blue indicates very negative components, and green indicates zero.
Figure 8
The most probable ω parameters found for human participants in the square-detection experiment. Red indicates very positive ω components, blue indicates very negative components, and green indicates zero.
Table 1
The γ and λ_i values for each GRIFT model fit to data from the top-vs.-bottom, corners, and combo simulated observers in the four-square experiment. The P_i^1 and P_i^2 values indicate the probability that feature detector F_i will be active when the observer responds "Class 1," P(F_i = 1 ∣ C = 1), and "Class 2," P(F_i = 1 ∣ C = 2), respectively.
Four-square      Top vs. bottom        Corners               Combo
                 λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2
N = 1  γ          2.1                  −2.8                  −2.9
       F_1       −4.0   0.78∣0.23       5.2   0.35∣0.67       5.7   0.24∣0.75
N = 2  γ          1.1                   4.7                  −5.4
       F_1        4.9   0.23∣0.78      −3.7   0.76∣0.49       4.2   0.31∣0.79
       F_2       −3.8   0.93∣0.96      −4.0   0.75∣0.45       4.4   0.40∣0.84
N = 3  γ         −5.8                   5.6                   2.0
       F_1        3.8   0.26∣0.76      −3.5   0.73∣0.46      −4.5   0.75∣0.28
       F_2        4.3   0.26∣0.80      −3.8   0.43∣0.20      −4.7   0.72∣0.39
       F_3        4.4   0.43∣0.24      −4.2   0.67∣0.47       4.9   0.47∣0.74
N = 4  γ          0.8                   0.2                  −1.8
       F_1       −4.5   0.74∣0.21      −3.6   0.75∣0.52      −4.0   0.69∣0.31
       F_2        4.5   0.22∣0.76      −3.6   0.47∣0.23      −4.1   0.76∣0.40
       F_3       −3.5   0.59∣0.63       3.3   0.53∣0.68       4.8   0.52∣0.82
       F_4        4.5   0.32∣0.19       3.8   0.23∣0.47       4.0   0.57∣0.83
N = 5  γ         −3.4                  −2.6                   2.6
       F_1        5.0   0.17∣0.68      −4.0   0.48∣0.24      −4.5   0.39∣0.20
       F_2        5.6   0.24∣0.81       4.0   0.22∣0.47      −4.5   0.53∣0.34
       F_3        3.9   0.71∣0.68       4.1   0.41∣0.64      −4.2   0.72∣0.45
       F_4       −4.0   0.36∣0.41       4.1   0.22∣0.45       3.9   0.18∣0.39
       F_5       −3.8   0.66∣0.77      −3.7   0.19∣0.28       4.1   0.22∣0.76
N = 6  γ          3.2                   2.0                   2.4
       F_1       −6.4   0.78∣0.21       4.6   0.26∣0.50      −3.7   0.46∣0.15
       F_2       −3.4   0.61∣0.54      −4.1   0.76∣0.53      −4.7   0.52∣0.35
       F_3        4.2   0.29∣0.15      −4.4   0.61∣0.39      −4.5   0.34∣0.20
       F_4       −4.0   0.55∣0.22       4.2   0.55∣0.80       4.0   0.26∣0.78
       F_5        3.6   0.54∣0.52      −4.0   0.17∣0.24       4.3   0.32∣0.59
       F_6        4.3   0.10∣0.20      −3.9   0.27∣0.20      −3.8   0.69∣0.47
Table 2
AIC scores and data log likelihoods for GRIFT models of the data from simulated observers in the four-square experiment. Asterisks mark the minimum AIC in each row.
Four-square
Simulated observer   Fit    N = 1     N = 2     N = 3     N = 4     N = 5     N = 6
Top vs. bottom       AIC    3,758*    3,813     3,812     3,845     3,864     3,995
                     LnL    −1,812    −1,774    −1,707    −1,657    −1,601    −1,581
Corners              AIC    4,466     4,438     4,397*    4,398     4,398     4,413
                     LnL    −2,166    −2,086    −2,000    −1,934    −1,868    −1,810
Combo                AIC    3,534     3,549     3,510*    3,553     3,585     3,623
                     LnL    −1,700    −1,642    −1,556    −1,511    −1,461    −1,415
Table 3
The γ and λ_i values for each GRIFT model fit to data from the human observers in the four-square experiment. The P_i^1 and P_i^2 values indicate the probability that feature detector F_i will be active when the observer responds "Class 1," P(F_i = 1 ∣ C = 1), and "Class 2," P(F_i = 1 ∣ C = 2), respectively.
Four-square      AC                    EA                    JG                    RS
                 λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2
N = 1  γ          2.5                   2.1                   2.9                  −2.6
       F_1       −5.9   0.78∣0.27      −4.2   0.72∣0.28      −5.7   0.75∣0.27       4.6   0.32∣0.77
N = 2  γ          3.3                  −2.8                  −1.6                   2.9
       F_1       −5.0   0.65∣0.19       3.2   0.25∣0.62      −6.0   0.55∣0.16      −4.6   0.50∣0.16
       F_2       −5.3   0.55∣0.15       3.4   0.24∣0.66       5.2   0.47∣0.85      −4.8   0.45∣0.14
N = 3  γ         −3.2                  −5.3                  −2.6                  −2.8
       F_1       −5.5   0.65∣0.26       4.1   0.34∣0.75       5.8   0.26∣0.69      −4.7   0.36∣0.11
       F_2        4.3   0.13∣0.57       4.1   0.35∣0.70       5.5   0.59∣0.89       5.1   0.37∣0.77
       F_3        5.1   0.57∣0.83       4.2   0.21∣0.27      −5.4   0.90∣0.69       5.0   0.12∣0.26
N = 4  γ          4.2                   3.7                   0.6                  −0.9
       F_1       −5.0   0.56∣0.18      −4.6   0.78∣0.56      −4.5   0.92∣0.65      −5.5   0.88∣0.78
       F_2       −5.0   0.59∣0.14      −3.9   0.62∣0.23      −5.2   0.70∣0.27      −4.4   0.64∣0.20
       F_3       −6.2   0.89∣0.70       4.1   0.24∣0.59       4.8   0.09∣0.30       5.0   0.67∣0.89
       F_4        5.0   0.56∣0.83      −5.2   0.10∣0.08       6.1   0.58∣0.86       4.1   0.65∣0.90
N = 5  γ          1.6                   2.7                  −1.7                   2.3
       F_1        4.9   0.11∣0.33      −4.8   0.72∣0.30       5.0   0.29∣0.78      −5.0   0.87∣0.77
       F_2       −5.0   0.72∣0.25      −4.7   0.21∣0.16      −5.4   0.90∣0.70      −4.6   0.43∣0.18
       F_3        5.1   0.65∣0.88       4.5   0.39∣0.72      −5.2   0.43∣0.16      −4.5   0.70∣0.43
       F_4       −4.7   0.89∣0.61       4.7   0.21∣0.37       4.9   0.14∣0.34       4.5   0.28∣0.71
       F_5       −5.0   0.42∣0.17      −4.3   0.74∣0.64       4.9   0.62∣0.86       4.4   0.71∣0.88
N = 6  γ         −7.4                   1.4                   2.9                  −6.7
       F_1        4.8   0.44∣0.78      −5.0   0.52∣0.25      −3.8   0.80∣0.35      −4.9   0.68∣0.25
       F_2       −4.4   0.54∣0.14      −4.2   0.79∣0.74      −4.9   0.86∣0.62       5.1   0.81∣0.92
       F_3        4.8   0.51∣0.81      −4.2   0.81∣0.65      −3.9   0.29∣0.12      −4.7   0.73∣0.54
       F_4       −4.8   0.87∣0.69       4.8   0.29∣0.69      −3.7   0.72∣0.25       4.9   0.65∣0.85
       F_5        5.3   0.67∣0.89       4.8   0.67∣0.74       6.0   0.62∣0.87       4.0   0.59∣0.81
       F_6        5.4   0.15∣0.40       4.2   0.17∣0.38       5.0   0.11∣0.25       5.2   0.14∣0.30
Table 4
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the four-square experiment. Asterisks mark the minimum AIC in each row.
Four-square
Participant   Fit    N = 1     N = 2     N = 3     N = 4     N = 5     N = 6
AC            AIC    3,493     3,349     3,250     3,173     3,080*    3,143
              LnL    −1,680    −1,542    −1,426    −1,322    −1,209    −1,174
EA            AIC    4,150     4,068     4,017     3,969     3,926*    3,958
              LnL    −2,008    −1,901    −1,810    −1,720    −1,632    −1,582
JG            AIC    3,742     3,547     3,330     3,291     3,225*    3,266
              LnL    −1,804    −1,640    −1,466    −1,381    −1,282    −1,236
RS            AIC    4,017     3,843     3,707     3,664     3,594*    3,628
              LnL    −1,942    −1,788    −1,655    −1,567    −1,466    −1,417
Table 5
The γ and λ_i values for each GRIFT model fit to data from the human observers in the light-dark experiment. The P_i^1 and P_i^2 values indicate the probability that feature detector F_i will be active when the observer responds "Class 1," P(F_i = 1 ∣ C = 1), and "Class 2," P(F_i = 1 ∣ C = 2), respectively.
Light-dark       PL1                   PL2                   PL3
                 λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2
N = 1  γ          2.0                   2.3                  −2.0
       F_1       −4.7   0.77∣0.23      −5.4   0.68∣0.34       3.2   0.42∣0.52
N = 2  γ         −0.5                   1.3                  −1.1
       F_1       −6.0   0.78∣0.24       4.5   0.29∣0.63      −3.7   0.30∣0.22
       F_2        3.9   0.95∣0.89      −4.5   0.87∣0.69       2.8   0.47∣0.54
N = 3  γ          2.8                  −3.0                  −2.0
       F_1       −6.7   0.78∣0.23      −5.7   0.53∣0.40      −3.6   0.62∣0.58
       F_2        3.7   0.95∣0.89       6.1   0.32∣0.57       3.5   0.20∣0.24
       F_3       −3.3   0.93∣0.97       5.5   0.41∣0.54       3.9   0.69∣0.77
N = 4  γ         −2.2                  −1.9                  −1.8
       F_1        6.7   0.22∣0.77      −2.0   0.62∣0.29      −3.4   0.18∣0.12
       F_2       −3.7   0.05∣0.11      −5.3   0.53∣0.40      −3.6   0.14∣0.15
       F_3       −3.3   0.93∣0.97       5.4   0.33∣0.57       3.4   0.22∣0.22
       F_4        2.0   1.00∣1.00       5.2   0.42∣0.53       3.7   0.35∣0.47
N = 5  γ          1.2                   0.0                  −1.5
       F_1       −6.7   0.78∣0.23      −1.9   0.62∣0.29      −3.6   0.86∣0.80
       F_2       −3.7   0.05∣0.11       5.3   0.47∣0.60       3.3   0.18∣0.20
       F_3        3.3   0.07∣0.03       5.4   0.33∣0.57      −3.5   0.17∣0.11
       F_4       −2.0   0.00∣0.00      −5.2   0.58∣0.47       3.4   0.83∣0.87
       F_5        2.0   1.00∣1.00      −2.0   1.00∣1.00       3.2   0.25∣0.27
Table 6
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the light-dark experiment. Asterisks mark the minimum AIC in each row.
Light-dark
Participant   Fit    N = 1     N = 2     N = 3     N = 4     N = 5
PL1           AIC    3,427*    3,452     3,524     3,624     3,724
              LnL    −1,662    −1,625    −1,611    −1,611    −1,611
PL2           AIC    4,088     4,147     3,999*    4,095     4,195
              LnL    −1,993    −1,973    −1,849    −1,847    −1,847
PL3           AIC    5,029*    5,081     5,131     5,214     5,288
              LnL    −2,464    −2,439    −2,414    −2,406    −2,393
Table 7
The γ and λ_i values for each GRIFT model fit to data from the human observers in the faces experiment. The P_i^1 and P_i^2 values indicate the probability that feature detector F_i will be active when the observer responds "Class 1," P(F_i = 1 ∣ C = 1), and "Class 2," P(F_i = 1 ∣ C = 2), respectively.
Faces            PF1                   PF2                   PF3
                 λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2
N = 1  γ         −3.3                  −2.8                   3.1
       F_1        6.8   0.38∣0.64       5.4   0.48∣0.51      −5.8   0.52∣0.47
N = 2  γ         −1.5                  −0.9                   2.9
       F_1        6.9   0.38∣0.64       5.4   0.48∣0.51      −5.6   0.76∣0.73
       F_2       −1.9   1.00∣1.00      −1.9   1.00∣1.00       5.7   0.28∣0.34
N = 3  γ          0.6                  −0.8                   6.6
       F_1        6.9   0.38∣0.64       5.4   0.48∣0.51      −5.6   0.76∣0.73
       F_2       −1.9   1.00∣1.00      −2.0   1.00∣1.00      −5.7   0.72∣0.66
       F_3       −2.0   1.00∣1.00      −1.9   0.00∣0.00       1.9   1.00∣1.00
Table 8
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the faces experiment. Asterisks mark the minimum AIC in each row.
Faces
Participant   Fit    N = 1     N = 2     N = 3
PF1           AIC    5,081*    5,968     6,857
              LnL    −2,095    −2,095    −2,095
PF2           AIC    5,992*    6,880     7,768
              LnL    −2,551    −2,551    −2,551
PF3           AIC    5,908*    6,643     7,528
              LnL    −2,509    −2,432    −2,431
Table 9
The γ and λ_i values for each GRIFT model fit to data from the human observers in the Kanizsa experiment. The P_i^1 and P_i^2 values indicate the probability that feature detector F_i will be active when the observer responds "Class 1," P(F_i = 1 ∣ C = 1), and "Class 2," P(F_i = 1 ∣ C = 2), respectively.
Kanizsa          AJR                   AMC                   JMG
                 λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2    λ_i    P_i^1∣P_i^2
N = 1  γ          4.5                  −3.6                   4.1
       F_1       −8.8   0.55∣0.40       7.6   0.43∣0.61      −7.8   0.56∣0.37
N = 2  γ         −2.3                   3.8                   4.0
       F_1        8.8   0.45∣0.60      −7.6   0.57∣0.39      −7.8   0.56∣0.37
       F_2       −2.0   1.00∣1.00       0.5   0.44∣0.45       0.6   0.18∣0.19
N = 3  γ         −2.9                  −3.7                   2.2
       F_1        6.0   0.47∣0.57       7.6   0.43∣0.61      −7.9   0.56∣0.37
       F_2        6.0   0.47∣0.57      −2.0   0.00∣0.00       0.4   0.87∣0.87
       F_3       −6.0   0.54∣0.43      −2.0   0.00∣0.00       1.6   1.00∣1.00
Table 10
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the Kanizsa experiment. Bold numbers indicate the minimum AIC.
Kanizsa
Participant   Fit       N = 1      N = 2      N = 3
AJR           AIC      12,765*    14,019     15,379
              LnL      −5,755     −5,755     −5,807
AMC           AIC      12,469*    13,723     14,977
              LnL      −5,606     −5,606     −5,606
JMG           AIC      12,538*    13,793     15,046
              LnL      −5,641     −5,642     −5,641
Table 11
 
The γ and λ_i values for each GRIFT model fit to data from human observer PS1 in the square-detection experiment. The P_i^1 and P_i^6 values indicate the probability that feature detector F_i will be active when the observer responds "1" (target definitely absent), P(F_i = 1 ∣ C = 1), and "6" (target definitely present), P(F_i = 1 ∣ C = 6), respectively.
Square-detection   PS1-full             PS1-inc              PS1-incrot
                   λ_i   P_i^1∣P_i^6    λ_i   P_i^1∣P_i^6    λ_i   P_i^1∣P_i^6
N = 1   γ1          0.8                  1.0                  3.4
        γ2          0.3                  0.1                 −0.1
        γ3         −1.9                 −2.3                 −2.9
        γ4         −5.3                 −6.0                 −5.6
        γ5         −6.1                 −7.5                 −9.3
        F1          4.9   0.26∣0.70      5.6   0.34∣0.76      5.5   0.48∣0.79
N = 2   γ1          0.1                  0.1                  1.5
        γ2         −0.5                 −1.3                 −3.6
        γ3         −3.6                 −4.4                 −5.6
        γ4         −7.9                 −9.1                 −9.9
        γ5         −9.0                −11.2                −14.6
        F1          6.3   0.28∣0.69      3.2   0.46∣0.45      4.1   0.79∣0.55
        F2          2.7   0.37∣0.43      7.1   0.33∣0.76      7.3   0.40∣0.78
N = 3   γ1          5.4                 −0.6                  9.5
        γ2          4.3                 −3.5                  4.5
        γ3          0.3                 −7.4                  2.1
        γ4         −4.7                −13.6                 −2.5
        γ5         −6.3                −16.9                 −7.3
        F1         −4.6   0.52∣0.55      2.8   0.14∣0.46     −7.7   0.59∣0.22
        F2          7.3   0.27∣0.69      5.9   0.51∣0.44     −2.6   0.26∣0.11
        F3         −2.0   0.65∣0.37      8.8   0.33∣0.76      4.2   0.84∣0.57
N = 4   γ1          5.7                  5.0                  1.1
        γ2          4.5                  2.1                 −5.3
        γ3          0.6                 −2.1                 −7.7
        γ4         −4.6                 −8.2                −12.9
        γ5         −6.2                −11.7                −18.4
        F1         −1.3   0.20∣0.07      2.6   0.12∣0.44     −1.7   0.84∣0.64
        F2         −4.6   0.50∣0.53     −6.2   0.49∣0.57      2.3   0.76∣0.91
        F3         −1.9   0.72∣0.46      1.2   0.60∣0.80      5.6   0.80∣0.53
        F4          7.3   0.27∣0.69      8.8   0.33∣0.76      8.5   0.40∣0.77
Table 12
 
The γ and λ_i values for each GRIFT model fit to data from human observer PS2 in the square-detection experiment. The P_i^1 and P_i^6 values indicate the probability that feature detector F_i will be active when the observer responds "1" (target definitely absent), P(F_i = 1 ∣ C = 1), and "6" (target definitely present), P(F_i = 1 ∣ C = 6), respectively.
Square-detection   PS2-full             PS2-inc              PS2-incrot
                   λ_i   P_i^1∣P_i^6    λ_i   P_i^1∣P_i^6    λ_i   P_i^1∣P_i^6
N = 1   γ1          2.4                  2.7                  2.5
        γ2          0.3                  0.5                  0.2
        γ3         −1.7                 −1.5                 −1.4
        γ4         −4.6                 −4.8                 −4.3
        γ5         −7.7                 −7.6                 −7.0
        F1          5.6   0.21∣0.79      5.8   0.19∣0.80      5.8   0.25∣0.75
N = 2   γ1          2.2                  6.9                  2.3
        γ2         −0.1                  4.5                 −0.1
        γ3         −3.1                  1.7                 −2.1
        γ4         −6.3                 −1.6                 −6.1
        γ5        −12.9                 −7.4                −10.5
        F1          4.5   0.21∣0.65     −4.4   0.79∣0.31      4.0   0.14∣0.45
        F2          7.4   0.16∣0.70      6.8   0.14∣0.68      7.4   0.23∣0.71
N = 3   γ1          9.4                  2.2                  3.8
        γ2          7.1                 −0.4                  1.0
        γ3          3.9                 −3.3                 −1.2
        γ4          0.9                 −7.1                 −5.2
        γ5         −6.0                −12.3                 −9.6
        F1          4.3   0.23∣0.68      4.9   0.05∣0.36     −2.1   0.51∣0.30
        F2         −7.2   0.84∣0.31      6.6   0.21∣0.75      7.4   0.25∣0.72
        F3          4.1   0.02∣0.24      2.6   0.31∣0.68      4.3   0.09∣0.37
N = 4   γ1          7.8                 15.7                 13.2
        γ2          4.3                 12.4                  9.6
        γ3          1.1                  9.3                  7.4
        γ4         −1.8                  5.7                  3.1
        γ5         −8.9                 −0.5                 −1.6
        F1         −4.6   0.14∣0.08     −4.7   0.94∣0.64     −1.8   0.45∣0.23
        F2         −4.2   0.73∣0.26     −6.7   0.82∣0.31     −8.0   0.75∣0.28
        F3          7.2   0.16∣0.68     −3.9   0.80∣0.38     −4.3   0.91∣0.62
        F4          4.2   0.02∣0.25      3.2   0.80∣0.95      3.9   0.85∣0.90
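The square-detection task used a six-point rating response, so each fit in Tables 11 and 12 reports five γ cutpoints rather than a single γ. One natural reading, sketched below, treats each γ_r as a threshold on the same λ-weighted feature sum under a cumulative-logistic link; this functional form is an illustrative assumption, and only the fitted values come from Table 11.

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def rating_probs(gammas, lambdas, features):
        """Distribute probability over the six rating responses under an
        assumed cumulative-logistic reading of the five gamma cutpoints:
        P(C > r | F) = sigmoid(gamma_r + sum_i lambda_i * F_i)."""
        z = sum(l * f for l, f in zip(lambdas, features))
        upper = [sigmoid(g + z) for g in gammas]        # P(C > r), r = 1..5
        cum = [1.0] + upper + [0.0]
        return [cum[r] - cum[r + 1] for r in range(6)]  # P(C = 1), ..., P(C = 6)

    # PS1-full, N = 1 (Table 11): five cutpoints and a single lambda_1.
    gammas, lam = [0.8, 0.3, -1.9, -5.3, -6.1], [4.9]
    print(rating_probs(gammas, lam, [1]))  # feature detected: mass on 4-6
    print(rating_probs(gammas, lam, [0]))  # not detected: mass on 1-3

Under this reading, detecting the single feature shifts the predicted mass from ratings 1-3 to ratings 4-6, in line with the reported activation probabilities P_1^1 = 0.26 and P_1^6 = 0.70.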
Table 13
 
AIC scores and data log likelihoods for GRIFT models of the data from human observers in the square-detection experiment. An asterisk marks the minimum AIC in each row.
Square-detection
Participant   Fit       N = 1      N = 2      N = 3      N = 4
PS1-full      AIC      11,930*    12,002     12,092     12,221
              LnL      −5,894     −5,869     −5,843     −5,842
PS1-inc       AIC      12,232*    12,277     12,302     12,422
              LnL      −6,045     −6,001     −5,948     −5,942
PS1-incrot    AIC      10,978     10,830*    10,910     11,014
              LnL      −5,418     −5,278     −5,252     −5,238
PS2-full      AIC      11,612     11,510*    11,568     11,662
              LnL      −5,735     −5,618     −5,581     −5,562
PS2-inc       AIC      11,582     11,474*    11,517     11,610
              LnL      −5,720     −5,600     −5,556     −5,536
PS2-incrot    AIC      12,027     11,903*    11,982     12,072
              LnL      −5,942     −5,814     −5,788     −5,767