Research Article  |  November 2008
The direction of measured face aftereffects
Christopher P. Benton, Emma C. Burgess
Journal of Vision November 2008, Vol. 8(15):1. doi:https://doi.org/10.1167/8.15.1
Abstract

Prolonged viewing of a face can result in a change in our perception of subsequent faces. This process of adaptation is believed to be functional and to reflect optimization-driven changes in the neural encoding. Because it is believed to target the neural systems underlying face processing, the measurement of face aftereffects is seen as a powerful behavioral technique that can provide deep insights into our facial encoding. Face identity aftereffects have typically been measured by assessing the way in which adaptation changes the perception of images from a test sequence, the latter commonly derived from morphing between two base images. The current study asks to what extent such face aftereffects are driven by the test sequence used to measure them. Using subjects trained to respond either to identity or expression, we examined the effects of identity and expression adaptation on test stimuli that varied in both identity and expression. We found that face adaptation produced measured aftereffects that were congruent with the adaptation stimulus; the composition of the test sequences did not affect the measured direction of the face aftereffects. Our results support the view that face adaptation studies can meaningfully tap into the intrinsically multidimensional nature of our representation of facial identity.

Introduction
Our perception of faces is mutable. The diet of faces to which you are exposed can bias your judgment of a face's identity, gender, expression, and racial group (Leopold, O'Toole, Vetter, & Blanz, 2001; Webster, Kaping, Mizokami, & Duhamel, 2004). This phenomenon is termed adaptation, with the subsequent judgment bias termed the aftereffect. Adaptation has been widely used over the past decade or so to probe the nature of our facial representations. Face aftereffect studies are particularly influential precisely because of their use of adaptation. Adaptation occurs with many sensory attributes and is widely held to be functional (Attneave, 1954; Barlow, 1961; Simoncelli & Olshausen, 2001). The basic theory is that a population of neurons encoding a particular attribute shift their responses to better encode the statistical properties of that attribute (Barlow, 2001). Adaptation therefore directly targets those neurons encoding the adapted attribute (Clifford, 2005). 
The focus of the current paper is to examine what determines the direction of identity aftereffects, a concern that makes particular sense if you think of a face as existing in a perceptual face space. The notion of face space comes from computational approaches to face recognition based on principal components analysis (PCA) and other multidimensional representation systems (Turk & Pentland, 1991; Valentine, 1991). These approaches operate by extracting the average face from a corpus of faces and then calculating an orthogonal basis set that captures the variability of the corpus from that average. The basic conception of a PCA-derived face space is therefore one in which the average or prototypical face lies at the center of a multidimensional space. Any face may be encoded as a vector within that space by projecting the difference between the face and the prototype onto the axes (or eigenfaces) making up the space.
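As an illustration of this construction, the following minimal sketch builds a PCA-derived face space from vectorized face images and encodes a face as its coordinates within that space. This is not the authors' implementation; the array layout, function names, and number of components are assumptions.

```python
# Minimal sketch of a PCA-derived face space (illustrative only).
# Assumes `faces` is an (n_faces, n_pixels) array of aligned,
# vectorized face images.
import numpy as np

def build_face_space(faces, n_components=10):
    """Return the prototype (mean face) and an orthogonal basis
    (eigenfaces) capturing the corpus variability about that mean."""
    prototype = faces.mean(axis=0)
    deviations = faces - prototype
    # SVD of the mean-centred corpus: rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(deviations, full_matrices=False)
    return prototype, vt[:n_components]

def encode(face, prototype, eigenfaces):
    """A face is a vector in face space: the projection of its
    deviation from the prototype onto the eigenface axes."""
    return eigenfaces @ (face - prototype)

# An anti-face lies diametrically opposite in face space:
# anti_coords = -encode(face, prototype, eigenfaces)
```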
Many face adaptation studies have measured aftereffects using a technique in which a morph sequence is created that runs between two faces (say, A and B). Typically, participants are asked to make a perceptual judgment about a number of images along that sequence. This may involve classifying the images, for example, as most similar to A or B (Benton, Jennings, & Chatting, 2006), or rating the images in some manner (Jeffery, Rhodes, & Busey, 2007). Face aftereffects are measured by seeing how adaptation (usually to either A or B) changes the pattern of those judgments. Findings from such studies have been used to provide evidence about the fundamental nature of our facial encoding.
From a given example face in face space, moving along an identity trajectory through and beyond the prototypical or average face results in anti-faces of the original starting point. Adaptation to an anti-face results in identification of the prototype as the example face (Leopold et al., 2001). This finding is commonly accounted for by proposing that adaptation to a particular face results in movement of the internal prototype toward the adaptation stimulus: a recoding of face space. The test image, in this case the pre-adaptation prototype, is effectively repelled from the adaptation stimulus (Rhodes et al., 2005). The aftereffect created by adaptation to an anti-face appears specific to the face used to create that anti-face. When a morph sequence between a different identity and its anti-face is used as the test stimulus there is no repulsion of the test stimulus. Such findings have been taken to provide strong support for the face-space model (Leopold et al., 2001; Rhodes & Jeffery, 2006; Robbins, McKone, & Edwards, 2007). 
The findings from face adaptation studies fit well with the idea that face aftereffects are created by the repulsion of the test image from the adaptation image. Within this view, the direction of the aftereffect is that of the vector within face space running from adaptation to test image. However, at least in the context of assessing the direction of face aftereffects, identity adaptation studies contain a confound concerning the information that the experimenter puts into the task. The adapting stimulus and the test stimuli are essentially built from one another. For example, in the anti-face adaptation example described above, the adapting stimulus is constructed from the original face and the prototype, with the three forming a single morph trajectory. The response given by subjects (i.e., the original face) is therefore intrinsically encoded in the relationship between the adaptation and test images. The identity specificity of the aftereffect may have less to do with the direction of the aftereffect within face space and more to do with the fact that the constructional relationship simply does not hold when the adaptation stimulus does not fall on the line within face space containing the test sequence.
A concrete example of the above is given by an image based account of identity aftereffects. In this, any image along a morph sequence is represented as the weighted average of the images from which it was constructed. Adaptation would work by shifting the relative weighting of the component images—any aftereffects must necessarily occur along the morph sequence. Adaptation would be driven by the relative weighting of the component images within the adaptation image, the maximum aftereffect arising when the adaptation image is one of the component images of the morph sequence. This would result in a reduction of the weighting of the component image corresponding to the adaptation image, leading to a repulsion of the test image along the morph sequence away from the adaptor. In this scenario, identity aftereffects have little to do with what we conceive of as face perception; they simply reflect the operation of an image-based scratchpad, which subjects use to complete the psychophysical task. 
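This image-based account can be made concrete with a small sketch: a morph image is literally a weighted average of its two component images, and adaptation simply re-weights the components, so any aftereffect is confined to the morph trajectory. The gain parameter and the linear form of the re-weighting are illustrative assumptions, not a fitted model.

```python
import numpy as np

def morph(img_a, img_b, w_b):
    """An image w_b of the way along the A-to-B morph sequence,
    represented purely as a weighted average of the two components."""
    return (1.0 - w_b) * img_a + w_b * img_b

def perceived_weight_after_adaptation(w_b, adapt_w_b, gain=0.2):
    """Image-based account: adaptation down-weights the component that
    dominates the adaptor, pushing the test percept along the morph
    sequence away from the adaptor. `gain` is an illustrative constant."""
    # Repulsion is proportional to how strongly the adaptor loads on B.
    shift = gain * (adapt_w_b - 0.5) * 2.0
    return np.clip(w_b - shift, 0.0, 1.0)

# Adapting to image B itself (adapt_w_b = 1) makes every test image look
# more A-like, so the balance point moves toward B: a repulsive
# aftereffect confined, by construction, to the morph trajectory.
```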
Thus, the measured direction of face identity aftereffects may be determined simply by the manner in which the experimental stimuli are built, rather than necessarily reflecting the repulsion of test from adaptation within a multidimensional face space. If true, then conclusions based upon the direction of face aftereffects may be suspect. In the current study we therefore assess whether the direction of face aftereffects is determined by repulsion of the test image by the adaptation image, or whether the direction is determined by the test sequence used. 
Methods
The idea behind the following experiments is straightforward. First, we train subjects to respond either to facial identity or facial expression. We then measure the strength of identity adaptation and expression adaptation upon test morph sequences that vary simultaneously in both of these attributes. If the direction of the aftereffect is determined by that of the test sequences, then we should measure distortions of both identity and expression in both adaptation conditions. If, on the other hand, the direction of the aftereffect is independent of the direction of the test sequence, then we should see only identity aftereffects under identity adaptation and expression aftereffects under expression adaptation. 
To build the stimuli employed in our experiment we used two actors A and B, each displaying two facial expressions, happy and sad (see Figure 1); the collection of these images is described elsewhere (Benton et al., 2007). We morph between these images (Tiddeman, Burt, & Perrett, 2001) to produce unidimensional morphs (the ‘edges’ in Figure 1) and multidimensional morphs (the ‘diagonals’). We have used the term unidimensional to refer to a morph sequence in which only one attribute (either expression or identity) changes, and the term multidimensional to refer to a morph sequence where both expression and identity change. 
Figure 1
 
Construction of the stimuli used in our experiments from 4 original images (Ah, Bh, As, and Bs). The arrows show the directions of the morph sequences such that, for example, 75% on the BhAs test sequence indicates an image that is 25% Bh and 75% As.
To describe the stimuli used in our experiment we use a capital letter to indicate identity (A or B) followed by a lowercase letter to indicate expression (h or s). We use the uppercase letter 'M' to indicate the identity midpoint and, similarly, the lowercase letter 'm' to indicate the expression midpoint (see Figure 1). Our test sequences (used to measure the strength of adaptation) are AhBs and AsBh. For adaptation stimuli we used the midpoints of the unidimensional morphs: Mh, Ms, Am, and Bm. We split these into two conditions, an identity adaptation condition (with adaptation stimuli Am and Bm) and an expression adaptation condition (with adaptors Mh and Ms). We measure the strength of adaptation as the difference in balance point between the two adaptors within each of these conditions for each test sequence (Benton et al., 2006, 2007). Note that each test sequence varies identity, shifting from A to B. However, in Sequence 1 (AhBs) this identity shift is accompanied by a change in facial expression from happy to sad; in Sequence 2 (AsBh) it is accompanied by a shift from sad to happy.
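The geometry of this stimulus space can be sketched as follows. The sketch treats morphing as pixelwise bilinear blending purely to illustrate the coordinates; the actual stimuli were built with shape-and-texture morphing (Tiddeman et al., 2001), and the function and argument names are assumptions.

```python
def stimulus(ah, bh, as_, bs, identity, expression):
    """Approximate a point in the identity-by-expression stimulus space
    as a bilinear blend of the four corner images (arrays). identity
    runs 0 (A) to 1 (B); expression runs 0 (happy) to 1 (sad). Pixel
    blending here only illustrates the geometry of Figure 1."""
    happy = (1 - identity) * ah + identity * bh   # unidimensional edge Ah-Bh
    sad = (1 - identity) * as_ + identity * bs    # unidimensional edge As-Bs
    return (1 - expression) * happy + expression * sad

# Named points from the text, in (identity, expression) coordinates:
# Mh = (0.5, 0), Ms = (0.5, 1), Am = (0, 0.5), Bm = (1, 0.5).
# Test sequence AhBs runs (0, 0) -> (1, 1); AsBh runs (0, 1) -> (1, 0).
```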
The balance point is defined as the point along a morph sequence that is equally likely to be judged as displaying either of the two target attributes. We measured balance points using an adaptive method of constants procedure (Watt & Andrews, 1981) in which subjects view images from our test sequences and classify these as either identity A or identity B, or as happy or sad (dependent upon response condition, see below). For each test sequence we use responses from 64 image presentations (or trials) to estimate the balance point by fitting a cumulative Gaussian to the resultant data (Wichmann & Hill, 2001a). We refer to the group of 64 trials used to measure a balance point as a run. We term a single run presented individually, or an interleaved group of runs, a session.
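A bare-bones version of the balance-point estimate might look like the following: a maximum-likelihood cumulative-Gaussian fit to classification counts, whose 50% point is the balance point. This is a stand-in sketch, not the authors' code; it omits the adaptive stimulus placement of APE and the lapse-rate handling of Wichmann and Hill (2001a), and the example numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_balance_point(morph_levels, n_b_responses, n_trials):
    """Fit a cumulative Gaussian to 'B' (or 'sad') response counts at
    each morph level (% along the sequence) and return its 50% point,
    the balance point. No lapse rate is modelled."""
    def neg_log_likelihood(params):
        mu, log_sigma = params
        p = norm.cdf(morph_levels, loc=mu, scale=np.exp(log_sigma))
        p = np.clip(p, 1e-6, 1 - 1e-6)  # guard the logs
        return -np.sum(n_b_responses * np.log(p)
                       + (n_trials - n_b_responses) * np.log(1 - p))
    result = minimize(neg_log_likelihood, x0=[50.0, np.log(10.0)],
                      method="Nelder-Mead")
    mu, _ = result.x
    return mu  # balance point, in % along the morph sequence

# Invented example: 64 trials split over four morph levels.
# levels = np.array([30.0, 45.0, 55.0, 70.0])
# fit_balance_point(levels, np.array([2, 6, 10, 15]), np.full(4, 16))
```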
Our eight subjects were evenly divided into two groups, an identity response group and an expression response group. Those in the identity group were instructed only to respond to identity; those in the expression group were instructed only to respond to expression. Rather than choosing the 50% midpoints of the unidimensional morphs as our adaptation stimuli, we estimated their perceptual midpoints psychophysically for each subject and used these as their adaptation stimuli. A fixed choice across subjects might have added an unwanted identity or expression bias into, respectively, the expression or identity adaptation.
During the experiments, participants were presented with three types of image: adaptation images, test images (to which subjects respond), and comparison images. The latter were essentially for training purposes and displayed the categories to which the subjects responded. The average interocular distance for the adaptation and test images was 2° (91 pixels). In order to prevent retinotopic adaptation, adaptation and test images rotated around a fixation point once every 5 seconds in a circular trajectory of diameter 1°. Trajectory start position and direction were randomly determined for each stimulus. Except during presentation of the comparison images (see below) the fixation spot was always present.
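The rotating trajectory is fully determined by the parameters given above; a minimal sketch of the position computation (the function name and degrees-from-fixation convention are assumptions) might read:

```python
import numpy as np

def stimulus_offset(t, start_phase, direction, diameter_deg=1.0,
                    period_s=5.0):
    """Offset of the face centre from fixation at time t (seconds):
    a circular trajectory of 1 deg diameter completing one revolution
    every 5 s. start_phase (radians) and direction (+1 or -1) are
    randomized per stimulus, as described in the Methods."""
    radius = diameter_deg / 2.0
    theta = start_phase + direction * 2.0 * np.pi * t / period_s
    return radius * np.cos(theta), radius * np.sin(theta)

# Per-stimulus randomization:
# rng = np.random.default_rng()
# start_phase = rng.uniform(0, 2 * np.pi)
# direction = rng.choice([-1, 1])
```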
There was a 500-msec gap between all stimuli presented within a trial. Test stimuli were presented for 1000 msec, comparison stimuli for 2000 msec. Adaptation consisted of an initial 30-second adaptation stimulus followed by a test stimulus (to which subjects respond). Subsequent adaptation trials within a session consisted of adaptation top-up (5 seconds) followed by the test stimulus. Comparison stimuli were presented prior to each test stimulus in the non-adaptation sessions. The comparison stimuli were composed of the start and end images of the morph sequence to which the test stimulus for that trial belonged. The end image was presented directly above the start image, with the ensemble presented in the middle of the screen. In those tasks where comparison images were used, subjects indicated whether intermediate morph images most closely resembled the start or end of the morph sequences. The purpose of the comparison images was therefore to cue the subjects to the appropriate response. The size of the component images within each ensemble was reduced to 75% of that of the adaptation and test images.
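Putting those durations together, the timing skeleton of an adaptation session might be sketched as below; show and get_response are hypothetical stand-ins for the display and keyboard routines, and the trial object is assumed.

```python
def run_adaptation_session(trials, show, get_response):
    """Timing skeleton of an adaptation session. show(image, secs) and
    get_response() stand in for display and keyboard code; the
    durations come from the Methods."""
    for i, trial in enumerate(trials):
        adapt_secs = 30.0 if i == 0 else 5.0   # initial adapt, then top-ups
        show(trial.adaptor, adapt_secs)
        show(None, 0.5)                        # 500 msec gap between stimuli
        show(trial.test_image, 1.0)            # 1000 msec test presentation
        trial.response = get_response()        # 'A'/'B' or 'happy'/'sad'
        show(None, 0.5)
```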
From the point of view of a participant, the procedure would start with gathering each of the unadapted balance points from the unidimensional morphs (to be used subsequently as adaptation stimuli). Depending on the sequence (AhBh, AsBs, AhAs, or BhBs), subjects would respond identity A or identity B, or happy or sad, by pressing the up and down arrows on a keyboard. These four runs were completed individually (i.e., not interleaved) and in random order, for each subject. Participants then completed the adaptation part of the experiment. We measured the balance points on the two test sequences in response to each of the four predetermined adaptation stimuli. For each adaptation stimulus the two test sequences were randomly interleaved so that we simultaneously measured the effects of adaptation on both. This was done in order to prevent subjects from associating any particular identity with any particular facial expression. For each subject there were therefore four adaptation sessions; these were presented in random order.
Directly before each adaptation session, subjects carried out a training session. For subjects in the identity response condition the training session consisted of finding balance points on interleaved runs for the AhBh and AsBs sequences. For subjects in the expression response condition it consisted of finding balance points on the interleaved AhAs and BhBs runs. From the point of view of the participant, the training sessions consisted of test images that varied in both identity and expression. The comparison images presented prior to the test images cued subjects to respond to either identity or expression as appropriate.
Stimuli were presented on a LaCie Electron Blue IV 22″ monitor. Spatial resolution was 1024 × 768 pixels (23° by 17°); temporal resolution was 75 Hz. The edges of the face stimuli were blurred to display mean luminance (54 cd/m²). The experiments took place in a darkened room where the monitor was the only strong source of illumination. Subjects sat comfortably in an armchair at a viewing distance of 100 cm from the monitor with the keyboard (used for responses) on their lap. Identity response subjects pressed the up arrow to indicate identity A and the down arrow to indicate identity B. Expression response subjects pressed the up arrow to indicate happy and the down arrow to indicate sad. All subjects had normal or corrected-to-normal vision. Stimulus presentation was controlled by a PC; the images were rendered using the Cogent Graphics Matlab extension developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience.
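As a quick consistency check on these viewing parameters, the conversion between pixels and degrees of visual angle can be worked through as follows; all values come from the text, and the small discrepancy with the reported 91 pixels reflects rounding and the small-angle approximation.

```python
import numpy as np

# Display geometry from the Methods: 1024 px spans 23 deg at 100 cm.
viewing_distance_cm = 100.0
pixels_per_degree = 1024 / 23.0                # ~44.5 px/deg horizontally

interocular_deg = 2.0
interocular_px = interocular_deg * pixels_per_degree   # ~89 px vs 91 reported

# Exact visual angle of an on-screen extent, without the small-angle
# approximation: angle = 2 * atan(size / (2 * distance)).
def visual_angle_deg(size_cm, distance_cm=viewing_distance_cm):
    return np.degrees(2 * np.arctan(size_cm / (2 * distance_cm)))
```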
Results
If the direction of the aftereffect is determined by the direction of the test sequence then we would expect (for example) that adaptation to Mh or Bm would produce similar distortions in Sequence 2, because both adaptors lie, in terms of image similarity, closer to one end of the sequence (Bh) than the other (As). Consequently, we should be able to measure similar identity and expression aftereffects under adaptation to identity and adaptation to expression. On the other hand, if the direction of the aftereffect is determined by the vector connecting the test and adaptation stimuli, and is independent of the test trajectory, then adaptation to Mh and Bm should produce orthogonal aftereffects on images drawn from Sequence 2: we should expect to find no expression aftereffect under identity adaptation and vice versa.
In the identity adaptation condition we calculate the strength of the face aftereffect by taking (for each sequence) the difference between the balance points under adapt to Bm and adapt to Am. Similarly, in the expression adaptation condition, we calculate the strength of the aftereffect as the difference in balance point under adapt to Ms and adapt to Mh. So for each condition we measure the strength of the aftereffect along Sequence 1 and along Sequence 2, taking into account their directions with respect to the paired adaptation stimuli. 
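In code, the aftereffect-strength calculation reduces to a pair of differences; a minimal sketch (the dictionary layout and sign convention are assumptions) is:

```python
def aftereffect_strength(balance_points):
    """Aftereffect strength per condition, as described in the Results:
    the difference between the balance points measured under the two
    paired adaptors, for each test sequence. `balance_points` maps
    (adaptor, test_sequence) -> balance point (% along the sequence)."""
    identity = {seq: balance_points[("Bm", seq)] - balance_points[("Am", seq)]
                for seq in ("AhBs", "AsBh")}
    expression = {seq: balance_points[("Ms", seq)] - balance_points[("Mh", seq)]
                  for seq in ("AhBs", "AsBh")}
    return identity, expression
```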
We plot results from our experiments, averaged across test sequence, in Figure 2. The top panel shows the strength of the aftereffect when the adaptation type and response type are congruent; the bottom panel shows aftereffect strength when adaptor and response are incongruent. The results clearly indicate that, on average, there is no incongruent aftereffect. In other words, identity and expression adaptation fail to elicit expression and identity aftereffects when the aftereffects are measured with sequences that change in both identity and expression. 
Figure 2
 
Strength of adaptation when adaptation stimulus and response are congruent (top panel) and incongruent (bottom panel). Error bars show 95% confidence limits. Note that all hypothesis testing in the current study is achieved through the use of 95% confidence limits (Cumming & Finch, 2005). Filled circles show results from identity response subjects, squares show results from expression response subjects. Error bars for each subject are derived through a bootstrapping procedure in which 10000 bootstrap estimates of balance point are generated for each psychometric function (Wichmann & Hill, 2001b) and then propagated through the relevant adaptation strength calculations (Benton et al., 2006, 2007). Subject error bars were calculated using the percentile method (Efron & Tibshirani, 1993). Triangles show the averages across subjects with error bars calculated from the standard error of the means. Downward pointing filled triangles show the averages for identity response subjects, downward pointing unfilled triangles show averages for expression response subjects, while upward pointing triangles show averages across both groups.
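The bootstrap described in the caption can be sketched as follows, reusing the illustrative fit_balance_point function from the Methods sketch. This is a parametric bootstrap in the spirit of Wichmann and Hill (2001b), not the authors' code; the seed and loop structure are assumptions.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_balance_points(levels, n_trials, mu, sigma, n_boot=10000,
                             seed=0):
    """Simulate response counts from the fitted cumulative Gaussian,
    refit each simulated data set with fit_balance_point (defined in
    the Methods sketch), and return the balance-point estimates."""
    rng = np.random.default_rng(seed)
    p = norm.cdf(levels, loc=mu, scale=sigma)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        counts = rng.binomial(n_trials, p)   # one simulated run
        estimates[i] = fit_balance_point(levels, counts, n_trials)
    return estimates

# Differences of bootstrapped balance points under the two adaptors give
# a distribution of adaptation strengths; 95% limits by the percentile
# method (Efron & Tibshirani, 1993) are its 2.5th and 97.5th percentiles:
# lo, hi = np.percentile(strength_samples, [2.5, 97.5])
```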
In individual subjects there are instances of small incongruent aftereffects; however, it should be remembered that the adaptation stimuli for all subjects fall at their estimated perceptual midpoints. Therefore for each subject there is likely to be a small amount of expression adaptation in the identity adaptation condition and vice versa. There is no particular reason to think that these individual biases are systematic across the population of observers; the average in the incongruent plot shows no overall bias. 
Discussion
The experiments described above address the utility of the fundamental methodology used to measure face aftereffects. These are typically assessed by looking at the influence of adaptation on the perception of images from a morphed sequence running between two base images (each of which may have been constructed from a number of averaged images). Our results clearly demonstrate that the direction of the face aftereffect is not determined by the test sequence but is determined by the direction of the adaptation stimulus relative to the test stimulus. 
This study was motivated by a concern that the test sequences used to measure face adaptation could form an inherent confound for studies in this field. This might occur if, for example, the tasks essentially fail to tap into the multidimensional nature of our facial representations. If true, then any image along the morph sequence may be treated in the same manner as it is constructed, i.e., as the weighted average of the two images from which it was built. In this case adaptation might simply represent a change of that weighting, meaning that the direction of the subsequent aftereffect would necessarily be the same as that of the test sequence. This would mean that the measures derived from the tasks fail to tap properly into our perception of faces, and that the conclusions drawn are not generalizable beyond the tasks used.
It would therefore appear that the findings of those studies using face adaptation to explore face space are not tarnished by the implicit confound of test sequence direction. Of course, the results described in the current study necessarily look only at a rather constrained two-dimensional face space constructed with expression (happy to sad) and identity (actor A to actor B) as its axes. However, unless measurements of face adaptation behave in a radically different manner from that found in this study, there seems little reason to suppose that our findings cannot be generalized to the situation in which face aftereffects are measured in the context of, for example, a multidimensional facial identity space.
Rhodes and Jeffery (2006) also recently questioned the generality of conclusions drawn from face adaptation studies. They noted that measured aftereffects are necessarily constrained by the test sequences used and proposed that the magnitude of measured aftereffects may be driven simply by the perceptual contrast between test and adaptation stimuli. To test whether this was the case Rhodes and Jeffery created sequences matched in perceptual contrast that either did or did not run through the prototypical face. Their finding, that aftereffect magnitudes were greater for trajectories containing the prototype, cannot be explained on the basis of perceptual contrast and supports the idea that face aftereffects reflect a shift in the prototype. 
Our results also answer a rather basic question, namely that of the nature of the aftereffect induced by the adaptor: identity adaptation produces an identity aftereffect, and expression adaptation produces an expression aftereffect. Note that these separable responses should not be taken as evidence for the separate encoding of the two attributes. Indeed, recent adaptation studies provide evidence for a partial overlap in the neural representation of these attributes (Ellamil, Susskind, & Anderson, 2008; Fox & Barton, 2007). In the current study, although expression and identity were deliberately covaried, this was done so that, over the two randomly interleaved test sequences, there was no correlation between identity and expression. Had we used only one of the test sequences, we may well have measured substantial incongruent adaptation. However, this would simply have reflected the subject gaining knowledge of the correlation between identity and expression, and subsequently using this information in their decision-making processes.
In conclusion, this study demonstrates that the direction of the measured face aftereffect is that of a repulsion of the test stimulus away from the adaptation stimulus. The direction of the aftereffect is unaffected by the direction of the test sequence used for its measurement. Measured face aftereffects appear to tap into the intrinsically multidimensional nature of facial representations. Our findings thus validate those of other studies that have used face adaptation, supplying an essential link in their chain of evidence.
Acknowledgments
The authors wish to thank our three anonymous reviewers for their constructive comments. 
Commercial relationships: none. 
Corresponding author: Christopher P. Benton. 
Email: chris.benton@bristol.ac.uk. 
Address: 12a Priory Road, Bristol, BS8 1TU, UK. 
References
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183–193.
Barlow, H. (2001). Redundancy reduction revisited. Network, 12, 241–253.
Barlow, H. B. (1961). Possible principles underlying the transformations of sensory messages. In W. A. Rosenblith (Ed.), Sensory communication (pp. 217–314). Cambridge, MA: MIT Press.
Benton, C. P., Etchells, P. J., Porter, G., Clark, A. P., Penton-Voak, I. S., & Nikolov, S. G. (2007). Turning the other cheek: The viewpoint dependence of facial expression after-effects. Proceedings of the Royal Society B: Biological Sciences, 274, 2131–2137.
Benton, C. P., Jennings, S. J., & Chatting, D. J. (2006). Viewpoint dependence in adaptation to facial identity. Vision Research, 46, 3313–3325.
Clifford, C. W. G. (2005). Functional ideas about adaptation applied to spatial and motion vision. In C. W. G. Clifford & G. Rhodes (Eds.), Fitting the mind to the world: Adaptation and after-effects in high-level vision (pp. 47–82). Oxford, UK: Oxford University Press.
Cumming, G., & Finch, S. (2005). Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60, 170–180.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. Boca Raton, FL: Chapman & Hall/CRC.
Ellamil, M., Susskind, J. M., & Anderson, A. K. (2008). Examinations of identity invariance in facial expression adaptation. Cognitive, Affective, & Behavioral Neuroscience, 8, 273–281.
Fox, C. J., & Barton, J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127, 80–89.
Jeffery, L., Rhodes, G., & Busey, T. (2007). Broadly tuned, view-specific coding of face shape: Opposing figural aftereffects can be induced in different views. Vision Research, 47, 3070–3077.
Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94.
Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of facial identity. Vision Research, 46, 2977–2987.
Rhodes, G., Robbins, R., Jaquet, E., McKone, E., Jeffery, L., & Clifford, C. W. G. (2005). Adaptation and face perception: How aftereffects implicate norm-based coding of faces. In C. W. G. Clifford & G. Rhodes (Eds.), Fitting the mind to the world: Adaptation and after-effects in high-level vision (pp. 213–240). Oxford, UK: Oxford University Press.
Robbins, R., McKone, E., & Edwards, M. (2007). Aftereffects for face attributes with different natural variability: Adapter position effects and neural models. Journal of Experimental Psychology: Human Perception and Performance, 33, 570–592.
Simoncelli, E. P., & Olshausen, B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24, 1193–1216.
Tiddeman, B. P., Burt, D. M., & Perrett, D. I. (2001). Prototyping and transforming facial texture for perception research. IEEE Computer Graphics and Applications, 21, 42–50.
Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3, 71–86.
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 43, 161–204.
Watt, R. J., & Andrews, D. P. (1981). APE: Adaptive Probit Estimation of psychometric functions. Current Psychological Reviews, 1, 205–214.
Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural face categories. Nature, 428, 557–561.
Wichmann, F. A., & Hill, N. J. (2001a). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293–1313.
Wichmann, F. A., & Hill, N. J. (2001b). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63, 1314–1329.