Open Access
Article  |   December 2019
Agent identity drives adaptive encoding of biological motion into working memory
Author Affiliations
  • Quan Gu
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
  • Wenmin Li
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
  • Xiqian Lu
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
  • Hui Chen
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
  • Mowei Shen
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
    mwshen@zju.edu.cn
    https://person.zju.edu.cn/en/moweishen
  • Zaifeng Gao
    Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
    zaifengg@zju.edu.cn
    https://person.zju.edu.cn/en/zaifengg
Journal of Vision December 2019, Vol.19, 6. doi:https://doi.org/10.1167/19.14.6
Abstract

To engage in normal social interactions, we have to encode human biological motions (BMs; e.g., walking and jumping), one of the most salient and biologically significant types of kinetic information encountered in everyday life, into working memory (WM). Critically, each BM in real life is produced by a distinct person and carries a dynamic motion signature (i.e., identity). Whether agent identity influences the WM processing of BMs remains unknown. Here, we addressed this question by examining whether memorizing BMs with different identities promoted the WM processing of task-irrelevant clothing colors. Two opposing hypotheses were tested: (a) WM stores only the target action (element-based hypothesis) and (b) WM stores both the action and the irrelevant clothing color, interpreting each BM as an event (event-based hypothesis). We required participants to memorize actions performed either by one agent or by distinct agents, while ignoring clothing colors. We then examined whether the irrelevant color was also stored in WM by probing a distracting effect: If the color was extracted into WM, a change of the irrelevant color in the probe would significantly impair action performance. We found that WM encoding of BMs was adaptive: Once the memorized actions had different identities, WM adopted an event-based encoding mode regardless of memory load and probe identity (Experiment 1, the different-identity group of Experiment 2, and Experiment 3). However, WM used an element-based encoding mode when the memorized actions shared the same identity (the same-identity group of Experiment 2) or were inverted (Experiment 4). Overall, these findings imply that agent identity information has a significant effect on the WM processing of BMs.

Introduction
As a highly social species, we observe people engaging in different biological motions (BMs), such as walking, jumping, and handshaking, almost every day (for reviews, see Pavlova, 2012; Puce & Perrett, 2003; Steel, Ellem, & Baxter, 2015; Troje, 2013). These BM stimuli convey rich social information (Pavlova, 2012; Puce & Perrett, 2003; Thornton, 2018), which can be conspicuously demonstrated by point-light displays (PLDs) that depict human movements via a simple set of minute light points (e.g., 12 points) placed at distinct joints of a moving human body (Johansson, 1973). Although these PLDs are highly impoverished (e.g., clothes and head cues are absent), once in motion they can be rapidly recognized as coherent, meaningful movements. Critically, multiple aspects of social information, such as identity, walking direction, sex, attractiveness, interaction, intention, and emotion can be extracted from PLDs during visual perception (e.g., Alaerts, Nackaerts, Meyns, Swinnen, & Wenderoth, 2011; Atkinson, Dittrich, Gemmell, & Young, 2004; Loula, Prasad, Harber, & Shiffrar, 2005; Manera, Schouten, Becchio, Bara, & Verfaillie, 2010; Morrison, Hannah, Louise, & Hannah, 2018; Pollick, Lestou, Ryu, & Cho, 2002; Rizzolatti, Fogassi & Gallese, 2001; Roether, Omlor, Christensen, & Giese, 2009; van Boxtel & Lu, 2012; for reviews, see Blakemore, 2008; Sokolov et al., 2012; Troje, 2013). Therefore, the ability to retain these BMs in working memory (WM), a postperceptual buffer that stores and manipulates a limited set of information for ongoing tasks (Baddeley & Hitch, 1974), plays a pivotal role in our normal social interactions (Urgolites & Wood, 2013), and is perhaps one of the most sophisticated forms of memory processing in the brain (Ding, Gao, & Shen, 2017). 
Recently, researchers have attempted to uncover the processing mechanisms of BMs in WM. These studies predominantly focused on the WM storage of the pure action information embedded in BMs. For instance, WM can retain three to four individual actions that are stored independently from location, color, shape, and color-shape binding (e.g., Cai et al., 2018; Gao, Bentin, & Shen, 2015; Lu et al., 2016; Shen, Gao, Ding, Zhou, & Huang, 2014; Wood, 2007, 2011); storing action-related binding is resource-demanding (e.g., Ding et al., 2015; Liu, Lu, Wu, Shen, & Gao, 2019; Lu, Ma, Zhao, Gao, & Shen, 2019). To isolate pure action information, the tested BM stimuli were in fact collected from a single actor in almost all WM studies of BM (e.g., the BMs from the widely used Vanrie & Verfaillie, 2004, database were acquired from one actor). However, each BM in our daily life is produced by a distinct person, carrying a dynamic motion signature that informs of one's identity (e.g., Barclay, Cutting, & Kozlowski, 1978; Beardsworth & Buckner, 1981; Cutting & Kozlowski, 1977; Loula et al., 2005; Runeson & Frykholm, 1983, 1986; Stevenage, Nixon, & Vince, 1999; Troje, Westhoff, & Lavrov, 2005). Although agent identity and action are processed by different neural substrates (e.g., Cai et al., 2018; Downing, Jiang, Shuman, & Kanwisher, 2001; Downing, Peelen, Wiggett, & Tew, 2006; Peelen, Wiggett, & Downing, 2006; Puce & Perrett, 2003; Urgesi, Candidi, Ionta, & Aglioti, 2007), recent studies have implied that there is an intimate relation between action and BM identity (Balas & Pearson, 2017; Pilz & Thornton, 2017; Simhi & Yovel, 2017). For instance, Pilz and Thornton (2017) found that body motion affects the processing of identity. Moreover, it has been found that identity influences our cognitive processing in general (e.g., Bavel, Hackel, & Xiao, 2014) and action processing in particular (e.g., Schain, Lindner, Beck, & Echterhoff, 2012). 
For instance, Lindner, Echterhoff, Davidson, and Brand (2010) found that observation of other-performed actions induced false memories of self-performance; intriguingly, this phenomenon could be reduced by enhancing the identity cue of action to the observer (Schain et al., 2012). Therefore, it is possible that the identity information of BMs has an impact on action processing in a WM task. 
To the best of our knowledge, only Wood (2008) and Cai et al. (2018) have displayed BMs with distinct identities in a WM task, by creating three-dimensional (3D) animations with a variety of physical features (e.g., hair style, faces, body shape). However, these studies addressed the WM storage of action-related information rather than exploring the role of identity. Therefore, although identity can be extracted via BM perception (e.g., Barclay et al., 1978; Beardsworth & Buckner, 1981; Cutting & Kozlowski, 1977; Loula et al., 2005; Runeson & Frykholm, 1983, 1986; Stevenage et al., 1999; Troje et al., 2005), whether identity information affects the WM processing of BM remains unknown. 
To address this question, we focused on a common phenomenon in our daily life: While memorizing the actions of distinct people on a street, will their clothing colors be extracted into WM involuntarily? A similar question has been extensively explored for static visual objects in WM (e.g., Gao, Li, Yin, & Shen, 2010; Gao et al., 2016; Shen, Tang, Wu, Shui, & Gao, 2013; Shin & Ma, 2016, 2017; Swan, Collins, & Wyble, 2016; Zhao, Chen, Zhang, Shen, & Gao, 2018). A growing body of research has found that even if only one basic feature is required to be memorized, the other features will also be extracted into this cognitive unit, suggesting that an object-based encoding manner takes place for visual objects.1 Because each BM corresponds to an event (Shipley & Zacks, 2008), as an analogy, it is possible that WM adopts an event-based encoding manner. This hypothesis is theoretically reasonable. Particularly, an event is a higher level cognitive structure than an object, which typically contains several basic components, including agents, the relations among agents, agent properties (including physical characteristics, such as height and clothing color), event states, and spatiotemporal locations (Barwise & Perry, 1983).2 For instance, a segment of time at a given location for an object is considered as an event if the observer perceives a beginning and an end (Zacks & Tversky, 2001). Therefore, an object is usually a component of an event, and the previously revealed object-based encoding might be a reflection of event-based encoding for processing events in WM. 
This event-based encoding hypothesis was first addressed by Ding et al. (2015). Particularly, participants were shown a set of colored PLD-format BMs, the color of which was used to index the clothing color of actors (cf. Wood, 2008). Participants had to memorize the actions while ignoring the irrelevant color of the PLD-format BMs. Ding et al. examined the fate of irrelevant color by probing a distracting effect, which has been commonly used in exploring object-based encoding in previous WM and long-term memory studies (e.g., Ecker et al., 2013; Gao et al., 2010, 2016; Shen et al., 2013; Zhao et al., 2018). This distracting effect was examined via a change detection task, wherein change in an irrelevant element (i.e., color) of the probed stimuli was manipulated (Irrelevant-change vs. No-change). If a change of the irrelevant color in the probed stimuli significantly impairs the performance of the target element [prolonged reaction time (RT) or lowered accuracy], then we can deduce that the irrelevant color is involuntarily encoded into WM. However, Ding et al. (2015) found that the change of irrelevant color did not affect the performance of action (both accuracy and RT), implying that WM only stored the target action. In other words, instead of adopting an event-based encoding manner, WM adopted an element-based encoding manner for BMs. 
It is critical to note that all the BMs in Ding et al. (2015) were from the same actor (adopted from Vanrie & Verfaillie, 2004) and hence shared the same motion signature. It is possible that the identity information of BMs affects the WM processing of BMs. To be specific, the WM encoding of BMs could be adaptive: It may employ event-based encoding when facing different identities, yet element-based encoding when facing a set of BMs with the same identity. From this perspective, the finding of Ding et al. (2015) may be constrained by the specific setting (same identity) of the memory array. However, no study to date has examined whether event-based encoding occurs when memorizing BMs with distinct identities, which is the actual situation we encounter in everyday life. To close this gap, we followed the basic procedure of Ding et al. (2015). Particularly, we showed participants a set of colored BMs, and used the PLD-format BMs to control for the physical aspects of human BMs while isolating the kinematic information. However, in contrast to Ding et al. (2015), the memorized BMs were collected from distinct actors with idiosyncratic gestures (i.e., distinct identities) to simulate real-world conditions. We required participants to memorize only the actions, while ignoring the irrelevant color. Critically, the irrelevant color changed in 50% of the trials. If event-based encoding took place for BMs in WM, we would observe a significant distracting effect driven by the change of irrelevant color. 
Experiment 1: Event-based encoding under different identities
We examined the core prediction of the current study: Event-based encoding could occur in WM when participants face a set of BMs of different identities. Similar to Ding et al. (2015), Experiment 1 displayed two or five BMs, which was within or beyond the WM capacity for BMs (3–4 BMs, Shen et al., 2014), but with distinct identities. A probed BM was presented on the screen center, the color of which could be changed in 50% of trials. 
Method
Participants
Following Ding et al. (2015), 24 naive students from Zhejiang University (16 women, 20.5 ± 1.6 years old on average) were paid to participate in the experiment. All participants provided signed informed consent and had normal color vision as well as normal or corrected-to-normal visual acuity. The study was carried out in accordance with the Code of Ethics of the World Medical Association, and was approved by the institutional review board of the Department of Psychology and Behavioral Sciences, Zhejiang University. Two participants reported after the experiment that they had noticed there were only six actions in total (see Stimuli and apparatus); hence, when required to memorize five BMs, they simply memorized the single unused BM instead. Because this strategy invalidated the manipulation of memory load, these two participants were replaced. 
Stimuli and apparatus
All the PLD-format BM stimuli were generated via a Kinect-based BM capture toolbox (Shi et al., 2017). Six actors (three male and three female) were recruited to define the identity information (i.e., six identities). In line with Ding et al. (2015), we required participants to memorize six categories of actions: waving, walking, chopping, spading, jumping, and drinking (Figure 1). Note that jumping, spading, walking, and waving were the actions used in Ding et al. (2015); however, painting and cycling in Ding et al. (2015) were difficult to generate using the Kinect-based BM capture toolbox, so these two BMs were replaced with chopping and drinking. Each actor had a distinct motion signature and performed all six actions. To verify that the recorded BMs indeed carried distinct identity information, we conducted a pilot experiment. We displayed three BMs for 2 s at the screen center. The three BMs could be from the same actor (40 trials) or from three different actors (40 trials); the two conditions were displayed randomly. A question mark was displayed immediately after the offset of the three BMs, and the participants had to judge whether the identities of the displayed BMs were the same or different. We did not provide any feedback on participants' responses, to avoid any top-down guidance on identity judgment and thereby obtain a relatively pure estimate of identity-recognition ability. Twenty participants (12 women, 21.2 ± 2.2 years old on average) took part in the pilot. Identity-recognition accuracy (62%) was significantly higher than chance level, one-tailed t(19) = 6.541, p < 0.001, Cohen's d = 1.463, BF = 13,546. This result suggested that, although identity recognition was relatively difficult, participants could effectively differentiate the two identity conditions. 
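As a rough sketch, the pilot statistics (a one-tailed one-sample t test against chance, plus Cohen's d) can be reproduced with SciPy. The per-participant accuracies below are simulated stand-ins, since the raw pilot data are not reported, so the resulting numbers will differ from the paper's t(19) = 6.541.

```python
import numpy as np
from scipy import stats

# Simulated per-participant identity-recognition accuracies (n = 20),
# illustrative values near the reported 62% -- NOT the real pilot data.
rng = np.random.default_rng(42)
acc = rng.normal(loc=0.62, scale=0.08, size=20)

chance = 0.5  # two-alternative same/different judgment
# One-tailed one-sample t test against chance
# (the alternative= keyword requires SciPy >= 1.6).
t_stat, p_val = stats.ttest_1samp(acc, popmean=chance, alternative="greater")
# Cohen's d for a one-sample design: mean difference over sample SD.
cohens_d = (acc.mean() - chance) / acc.std(ddof=1)
```

With accuracies clustered near 62% against a 50% chance level, both the t statistic and the effect size come out clearly positive, mirroring the qualitative pattern the pilot reports.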
Figure 1
 
Example frames for the biological motion (BM) stimuli used in the current study. Here, all BM stimuli are from the same female actor. The BM stimuli from left to right are waving, walking, chopping, spading, jumping, and drinking.
Every animation consisted of 60 distinct frames filmed at 30 Hz with a Kinect 2.0, and each frame contained 13 points. Each frame was displayed twice in succession, yielding a 2-s PLD at a refresh rate of 60 Hz. The heights of the six actors (standing still) were rather similar: 105, 112, 116, 118, 120, and 124 pixels on the screen, respectively. Overall, the displays subtended approximately 3.90° × 1.55° of visual angle from a viewing distance of 60 cm in a dark and sound-shielded room. 
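The reported display size follows the standard visual-angle formula. A minimal sketch, noting that the ~4.09-cm stimulus size below is back-computed from the reported 3.90° at 60 cm and is not a value given by the authors:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float = 60.0) -> float:
    """Visual angle subtended by a stimulus of a given physical size,
    via the standard formula: angle = 2 * atan(size / (2 * distance))."""
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

# A stimulus of roughly 4.09 cm viewed at 60 cm subtends about 3.90 deg,
# matching the display height reported for Experiment 1.
```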
In line with Ding et al. (2015), six distinct colors were used to represent the clothing colors of the BMs, while the point on the head was always white. The six color values were red (255, 0, 0, in RGB value), green (0, 255, 0), blue (0, 0, 255), yellow (255, 255, 0), cyan (0, 255, 255), and magenta (255, 0, 255). The memorized actions in a trial were randomly selected from the six types of action without repetition. Each action had a distinct identity and a distinct color. The actions' identities in a trial were randomly selected from the six actors without repetition (the participants were informed of this setting before the experiment), and the actions' colors in a trial were also randomly selected from the six colors without repetition. The stimuli were displayed on a black background projected onto a 17-in. Dell (Xiamen, China) CRT monitor. 
Procedure
Each trial began with the presentation of two white digits on the screen center for 500 ms (Figure 2). Participants were required to repeat the two digits (e.g., by stating “one” and “four”) loudly throughout a trial. This concurrent articulatory suppression task was used to prevent participants from verbally rehearsing the BMs (e.g., Ding et al., 2015; Shen et al., 2014). Next, a red fixation appeared for 300 ms to notify the participants of the upcoming WM task. After a blank interval of 150–350 ms, the memory array was presented. Two or five colored BMs with different identities were displayed on the screen for 2 s, and were randomly positioned at seven potential locations uniformly distributed on an invisible circle (6.50° in radius) centered on the screen. After the memory array, a 1-s blank interval was displayed, followed by a probe on the screen center. The participants had to judge whether the action had appeared in the memory array and pressed a button on the keyboard to relay the judgment within 4 s (press “F” for new action and “J” for old action). The target dimension (action) and the irrelevant dimension (color) of the probe changed independently (with a probability of 50%) relative to the memorized stimuli. This setting resulted in four different types of probes with equal probability (old-action and same-color, old-action and different-color, new-action and same-color, and new-action and different-color). When a change took place, the value of the corresponding element in the probe changed into a new one, which was not used in the prior memory array. Moreover, when a new action was displayed, the corresponding identity was randomly selected from the six actors (i.e., the identity was an old one in the memory array in 1/3 and 5/6 of action-change trials for two BMs and five BMs, respectively). If an old action was displayed, its identity would be maintained the same as the old action in the memory array. 
This resulted in three action-identity combinations overall. Particularly, when the probe was an old action from the memory array, it belonged to same-identity and old-action, in which the same actor performed the old action in the tested memory array. When the probe was a new action, there were two types: (a) same-identity and new-action, in which one of the actors from the memory array performed a new action that was not used in the memory array; (b) different-identity and new-action, in which a new actor performed a new action that was not in the memory array. Both response accuracy and RT were emphasized and recorded. There was a 1,500 to 2,000-ms blank interval between trials. 
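The randomization rules above (no repetition of actions or colors within a trial; independent 50% changes of action and color in the probe; changed values drawn from outside the memory array) can be sketched as follows. This is our reconstruction, not the authors' code, and the function name is ours.

```python
import random

ACTIONS = ["waving", "walking", "chopping", "spading", "jumping", "drinking"]
COLORS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def make_trial(set_size, rng=random):
    """Sample a memory array and a probe: the probe's action and irrelevant
    color change independently, each with probability .5."""
    actions = rng.sample(ACTIONS, set_size)  # no repetition within a trial
    colors = rng.sample(COLORS, set_size)    # likewise for clothing colors
    memory = list(zip(actions, colors))

    old_action, old_color = rng.choice(memory)
    action_change = rng.random() < 0.5
    color_change = rng.random() < 0.5
    # A changed value is drawn from outside the memory array, as in the paper.
    probe_action = (rng.choice([a for a in ACTIONS if a not in actions])
                    if action_change else old_action)
    probe_color = (rng.choice([c for c in COLORS if c not in colors])
                   if color_change else old_color)
    return memory, (probe_action, probe_color), action_change, color_change

memory, probe, action_change, color_change = make_trial(2)
```

Because the two changes are sampled independently, the four probe types (old/new action × same/different color) occur with equal probability, as in the design.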
Figure 2
 
A schematic illustration of a single trial in Experiment 1 (old-action & different-color condition), in which the color changes. The two biological motions (BMs) in the memory array are yellow and red; the BM in the probe is magenta. It is worth noting that, since the current task is a high-level cognitive task, we did not calibrate and equalize the luminance of the colors used, as is commonly done in low-level vision studies (Allred & Flombaum, 2014). However, the colors of the BMs on the screen were reported by the participants to be clear and differentiable.
A 2 (set size: 2 vs. 5 BMs) × 2 (irrelevant color: same-color vs. different-color) within-subjects design was adopted. Each condition included 40 trials, resulting in 160 randomly presented trials. The experiment was divided into four blocks with a 3-min break between blocks. Before the experimental trials, 12 practice trials were completed to ensure that the participants understood the instructions. 
Analysis
Since the current study aimed at elucidating whether the irrelevant element could be extracted into WM, and the potential index reflecting this modulation could differ across experimental conditions (see footnote 1), it was important to avoid reaching a null result due to insensitive indexes. To this end, we followed previous studies (e.g., Ding et al., 2015; Ecker et al., 2013; Kirmsse, Zimmer, & Ecker, 2018; Shen et al., 2013; Zhang, Shen, Tang, Zhao, & Gao, 2013), recording and analyzing both response accuracy and RT (see Table 1 for the descriptive statistics of the current study). For the RT, only trials with correct responses entered further analysis. Moreover, to sensitively probe the influence of color change on the WM performance of BM, the current study focused on the data where the target dimension remained the same between the memory array and probe while the change of the irrelevant dimension was manipulated (i.e., old-action & same-color vs. old-action & different-color; hence, we term this the partial analysis). If the irrelevant color was extracted into WM, the change of color should affect at least one of the indexes (i.e., accuracy and RT). To measure this influence, we conducted a two-way repeated-measures analysis of variance (ANOVA) on accuracy and RT, separately, taking set size (two vs. five BMs) and irrelevant color (same-color vs. different-color) as within-subjects factors. 
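Because both within-subjects factors have only two levels, each F in this 2 × 2 repeated-measures ANOVA equals the square of a one-sample t on difference scores, so the analysis can be sketched without a dedicated ANOVA package. The (subject × set size × color) array layout and function name are our assumptions:

```python
import numpy as np
from scipy import stats

def rm_anova_2x2(data):
    """2 x 2 repeated-measures ANOVA for data shaped (n_subjects, 2, 2),
    indexed [subject, set_size, color]. With two-level within-subject
    factors, each F equals the square of a one-sample t on difference
    scores (main effects: marginal means; interaction: double difference)."""
    def f_test(diff):
        t, p = stats.ttest_1samp(diff, 0.0)
        return float(t) ** 2, float(p)

    set_size_diff = data.mean(axis=2)[:, 0] - data.mean(axis=2)[:, 1]
    color_diff = data.mean(axis=1)[:, 0] - data.mean(axis=1)[:, 1]
    inter_diff = ((data[:, 0, 0] - data[:, 0, 1])
                  - (data[:, 1, 0] - data[:, 1, 1]))
    return {"set_size": f_test(set_size_diff),
            "color": f_test(color_diff),
            "interaction": f_test(inter_diff)}
```

Applied to simulated RTs in which larger set sizes and color changes both slow responses, the corresponding main effects come out significant while the interaction tracks only the (absent) combined modulation.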
Table 1
 
Mean accuracy and reaction time (RT) of the task in each factor combination (distinguished between old action and new action) of Experiments 1 to 4. The data in parentheses stand for standard error. The reported partial analysis in the main text focused on the data of the old action conditions. A full analysis is reported in Supplementary File S1, in which we compared the effect of color change by combining the old-action and new-action data.
It is of note that the current partial analysis was different from that used in previous studies focusing on the whole data set (full analysis; e.g., the signal detection theory analysis reported in Ding et al., 2015, as well as in Supplementary File S1). We considered that adopting the partial analysis would be more sensitive and appropriate for revealing the underlying mechanism. Particularly, the performance of the target action in the current paradigm was modulated by two factors. The first factor is the mismatch between the representations of the irrelevant color. A mismatch of the irrelevant dimension would lead to a detailed comparison process during the comparison phase of WM (Yin et al., 2011, 2012), hence prolonging the response time to the target dimension. The second factor is the incongruence of the response tendency between the target and irrelevant dimensions. For instance, in the new-action and different-color condition, both the target action and the irrelevant color changed, leading to a congruent response tendency because both tend to evoke a "change" response. In this case, the response to the target dimension is facilitated. However, in the new-action and same-color condition, there was an incongruence of response tendency between the target and irrelevant dimensions, because the target dimension changed while the irrelevant dimension remained the same. Consequently, if only the second factor had an effect, performance in the new-action and different-color condition would be facilitated relative to the new-action and same-color condition, which conflicts with the effect that the first factor would predict. Taking the two aforementioned factors into consideration, we can clearly predict that performance under the old-action and same-color condition should be significantly better than that under the old-action and different-color condition, because in the latter condition both factors function in the same direction. 
However, the difference between the new-action and same-color condition and the new-action and different-color condition is, to some extent, difficult to predict, because the two factors function in different directions. The exact result pattern depends on the relative contribution of the two factors in the experiment, which explains the mixed result patterns between the new-action and same-color and new-action and different-color conditions in the current study (see the New action rows of Table 1). To uncover the influence of irrelevant-color change on the WM performance of BM, we therefore focused on the old-action and same-color condition and the old-action and different-color condition. A full analysis would neglect the influence of the second factor. However, to enable a direct comparison with Ding et al. (2015), we also report the results of the full analysis (setting action change as a control variable) in Supplementary File S1 and found results similar to those reported here for Experiments 1–4. 
Additionally, we calculated the Bayes factor (BF10; Jeffreys, 1961; Rouder, Morey, Speckman, & Province, 2012; Rouder, Speckman, Sun, Morey, & Iverson, 2009) to examine the likelihood ratio of the alternative hypothesis (H1) relative to the null hypothesis (H0), with the JASP statistical software (JASP Team, 2018, version 0.9.2.0). A BF10 greater than 3 constitutes substantial evidence for H1 over H0, whereas a BF10 greater than 10 is considered strong evidence for H1 over H0 (Jeffreys, 1961; Wetzels & Wagenmakers, 2012). In this study, the reported BF represents BF10. Thus, BF > 3 indicates evidence for the presence of the main effect or interaction under consideration, while BF < 1/3 indicates evidence for its absence. To compute the BFs, we set Cauchy priors at their defaults (for t tests: r = 0.707; for ANOVAs: r = 0.5, 1, and 0.354 for fixed effects, random effects, and covariates, respectively). For the ANOVAs, we report the BFInclusion value (cf. Wagenmakers et al., 2018) for each factor in the model (i.e., a main effect or an interaction effect), which indicates the likelihood of the data under models that include the effect compared to equivalent models stripped of the effect (i.e., Bayesian model averaging), while excluding higher-order interactions. 
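The JZS Bayes factors reported here require numerical integration over the Cauchy prior, which JASP performs internally. As a rough stand-alone approximation (explicitly not the JZS BF the paper reports), the BIC-based Bayes factor of Wagenmakers (2007) can be sketched for a one-sample test using only the standard library:

```python
import math

def bic_bf10_one_sample(diffs):
    """BIC-approximated Bayes factor (Wagenmakers, 2007) for a one-sample
    test of H0: mean = 0 vs H1: mean free. A unit-information-prior
    approximation, not the JZS/Cauchy-prior BF computed by JASP."""
    n = len(diffs)
    mean = sum(diffs) / n
    rss1 = sum((d - mean) ** 2 for d in diffs)  # H1: mean estimated
    rss0 = sum(d ** 2 for d in diffs)           # H0: mean fixed at 0
    # BIC = n * ln(RSS / n) + k * ln(n); H1 has one extra free parameter.
    bic1 = n * math.log(rss1 / n) + 2 * math.log(n)
    bic0 = n * math.log(rss0 / n) + 1 * math.log(n)
    # BF10 = exp((BIC0 - BIC1) / 2): larger when H1 fits much better.
    return math.exp((bic0 - bic1) / 2)
```

For difference scores far from zero the approximation returns a large BF10, and for scores centered on zero it returns a BF10 below 1, matching the interpretation conventions described above.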
Results
The ANOVA revealed a significant main effect of set size on accuracy [Figure 3A; F(1, 23) = 75.941, p < 0.001, ηp² = 0.768, BF = 2.740e+15] and RT [Figure 3B; F(1, 23) = 15.102, p < 0.001, ηp² = 0.396, BF = 6332.049], suggesting that WM performance was worse under the 5-BM condition than under the 2-BM condition. The main effect of irrelevant color was not significant on accuracy [F(1, 23) = 2.791, p = 0.108, ηp² = 0.108, BF = 0.471]. Critically, a significant main effect of irrelevant color was found on RT [F(1, 23) = 32.402, p < 0.001, ηp² = 0.585, BF = 206.936], suggesting that the change of irrelevant color (1,147 ms) prolonged the RT relative to the color no-change condition (1,069 ms). Further analysis found that the change of irrelevant color prolonged the RT relative to the color no-change condition in both the 2-BM condition [t(23) = 4.936, p < 0.001, Cohen's d = 1.008, BF = 465.691] and the 5-BM condition [t(23) = 2.709, p = 0.013, Cohen's d = 0.553, BF = 4.010]. However, the set size × irrelevant color interaction did not reach significance on accuracy [F(1, 23) = 0.842, p = 0.368, ηp² = 0.035, BF = 0.342] or RT [F(1, 23) = 1.164, p = 0.292, ηp² = 0.048, BF = 0.381]. 
Figure 3
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 1. The error bar stands for standard error. The gray bar stands for the distracting effect: Accuracy(same-color) − Accuracy(different-color) for accuracy; RT(different-color) − RT(same-color) for RT. **p < 0.01.
Discussion
Experiment 1 showed that the change of color slowed down WM processing of action at the test phase regardless of memory load, implying that the irrelevant color was extracted into WM, supporting the event-based encoding hypothesis. 
In Experiment 2, we further examined whether identity information affected WM encoding by directly manipulating whether the memorized actions belonged to the same agent (i.e., the setting of Experiment 4 in Ding et al., 2015) or different agents (the setting of Experiment 1). If the WM encoding of BMs was adaptive, we would replicate the finding of Ding et al. (2015) under the same-identity condition while replicating the finding of Experiment 1 under the different-identity condition. 
Experiment 2: Adaptive encoding of BMs into WM
In Experiment 2, the memorized actions in a trial could belong to the same agent (same-identity) or different agents (different-identity). To reveal different processing manners while avoiding any processing-strategy contamination, we set BM identity as a between-subjects factor. 
Method
There were 24 participants (12 women, 21.0 ± 1.8 years old on average) in the same-identity group and 24 participants (13 women, 21.7 ± 2.3 years old on average) in the different-identity group. 
The procedure was the same as that in Experiment 1 except for the following aspects: (1) We required participants to memorize three actions, considering that both Experiment 1 of the current study and Experiment 4 of Ding et al. (2015) found that memory load did not modulate the WM encoding of irrelevant dimensions of BM; moreover, this setting enabled us to further examine the event-based encoding of BMs under a new load condition. (2) In the same-identity group, the participants were informed that the three memorized actions in a trial belonged to one actor; the actor in each trial was randomly selected from the six actors used in Experiment 1. The probe and the memory array in a trial shared the same identity. (3) In the different-identity group, the participants were informed that the three memorized actions in a trial were from three distinct actors. Moreover, when a new action appeared, the corresponding identity was randomly selected from the six actors (i.e., the identity was an old one from the memory array in 1/2 of the new-action trials). 
There were in total 80 randomly presented trials for each group. The experiment was divided into four blocks, with a 3-min break between blocks. A two-way mixed ANOVA was conducted on accuracy and RT, separately, with irrelevant color (same-color vs. different-color) as a within-subjects factor and memory identity (same-identity vs. different-identity) as a between-subjects factor. All other aspects were the same as in Experiment 1. 
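As a concrete illustration of the RT analysis described above, the sketch below simulates the within-group simple-effect test with a paired t-test and computes the distracting-effect index, RT(different-color) - RT(same-color), per participant. All numbers are synthetic and for illustration only; this is not the authors' analysis code.

```python
# Illustrative sketch (synthetic data, not the authors' analysis code):
# the simple-effect test on RT and the distracting-effect index,
# RT(different-color) - RT(same-color), for one hypothetical group.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 24  # participants per group, as in the experiment

# Hypothetical per-participant mean RTs (ms); a color change adds ~40 ms.
rt_same_color = rng.normal(900, 80, n)
rt_diff_color = rt_same_color + rng.normal(40, 30, n)

# Distracting effect on RT, computed per participant.
distracting_effect = rt_diff_color - rt_same_color

# Within-group simple effect of irrelevant color: paired t-test.
t_stat, p_val = ttest_rel(rt_diff_color, rt_same_color)
print(f"mean distracting effect = {distracting_effect.mean():.1f} ms, "
      f"t({n - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```

A significant positive t statistic here corresponds to the pattern reported for the different-identity group: color changes reliably slow the "old action" judgment.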
Results
The mixed ANOVA did not reveal a significant main effect of memory identity on accuracy [Figure 4A; F(1, 46) = 0.198, p = 0.659, ηp2 = 0.004, BF = 0.336] or RT [Figure 4B; F(1, 46) = 3.107, p = 0.085, ηp2 = 0.063, BF = 0.819]. The main effect of irrelevant color did not reach significance on accuracy [F(1, 46) = 0.187, p = 0.667, ηp2 = 0.004, BF = 0.236]. The main effect of irrelevant color was marginally significant on RT [F(1, 46) = 4.026, p = 0.051, ηp2 = 0.080, BF = 0.912]. The memory identity × irrelevant color interaction was not significant on accuracy [F(1, 46) = 1.683, p = 0.201, ηp2 = 0.035, BF = 0.576]; however, it reached significance on RT [F(1, 46) = 9.304, p = 0.004, ηp2 = 0.168, BF = 15.208]. An analysis of the simple main effect revealed that the main effect of irrelevant color was significant on RT [t(23) = 4.822, p < 0.001, Cohen's d = 0.984, BF = 360.939] in the different-identity group, illustrating that the change of color slowed down participants' response to the target. However, the main effect of irrelevant color did not reach significance on RT [t(23) = 0.613, p = 0.546, Cohen's d = 0.125, BF = 0.255] in the same-identity group. 
Figure 4
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 2. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy(same-color) - Accuracy(different-color) for accuracy, and RT(different-color) - RT(same-color) for RT. **p < 0.01.
Discussion
There were two main findings in Experiment 2. First, we replicated the finding of Experiment 1 (two or five BMs) under a new load condition (three BMs), supporting an event-based encoding manner in the different-identity group: When the irrelevant dimension changed in the probe, the change signal slowed down participants' response to the target dimension. Second, we replicated the finding of Ding et al. (2015) in the same-identity group: The change of the irrelevant dimension no longer disturbed participants' response to the target dimension, supporting an element-based encoding manner. These results implied that WM adopted an adaptive processing manner in encoding BM. Moreover, the replication of the findings of Ding et al. (2015) suggested that the event-based encoding in the current study was not due to the different parameters used (e.g., different exposure times of the memory array and different BM sets). 
It is of note that, although both the same-identity condition of Experiment 2 and Ding et al. (2015) revealed an element-based encoding manner, these results do not imply that WM treats each BM as an object when facing BMs with the same identity. Instead, WM still treats each BM as an event, but focuses on the target dimension of the event. Similarly, Verfaillie (1993) presented participants with a PLD-format BM, which could be a walking human or a walking nonhuman, and required the participants to judge whether the displayed stimulus was a human or a nonhuman. Verfaillie (1993) named this task an object-decision task. We argue that there is actually no conflict between the current study and that of Verfaillie (1993): Verfaillie (1993) called the task an object-decision task because that study focused on the recognition of objects in different in-depth orientations while using BM as the stimulus of interest. From the viewpoint of the current study, because participants perceived BM events and judged the category of the perceived events, the object-decision task in Verfaillie (1993) is essentially an event-decision task. 
One alternative explanation for the results of Experiment 1 and the different-identity group of Experiment 2 is that participants performed the task by detecting identity changes of the probe instead of action changes. There are at least two reasons against this alternative. First, memorizing actions was much easier than memorizing the identities of actions (the participants demonstrated an identity recognition accuracy of 62% when judging the identity status of three BMs; see the pilot experiment reported in the Stimuli and apparatus section of Experiment 1); it appears unlikely that the participants memorized a more difficult dimension to fulfill the task. Second, the probe's identity in the action-change condition could be an old one from the memory array (1/3 and 5/6 of action-change trials for loads 2 and 5, respectively, in Experiment 1, and 1/2 of action-change trials in Experiment 2), and the two load conditions of Experiment 1 were displayed randomly. Under such circumstances, it would be difficult for participants to make a correct judgment based on the identity change of the probe. 
However, because the identity of the action was new in certain trials when a new action was displayed in Experiments 1 and 2 (different-identity group), this setting might have implicitly affected the processing strategy of BM, leading to involuntary processing of identity. Alternatively, considering that the binding between identity and action within a trial was maintained constant between the memory and test arrays with a probability of at least 50% (i.e., an old action was probed), participants may have used identity information to facilitate the identification of the action. We addressed both issues in Experiment 3 by manipulating the setting of the probe. 
Experiment 3: The influence of probe identity
Experiment 3 required participants to memorize three BMs with different identities, while the probe's identity was manipulated: The probe's identity was either always an old one, randomly selected from the tested memory array (old-identity), or always a new one that was not used in the tested memory array (new-identity). If the results of Experiments 1 and 2 (different-identity group) were due to the appearance of a new identity in the probe, then a distracting effect should be observed in the new-identity group but should vanish in the old-identity group. If the results of Experiments 1 and 2 were found because participants used the identity information to improve action recognition, then a distracting effect should be observed only in the old-identity group but should vanish in the new-identity group. However, if the results of Experiments 1 and 2 were rooted in the fact that the memorized BMs had distinct identities, a distracting effect should be observed in both groups, regardless of probe setting. 
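The inference logic above can be written out as a small decision table (an illustrative sketch; the account labels are ours, not from the paper): each account predicts a distinct pattern of the distracting effect across the two probe-identity groups, so the observed pattern identifies the surviving account.

```python
# Illustrative decision table (labels are ours, not from the paper): each
# account of the Experiments 1-2 results predicts a distinct pattern of the
# distracting effect across the two probe-identity groups of Experiment 3.
predictions = {
    # A new identity in the probe drives the effect.
    "new-identity-in-probe": {"old-identity": False, "new-identity": True},
    # Identity is used strategically to recognize the action.
    "identity-aids-recognition": {"old-identity": True, "new-identity": False},
    # Distinct identities in the memory array drive event-based encoding.
    "distinct-identities-in-memory": {"old-identity": True, "new-identity": True},
}

# Experiment 3 observed a distracting effect in both groups.
observed = {"old-identity": True, "new-identity": True}

supported = [name for name, pattern in predictions.items() if pattern == observed]
print(supported)  # only the memory-array-identity account matches
```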
Method
There were 24 participants (16 women, 20.5 ± 1.4 years old on average) in the old-identity group and 24 participants (eight women, 20.8 ± 1.8 years old on average) in the new-identity group. One participant in the new-identity group was replaced because his overall RT exceeded 2.5 standard deviations of the averaged RT of all participants. 
The three BMs in the memory array had different identities. Critically, we tested two types of probe identity: (1) old-identity group: the probe's identity was always one of the identities in the tested memory array; (2) new-identity group: the probe's identity was always a new one that was not used in the tested memory array (i.e., a fourth identity was used in the probe), to discourage the processing of the identity of the memorized actions. All other aspects were the same as those in the different-identity group of Experiment 2. 
A two-way mixed ANOVA was conducted on accuracy and RT, separately, by taking irrelevant color (same-color vs. different-color) as a within-subjects factor and probe identity (old-identity vs. new-identity) as a between-subjects factor. 
Results
The mixed ANOVA did not reveal a significant main effect of probe identity on accuracy [Figure 5A; F(1, 46) = 0.137, p = 0.713, ηp2 = 0.003, BF = 0.307] or RT [Figure 5B; F(1, 46) = 0.119, p = 0.732, ηp2 = 0.003, BF = 0.624]. The main effect of irrelevant color reached significance on accuracy [F(1, 46) = 4.374, p = 0.042, ηp2 = 0.087, BF = 1.518] and RT [F(1, 46) = 9.959, p = 0.003, ηp2 = 0.178, BF = 12.411], suggesting that the change of color impaired accuracy and slowed down participants' response to the target dimension. Further analysis found that the change of irrelevant color prolonged the RT relative to the color no-change condition in both the old-identity group [t(23) = 2.265, p = 0.033, Cohen's d = 0.462, BF = 1.800] and the new-identity group [t(23) = 2.198, p = 0.038, Cohen's d = 0.449, BF = 1.607]. For accuracy, the simple main effect of irrelevant color did not reach significance in either the old-identity group [t(23) = 1.446, p = 0.162, Cohen's d = 0.295, BF = 0.537] or the new-identity group [t(23) = 1.534, p = 0.139, Cohen's d = 0.313, BF = 0.600]. The probe identity × irrelevant color interaction was not significant on accuracy [F(1, 46) = 0.019, p = 0.890, ηp2 < 0.001, BF = 0.286] or RT [F(1, 46) = 0.004, p = 0.950, ηp2 < 0.001, BF = 0.283], suggesting that the extraction of irrelevant color was not modulated by probe identity. 
Figure 5
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 3. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy(same-color) - Accuracy(different-color) for accuracy, and RT(different-color) - RT(same-color) for RT. *p < 0.05. **p < 0.01.
Discussion
In Experiment 3, we found that the change of color slowed down participants' response to the target dimension, regardless of the setting of the probe. These results suggested that the findings in Experiments 1 and 2 were not due to the appearance of a new identity in the probe, or to a strategy of using identity information to facilitate action recognition. Instead, the observed event-based encoding was related to the fact that the memorized BMs had distinct identities. 
Experiment 4: Inverting BMs erased event-based encoding
Although we provided consistent evidence supporting the event-based encoding hypothesis when the memorized BMs had distinct identities, rather than the same identity, certain extra low-level cues in the different-identity condition might have driven the observed effect, for instance, the movement frequency of each BM, the relative height of each BM, and the potential motion coherence between memorized BMs. Experiment 4 attempted to address this issue by vertically inverting the memorized and probed BMs in the different-identity group of Experiment 2. It is well accepted that the social perception of BM becomes dramatically impaired when the BM stimuli are inverted (e.g., Barclay et al., 1978; Ikeda, Blake, & Watanabe, 2005; Shi, Weng, He, & Jiang, 2010; Pavlova & Sokolov, 2000; Poljac, Verfaillie, & Wagemans, 2011; see chapter 6 of Hemeren, 2008, for an excellent review), including the perception of BM identity (Loula et al., 2005) and reflexive attentional orienting to BM (Shi et al., 2010). Therefore, inverting BM could effectively reduce the contribution of agent identity information in the upright stimulus set, while preserving most of the low-level information. If the low-level cues drove the event-based encoding, then this processing mode should not be affected by the inversion of the memory array; otherwise, element-based encoding should emerge. 
Method
There were 24 participants (14 women, 23.1 ± 2.4 years old on average) in Experiment 4. One participant was replaced because his performance was below 2.5 standard deviations of the average. The other aspects were the same as in the different-identity condition in Experiment 2, except for inverting the BM stimuli for both the memory array and probe. 
To directly compare between the inverted BM condition and upright BM condition (different-identity condition in Experiment 2), a two-way mixed ANOVA was conducted on accuracy and RT, separately, by setting irrelevant color (same-color vs. different-color) as a within-subjects factor and BM orientation (upright BM vs. inverted BM) as a between-subjects factor. 
Results
The mixed ANOVA did not reveal a significant main effect of BM orientation on accuracy [Figure 6A; F(1, 46) = 2.814, p = 0.100, ηp2 = 0.058, BF = 0.929], but did on RT [Figure 6B; F(1, 46) = 4.982, p = 0.031, ηp2 = 0.098, BF = 1.400], implying that it was more difficult to recognize the inverted BM. The main effect of irrelevant color did not reach significance on accuracy [F(1, 46) = 1.518, p = 0.224, ηp2 = 0.032, BF = 0.431], but reached significance on RT [F(1, 46) = 12.455, p < 0.001, ηp2 = 0.213, BF = 20.559]. The BM orientation × irrelevant color interaction was not significant on accuracy [F(1, 46) = 0.169, p = 0.683, ηp2 = 0.004, BF = 0.304]; however, it reached significance on RT [F(1, 46) = 5.171, p = 0.028, ηp2 = 0.101, BF = 3.139]. A simple-effect analysis revealed that, unlike the significant effect on RT in the upright BM condition of Experiment 2 [t(23) = 4.822, p < 0.001, Cohen's d = 0.984, BF = 360.939], the main effect of irrelevant color was not significant on RT in the inverted BM condition [t(23) = 0.786, p = 0.440, Cohen's d = 0.160, BF = 0.284]. 
Figure 6
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 4. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy(same-color) - Accuracy(different-color) for accuracy, and RT(different-color) - RT(same-color) for RT. *p < 0.05. **p < 0.01.
Discussion
When the BM stimuli in the different-identity setting of Experiment 2 were inverted, the change of irrelevant color no longer affected WM performance, suggesting that the event-based encoding vanished. Therefore, the findings of Experiments 1–3 were largely not due to low-level cues (e.g., the relative height and size of each BM) of the stimuli. 
However, because Experiment 4 revealed a null effect of color change on RT, and the processing of BM might not have occurred in a typical way due to the inversion (e.g., Poljac et al., 2011), Experiment 4 could not entirely rule out the possibility that the event-based encoding relied on low-level information. Future studies may further test this alternative by scaling the actors to the same height, or by obtaining joint angles for the different agents and using them to animate a common body model, so that spatial factors could not contribute. Moreover, besides low-level spatial information affecting BM processing, it has been suggested that low-level temporal information embedded in BM also plays a role (e.g., Hill & Pollick, 2000). Therefore, in addition to testing the role of low-level spatial information in driving the event-based encoding, further studies need to consider the contribution of low-level temporal information.3 
General discussion
The current study investigated, for the first time, the influence of agent identity on the WM encoding of BM, by focusing on whether agent clothing color could be extracted into WM while memorizing actions belonging either to one agent or to distinct agents. We found consistent evidence that even though the color of BM was task irrelevant and participants were instructed to memorize individual actions, once the memorized actions had distinct identities, the color was always involuntarily extracted into WM, regardless of memory load and probe identity (Experiment 1, different-identity group of Experiment 2, and Experiment 3). These results support the event-based encoding hypothesis of WM for storing actions in daily life. Moreover, when the memorized BMs shared the same identity (same-identity group of Experiment 2) or had distinct identities but were inverted to abolish identity (Experiment 4), WM switched to an element-based encoding manner. These results suggest that the revealed event-based encoding is not due to a universal automatic encoding of color information for attended items. Overall, our study implies that identity information conveyed by BM has a significant effect on the WM processing of BM. 
Theoretical implications of the current study
The current study implies that WM processing is not fixed but adaptive, selecting a corresponding computation algorithm (e.g., event- or element-based encoding) according to the memory context. In line with this assumption, Wood (2008) required participants to memorize both the agent action and clothing color of colored 3D animations (BMs) and found that the mode of WM storage was modulated by the manner of information processing: The two elements were stored separately when the BM stimuli were generated by the same agent and presented sequentially at the screen center; however, the two elements could be bound together if certain extra cues (e.g., displaying BMs in distinct locations) were added. Considering that both pieces of supporting evidence come from BM, it is necessary to examine this adaptive processing of WM using other stimuli in future studies. Meanwhile, although the current study provided psychophysical evidence supporting the existence of both event- and element-based encoding of BM, neuroimaging studies are needed to determine whether the two processing modes have corresponding neural substrates in the brain. Furthermore, we ought to admit that we currently cannot offer a specific explanation as to why our cognitive system is equipped with these two processing modes. Moreover, we demonstrated that agent identity modulated the mode of BM processing; however, how and when our cognitive system shifts from event-based to element-based encoding remains unknown. It would be particularly interesting to know whether the cognitive system switches immediately, or tunes gradually, to element-based encoding once our brain notices that the stimuli share the same identity. 
The revealed adaptive processing may help explain certain discrepancies in WM studies. For instance, ample studies have revealed that when participants are required to memorize a target feature dimension while ignoring the other dimensions of static visual objects, they actually retain both target and task-irrelevant dimensions in WM (object-based encoding; e.g., Gao et al., 2016; Shen et al., 2013; Shin & Ma, 2016, 2017; Swan et al., 2016); however, two neuroimaging studies failed to find this object-based encoding manner using Gabor stimuli (Serences, Ester, Vogel, & Awh, 2009) or oriented bars (Woodman & Vogel, 2008). Since the tested Gabor stimuli (all sharing the outer contour of a circle) and oriented bars (all sharing the outer contour of a rectangle) had the same outer configuration within each stimulus set, WM may have treated them as sharing the same identity and hence adopted the element-based encoding mode. 
Our findings also have important implications for studies using BMs. PLD-format BM has been extensively used in studying the mechanisms of human action processing and the ability of social cognition (e.g., Blakemore, 2008; Puce & Perrett, 2003; Sokolov et al., 2012; Troje, 2013). Although it is well established that pure kinematic information from PLD-format BM sufficiently conveys identity information (e.g., Cutting & Kozlowski, 1977; Cutting, Proffitt, & Kozlowski, 1978; Loula et al., 2005; Troje, Westhoff, & Lavrov, 2005), most existing studies actually ignore or do not control the identity factor (but see Alaerts et al., 2011). The current study suggests that the cognitive processing of BM may be modified when BM identity is kept constant during the experiment (e.g., the same-identity group in Experiment 2; Ding et al., 2015). However, because in everyday life we process BMs with different identities, using a same-identity setting of BM is likely to impede us from understanding the normal processing of BM in daily life. Therefore, future studies should pay attention to the factor of identity. Meanwhile, it is also worth noting that the influence of identity on the cognitive processing of BM may be modulated by certain other factors. In particular, Alaerts et al. (2011) consistently found that BM identity (gender in particular) did not affect performance in four distinct perceptual tasks (i.e., action recognition, gender recognition, BM recognition, and emotion recognition). There are two key differences between Alaerts et al. (2011) and the current study. First, different cognitive functions were explored: Alaerts et al. (2011) tapped the perceptual processing of BM, whereas the current study focused on a postperceptual stage, WM. Second, different experimental tasks were employed: Alaerts et al. 
(2011) required participants to perform a recognition task (e.g., pointing out the specific type of action in an action recognition task), while the current study used a change detection task (i.e., judging whether the probe was a new or an old one in the memory array). Future studies should pay attention to the relationship between the experimental task and the cognitive functions. The factor of identity may not be an important control factor in circumstances such as in Alaerts et al. (2011), but may serve as a critical control factor in circumstances such as in the current study.4 
The current study implies that identity has an impact on action processing, contributing to the view that there is an intimate relation between action and BM identity (e.g., Balas & Pearson, 2017; Pilz & Thornton, 2017; Simhi & Yovel, 2017). However, existing studies have predominantly investigated the effect of body motion on person recognition (i.e., identity identification; e.g., Pilz & Thornton, 2017). Different from these studies, the current study examined, for the first time, the relation between action and BM identity from the reversed perspective, by revealing the effect of identity on action encoding (i.e., whether the irrelevant color was encoded). It is worth noting that the former line of studies mainly focused on perception, while the latter line (the current study) tapped WM. Considering that WM and perception might have distinct processing mechanisms (e.g., Lin & Yeh, 2014; Zhang & Luck, 2011), two additional lines of study should be pursued in the future to fully reveal the interaction between action and identity: first, how identity affects action perception; second, how body action affects identity memorization when WM is involved. Additionally, considering the intimate relation between action and identity, it will be interesting to elucidate the neural mechanisms underlying their interaction. 
Our study also adds new evidence supporting the view that social information embedded in BM has a pervasive impact on our cognitive processing, even when the social information (the identity information conveyed by BM) is task irrelevant. Previous studies have shown that meaningful interactions between human actions significantly affect our perceptual (Manera et al., 2010; Neri, Luu, & Levi, 2006) and WM (Ding et al., 2017) processing of individual actions. The current study revealed that the motion signature conveyed by PLD-format BM enables our brain to realize the identity status of the memorized actions and adopt different algorithms accordingly, supporting an intimate interaction between social and cognitive processing. 
Revealing the underlying mechanism of WM via reaction time
In line with previous studies (e.g., Ecker et al., 2013; Gao et al., 2010, 2016; Shen et al., 2013; Zhao et al., 2018), the current study found that the change of irrelevant elements predominantly affected RT. Some researchers have suggested that the absence of an effect on accuracy could reflect a speed-accuracy trade-off (Ecker et al., 2013; Zhao et al., 2018). We therefore examined the data of the experiments revealing significant distracting effects for evidence of such a trade-off, and found no strong clues supporting this alternative. In particular, relative to the same-color condition, there were three, four, six, six, and six participants showing longer response times together with better performance in the different-color condition under the 2-BM condition of Experiment 1, the 5-BM condition of Experiment 1, the different-identity condition of Experiment 2, the old-identity condition of Experiment 3, and the new-identity condition of Experiment 3, respectively. Meanwhile, relative to the same-color condition, there were one, three, three, four, and two participants showing shorter response times together with worse performance in the different-color condition under these same conditions, respectively. Therefore, we argue that the change of irrelevant color significantly slowed down the WM processing of action at the test phase, yet may not have significantly impaired the stored BM representations in WM, which, to some extent, implies that the BM representations in WM are rather stable. 
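The participant counts above can be obtained by a simple per-participant comparison of condition means. The sketch below illustrates this with hypothetical numbers (not the published data set): a participant consistent with a trade-off is one who is both slower and more accurate in the different-color condition.

```python
# Illustrative sketch with hypothetical numbers (not the published data):
# counting participants whose pattern is consistent with a speed-accuracy
# trade-off, i.e., slower but more accurate in the different-color condition.
import numpy as np

rng = np.random.default_rng(1)
n = 24  # participants in one condition

# Hypothetical per-participant condition means.
rt_same = rng.normal(900, 80, n)
rt_diff = rng.normal(940, 80, n)
acc_same = rng.normal(0.85, 0.05, n)
acc_diff = rng.normal(0.84, 0.05, n)

# Slower AND better in different-color: consistent with a trade-off.
slower_and_better = int(np.sum((rt_diff > rt_same) & (acc_diff > acc_same)))
# Faster AND worse in different-color: the mirror pattern.
faster_and_worse = int(np.sum((rt_diff < rt_same) & (acc_diff < acc_same)))
print(slower_and_better, faster_and_worse)
```

If both counts stay small relative to the sample size, as reported in the text, a pervasive trade-off is an unlikely account of the RT-only effect.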
On the other hand, one may argue that since most existing WM studies adopt accuracy (instead of RT) as the main index, and the detection accuracy of BM in Experiments 1–3 was only slightly impaired by an irrelevant color change, the current study cannot effectively inform us of the encoding manner of the irrelevant feature. Although accuracy is a key index in deducing WM mechanisms, we argue that RT is also valuable and can shed light on WM mechanisms (e.g., Carlisle & Woodman, 2011; Gilchrist & Cowan, 2014; Lepsien & Nobre, 2007; Rerko, Souza, & Oberauer, 2014). Specific to the distracting effect, RT is a sensitive measure in most cases, because if the change of an irrelevant feature affects an "old" judgment, it is more likely that the change slows down acceptance of the item rather than leading to its rejection (see footnote 1 for an exception). Moreover, the distracting effect is well accepted for examining object-based encoding in WM (e.g., Ecker et al., 2013; Gao, Gao, Sun, & Shen, 2011; Shen et al., 2013; Yin et al., 2012), and has been confirmed by other paradigms (e.g., Gao et al., 2016; Shin & Ma, 2016, 2017; Swan et al., 2016) and by neural indices (e.g., Gao et al., 2010; Yin et al., 2012). For instance, Yin et al. (2012) found that there was no significant difference in accuracy between the irrelevant-color change and irrelevant-color no-change conditions when memorizing four shapes; however, RT was significantly longer in the irrelevant-color change condition. Critically, in line with the implication of the behavioral results, the irrelevant-color change evoked a more negative anterior N2 relative to the no-change condition and enhanced frontal theta activity. Gao et al. (2010) showed a similar pattern: The irrelevant change impaired accuracy and evoked a more negative anterior N2 relative to the irrelevant no-change condition. 
In line with those findings, our recent event-related potential (ERP) study found that the change of the irrelevant color of BM elicited a more negative N2 using a setting similar to that of Experiment 2 (Zhu, Gu, & Gao, Event-based encoding of biological motion in working memory: An ERP study; poster at the Asia-Pacific Conference on Vision [APCV] 2018, Hangzhou, China). Therefore, we consider that the current design offered a sensitive means to explore the processing manner of irrelevant features when storing BM into WM. 
That being said, considering that the current key findings were exhibited on RT and that the participants were explicitly informed of the identity status of the memory array, we argue that more empirical evidence is required to further verify the current adaptive encoding manner. There are at least three avenues to explore in future studies. First, a new paradigm could be used to test the current issue. For instance, we recently demonstrated that a task-irrelevant feature in WM could capture attention in a visual search task conducted during the WM maintenance phase, in which an item matching the irrelevant feature in WM appeared, but always as a distractor (Gao et al., 2016). This study offered converging evidence supporting the existence of object-based attention. Future studies may consider adopting this paradigm to test the current hypothesis. Second, because the current study was based on a common phenomenon in daily life (while memorizing the actions of distinct people on a street, whether their clothing colors will be extracted into WM involuntarily), we argue that observers in reality have no difficulty knowing that the observed agents have different identities, or that the perceived BMs in the outer environment are considered by default to have distinct identities. Participants can extract identity information from BM (e.g., Barclay et al., 1978; Beardsworth & Buckner, 1981; Cutting & Kozlowski, 1977; Loula et al., 2005; Runeson & Frykholm, 1983, 1986; Stevenage et al., 1999; Troje et al., 2005). Even if the participants are not informed, they may notice this information when they actively process BM. 
However, in case certain participants failed to extract the identity information from BM, considering that our pilot study found that the identity recognition accuracy was approximately 62% (see the Stimuli and apparatus section of Experiment 1), we considered it more reliable to explicitly inform the participants of the identity status of the memory array before they performed the experiment, to ensure that the BM processing was close to that in daily life. Additionally, in Experiment 4, we informed participants that all the memorized BMs had distinct identities; critically, we did not find an event-based encoding manner, implying that simply informing participants of the identity status did not affect the processing manner of the memorized BM. However, it is still possible that the instruction given to the participants modulated the processing manner of BM, and that the current finding was limited to a situation in which the participants were explicitly informed of the identities. To draw a more convincing conclusion regarding event-based encoding, it would be ideal to make the identity information completely implicit to the participants. It is worth reporting that we recently tested this idea in an electroencephalography study (Gu, Shen, & Gao, Agent identity affects the encoding of biological motion into visual working memory: An EEG study; accepted poster at APCV 2019, Osaka, Japan), and reached a similar conclusion to that reported here, implying that the current conclusion is reliable. Third, the current study assumed that WM processed BMs with distinct identities in an event-based encoding manner and tested that view by focusing on the fate of irrelevant colors when memorizing actions. Although the current study offered evidence suggesting that identity affects BM processing, it examined only one aspect of event-based encoding in terms of establishing a complete event-based encoding manner of BM. 
To fully test the event-based encoding of BM, it is necessary to examine its complementary aspect: requiring participants to memorize the colors of colored BMs while manipulating changes of the irrelevant actions. Event-based encoding of BM predicts that a change of the irrelevant action will also significantly affect the processing of BM when memorizing BMs of distinct identities. Moreover, this event-based encoding may be replaced by element-based encoding when the memorized BMs share the same identity. 
Acknowledgments
This research was supported by National Natural Science Foundation of China Grants 31771202 and 31571119, the MOE Project of Humanities and Social Sciences (No. 17YJA190005), and a project of the Ministry of Science and Technology of the People's Republic of China (2016YFE0130400). 
Commercial relationships: none. 
Corresponding authors: Mowei Shen; Zaifeng Gao. 
Address: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China. 
References
Alaerts, K., Nackaerts, E., Meyns, P., Swinnen, S. P., & Wenderoth, N. (2011). Action and emotion recognition from point light displays: An investigation of gender differences. PLoS One, 6 (6): e20989.
Allred, S. R., & Flombaum, J. I. (2014). Relating color working memory and color perception. Trends in Cognitive Sciences, 18 (11), 562–565.
Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33 (6), 717–746.
Baddeley, A., & Hitch, G. (1974). Working memory. In Bower G. A. (Ed.). The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47–89), New York, NY: Academic Press.
Balas, B., & Pearson, H. (2017). Intra- and extra-personal variability in person recognition. Visual Cognition, 25 (4-6), 456–469.
Barclay, C. D., Cutting, J. E., & Kozlowski, L. T. (1978). Temporal and spatial factors in gait perception that influence gender recognition. Perception & Psychophysics, 23, 145–152.
Barwise, J., & Perry, J. (1983). Situations and attitudes. Cambridge, MA: MIT Press.
Van Bavel, J. J., Hackel, L. M., & Xiao, Y. J. (2014). The group mind: The pervasive influence of social identity on cognition. Research and Perspectives in Neurosciences, 21 (1), 41–56.
Beardsworth, T., & Buckner, T. (1981). The ability to recognize oneself from a video recording of one's movements without seeing one's body. Bulletin of the Psychonomic Society, 18 (1), 19–22.
Blake, R., & Shiffrar, M. (2007). Perception of human motion. Annual Review of Psychology, 58, 47–73.
Blakemore, S. J. (2008). The social brain in adolescence. Nature Reviews Neuroscience, 9, 267–277.
Cai, Y., Urgolites, Z., Wood, J., Chen, C., Li, S., Chen, A., & Xue, G. (2018). Distinct neural substrates for visual short-term memory of actions. Human Brain Mapping, 39 (10), 4119–4133.
Carlisle, N. B., & Woodman, G. F. (2011). Automatic and strategic effects in the guidance of attention by working memory representations. Acta Psychologica, 137 (2), 217–225.
Cutting, J. E., & Kozlowski, L. T. (1977). Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society, 9 (5), 353–356.
Cutting, J. E., Proffitt, D. R., & Kozlowski, L. T. (1978). A biomechanical invariant for gait perception. Journal of Experimental Psychology Human Perception & Performance, 4 (3), 357.
Ding, X., Gao, Z., & Shen, M. (2017). Two equals one: Two human actions during social interaction are grouped as one unit in working memory. Psychological Science, 28 (9), 1311–1320.
Ding, X., Zhao, Y., Wu, F., Lu, X., Gao, Z., & Shen, M. (2015). Binding biological motion and visual features in working memory. Journal of Experimental Psychology Human Perception & Performance, 41, 850–865.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293 (5539), 2470–2473.
Downing, P. E., Peelen, M. V., Wiggett, A. J., & Tew, B. D. (2006). The role of the extrastriate body area in action perception. Social Neuroscience, 1 (1), 52–62.
Duncan, J. (1984). Selective attention and the organization of visual information. Journal of Experimental Psychology: General, 113 (4), 501–517.
Ecker, U. K., Maybery, M., & Zimmer, H. D. (2013). Binding of intrinsic and extrinsic features in working memory. Journal of Experimental Psychology: General, 142 (1), 218–234.
Gao, Z., Bentin, S., & Shen, M. (2015). Rehearsing biological motion in working memory: An EEG study. Journal of Cognitive Neuroscience, 27 (1), 198–209.
Gao, T., Gao, Z., Li, J., Sun, Z., & Shen, M. (2011). The perceptual root of object-based storage: An interactive model of perception and visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 37 (6), 1803–1823.
Gao, Z., Li, J., Yin, J., & Shen, M. (2010). Dissociated mechanisms of extracting perceptual information into visual working memory. PLoS One, 5 (12): e14273.
Gao, Z., Yu, S., Zhu, C., Shui, R., Weng, X., Li, P., et al. (2016). Object-based encoding in visual working memory: Evidence from memory-driven attentional capture. Scientific Reports, 6, 22822.
Gilchrist, A. L., & Cowan, N. (2014). A two-stage search of visual working memory: Investigating speed in the change-detection paradigm. Attention, Perception, & Psychophysics, 76 (7), 2031–2050.
Hemeren, P. E. (2008). Mind in action. Lund University Cognitive Studies, 140.
Hill, H., & Pollick, F. E. (2000). Exaggerating temporal differences enhances recognition of individuals from point light displays. Psychological Science, 11 (3), 223–228.
Ikeda, H., Blake, R., & Watanabe, K. (2005). Eccentric perception of biological motion is unscalably poor. Vision Research, 45 (15), 1935–1943.
JASP Team (2018). JASP (Version 0.9.2.0) [Computer software]. Retrieved from https://jasp-stats.org
Jeffreys, H. (1961). Theory of probability. Oxford, UK: Oxford University Press.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14, 201–211.
Kirmsse, A., Zimmer, H. D., & Ecker, U. K. H. (2018). Age-related changes in working memory: Age affects relational but not conjunctive feature binding. Psychology & Aging, 33 (3), 512–526.
Lepsien, J., & Nobre, A. C. (2007). Attentional modulation of object representations in working memory. Cerebral Cortex, 17 (9), 2072–2083.
Lin, S. H., & Yeh, Y. Y. (2014). Domain-specific control of selective attention. PLoS One, 9 (5): e98260.
Lindner, I., Echterhoff, G., Davidson, P. S., & Brand, M. (2010). Observation inflation: Your actions become mine. Psychological Science, 21 (9), 1291–1299.
Liu, Y., Lu, X., Wu, F., Shen, M., & Gao, Z. (2019). Biological motion is stored independently from bound representation in working memory. Visual Cognition. Advance online publication. https://doi.org/10.1080/13506285.2019.1638479
Loula, F., Prasad, S., Harber, K., & Shiffrar, M. (2005). Recognizing people from their movement. Journal of Experimental Psychology: Human Perception & Performance, 31 (1), 210–220.
Lu, X., Huang, J., Yi, Y., Shen, M., Weng, X., & Gao, Z. (2016). Holding biological motion in working memory: An fMRI Study. Frontiers in Human Neuroscience, 10, 251.
Lu, X., Ma, X., Zhao, Y., Gao, Z., & Shen, M. (2019). Retaining event files in working memory requires extra object-based attention than the constituent elements. Quarterly Journal of Experimental Psychology, 72 (9), 2225–2239.
Macmillan, N. A., & Creelman, C. D. (1990). Response bias: Characteristics of detection theory, threshold theory, and “nonparametric” indexes. Psychological Bulletin, 107, 401–413.
Manera, V., Schouten, B., Becchio, C., Bara, B. G., & Verfaillie, K. (2010). Inferring intentions from biological motion: A stimulus set of point-light communicative interactions. Behavior Research Methods, 42 (1), 168–178.
Morrison, E. R., Bain, H., Pattison, L., & Whyte-Smith, H. (2018). Something in the way she moves: Biological motion, body shape, and attractiveness in women. Visual Cognition, 1–7.
Neri, P., Luu, J. Y., & Levi, D. M. (2006). Meaningful interactions can enhance visual discrimination of human agents. Nature Neuroscience, 9 (9), 1186.
Pavlova, M. A. (2012). Biological motion processing as a hallmark of social cognition. Cerebral Cortex, 22 (5), 981–995.
Pavlova, M., & Sokolov, A. (2000). Orientation specificity in biological motion perception. Attention, Perception, & Psychophysics, 62 (5), 889–899.
Peelen, M. V., Wiggett, A. J., & Downing, P. E. (2006). Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron, 49 (6), 815–822.
Pilz, K. S., & Thornton, I. M. (2017). Idiosyncratic body motion influences person recognition. Visual Cognition, 25 (4-6), 539–549.
Poljac, E., Verfaillie, K., & Wagemans, J. (2011). Integrating biological motion: The role of grouping in the perception of point-light actions. PLoS One, 6 (10): e25867.
Pollick, F. E., Lestou, V., Ryu, J., & Cho, S. B. (2002). Estimating the efficiency of recognizing gender and affect from biological motion. Vision Research, 42 (20), 2345–2355.
Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London Series B, Biological Sciences, 358, 435–445.
Rerko, L., Souza, A. S., & Oberauer, K. (2014). Retro-cue benefits in working memory without sustained focal attention. Memory and Cognition, 42 (5), 712–728.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Opinion: Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2 (9), 661.
Roether, C. L., Omlor, L., Christensen, A., & Giese, M. A. (2009). Critical features for the perception of emotion from gait. Journal of Vision, 9 (6): 15, 1–32, https://doi.org/10.1167/9.6.15. [PubMed] [Article]
Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56 (5), 356–374.
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16 (2), 225–237.
Runeson, S., & Frykholm, G. (1983). Kinematic specification of dynamics as an informational basis for person-and-action perception: Expectation, gender recognition, and deceptive intention. Journal of Experimental Psychology: General, 112, 585–615.
Runeson, S., & Frykholm, G. (1986). Kinematic specification of gender and gender expression. In McCabe, V., & Balzano, G. J. (Eds.), Event cognition: An ecological perspective (pp. 259–273). Hillsdale, NJ: Lawrence Erlbaum.
Schain, C., Lindner, I., Beck, F., & Echterhoff, G. (2012). Looking at the actor's face: Identity cues and attentional focus in false memories of action performance from observation. Journal of Experimental Social Psychology, 48 (5), 1201–1204.
Serences, J. T., Ester, E. F., Vogel, E. K., & Awh, E. (2009). Stimulus-specific delay activity in human primary visual cortex. Psychological Science, 20 (2), 207–214.
Shen, M., Gao, Z., Ding, X., Zhou, B., & Huang, X. (2014). Holding biological motion information in working memory. Journal of Experimental Psychology: Human Perception and Performance, 40, 1332–1345.
Shen, M., Tang, N., Wu, F., Shui, R., & Gao, Z. (2013). Robust object-based encoding in visual working memory. Journal of Vision, 13 (2): 1, 1–11, https://doi.org/10.1167/13.2.1. [PubMed] [Article]
Shi, J., Weng, X., He, S., & Jiang, Y. (2010). Biological motion cues trigger reflexive attentional orienting. Cognition, 117 (3), 348–354.
Shin, H., & Ma, W. J. (2016). Crowdsourced single-trial probes of visual working memory for irrelevant features. Journal of Vision, 16 (5): 10, 1–8, https://doi.org/10.1167/16.5.10. [PubMed] [Article]
Shin, H., & Ma, W. J. (2017). Visual short-term memory for oriented, colored objects. Journal of Vision, 17 (9): 12, 1–19, https://doi.org/10.1167/17.9.12. [PubMed] [Article]
Shipley, T. F., & Zacks, J. M. (Eds.). (2008). Understanding events: From perception to action. Oxford, UK: Oxford University Press.
Shi, Y., Ma, X., Ma, Z., Wang, J., Yao, N., & Gu, Q.,… Gao, Z. (2017). Using a Kinect sensor to acquire biological motion: Toolbox and evaluation. Behavior Research Methods, 1–12.
Simhi, N., & Yovel, G. (2017). The role of familiarization in dynamic person recognition. Visual Cognition, 25 (4-6), 550–562.
Sokolov, A. A., Erb, M., Gharabaghi, A., Grodd, W., Tatagiba, M. S., & Pavlova, M. A. (2012). Biological motion processing: The left cerebellum communicates with the right superior temporal sulcus. NeuroImage, 59 (3), 2824–2830.
Song, C., Liu, W., Lu, X., & Gu, Q. (2016). Building blocks of visual working memory: Objects, features, or hybrid? Chinese Journal of Applied Psychology, 22 (2), 112–126.
Steel, K., Ellem, E., & Baxter, D. (2015). The application of biological motion research: Biometrics, sport, and the military. Psychonomic Bulletin & Review, 22 (1), 78–87.
Stevenage, S. V., Nixon, M. S., & Vince, K. (1999). Visual analysis of gait as a cue to identity. Applied Cognitive Psychology, 13 (6), 513–526.
Swan, G., Collins, J., & Wyble, B. (2016). Memory for a single object has differently variable precisions for relevant and irrelevant features. Journal of Vision, 16 (3): 32, 1–12, https://doi.org/10.1167/16.3.32. [PubMed] [Article]
Thornton, I. M. (2018). Stepping into the genetics of biological motion processing. Proceedings of the National Academy of Sciences, USA, 115 (8), 1687–1689. doi:10.1073/pnas.1722625115
Troje, N. F. (2013). What is biological motion? Definition, stimuli, and paradigms. In Rutherford, M. D., & Kuhlmeier, V. A. (Eds.), Social perception: Detection and interpretation of animacy, agency, and intention (pp. 13–36). Cambridge, MA: MIT Press.
Troje, N. F., Westhoff, C., & Lavrov, M. (2005). Person identification from biological motion: Effects of structural and kinematic cues. Perception & Psychophysics, 67 (4), 667–675.
Urgesi, C., Candidi, M., Ionta, S., & Aglioti, S. M. (2007). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nature Neuroscience, 10 (1), 30.
Urgolites, Z. J., & Wood, J. N. (2013). Visual long-term memory stores high-fidelity representations of observed actions. Psychological Science, 24 (4), 403–411.
Vanrie, J., & Verfaillie, K. (2004). Perception of biological motion: A stimulus set of human point-light actions. Behavior Research Methods, Instruments, & Computers, 36, 625–629.
van Boxtel, J. J., & Lu, H. (2012). Signature movements lead to efficient search for threatening actions. PLoS One, 7 (5): e37085.
Verfaillie, K. (1993). Orientation-dependent priming effects in the perception of biological motion. Journal of Experimental Psychology: Human Perception and Performance, 19 (5), 992–1013.
Wagenmakers, E. J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J.,… Morey, R. D. (2018). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review, 25 (1), 58–76.
Wetzels, R., & Wagenmakers, E. J. (2012). A default Bayesian hypothesis test for correlations and partial correlations. Psychonomic Bulletin & Review, 19 (6), 1057–1064.
Wood, J. N. (2007). Visual working memory for observed actions. Journal of Experimental Psychology: General, 136 (4), 639–652.
Wood, J. N. (2008). Visual memory for agents and their actions. Cognition, 108, 522–532.
Wood, J. N. (2011). A core knowledge architecture of visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 37 (2), 357–381.
Woodman, G. F., & Vogel, E. K. (2008). Selective storage and maintenance of an object's features in visual working memory. Psychonomic Bulletin & Review, 15, 223–229.
Yin, J., Gao, Z., Jin, X., Ding, X., Liang, J., & Shen, M. (2012). The neural mechanisms of percept-memory comparison in visual working memory. Biological Psychology, 90 (1), 71–79.
Yin, J., Gao, Z., Jin, X., Ye, L., Shen, M., & Shui, R. (2011). Tracking the mismatch information in visual short term memory: an event-related potential study. Neuroscience Letters, 491 (1), 26–30.
Zacks, J. M., & Tversky, B. (2001). Event structure in perception and conception. Psychological Bulletin, 127, 3–21.
Zhao, G., Chen, F., Zhang, Q., Shen, M., & Gao, Z. (2018). Feature-based information filtering in visual working memory is impaired in Parkinson's disease. Neuropsychologia, 111, 317–323.
Zhang, W., & Luck, S. J. (2011). The number and quality of representations in working memory. Psychological Science, 22 (11), 1434–1441.
Zhang, Q., Shen, M., Tang, N., Zhao, G., & Gao, Z. (2013). Object-based encoding in visual working memory: A lifespan study. Journal of Vision, 13 (10): 11, 1–10, https://doi.org/10.1167/. [PubMed] [Article]
Footnotes
1  Object-based encoding is not equal to binding. Object-based encoding takes place due to the mechanism of object-based attention (Duncan, 1984) at perception. Once the target feature and irrelevant feature are extracted into WM, they may be stored as independent features. However, two studies implied that the binding between target and irrelevant features of an object was retained in WM (Ecker, Maybery, & Zimmer, 2013; Song, Liu, Lu, & Gu, 2016).
2  It is worth noting that the PLD-format BM stimuli are used to minimize the availability of structural cues and isolate kinetic information from other sources (e.g., head, hair; Blake & Shiffrar, 2007; Johansson, 1973). The dots used in constructing the PLD stimuli hence do not belong to the components of an event. Instead, the agent information conveyed by the PLD stimuli, such as action, clothing color, location, and height of agent, belongs to the components of an event.
3  We thank an anonymous reviewer for indicating this.
4  We thank an anonymous reviewer for indicating this.
Figure 1
 
Example frames for the biological motion (BM) stimuli used in the current study. Here, all BM stimuli are from the same female actor. The BM stimuli from left to right are waving, walking, chopping, spading, jumping, and drinking.
Figure 2
 
A schematic illustration of a single trial in Experiment 1 (old-action & different-color condition), in which the color changes. The two biological motions (BMs) in the memory array are yellow and red; the BM in the probe is magenta. It is worth noting that because the current task is a high-level cognitive task, we did not calibrate and equalize the luminance of the colors used, as is commonly done in low-level vision studies (Allred & Flombaum, 2014). However, the colors of the BMs on the screen were reported to be clear and differentiable by the participants.
Figure 3
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 1. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy_same-color − Accuracy_different-color for accuracy, and RT_different-color − RT_same-color for RT. **p < 0.01.
Figure 4
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 2. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy_same-color − Accuracy_different-color for accuracy, and RT_different-color − RT_same-color for RT. **p < 0.01.
Figure 5
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 3. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy_same-color − Accuracy_different-color for accuracy, and RT_different-color − RT_same-color for RT. *p < 0.05. **p < 0.01.
Figure 6
 
The averaged accuracy (A) and reaction time (RT) (B) in Experiment 4. Error bars stand for standard error. The gray bar stands for the distracting effect: Accuracy_same-color − Accuracy_different-color for accuracy, and RT_different-color − RT_same-color for RT. *p < 0.05. **p < 0.01.
Table 1
 
Mean accuracy and reaction time (RT) of the task in each factor combination (distinguished between old action and new action) of Experiments 1 to 4. The data in parentheses stand for standard error. The partial analysis reported in the main text focused on the data of the old-action conditions. A full analysis is reported in Supplementary File S1, in which we compared the effect of color change by combining the old-action and new-action data.
Supplement 1