Open Access
Article  |   December 2016
A behavioral task sets an upper bound on the time required to access object memories before object segregation
Journal of Vision December 2016, Vol. 16, 26. doi: https://doi.org/10.1167/16.15.26
      Joseph L. Sanguinetti, Mary A. Peterson; A behavioral task sets an upper bound on the time required to access object memories before object segregation. Journal of Vision 2016;16(15):26. https://doi.org/10.1167/16.15.26.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Traditional theories of vision assume that object segregation occurs before access to object memories. Yet, behavioral evidence shows that familiar configuration is a prior for segregation, and electrophysiological experiments demonstrate these memories are accessed rapidly. A behavioral index of the speed of access is lacking, however. Here we asked how quickly behavior is influenced by object memories that are accessed in the course of object segregation. We investigated whether access to object memories on the groundside of a border can slow behavior during a rapid categorization task. Participants viewed two silhouettes that depicted a real-world and a novel object. Their task was to saccade toward the real-world object as quickly as possible. Half of the nontarget novel objects were ambiguous in that a portion of a real-world object was suggested, but not consciously perceived, on the groundside of their borders. The rest of the nontargets were unambiguous. We tested whether saccadic reaction times were perturbed by the real-world objects suggested on the groundside of ambiguous novel silhouettes. In Experiments 1 and 2, saccadic reaction times were slowed when nontargets were ambiguous rather than unambiguous. Experiment 2 set an upper limit of 190 ms on the time required for object memories in grounds to influence behavior. Experiment 3 ruled out factors that could have produced longer latencies other than access to object memories. These results provide the first behavioral index of how quickly memories of objects suggested in grounds can influence behavior, placing the upper limit at 190 ms.

Introduction
Consider two regions of the visual field sharing a border, like the black and white regions in Figure 1. Typically, only one of those regions is perceived to be an object (e.g., in Figure 1, the black region, which depicts a novel silhouette). The other region appears to continue behind the object at their shared border and is perceived as a locally shapeless ground to the object (e.g., in Figure 1, the white surrounding region). This process of segregating objects from grounds is essential for our ability to interact with the world. 
Figure 1
 
A novel object is depicted in the black region; the ground region in white suggests portions of standing women on the outside of the left and right contours of the novel object.
There has been much debate regarding when and where object segregation occurs in the visual hierarchy, what type of processing precedes it, and which types of processing are restricted to objects versus grounds. The traditional view, in place for a century, is that object segregation occurs early in processing, and that memories of object structure and semantics are accessed afterward and only by those regions of the visual field determined to be objects (for review, see Peterson, 1999). Many modern theorists continue to hold this assumption (e.g., Firestone & Scholl, 2016; Pylyshyn, 1999; Zhou, Friedman, & von der Heydt, 2000). Moreover, this view fits with feedforward models of perception that assume that processing proceeds in a bottom-up fashion from the retina to increasingly higher cortical areas (e.g., Riesenhuber & Poggio, 1999; Serre, Oliva, & Poggio, 2007). 
Fast semantic categorization responses have been taken as support of the feedforward view (e.g., Kirchner & Thorpe, 2006; Thorpe, Fize, & Marlot, 1996). For example, using a forced-choice saccadic task, Kirchner and Thorpe (2006) presented two photographs of natural scenes and instructed participants to move their eyes to the one containing an animal as quickly as possible. Participants initiated accurate saccades extremely quickly, within 120 ms (100 ms for faces; Crouzet, Kirchner, & Thorpe, 2010). “Ultra-rapid categorization” performance in animals and humans like that reported by Kirchner and Thorpe (2006) led theorists to conclude that much of visual perception is supported by fast feedforward processing through the visual hierarchy (Riesenhuber & Poggio, 1999; Serre et al., 2007; Thorpe, 2002). 
Feedforward processing, even to high levels like the prefrontal cortex, does not always result in the conscious experience of a percept, however (e.g., van Gaal & Lamme, 2012). For example, the N400 event-related potential (ERP), a neural marker of high-level semantic processing, has been observed for stimuli of which participants are unaware due to temporal interference by either a noise mask or a subsequent target (Holcomb, Reder, Misra, & Grainger, 2005; Kiefer, 2002; Vogel, Luck, & Shapiro, 1998) or by perceptual organization processes (Sanguinetti, Allen, & Peterson, 2014). Accordingly, other theorists propose that fast feedforward processing is insufficient for conscious perception; for the latter, recurrent (feedback) processing is necessary (e.g., Bullier, 2001; Di Lollo, Enns, & Rensink, 2000; Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006; Fahrenfort, Scholte, & Lamme, 2007; Lamme & Roelfsema, 2000). This notion is supported by several lines of evidence (Tapia & Beck, 2014). For example, in motion perception the contribution of lower visual areas like V1/V2 is critical after the activity of higher areas like MT/V5, suggesting that recurrent feedback from MT/V5 to V1/V2 is necessary (Koivisto, Kastrati, & Revonsuo, 2014; Pascual-Leone & Walsh, 2001; Silvanto, Lavie, & Walsh, 2005). Using transcranial magnetic stimulation, researchers have shown that recurrent feedback is also crucial for the perception of basic visual stimuli like colored disks and annuli (Ro, Breitmeyer, Burton, Singhal, & Lane, 2003), subjective contours (Wokke, Vandenbroucke, Scholte, & Lamme, 2012), and complex visual scenes (Camprodon, Zohary, Brodbeck, & Pascual-Leone, 2010; Koivisto, Railo, Revonsuo, Vanni, & Salminen-Vaparanta, 2011). 
Despite growing evidence that feedforward activation is insufficient to explain perception, the claim that object memories and/or semantics are accessed before object segregation remains controversial. However, in the past two decades, accumulating evidence suggests that memories of object structure are activated before, and influence, object segregation (Navon, 2011; Peterson & Gibson, 1994a, 1994b; Peterson, Harvey, & Weidenbacher, 1991; Peterson & Skow, 2008; Ralph, Seli, Cheng, Solman, & Smilek, 2014; Trujillo, Allen, Schnyer, & Peterson, 2010). For example, Peterson et al. (1991) showed participants black enclosed silhouettes surrounded by a white region. They manipulated the white regions such that some had portions of familiar objects suggested along the vertical border (e.g., see Figure 1, where portions of standing women are suggested on the outside of the left and right silhouette borders). In the other silhouettes, either the familiar configurations were inverted or their parts were rearranged to suggest novel objects. Peterson and colleagues asked participants to report reversals of perceived object status and found that the white region was more likely to be perceived as the object when a familiar object was suggested in an upright orientation rather than in an inverted or part-rearranged version. Inverting a stimulus leaves all configural properties identical yet lengthens recognition times (Jolicoeur, 1985). Therefore, this seminal finding showed that familiar configuration can influence object segregation, which is possible only if memories of familiar objects are accessed rapidly. Thus, familiar configuration is an object prior. Like other object priors (e.g., symmetry, convexity, small area, enclosure), familiar configuration does not necessarily dominate perception (e.g., Peterson, 1994; Peterson & Gibson, 1994a). 
Instead, object properties on opposite sides of borders compete for perceived object status; the object is perceived on the side of the border where the evidence is greater (Peterson & Cacciamani, 2013). 
In order to assess the temporal dynamics of access to object memories in the course of object segregation, a number of authors recently used stimuli like those in Figure 2 designed so that familiar configuration would lose the competition for object status (e.g., Cacciamani, Mojica, Sanguinetti, & Peterson, 2014; Peterson, Cacciamani, Mojica, & Sanguinetti, 2012; Peterson & Skow, 2008; Sanguinetti et al., 2014; Sanguinetti, Trujillo, Schnyer, Allen, & Peterson, 2016; Trujillo et al., 2010). This method allowed them to assess access to object memories in the course of object segregation without contamination by conscious object recognition, in that the latter occurs only for the side of a border determined to be the object (e.g., Grill-Spector & Kanwisher, 2005). 
Figure 2
 
(A) Real-world silhouettes (targets in the present experiments). From left to right, a telephone, a turtle, and a clover leaf. Second and third rows: Novel silhouettes (Nontargets in the present experiments). (B) Unambiguous novel silhouettes. These novel silhouettes do not have portions of real-world objects suggested on the outside of their borders. (C) Ambiguous novel silhouettes. These novel silhouettes have portions of real-world objects suggested on the outside of their left and right borders. From left to right, portions of seahorses, palm trees, and bells are suggested on the outside (white side) of the black silhouette. Note that observers were unaware of the real-world objects suggested on the outside of the silhouettes in (C); they perceived the outsides as shapeless grounds.
One important question addressed by the previous experiments is how long it takes to access object memories in the course of object perception. Trujillo et al. (2010; Sanguinetti et al., 2014) obtained a neurophysiological marker of the time required by recording ERPs while participants viewed silhouettes like those in Figure 2 and categorized them as depicting real-world (Figure 2A) or novel objects (Figure 2B and C). All silhouettes were closed, symmetric, and smaller in area than the surround—factors that biased perception toward assigning object status to the insides (Peterson & Kimchi, 2013), which is what observers perceived. Half of the novel silhouettes were unambiguous in that they suggested novel objects on both the inside and the outside of their borders (Figure 2B). For the other half, however, real-world objects were suggested (albeit not consciously perceived) along the outside of their left and right borders, in the region perceived as shapeless ground to the object; these were “ambiguous” novel silhouettes. An example is that portions of seahorses are suggested on the outside—the white groundside—of the leftmost black silhouette in Figure 2C. The ambiguous and unambiguous novel silhouettes were matched in low-level features (see Methods); hence, the only difference between the two sets of novel silhouettes was the suggestion of portions of meaningful objects on the perceived groundside of the borders of ambiguous but not unambiguous novel silhouettes. Participants accurately classified both the ambiguous and unambiguous novel silhouettes as “novel.” 
Trujillo et al. (2010; experiments 1 and 2) and Sanguinetti et al. (2014, experiment 1) found that the amplitude of the P1 component of the ERP was larger for ambiguous than for unambiguous novel silhouettes starting approximately 110 ms after stimulus onset (peaking at ∼120 ms). They concluded that these P1 differences measured, at least, access to object memories in the former but not the latter condition.1 Furthermore, Sanguinetti et al. (2016) found increased power in the alpha band for ambiguous compared to unambiguous novel silhouettes starting approximately 100 ms after stimulus onset. Because the ambiguous and unambiguous novel silhouettes were equated for stimulus features, Sanguinetti et al. (2016) related these differences in alpha power to greater inhibitory competition mediating object segregation for the ambiguous compared to the unambiguous novel silhouettes because the familiar configuration prior was present on the outside of the former but not the latter. It is critical to note that these two sets of electroencephalogram (EEG) results were obtained under conditions where the outsides of the displays were perceived as shapeless grounds; hence, they could not have reflected access to object memories for the side of the border perceived as the object following object segregation. They are therefore consistent with the claim that access to the high-level properties of objects that might be perceived on both sides of borders occurs very quickly and before object segregation. 
The experiments reported here were designed to complement the ERP evidence regarding how long it takes to access object memories in the course of object segregation with behavioral evidence. Specifically, we asked when activation of memories for objects suggested but not perceived on the groundside of a border reaches a functional threshold to influence behavior. In order to achieve this goal, we investigated whether fast categorization performance is perturbed by the suggestion of real-world objects on the outside of ambiguous novel silhouettes, even though participants were unaware of those objects in virtue of their ultimate perceptual status as shapeless grounds. If it is, such a finding will provide initial evidence of how quickly object memories accessed on the outside of figure borders—on the side ultimately perceived as a shapeless ground—can affect behavior. 
To assess fast categorization, we adapted the forced-choice saccadic categorization task used by Kirchner and Thorpe (2006; Crouzet et al., 2010) for use with silhouettes like those tested by Sanguinetti et al. (2014; Sanguinetti et al., 2016; Trujillo et al., 2010). On each trial two silhouettes were displayed, one above and one below fixation. One silhouette—the target—depicted a common object that exists in the real world (Figure 2A), whereas the other silhouette—the nontarget—depicted a novel object (Figure 2B and C). The participants' task was to saccade to the silhouette that depicted a real-world object as quickly as possible. As in previous experiments, participants were not informed of the difference between the two types of novel silhouettes. 
We expect performance to be fast, because saccadic reaction times (SRTs) can be initiated within 100 ms (Busettini, Masson, & Miles, 1997; Fischer & Weber, 2010). However, we expect the SRTs obtained in our experiments to be slower than those reported by Thorpe and colleagues (Crouzet et al., 2010; Kirchner & Thorpe, 2006), for a number of reasons. First, in the present experiments the real-world targets were drawn from many different categories (see Appendix A). Therefore, target detection could not be based on template matching or a simple search for features of one category of stimuli (e.g., a face template or animal features), which would normally speed search. In previous experiments, one category of objects served as the target for the entire experiment (e.g., in different experiments, targets were animals, faces, or vehicles; Crouzet et al., 2010). Second, the only information participants have available to recognize silhouettes are the borders—silhouettes lack both surface detail and cues for tridimensionality, which makes them more difficult to recognize than stimuli that contain more surface detail (e.g., Wagemans et al., 2008). Thus, we expected SRTs obtained in the current experiments to be slower than SRTs obtained with photographs of natural scenes. Third, we placed targets above and below fixation rather than left and right of fixation, as Kirchner and Thorpe (2006; Crouzet et al., 2010) did. SRTs to locations above and below fixation are known to be slower than left/right SRTs (Crouzet et al., 2010), so this factor will also slow our SRTs. 
Nonetheless, we considered this change important because (a) a left bias has been reported in forced-choice SRT experiments (e.g., Crouzet et al., 2010), introducing noise into the results, and (b) given that our ambiguous silhouettes suggested real-world objects on the outside of the left and right borders, an attentional scan originating at fixation would encounter the real-world objects suggested in the grounds first in stimuli placed to the right and left of fixation, and this might have caused observers to perceive the object on the outside of the border between the black and white regions. Placing the silhouettes above/below fixation removed these concerns. Fourth, the choice between novel versus real-world object is different from the detection tasks used in previous studies, and may have inflated SRTs. In general, it takes longer to determine if stimuli are novel rather than familiar (Kroll & Potter, 1984). 
For these reasons, we do not intend to compare our SRTs directly to those obtained by Thorpe and colleagues (e.g., Crouzet et al., 2010; Kirchner & Thorpe, 2006). We are adapting their saccadic categorization paradigm in order to obtain faster responses than have been found in previous tasks in which subjects categorized individual stimuli as depicting real-world or novel objects (where mean RTs were ≥ 480 ms; Trujillo et al., 2010). Our goal was to use a behavioral task to attempt to set an upper bound on how long it takes for object memories accessed before object segregation to reach a functional threshold (i.e., sufficient activation to affect behavior). 
SRTs obtained on trials with unambiguous nontarget silhouettes will provide a baseline index of how long it takes to make saccadic categorization responses with the silhouette stimuli and conditions in our experiment. The presence of statistically longer SRTs on trials with ambiguous rather than unambiguous nontargets will be evidence that memories for objects suggested on the groundside of a border can be accessed within the time frame established on baseline trials and can interfere with performance. Thus, SRTs on baseline trials in these experiments can set an upper bound on how long it takes to access object memories for regions ultimately perceived as grounds. Because many factors can contribute to SRTs in these experiments, only an upper bound can be set by our measure, and the size of the difference per se cannot be interpreted (see above, and see General discussion for additional discussion). 
Experiment 1
Methods
Participants
Thirty-six volunteers (19 females; mean age 19.7 years) participated for class credit in introductory-level psychology classes at the University of Arizona (UA). All participants agreed to participate by signing a consent form approved by the UA Human Subjects Protection Program. All participants reported normal or corrected-to-normal vision and spoke English fluently. Data from three participants were unusable due to difficulties with eye-tracker calibration (N = 2) or coding errors (N = 1). 
Stimuli
The stimuli were 160 mirror-symmetric, enclosed black silhouettes. Object properties such as symmetry, enclosure, and small area and subjective factors such as expectation worked to favor the perception of the object on the inside of the silhouettes. Of these silhouette figures, 80 portrayed meaningful, real-world objects (animals, plants, manmade objects, cartoons, and symbols). Samples of real-world object silhouettes are shown in Figure 2A; the 80 objects are listed in Appendix A. These were the target silhouettes; for these silhouettes, familiar configuration also favored assigning object status to the inside of the silhouette. The remaining 80 silhouettes were the nontargets; these portrayed meaningless novel shapes (i.e., geometric shapes participants had never encountered before the experiment). Of these novel nontarget silhouettes, 40 suggested novel objects on the outside of their borders as well as on the inside; these are the unambiguous novel silhouettes (Figure 2B). The remaining 40 novel nontarget silhouettes suggested portions of common, real-world objects on the outside of their borders (ambiguous novel silhouettes). Samples of the ambiguous novel nontarget silhouettes are shown in Figure 2C; the 40 real-world objects suggested on the outside of the ambiguous novel silhouettes are listed in Appendix A. Although memories of the real-world objects suggested on the outside of these ambiguous nontarget silhouettes and the semantics of these objects are activated before object segregation (e.g., Cacciamani et al., 2014; Peterson et al., 2012; Peterson & Gibson, 1994a, 1994b; Sanguinetti et al., 2014; Trujillo et al., 2010), the preponderance of object properties (e.g., symmetry, enclosure, small area) favored assigning the object to the inside rather than the outside of these silhouettes. Accordingly, participants perceived them as novel and were unaware of the objects suggested on the outside/groundside. 
Indeed, participants perceived the outsides of both the unambiguous and ambiguous novel silhouettes as shapeless grounds (confirmed by postexperiment questioning, see below). 
Silhouettes were 4.7° high × 1.8°–9.4° wide. Unambiguous and ambiguous novel silhouettes were equated on low-level features known to affect object perception (symmetry, enclosure, local convexity; Peterson & Gibson, 1994a; Peterson & Kim, 2001) and on other low-level stimulus features (curvilinearity, contour length, luminance, spatial frequency, and horizontal span; Trujillo et al., 2010). 
Apparatus
Stimuli were presented on a 19-in. cathode ray tube monitor using Experiment Builder (SR Research, LLC, Kanata, ON, Canada). Participants sat 90 cm from the screen; they used a chin rest to maintain a fixed viewing distance. Eye movements were monitored with a desktop-mounted EyeLink 1000 eye-tracker system recording at 1000 Hz, with a spatial resolution of 0.24°–0.5°, tracking the right eye. The experimenter sat behind a room divider in the room with the participant. SRTs were extracted with EyeLink Data Viewer software (SR Research, LLC). Saccades were defined based on velocity, acceleration, and displacement thresholds. An eye movement was classified as a saccade when its velocity reached 22°/s, its acceleration exceeded 4000°/s², and it traversed more than 0.2° of visual angle from the fixation point (Goldberg & Wichansky, 2002). 
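The saccade-definition criteria just described can be sketched in code. This is a minimal illustration of threshold-based saccade onset detection using the values quoted in the text (22°/s velocity, 4000°/s² acceleration, >0.2° displacement from fixation), not the EyeLink parser itself; the function name and sampling assumptions are ours.

```python
import numpy as np

def detect_saccade_onset(x, y, t,
                         vel_thresh=22.0,     # deg/s, as in the text
                         acc_thresh=4000.0,   # deg/s^2
                         min_amplitude=0.2):  # deg traversed from fixation
    """Return the index of the first sample classified as a saccade,
    or None. x, y are gaze positions in degrees; t is time in seconds.
    Gaze is assumed to start at the fixation point (x[0], y[0])."""
    dt = np.diff(t)
    vx = np.diff(x) / dt
    vy = np.diff(y) / dt
    speed = np.hypot(vx, vy)                  # angular velocity, deg/s
    accel = np.abs(np.diff(speed)) / dt[1:]   # angular acceleration, deg/s^2
    dist = np.hypot(x - x[0], y - y[0])       # displacement from fixation, deg
    for i in range(1, len(speed)):
        if (speed[i] >= vel_thresh
                and accel[i - 1] >= acc_thresh
                and dist[i + 1] > min_amplitude):
            return i + 1
    return None
```

With 1000-Hz samples, as recorded here, each index corresponds to 1 ms, so the returned index is the saccade onset latency in milliseconds relative to the start of the trace.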
Procedure
At the beginning of the experiment, participants took part in a calibration procedure to ensure proper eye-tracking measurements. Once the eye tracker was calibrated, participants performed a forced-choice saccadic task that was modeled after previous studies (Crouzet et al., 2010; Kirchner & Thorpe, 2006). On each trial, a real-world silhouette and a novel silhouette (either ambiguous or unambiguous, randomly chosen) were presented simultaneously, one above and one below fixation. 
Participants were instructed that their task was to move their eyes to the location occupied by the silhouette depicting a real-world object as quickly and as accurately as possible. They were naive to the manipulation of ambiguity within the novel silhouettes, and were told to ignore the “novel/meaningless silhouettes.” Before the experimental trials participants were given 24 practice trials without feedback regarding their performance. Only unambiguous novel silhouettes (ones not used in the main experiment) were used as nontargets during the practice trials. Participants were allowed to repeat the practice trials if they had trouble understanding the task. 
Each trial began with a fixation cross (Figure 3). Participants had to keep their eyes on the cross for 1000 ms to initiate the trial, which was signaled by the disappearance of the fixation cross and the appearance of two silhouettes, one above and one below fixation. If participants glanced away during the first 1000 ms after they initially fixated the cross, the trial was aborted and re-inserted at a random position in the sequence, at least five trials later. The centers of the silhouette figures were located 5° above or 5° below fixation. Silhouettes were exposed for 300 ms and followed immediately by one of five masks exposed for 400 ms. Masks were random blotches of black and white (H: 4.7°; W: 9.4°); samples are shown in Figure 3. On each trial, the two masks (above and below fixation) were different. Masks were randomly cycled through the top/bottom locations. The next trial began 1000 ms after the offset of the silhouettes. 
Figure 3
 
The trial structure for Experiment 1. A cross was displayed for 1000 ms; participants had to fixate the cross for 1000 ms to initiate the trial. Two black silhouettes were then displayed simultaneously for 300 ms. Finally, two rectangles filled with random black and white patterns centered on the locations of the two silhouettes were displayed for 400 ms.
Postexperiment questioning
Extensive postexperiment questioning was used to determine whether participants were aware of the objects suggested on the outside of the ambiguous silhouettes; if they were, we planned to eliminate their data from analysis because we are only interested in how SRTs are perturbed on correct trials on which the inside of the novel silhouettes was perceived as the object, and the outside—where portions of familiar objects were suggested on the groundside of ambiguous silhouettes—was perceived as a shapeless ground. Toward this end, immediately after the experiment, participants were asked to indicate whether they had seen any recognizable shapes on the outside of any of the silhouettes. If they indicated they did, they were asked to verbally recall those shapes. Data from participants who indicated any awareness of any meaningful shapes on the outside of the silhouettes were removed from the analysis, regardless of whether they correctly recalled any objects (hence, our criteria were very conservative). To help participants understand the question, the experimenter showed them a unique sample ambiguous display and pointed out the object suggested on the outside of the silhouette border. This procedure identified 12 participants as being aware of the meaningful objects suggested on the outside of the ambiguous novel nontarget silhouettes. Consequently, their data were not analyzed. The data from the remaining 21 unaware participants (12 females) were analyzed. Only data from unaware participants are analyzed because we are measuring access to object memories for regions ultimately determined to be shapeless grounds. For the aware participants, object status must have been conferred on the outside of the silhouettes on at least some of the trials in order for participants to be aware of the object suggested there. 
Results
Accuracy
Data from one participant who did not understand the task were removed; that participant's accuracy was 52%. Average accuracy for the remaining 20 participants was 69%. Participants performed well above chance both when the nontargets were ambiguous (72% ± 9%) and when they were unambiguous (67% ± 10%); their accuracy in these two conditions did not differ statistically, t(19) = −1.57, p = 0.133, two-tailed. The accuracy data show, however, that the task was difficult, most likely because silhouettes of real-world objects are hard to recognize, a difficulty probably exacerbated by the fact that the design required peripheral presentation. Nevertheless, participants were able to perform the task, and they were no less accurate with ambiguous than unambiguous nontargets. This finding is important because it shows that participants did not base their saccadic categorization response on first access to object memories; had they done so, more errors would have occurred with ambiguous than unambiguous nontargets. 
Saccadic reaction times
Two participants had mean SRTs that were 2 SDs above the average SRT; therefore, their data were not included in further analysis. Participants initiated saccades toward real-world silhouettes when the nontargets were unambiguous novel silhouettes with an average response time of 272 ms (± 88 ms). This value serves as a baseline for how quickly participants can perform this task when there is no suggestion of another real-world object in the display (i.e., when unambiguous nontargets are present). Participants took an average of 15 ms longer to initiate saccades when the nontarget novel silhouettes were ambiguous (287 ± 87 ms); this increase in RT was statistically significant, t(17) = −2.05, p = 0.028, d = 0.99 (Figure 4).2 That the suggestion of a real-world object on the outside of the border of the ambiguous silhouettes delayed SRTs beyond the baseline mean of 272 ms indicates that memories accessed for the object potentially present on the outside of the border of the ambiguous silhouettes reached a functional threshold in less than 272 ms after stimulus onset. Although the results of Experiment 1 provide some estimate of how fast object memories accessed before object detection can reach a functional threshold, the methods of Experiment 1 may not have encouraged the fastest saccades. The goal of Experiment 2 was to speed up saccades even more and to replicate Experiment 1. 
Figure 4
 
Mean SRTs for Experiment 1 as a function of whether unambiguous or ambiguous novel silhouettes served as nontargets. Error bars represent the standard error of the mean of the difference scores.
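The within-subjects SRT comparison reported above (a paired t test on per-participant condition means, with an effect size computed from the difference scores) can be sketched as follows. The function name and the sample values in the usage note are illustrative assumptions, not the authors' analysis code.

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired t test on per-participant mean SRTs (ms) from two
    conditions (e.g., unambiguous vs. ambiguous nontarget trials).
    Returns (t statistic, degrees of freedom, Cohen's d)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    sd = stdev(diffs)                      # SD of the difference scores
    t = mean(diffs) / (sd / math.sqrt(n))  # t = mean difference / SE of difference
    d = mean(diffs) / sd                   # effect size on the difference scores
    return t, n - 1, d
```

With the unambiguous-condition means entered first, a slowing on ambiguous trials yields a negative t, matching the sign convention of the reported t(17) = −2.05.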
Experiment 2
To speed up reaction times, we introduced a 200-ms gap between the offset of the fixation cross and the onset of the silhouettes, a method that permits more rapid initiation of saccades (e.g., Fischer & Weber, 2010; Kirchner & Thorpe, 2006). In addition, we performed a minimum SRT (minSRT) analysis, in which the first moment in time when correct responses significantly outnumber incorrect responses is found. This allowed us to determine the minimum amount of time it takes for observers to make an accurate saccadic response when the novel nontarget silhouettes were ambiguous versus unambiguous. 
Methods
Participants
One hundred and two volunteers (79 females; average age 20.1 years) participated for class credit in introductory-level psychology classes at UA. All participants reported normal or corrected-to-normal vision and spoke English fluently. Data from six participants could not be used because of technical issues with the eye-tracker hardware. 
Stimuli and apparatus
The stimuli and apparatus were the same as in Experiment 1.
Procedure
The procedure was the same as Experiment 1 except that a 200-ms blank screen was displayed on each trial after the participants had maintained fixation on the cross for 1000 ms and before the silhouettes appeared. 
Postexperiment questioning
The postexperiment questioning from Experiment 1 was used, and it was determined that 25 participants were aware of the suggested meaningful objects on the outside of the ambiguous novel silhouettes. Their data were removed, leaving the data from 71 participants who were unaware of the suggested real-world objects on the outside of the ambiguous novel silhouettes for the analysis. In this experiment, the number of aware participants was sufficiently large to allow us to analyze their results separately. We report that analysis below (see "Aware Participants" section of Results). 
Minimum SRTs
To perform the minSRT analysis, we divided the saccadic latency distribution for each condition into 10-ms bins (e.g., the 200-ms bin contained latencies in the 200–209-ms range). The first bin in which participants could accurately perform the task was found by searching for bins containing significantly more correct than incorrect responses using a χ2 test with a criterion of p < 0.05. The first five consecutive statistically significant bins were found, and the first of these bins was taken as the minSRT. This analysis was performed by pooling the data across subjects and trials following Kirchner and Thorpe (2006). 
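As a concrete illustration, the pooled binning-and-χ² search just described can be sketched in a few lines. This is our reconstruction, not the authors' code; the χ² statistic is computed by hand for one degree of freedom, with 3.841 as the critical value for p < 0.05.

```python
def chi2_equal_counts(n_correct, n_incorrect):
    """Chi-square statistic (1 df) against the null hypothesis that
    correct and incorrect responses are equally frequent in a bin."""
    total = n_correct + n_incorrect
    if total == 0:
        return 0.0
    expected = total / 2.0
    return ((n_correct - expected) ** 2 +
            (n_incorrect - expected) ** 2) / expected

def min_srt(correct_rts, incorrect_rts, bin_ms=10, run_length=5,
            chi2_crit=3.841):
    """Minimum SRT following Kirchner and Thorpe (2006): pool trials
    across subjects, bin latencies (ms) into bin_ms-wide bins, and
    return the lower edge of the first of run_length consecutive bins
    in which correct responses significantly outnumber incorrect ones
    (chi-square, p < 0.05)."""
    n_bins = int(max(correct_rts + incorrect_rts) // bin_ms) + 1
    consecutive, first_edge = 0, None
    for b in range(n_bins):
        lo, hi = b * bin_ms, (b + 1) * bin_ms
        nc = sum(lo <= rt < hi for rt in correct_rts)
        ni = sum(lo <= rt < hi for rt in incorrect_rts)
        if nc > ni and chi2_equal_counts(nc, ni) > chi2_crit:
            consecutive += 1
            if first_edge is None:
                first_edge = lo
            if consecutive == run_length:
                return first_edge
        else:
            consecutive, first_edge = 0, None
    return None  # task never performed reliably above chance
```

For example, if correct responses heavily outnumber incorrect responses from the 190–200-ms bin onward but not before, the function returns 190, mirroring the unambiguous-condition result reported below.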
Results and discussion
Accuracy
Overall accuracy for 11 participants was 48%, below our pre-established criterion of 55%; their data were excluded because they did not perform the task accurately. These 11 participants showed no evidence of generating eye movements in the direction of an ambiguous nontarget because the semantics of the object suggested on the groundside were activated. Had they done so, one would expect them to be less accurate when ambiguous rather than unambiguous nontargets were present. Instead, they were marginally more accurate with ambiguous nontargets (53%) than with unambiguous nontargets (44%), t(10) = 2.22, p = 0.051, two-tailed; there were no differences in reaction time, p > 0.05. Because these poor performers could not perform the task even under baseline conditions with unambiguous nontargets (mean accuracy 44%), we eliminated their data. 
For the remaining 60 participants, overall accuracy was 69%. As in Experiment 1, participants were well above chance when making saccades toward real-world silhouettes and equally accurate when ambiguous and unambiguous novel silhouettes were the nontargets (69% ± 10% and 67% ± 10%, respectively), t(59) = −1.18, p = 0.243, two-tailed. As in Experiment 1, this finding shows that participants were able to perform the task and that any slowing when ambiguous nontargets are present is an indication that memories of the objects suggested in the ground region were accessed on a fast pass of processing and slowed the categorization decision. Although performance was well above chance on trials with both unambiguous and ambiguous nontarget silhouettes, it was below the levels reported in previous forced-choice SRT studies (>90%; Kirchner & Thorpe, 2006). This shows, as in Experiment 1, that the task reported here is harder overall, which is expected to slow SRTs relative to previous experiments. 
Saccadic reaction times
The data from three participants who had mean SRTs more than 2 SDs longer than the average were excluded. As expected, SRTs in Experiment 2 were reduced relative to Experiment 1, t(77) = 3.27, p = 0.001. The mean SRT in the baseline unambiguous nontarget condition was 212 ms (±81). As in Experiment 1, SRTs were longer when ambiguous nontargets were present (218 ± 92 ms), t(59) = 2.699, p = 0.005 (Figure 5). That the suggestion of a portion of a real-world object on the outside of the ambiguous silhouette could slow responses that averaged 212 ms for unambiguous silhouettes indicates that access to the memories of those objects reached a functional threshold in less than 212 ms, and therefore provides evidence that fast access to object memories in the course of object segregation influenced behavior. The minSRT analysis, reported next, provides an even shorter estimation of the time required for activation of these object memories to exceed a functional threshold. 
Figure 5
 
SRTs for Experiment 2 as a function of whether unambiguous or ambiguous novel silhouettes served as nontargets. Error bars represent the standard error of the mean of the difference scores. The scale is identical to Figure 4.
Minimum SRTs
Finally, we performed a minSRT analysis to find the first moment in time at which participants began to correctly perform the task in each condition (i.e., the first of five consecutive bins in the reaction time distribution with a significantly higher proportion of correct than incorrect responses). The minSRT was in the 190–200-ms bin for the unambiguous condition and in the 200–210-ms bin for the ambiguous condition (Figure 6). Thus, the difference between conditions detected in mean SRTs is evident even in the fastest correct reaction times. These results show that memories for the object suggested on the outside of the ambiguous nontargets reached a functional threshold very rapidly, slowing responses that otherwise could be made within approximately 190 ms. 
Figure 6
 
Distribution of SRTs in Experiment 2. Trials with unambiguous novel silhouette nontargets are shown on top (in red) and trials with ambiguous novel silhouette nontargets are shown on the bottom (in black). Solid lines represent correct responses and dashed lines represent incorrect responses. The vertical dotted line in each graph indicates the first of five consecutive bins where correct responses significantly outnumbered incorrect responses—the 190–200-ms bin for unambiguous trials and the 200–210-ms bin for ambiguous trials.
The minSRTs in the unambiguous nontarget condition were ∼50 ms longer than the longest minSRTs reported by Crouzet et al. (2010) using grayscale photographs as stimuli, who reported large variability in SRTs between semantic categories (110 ms for faces, 120 ms for animals, and 140 ms for vehicles). Thus, it appears that, as we conjectured, the use of real-world and ambiguous silhouettes suggesting objects from many categories in the present experiment may have inflated overall SRTs relative to previous forced-choice experiments with similar tasks. Crouzet et al.'s observation of large between-category differences in SRTs raises the question of whether the differences we observed between the ambiguous and unambiguous nontarget conditions could be due to systematic differences in the categories of the real-world objects manipulated in those conditions. However, there were no such systematic differences in our experiments because the target real-world object and the nontarget novel silhouettes shown on a given trial were randomly assigned across participants. 
We next consider whether we are licensed to take 190 ms as an estimate of how long it takes for object memories accessed in the course of object segregation to exceed a functional threshold for the real-world versus novel object categorization task used in these experiments. After all, many factors may contribute to the SRTs measured here, including decision (evidence integration/accumulation), cognitive control, and even motor execution processes. One might question whether the SRT differences between ambiguous and unambiguous conditions could result from interference in any of those processes, which do not necessarily occur before object detection, rather than fast access to object memories in the course of object segregation as we have hypothesized.3 We hold that because we equated the ambiguous and unambiguous nontarget silhouettes on stimulus features (see Methods), and because participants consciously perceived both the ambiguous and the unambiguous nontargets as novel objects, any differences in those other processes would necessarily be attributable to the factor we manipulated—whether familiar objects were suggested on the outside of the novel silhouettes. Therefore, we can take 190 ms as an upper limit on the time required for object memories accessed in the course of object segregation to affect behavior. It is an upper limit rather than a lower limit for the reasons discussed previously—our stimuli are silhouettes and our targets were drawn from a variety of real-world object categories rather than restricted to one. 
In Experiment 3, we test another potential explanation for why our minSRTs were longer than those in previous saccadic forced-choice experiments, and find it lacking. Before we introduce Experiment 3, however, we present an analysis of the performance of aware participants, whose data were excluded from the analysis above. 
Aware participants
For completeness, we present the data from the 25 participants who were aware of at least some of the real-world objects suggested on the groundside of ambiguous novel silhouettes. Note that because participants were classified as aware if they indicated they had perceived even one of the real-world objects suggested on the groundside of the ambiguous nontargets, we do not know the number of trials on which they were aware nor can we separately analyze trials on which they were and were not aware. 
Aware participants' SRTs did not differ for trials with ambiguous (266 ± 89 ms) versus unambiguous novel silhouette nontargets (274 ± 95 ms), t(24) = 0.59, p = 0.561, two-tailed. Aware participants were marginally more accurate for trials with ambiguous (71% ± 10%) rather than unambiguous (66% ± 9%) novel silhouette nontargets, t(24) = −2.06, p = 0.0504, two-tailed. This difference is in the direction opposite to what would be expected if aware participants were making their responses prior to object segregation. Such a strategy should cause them to be less accurate on trials with ambiguous novel nontarget silhouettes because they would sometimes saccade to the location of the ambiguous nontarget silhouette (based on access to the semantics of the real-world object suggested on the groundside of the nontarget silhouette) rather than the location of the real-world silhouette. Instead, they were more accurate, suggesting that aware participants were moving their eyes after object segregation (i.e., that they indeed perceived the meaningful object on the outside of the ambiguous nontarget silhouette as they claimed), which enabled them to accurately move their eyes toward the target silhouette portraying a real-world object on the inside. Note, however, that they were not faster to saccade to the real-world object on trials with ambiguous rather than unambiguous nontargets, p = 0.561. We do not explore this effect further since it is beyond the scope of this article. However, the finding that the pattern of results obtained from aware participants was different from that obtained from unaware participants lends support to our claim that the participants whose data we analyzed were indeed unaware of the object suggested on the groundside of the ambiguous nontarget silhouettes. 
Experiment 3
In Experiments 1 and 2, it was assumed that the different mean SRTs obtained with ambiguous versus unambiguous nontargets indicated that memories for objects suggested in the ground region of ambiguous silhouettes were accessed and slowed SRTs when ambiguous rather than unambiguous nontargets were present. We had equated the ambiguous and unambiguous nontarget silhouettes on low-level stimulus factors (see Methods, Experiment 1) to rule out interpretations in terms of stimulus features. Nevertheless, ambiguous and unambiguous silhouettes were different silhouettes. Therefore, it is imperative to rule out the possibility that stimulus differences between the two types of nontarget silhouettes, rather than access to memories of the real-world objects suggested on the outside of the borders of the ambiguous novel stimuli, produced the differences we observed in Experiments 1 and 2. Accordingly, in Experiment 3 we used inverted rather than upright versions of the ambiguous nontarget novel silhouettes. Inverting pictures of real-world objects from their typical upright orientation increases identification latencies and/or decreases identification accuracy for briefly exposed displays, a finding that has been attributed to slowed access to object memories (e.g., Jolicoeur, 1985) or to pooling of neural responses (Oram, Földiák, Perrett, & Sengpiel, 1998; Oram & Perrett, 1992). Inverting depictions of objects with a typical upright orientation has also been shown to eliminate both effects of object memories on object segregation and evidence of semantic access prior to object segregation (e.g., Cacciamani et al., 2014; Kimchi & Hadad, 2002; Navon, 2011; Peterson & Gibson, 1994a, 1994b; Peterson et al., 2012). These findings have been taken as evidence that access to object memories by inverted depictions of real-world objects is delayed beyond the time when object segregation occurs. 
We reasoned that, if access to object properties on the outside of ambiguous silhouettes is slowed beyond the time needed for object segregation, then no differences in SRTs between the ambiguous and unambiguous nontarget conditions should be observed with inverted ambiguous silhouettes in Experiment 3. Alternatively, if the SRT differences found in Experiments 1 and 2 were due to stimulus differences between the two types of novel nontarget silhouettes, or to differences in orientation-independent properties, then they should be replicated in Experiment 3.
Experiment 3 also allows us to address the possibility that our mean and minSRTs are longer than those obtained by Thorpe and colleagues (Crouzet et al., 2010; Kirchner & Thorpe, 2006) because participants adopted a conservative strategy of waiting until object segregation occurred on ambiguous and unambiguous trials. Participants might have noticed that fast saccades were often incorrect, given that trials with unambiguous and ambiguous novel silhouettes were intermixed in our experiments. If the longer average SRTs are due to this strategy in Experiments 1 and 2, then mean SRTs should be shorter in Experiment 3 than in Experiment 1 because object memories should not be accessed before object segregation for inverted ambiguous novel silhouettes, and there should be no reason to adopt a conservative strategy. 
Methods
Participants
Thirty-two volunteers (19 females; mean age 19.2 years) participated for class credit for introductory-level psychology classes at UA. All participants reported normal or corrected-to-normal vision and spoke English fluently. 
Stimuli and apparatus
The stimuli and apparatus were the same as those used in Experiment 1 except that the 40 ambiguous novel nontarget silhouettes were inverted (rotated 180° on their central axis). 
Procedure
The forced-choice saccadic task from Experiment 1 was used. The only difference was that participants were given 24 practice trials with feedback regarding whether they were correct. 
Postexperiment questioning
The postexperiment questioning from Experiments 1 and 2 was used. Using these methods, it was determined that six participants were aware of the real-world objects suggested on the outside of some of the novel silhouettes. The data from 26 participants who were unaware of the suggested real-world objects were kept for the analysis. 
Results and discussion
Accuracy
Overall accuracy in Experiment 3 was 68%. There was no difference in accuracy between trials with inverted ambiguous (70% ± 1%) and unambiguous silhouettes (66% ± 1%), t(25) = −1.35, p = 0.189, d = −0.39, two-tailed. 
Saccadic reaction times
Mean SRTs on trials with inverted ambiguous nontargets (280 ± 84 ms) were not longer than mean SRTs in the baseline condition in which the nontarget novel silhouettes were unambiguous (282 ± 89 ms), t(25) = 0.155, p = 0.439. The absence of a difference in mean SRTs obtained on trials with ambiguous versus unambiguous nontargets in Experiment 3 shows that low-level feature differences between the two types of novel silhouettes cannot account for the between-condition differences observed in Experiments 1 and 2. Furthermore, the lack of between-condition differences in Experiment 3 also supports our claim that the longer SRTs observed on trials with ambiguous versus unambiguous nontargets in Experiments 1 and 2 were due to the access to memories of the real-world objects suggested on the outside of the borders of the novel silhouettes in those experiments. 
We obtained no evidence that participants in the previous experiments waited to move their eyes until object segregation occurred on trials with both ambiguous and unambiguous nontargets because they detected ambiguity on some of the trials. Under that explanation, SRTs should have been slower in Experiment 1 than in Experiment 3. Yet overall SRTs in Experiment 1 (279 ± 87 ms) and Experiment 3 (281 ± 109 ms) did not differ statistically, t(44) = 0.67, p = 0.506, two-tailed. A marginally significant difference in SRT between the two experiments was found for trials with unambiguous nontarget novel silhouettes: There was a trend for participants to respond faster in Experiment 1 (272 ms) than in Experiment 3 (282 ms), t(44) = 1.89, p = 0.065, two-tailed. This trend was in the opposite direction from what would be predicted if observers slowed all of their responses upon detecting ambiguity on a subset of the trials in Experiment 1. 
General discussion
In three experiments, participants were instructed to saccade to a real-world silhouette while ignoring a novel silhouette. On half the trials, the novel nontarget was unambiguously novel—it portrayed only a meaningless novel object participants had never seen before the experiment. Unambiguous nontarget trials served as a baseline for how quickly participants could initiate a saccade under the conditions of our experiment. Participants were unaware that on the other half of trials, a portion of a real-world object was suggested on the outside of the novel nontarget silhouette, on the side of the border ultimately perceived as the shapeless ground (ambiguous nontarget trials). Our main question was whether the suggestion of real-world objects on the groundside of the contour of ambiguous nontarget silhouettes would interfere with categorization responses relative to when unambiguous nontarget silhouettes were present. Any detectable SRT difference would provide the first evidence of when object memories accessed in the course of object segregation reach a functional threshold. 
In Experiment 1, the mean baseline SRT obtained on trials with an unambiguous nontarget was 272 ms. SRTs were statistically longer (by 15 ms) on trials with ambiguous nontarget silhouettes. Experiment 2 was designed to speed SRTs in both conditions, and indeed the mean baseline SRT was 60 ms faster (212 ms). Again, mean SRTs were statistically longer on trials with ambiguous nontargets (218 ms). In Experiment 2, minSRTs were calculated; these were even shorter in both conditions—190 ms in the baseline condition with unambiguous nontargets, and 200 ms in the condition with ambiguous nontargets. Thus, across two experiments, rapid saccades toward real-world silhouettes were slower when ambiguous rather than unambiguous novel silhouettes served as nontargets. This SRT difference provides a behavioral index of how quickly object memories accessed in the course of object segregation exceed a functional threshold, although, for the reasons discussed, this estimate is an upper limit. In Experiment 2, where fast saccades were encouraged, the results place an upper bound of 190 ms on the time it takes for object memories accessed in the course of object segregation to exert an influence on behavior. These findings constitute behavioral evidence that object memories matching an object suggested in a region of the visual field that is ultimately perceived as ground are accessed rapidly and influence categorization responses. 
These results converge with previous neurophysiological experiments using the silhouettes employed here. ERP evidence revealed that object memories were accessed within approximately 110 ms of processing (Sanguinetti et al., 2014; Trujillo et al., 2010). The previous ERP experiments provided no index of how long it takes for these object memories to exert an influence on behavior—that is, to exceed a functional threshold. Since the same silhouettes were used in the present experiments as in those ERP experiments, why is the estimated functional threshold ∼80 ms longer? Some of the difference can be accounted for by the fact that it takes ∼20 ms for motor system outputs to reach the eye muscles (Schiller, Haushofer, & Kendall, 2004). Also, participants' accuracy was lower in the current experiment (∼70%) than in Trujillo et al. (2010) and Sanguinetti et al.'s (2014) ERP studies in which participants accurately categorized individual stimuli shown at fixation as depicting a real-world or a novel object on >90% of trials; hence, our two-alternative forced-choice task involving peripherally presented silhouettes was more difficult than their task. The increased difficulty caused by peripheral presentation would also have slowed SRTs and therefore increased even baseline SRTs. Therefore, the behavioral results reported here converge with the estimate based on ERP results, but provide only an upper limit on the time required for memories of objects suggested in ground regions to reach functional threshold. 
Any evidence regarding a functional threshold of the nature reported here will necessarily be tied to the type of task employed. We chose an SRT paradigm in an attempt to record the fastest behaviors possible. However, SRTs are not a pure measure of activation of object memories on the groundside of a border—many factors influence the speed of the behavior. Based on the previous evidence reported by Trujillo et al. (2010; Sanguinetti et al., 2014) we assume that object memories were accessed within about 110 ms. Our question was how long it takes for the object memories accessed in the course of object segregation to reach a functional threshold. Our estimates will necessarily be longer. We argue that our estimate places an upper limit on the time required for object memories accessed in the course of object segregation to affect behavior. Future experiments might use a task involving figure assignment with stimuli closer to fixation in order to attempt to identify a lower functional threshold. 
Factors potentially affecting the magnitude of the differences in SRT
We have argued that what matters for our hypothesis is the presence of statistically longer SRTs on trials with ambiguous versus unambiguous nontargets, not the magnitude of that difference. The presence of statistically longer mean SRTs on trials with ambiguous versus unambiguous nontargets allows us to use the behavioral data to place an upper limit of 190 ms on how long it takes for object memories accessed on the groundside of peripherally presented stimuli to exert an influence on behavior. Nevertheless, the magnitude of the difference is small. We next discuss three reasons why the differences were small, and why we have argued that the presence, rather than the magnitude, of a statistically significant difference is the important index of the functional threshold. 
First, the difference may be small because participants could have learned to saccade to the screen location above or below fixation where a real-world object was suggested on the black inside of a contour rather than the white outside of the silhouette contour. Such a learned strategy cannot entirely explain our results because it would predict that SRTs are independent of the type of nontarget stimulus, yet we found differences in SRTs on trials with ambiguous versus unambiguous nontargets. Nevertheless, we searched for evidence of such learning in a post hoc analysis, reasoning that if learning occurred over the course of the experiment then on early trials slower SRTs might be evident for trials with ambiguous compared to unambiguous nontargets, but this difference would be obscured by fast SRTs on later trials as participants learned to move their eyes toward the black inside of the silhouettes. To investigate this possibility, for each subject for each condition in Experiments 1 and 2, SRTs were divided into four bins—with the first 20 trials for each condition in the first bin, the second 20 trials for each condition in the second bin, etc.—and performance was compared across bins. There was no evidence that learning took place in either experiment. For instance, in Experiment 2, mean SRTs for the ambiguous silhouette condition across blocks were: Block 1: M = 289.6 (±104.0 ms); Block 2: M = 304.1 (±108.3 ms); Block 3: M = 284.5 (±90.8 ms); Block 4: M = 282.7 (±95.0 ms), p > 0.05. 
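For concreteness, the per-participant, per-condition binning used in this post hoc learning analysis can be sketched as below. The helper name and the assumption of exactly 80 trials per condition (four blocks of 20) are ours, inferred from the description above.

```python
def block_means(srts_in_order, block_size=20, n_blocks=4):
    """Mean SRT per consecutive block of trials for one participant in
    one condition. srts_in_order must follow presentation order; with
    the defaults, 80 trials yield four 20-trial block means."""
    means = []
    for b in range(n_blocks):
        chunk = srts_in_order[b * block_size:(b + 1) * block_size]
        means.append(sum(chunk) / len(chunk))
    return means
```

Comparing these four means across blocks, as in the Experiment 2 values just reported, tests whether SRTs shortened as the experiment progressed; flat means across blocks indicate no learning of the hypothesized strategy.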
Second, differences may be small because participants may have simply moved their eyes to the first real-world object they detected based on a parallel, independent race between the two silhouettes on the screen on each trial. On most, but not all, trials, the real-world target silhouette (which is unambiguous) will win the race over the ambiguous nontarget silhouette because it takes time to resolve the ambiguity (see Peterson & Enns, 2005; Peterson & Lampignano, 2003).4 Yet on some trials (which may vary across subjects), it will take longer for the real-world target silhouette to win the race when an ambiguous nontarget is present. There must be enough of the latter type of trials to allow us to see a statistically significant difference between baseline RTs with unambiguous nontargets and RTs on trials with ambiguous nontargets. However, because we do not know the proportion of trials on which the target easily won the race, we cannot interpret the magnitude of the RT difference. 
Third, because the real-world objects used in our experiments (both those used as targets and those suggested on the groundside of the ambiguous nontargets) were drawn from a variety of categories, the variability in SRTs between items in different categories originally reported by Crouzet et al. (2010; see previous discussion) may have reduced the magnitude of the RT differences found between conditions in the present experiment.5 
For all of the reasons discussed above, we argue that the presence of longer SRTs on trials with ambiguous versus unambiguous nontargets is important, whereas the magnitude of the difference is not. The presence of a difference allows us to estimate the upper limit on when access to object memories in the course of object segregation reaches a functional threshold in this task. The present data provide the first estimate of such a threshold. 
The crucial finding in the first two experiments is that the suggestion of real-world objects on the groundside of the contours of ambiguous silhouettes delayed rapid saccades during a categorization task. These findings allowed us to place an upper limit of 190 ms on the time it takes for access to memories of objects suggested on the side of a border ultimately determined to be a shapeless ground to exert an influence on behavior. Experiment 3 supports this interpretation because inverting the real-world objects on the outside of ambiguous silhouettes, which slows object memory access beyond the time required to affect object perception, led to no SRT differences. These findings suggest that memories of objects that never reached conscious awareness were accessed on a fast pass of processing in the course of object perception and interfered with the fast categorization response. 
The longer baseline SRTs in the present experiments relative to previous forced-choice SRT experiments constrain interpretations about the nature of the cortical processing that supported behaviors in our experiments. The initial pass through the ventral visual pathway is completed within 80–100 ms (Lamme & Roelfsema, 2000). Assuming the conduction velocities of feedback signals are as quick as those of feedforward signals (Bullier, 2001), this first pass leaves time for iterative feedback to support even ultrarapid categorization responses in 150 ms or less. Since the suggestion of a real-world object on the groundside of ambiguous nontargets perturbed behavioral responses made in as little as 190 ms on trials with unambiguous nontargets, it is not clear whether a strictly feedforward pass of processing supported behavioral responses or whether recursive feedback was involved. Our purpose was not to make claims about the nature of the underlying cortical processing, but instead to test the upper limit for when object memories suggested in ground regions reached a functional threshold. 
These results may reflect early stages of a dynamic visual processing architecture (Bullier, 2001; Dehaene et al., 2006; Epshtein, Lifshitz, & Ullman, 2008; Lamme & Roelfsema, 2000; Nadel & Peterson, 2013) where features as well as higher level representations (e.g., shape and perhaps semantic representations) of objects can be accessed on a fast pass of processing and enter into a competition for object status at multiple levels in the cortical hierarchy prior to object segregation and perception (Peterson, De Gelder, Rapcsak, Gerhardstein, & Bachoud-Lévi, 2000). Until now, there was no behavioral evidence regarding when memories for objects suggested on the groundside of silhouettes reached a functional threshold. Therefore, the evidence reported here is the first to reveal that memories of objects suggested in grounds can influence behavior rapidly, within 190 ms after stimulus onset. 
Acknowledgments
MAP acknowledges the support of the National Science Foundation (BCS-0960529) and the Office of Naval Research (N0014-14-1-0671). 
Commercial relationships: none. 
Corresponding author: Joseph L. Sanguinetti. 
Email: sanguine@unm.edu. 
Address: University of New Mexico, Albuquerque, NM, USA. 
References
Bullier, J. (2001). Integrated model of visual processing. Brain Research Reviews, 36 (2–3), 96–107.
Busettini, C., Masson, G. S., & Miles, F. (1997). Radial optic flow induces vergence eye movements with ultra-short latencies. Nature, 390 (6659), 512–515, doi:10.1038/37359.
Cacciamani, L., Mojica, A. J., Sanguinetti, J. L., & Peterson, M. A. (2014). Semantic access occurs outside of awareness for the ground side of a figure. Attention, Perception, & Psychophysics, 76 (8), 2531–2547, doi:10.3758/s13414-014-0743.
Camprodon, J., Zohary, E., Brodbeck, V., & Pascual-Leone, A. (2010). Two phases of V1 activity for visual recognition of natural images. Journal of Cognitive Neuroscience, 22 (6), 1262–1269, doi:10.1162/jocn.2009.21253.
Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10 (4): 16, 1–17, doi:10.1167/10.4.16.
Dehaene, S., Changeux, J. P., Naccache, L., Sackur, J., & Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends in Cognitive Sciences, 10 (5), 204–211, doi:10.1016/j.tics.2006.03.007.
Di Lollo, V., Enns, J. T., & Rensink, R. A. (2000). Competition for consciousness among visual events: The psychophysics of reentrant visual processes. Journal of Experimental Psychology: General, 129 (4), 481.
Epshtein, B., Lifshitz, I., & Ullman, S. (2008). Image interpretation by a single bottom-up top-down cycle. Proceedings of the National Academy of Sciences, USA, 105 (38), 14298–14303, doi:10.1073/pnas.0800968105.
Fahrenfort, J. J., Scholte, H. S., & Lamme, V. A. (2007). Masking disrupts reentrant processing in human visual cortex. Journal of Cognitive Neuroscience, 19 (9), 1488–1497.
Firestone, C., & Scholl, B. J. (2016). 'Moral perception' reflects neither morality nor perception. Trends in Cognitive Sciences, 20 (2), 75.
Fischer, B., & Weber, H. (2010). Express saccades and visual attention. Behavioral and Brain Sciences, 16 (3), 553, doi:10.1017/S0140525X00031575.
Goldberg, J. H., & Wichansky, A. M. (2002). Eye tracking in usability evaluation: A practitioner's guide. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movements (pp. 493–516). Amsterdam, the Netherlands: Elsevier.
Grill-Spector, K., & Kanwisher, N. (2005). Visual recognition: As soon as you know it is there, you know what it is. Psychological Science, 16 (2), 152–160.
Holcomb, P. J., Reder, L., Misra, M., & Grainger, J. (2005). The effects of prime visibility on ERP measures of masked priming. Cognitive Brain Research, 24 (1), 155–172.
Jolicoeur, P. (1985). The time to name disoriented natural objects. Memory & Cognition, 13 (4), 289–303, doi:10.3758/BF03202498.
Kiefer, M. (2002). The N400 is modulated by unconsciously perceived masked words: Further evidence for an automatic spreading activation account of N400 priming effects. Cognitive Brain Research, 13 (1), 27–39.
Kimchi, R., & Hadad, B. S. (2002). Influence of past experience on perceptual grouping. Psychological Science, 13 (1), 41–47.
Kirchner, H., & Thorpe, S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46 (11), 1762–1776, doi:10.1016/j.visres.2005.10.002.
Koivisto, M., Kastrati, G., & Revonsuo, A. (2014). Recurrent processing enhances visual awareness but is not necessary for fast categorization of natural scenes. Journal of Cognitive Neuroscience, 26 (2), 223–231, doi:10.1162/jocn_a_00486.
Koivisto, M., Railo, H., Revonsuo, A., Vanni, S., & Salminen-Vaparanta, N. (2011). Recurrent processing in V1/V2 contributes to categorization of natural scenes. The Journal of Neuroscience, 31 (7), 2488–2492, doi:10.1523/JNEUROSCI.3074-10.2011.
Kroll, J. F., & Potter, M. C. (1984). Recognizing words, pictures, and concepts: A comparison of lexical, object, and reality decisions. Journal of Verbal Learning and Verbal Behavior, 23 (1), 39–66.
Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23 (11), 571–579, doi:10.1016/S0166-2236(00)01657-X.
Nadel, L., & Peterson, M. A. (2013). The hippocampus: Part of an interactive posterior representational system spanning perceptual and memorial systems. Journal of Experimental Psychology: General, 142 (4), 1242–1254, doi:10.1037/a0033690.
Navon, D. (2011). The effect of recognizability on figure-ground processing: Does it affect parsing or only figure selection? Quarterly Journal of Experimental Psychology, 64 (3), 608–624, doi:10.1080/17470218.2010.516834.
Oram, M., & Perrett, D. (1992). Time course of neural responses discriminating different views of the face and head. Journal of Neurophysiology, 68 (1), 70–84.
Oram, M. W., Földiák, P., Perrett, D. I., & Sengpiel, F. (1998). The ideal homunculus: Decoding neural population signals. Trends in Neurosciences, 21 (6), 259–265.
Pascual-Leone, A., & Walsh, V. (2001). Fast backprojections from the motion to the primary visual area necessary for visual awareness. Science, 292 (5516), 510–512.
Peterson, M. A. (1994). The proper placement of uniform connectedness. Psychonomic Bulletin and Review, 1, 509–514.
Peterson, M. A. (1999). Knowledge and intention can penetrate early vision. Behavioral and Brain Sciences, 22 (3), 389–390.
Peterson, M. A., & Cacciamani, L. (2013). Toward a dynamical view of object perception. In Shape perception in human and computer vision (pp. 443–457). London: Springer.
Peterson, M. A., Cacciamani, L., Mojica, A. J., & Sanguinetti, J. L. (2012). The ground side of a figure: Shapeless but not meaningless. Journal of Gestalt Theory, 34 (3/4), 297–314.
Peterson, M. A., De Gelder, B., Rapcsak, S. Z., Gerhardstein, P. C., & Bachoud-Lévi, A. C. (2000). Object memory effects on figure assignment: Conscious object recognition is not necessary or sufficient. Vision Research, 40 (10–12), 1549–1567, doi:10.1016/S0042-6989(00)00053-5.
Peterson, M. A., & Enns, J. T. (2005). The edge complex: Implicit memory for figure assignment in shape perception. Perception & Psychophysics, 67 (4), 727–740, doi:10.3758/BF03193528.
Peterson, M. A., & Gibson, B. S. (1994a). Must figure-ground organization precede object recognition? An assumption in peril. Psychological Science, 5 (5), 253–259.
Peterson, M. A., & Gibson, B. S. (1994b). Object recognition contributions to figure-ground organization: Operations on outlines and subjective contours. Perception & Psychophysics, 56 (5), 551–564.
Peterson, M. A., Harvey, E. M., & Weidenbacher, H. J. (1991). Shape recognition contributions to figure-ground reversal: Which route counts? Journal of Experimental Psychology: Human Perception and Performance, 17 (4), 1075–1089.
Peterson, M. A., & Kim, J. H. (2001). On what is bound in figures and grounds. Visual Cognition, 8 (3–5), 329–348, doi:10.1080/13506280143000034.
Peterson, M. A., & Kimchi, R. (2013). Perceptual organization. In D. Reisberg (Ed.), Handbook of cognitive psychology (pp. 9–31). New York: Oxford University Press.
Peterson, M. A., & Lampignano, D. W. (2003). Implicit memory for novel figure-ground displays includes a history of cross-border competition. Journal of Experimental Psychology: Human Perception and Performance, 29 (4), 808–822.
Peterson, M. A., & Skow, E. (2008). Inhibitory competition between shape properties in figure-ground perception. Journal of Experimental Psychology: Human Perception and Performance, 34 (2), 251–267, doi:10.1037/0096-1523.34.2.251.
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22 (3), 341–365.
Ralph, B. C., Seli, P., Cheng, V. O., Solman, G. J., & Smilek, D. (2014). Running the figure to the ground: Figure-ground segmentation during visual search. Vision Research, 97, 65–73.
Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2 (11), 1019–1025.
Ro, T., Breitmeyer, B., Burton, P., Singhal, N. S., & Lane, D. (2003). Feedback contributions to visual awareness in human occipital cortex. Current Biology, 13 (12), 1038–1041.
Sanguinetti, J. L., Allen, J. J. B., & Peterson, M. A. (2014). The ground side of an object: Perceived as shapeless yet processed for semantics. Psychological Science, 25 (1), 256–264, doi:10.1177/0956797613502814.
Sanguinetti, J. L., Trujillo, L. T., Schnyer, D. M., Allen, J. J. B., & Peterson, M. A. (2016). Increased alpha indexes inhibitory competition across a border. Vision Research, 126, 120–130.
Schiller, P. H., Haushofer, J., & Kendall, G. (2004). An examination of the variables that affect express saccade generation. Visual Neuroscience, 21 (2), 119–127.
Serre, T., Oliva, A., & Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, USA, 104 (15), 6424–6429, doi:10.1073/pnas.0700622104.
Silvanto, J., Lavie, N., & Walsh, V. (2005). Double dissociation of V1 and V5/MT activity in visual awareness. Cerebral Cortex, 15 (11), 1736–1741.
Tapia, E., & Beck, D. M. (2014). Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation. Frontiers in Psychology, 5, 1173, doi:10.3389/fpsyg.2014.01173.
Thorpe, S. (2002). Ultra-rapid scene categorization with a wave of spikes. Biologically Motivated Computer Vision, 2525 (Sp. Iss. 1), 1–15.
Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381 (6582), 520–522.
Trujillo, L. T., Allen, J. J. B., Schnyer, D. M., & Peterson, M. A. (2010). Neurophysiological evidence for the influence of past experience on figure-ground perception. Journal of Vision, 10 (2): 5, 1–21, doi:10.1167/10.2.5.
Van Gaal, S., & Lamme, V. A. (2012). Unconscious high-level information processing: Implication for neurobiological theories of consciousness. The Neuroscientist, 18 (3), 287–301.
Vogel, E. K., Luck, S. J., & Shapiro, K. L. (1998). Electrophysiological evidence for the postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance, 24 (6), 1656.
Wagemans, J., De Winter, J., Op de Beeck, H., Ploeger, A., Beckers, T., & Vanroose, P. (2008). Identification of everyday objects on the basis of silhouette and outline versions. Perception, 37 (2), 207–244, doi:10.1068/p5825.
Wokke, M. E., Vandenbroucke, A. R., Scholte, H. S., & Lamme, V. A. (2012). Confuse your illusion: Feedback to early visual cortex contributes to perceptual completion. Psychological Science, 24 (1), 63–71.
Zhou, H., Friedman, H. S., & von der Heydt, R. (2000). Coding of border ownership in monkey visual cortex. The Journal of Neuroscience, 20 (17), 6594–6611.
Footnotes
1  The P1 differences could also indicate ongoing inhibitory competition that would be larger for ambiguous than for unambiguous silhouettes, but differential competition would only occur if object memories for the object suggested on the groundside of the ambiguous silhouettes had been accessed.
2  One-tailed t tests are used unless otherwise specified.
3  We thank a reviewer for encouraging us to include this discussion.
4  We thank James Pomerantz for suggesting this explanation for our small between-condition SRT differences.
5  We thank an anonymous reviewer for making this point.
Appendix A
Objects depicted by real-world silhouettes
Apple, balloon, bat, birthday cake, boat, bottle, bug, bumblebee, cactus, candle and flame, castle, cat, clover, crown, dragonfly, frog, goldfish, graduating student, heart, ice cream cone, jellyfish, lion, lizard, lobster, missile, penguin, racecar, ram, screw, skull, spade, spider, star, steer, strawberry, telephone, tent, t-shirt, turtle, wheel. 
Objects suggested on the groundsides of the contours of ambiguous novel silhouettes
Axe, bell, bone, boot, butterfly, coffee pot, dog, eagle, face, grapes, hand, house, hydrant, lamp, pig, palm, rhino, train, women, wrench; anchor, bunny, duck, elephant, faucet, flower, foot, guitar, horn, jet, leaf, Mickey Mouse, owl, pineapple, seahorse, snowman, sprayer, teddy bear, umbrella, watering can. 
Figure 1
 
A novel object is depicted in the black region; the ground region in white suggests portions of standing women on the outside of the left and right contours of the novel object.
Figure 2
 
(A) Real-world silhouettes (targets in the present experiments). From left to right, a telephone, a turtle, and a clover leaf. Second and third rows: Novel silhouettes (Nontargets in the present experiments). (B) Unambiguous novel silhouettes. These novel silhouettes do not have portions of real-world objects suggested on the outside of their borders. (C) Ambiguous novel silhouettes. These novel silhouettes have portions of real-world objects suggested on the outside of their left and right borders. From left to right, portions of seahorses, palm trees, and bells are suggested on the outside (white side) of the black silhouette. Note that observers were unaware of the real-world objects suggested on the outside of the silhouettes in (C); they perceived the outsides as shapeless grounds.
Figure 3
 
The trial structure for Experiment 1. A cross was displayed for 1000 ms; participants had to fixate the cross for 1000 ms to initiate the trial. Two black silhouettes were then displayed simultaneously for 300 ms. Finally, two rectangles filled with random black and white patterns centered on the locations of the two silhouettes were displayed for 400 ms.
Figure 4
 
Mean SRTs for Experiment 1 as a function of whether unambiguous or ambiguous novel silhouettes served as nontargets. Error bars represent the standard error of the mean of the difference scores.
Figure 5
 
SRTs for Experiment 2 as a function of whether unambiguous or ambiguous novel silhouettes served as nontargets. Error bars represent the standard error of the mean of the difference scores. The scale is identical to Figure 4.
Figure 6
 
Distribution of SRTs in Experiment 2. Trials with unambiguous novel silhouette nontargets are shown on top (in red) and trials with ambiguous novel silhouette nontargets are shown on the bottom (in black). Solid lines represent correct responses and dashed lines represent incorrect responses. The vertical dotted line in each graph indicates the first of five consecutive bins where correct responses significantly outnumbered incorrect responses—the 190–200-ms bin for unambiguous trials and the 200–210-ms bin for ambiguous trials.
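The minimum-SRT criterion described in the Figure 6 caption can be sketched in code. The sketch below is our own illustration, not the authors' analysis script: it bins SRTs into 10-ms bins and returns the lower edge of the first of five consecutive bins in which correct saccades significantly outnumber incorrect ones. The one-tailed binomial test, the α = 0.05 criterion, and the 80–600-ms search window are assumptions; the article does not specify the exact per-bin test.

```python
from math import comb

def binom_p_greater(k, n, p=0.5):
    """One-tailed binomial probability of observing >= k successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_srt_bin(srts_correct, srts_incorrect,
                bin_ms=10, run=5, alpha=0.05, lo=80, hi=600):
    """Lower edge (ms) of the first of `run` consecutive bins in which
    correct responses significantly outnumber incorrect ones, or None."""
    edges = range(lo, hi, bin_ms)
    sig = []
    for edge in edges:
        n_correct = sum(edge <= t < edge + bin_ms for t in srts_correct)
        n_incorrect = sum(edge <= t < edge + bin_ms for t in srts_incorrect)
        n = n_correct + n_incorrect
        # A bin counts as significant only if it contains trials and the
        # proportion correct exceeds chance by the binomial test.
        sig.append(n > 0 and binom_p_greater(n_correct, n) < alpha)
    for i in range(len(sig) - run + 1):
        if all(sig[i:i + run]):
            return lo + i * bin_ms
    return None
```

With synthetic data in which correct responses dominate from 200 ms onward, `min_srt_bin` returns 200, mirroring how the 200–210-ms bin was identified for ambiguous trials.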