Open Access
Article  |   November 2023
Object-based inhibition of return in three-dimensional space: From simple drawings to real objects
Author Affiliations
  • Qinyue Qian
    Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
    qianqinyue@163.com
  • Jingjing Zhao
    School of Psychology, Shaanxi Provincial Key Laboratory of Behavior & Cognitive Neuroscience, Shaanxi Normal University, Xi'an, China
    zhaojingjing_31@126.com
  • Huan Zhang
    Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
    zhh_1223701@163.com
  • Jiajia Yang
    Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
    yang@okayama-u.ac.jp
  • Aijun Wang
    Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
    ajwang@suda.edu.cn
  • Ming Zhang
    Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
    Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
    Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
    psyzm@suda.edu.cn
Journal of Vision November 2023, Vol. 23, 7. https://doi.org/10.1167/jov.23.13.7
Abstract

When a location on an object is cued, inhibition of the attended location can spread to the entire object. Although object-based inhibition of return (IOR) has been documented in the two-dimensional plane, it has not been explored for objects that cross depths in three-dimensional (3D) space. In the present study, we used a virtual reality technique to adapt the double-rectangle paradigm to 3D space and manipulated cue validity and target location to examine differences in object-based IOR between far and near space under different object representations. The results showed that the object-based IOR of simple drawings existed only in near space, whereas the object-based IOR of real objects initially existed only in far space and, as object similarity decreased, appeared in both far and near space.

Introduction
Searching for a specific location or object in a complex environment is necessary for animals in nature, and inhibition of return (IOR) is one mechanism that improves the efficiency of visual search by inhibiting saccades to previously examined locations (Klein, 1988; Klein, 2000; Klein & MacInnes, 1999; Posner & Cohen, 1984; Redden, MacInnes, & Klein, 2021; Satel, Wilson, & Klein, 2019). It is believed that IOR occurs in both space- and object-based coordinates (List & Robertson, 2007). When a location in space is cued, the response to the cued location is significantly slower than the response to the uncued location once the stimulus onset asynchrony (SOA) between cue and target exceeds 300 ms, and location-based IOR refers to this lag when responding to a previously attended location (Klein, 2000; Posner & Cohen, 1984; Redden et al., 2021). Furthermore, when a location on an object is cued, inhibition of the attended location can spread to the entire object (Jordan & Tipper, 1999). 
This inhibition effect can be explored using the double-rectangle paradigm (Egly, Driver, & Rafal, 1994). Using this paradigm, researchers provided evidence for object-based IOR in the two-dimensional (2D) plane (Jordan & Tipper, 1999; List & Robertson, 2007; Smith, Ball, Swalwell, & Schenk, 2016): responses are slowest to target stimuli at cued locations, slower to target stimuli at the other end of the cued rectangle, and fastest to target stimuli at the equidistant end of the uncued rectangle. This object-based attention was initially thought to be due to attentional spreading (Egly et al., 1994; Jordan & Tipper, 1999; Kramer & Jacobson, 1991; Marino & Scholl, 2005; Richard, Lee, & Vecera, 2008; Vecera, 1994). Attentional spreading suggests that the structure of the object influences the shifting of attention, that attention is shifted more efficiently within an object than across objects, and that when attention is drawn to a location within an object, attention automatically spreads from the cued location to the entire object (Chou & Yeh, 2011; Ernst, Boynton, & Jazayeri, 2013; Jordan & Tipper, 1999; Richard et al., 2008; Singh, Kalash, & Bruce, 2018). In other words, once attention is attracted to a spatial location, two automatic processes take place. First, a spatial gradient is constructed centered on the cued location, falling off with distance from the center of the attended region. Second, another automatic gradient is constructed representing the spatial spread of top-down facilitation, respecting object boundaries so that representations within an attended object are stronger than representations outside the attended object (Shomstein, 2012). Mozer (2002) proposed that higher levels of object structure feed back into early visual areas, increasing perceptual sensitivity to stimuli within object-defined spatial areas. In Singh et al.'s
(2018) 3D gaze study, real-world gaze data were visualized and analyzed using virtual reality technology and 3D modeling. The results supported attentional spreading theory: when very small targets are recognized on the surface of an object, attention begins to spread over that surface. Moreover, attention is not limited to a single surface but can also spread across different surfaces of the object (Erlikhman, Lytchenko, Heller, Maechler, & Caplovitz, 2020). Therefore, after part of an object is cued, attention spreads over the object's surface, the rest of the object is also treated as searched, and IOR emerges at longer SOAs. 
In addition to attentional spreading, the attentional prioritization theory proposed by Shomstein and Yantis (2002) provides support for object-based attention. According to attentional prioritization, when there is high uncertainty about the spatial location of the upcoming target, and all other things being equal, the highest attentional priority is assigned to the cued location and the attended object, followed by locations outside the object, making it possible to observe object-based attention effects (Drummond & Shomstein, 2010; Hu et al., 2021; Shomstein & Yantis, 2002). Attentional prioritization stresses the biasing of the attentional scanning order in visual search, which, by default, starts from locations within an already attended object (Chen, Weidner, Vossel, Weiss, & Fink, 2012; Shomstein & Yantis, 2002; Shomstein & Yantis, 2004). At short SOAs, the higher the prioritization, the faster the response. However, at longer SOAs, IOR, a mechanism that avoids reprocessing irrelevant information, prevents attention from returning to previously attended locations and helps the search of unattended locations (Klein, 2000), so the higher the original prioritization, the slower the response. 
Object-based IOR studies in the 2D plane have been documented (Tipper, Driver, & Weaver, 1991; Jordan & Tipper, 1999; List & Robertson, 2007; Smith et al., 2016). Unlike in 2D planes, objects in 3D space often cross multiple depth positions. Several studies have examined object-based IOR at different depths in 3D space (Bourke, Partridge, & Pollux, 2006; Casagrande et al., 2012; Theeuwes & Pratt, 2003), but it remains unclear what happens to object-based IOR when objects cross depths. Given that the distribution of attentional resources becomes more dispersed as the depth of attention increases (Andersen, 1990; Andersen & Kramer, 1993; Liu, Qian, Wang, Wang, & Zhang, 2021; Plewan & Rinkenauer, 2020), attention can spread in object-based representations that encode depth information (Reppa, Fougnie, & Schmidt, 2010). Moreover, 3D real objects have higher ecological validity and provide complex perceptual and conceptual information (Brady, Störmer, & Alvarez, 2016). Therefore, it is necessary to consider object-based IOR at different depths and how attention spreads along an object's surface in 3D space. 
Chen et al. (2012) constructed a virtual 3D environment to explore the mechanisms of attention shifting in 3D space. Using a cue-target paradigm in functional magnetic resonance imaging (fMRI) studies, they examined the prerequisite for IOR to occur in 3D space: attentional orienting and re-orienting in depth. From an attentional orienting perspective, orienting networks direct attention to an attended (cued) location (Corbetta, Kincade, Ollinger, McAvoy, & Shulman, 2000; Thiel, Zilles, & Fink, 2004), whereas re-orienting networks direct attention to unattended (uncued) locations (Corbetta et al., 2000). Chen et al. (2012) found that, at the behavioral level, attentional re-orienting to unexpectedly appearing near stimuli was faster than to far stimuli; that is, attention shifted more readily toward novel stimuli from far space to near space. At the neural level, in addition to the attentional re-orienting system of the right temporoparietal junction, the bilateral premotor cortex also re-oriented visuospatial attention along the depth direction of visual space. A network of areas reminiscent of the human "default-mode network," including the posterior cingulate cortex, orbital prefrontal cortex, and left angular gyrus, was involved in the neural interaction between depth and attentional orienting (Chen et al., 2012). Other studies also found an attentional advantage of near-space stimuli over far-space stimuli (Gawryszewski, Riggio, Rizzolatti, & Umiltá, 1987; Plewan & Rinkenauer, 2017), which might be due to the presence of more neural circuits controlling attention in peripersonal space (Rizzolatti & Camarda, 1987). 
As far as we know, the earliest IOR study in 3D space was carried out in 2003, when Theeuwes and Pratt (2003) found that as long as the x and y coordinates of a cued location are the same, the response to that location slows down irrespective of differences in the depth coordinate z; that is, IOR is not depth specific. Bourke et al. (2006) suggested that this result was due to masking in the experiment. They modified the procedure to exclude the effect of masking and found both location-based and object-based IOR. Casagrande et al. (2012) used a perspective projection rule with higher ecological validity and also found location-based IOR at depth locations in 3D space; however, they did not explore differences in the amount of IOR across depths. In a more recent study, Wang, Liu, Chen, and Zhang (2016) investigated IOR at different depths, presenting a modified cue-target paradigm to participants using 3D displays and 3D shutter glasses. The results showed that location-based IOR in 3D space was not entirely depth-blind: location-based IOR appeared only in near space. Limited by the experimental paradigm, it was not possible to conclude whether there was object-based IOR in 3D space. 
To explore object-based IOR in 3D space, we used virtual reality techniques, modified the typical double-rectangle paradigm and the 3D spatial cue-target paradigm, and extended them to 3D space. We established the mapping of stereo scenes onto binocular stereo visual images through the perspective projection matrix and viewport transformation matrix of the modeling software, and presented stereo images to participants using stereo displays and shuttered 3D glasses. Compared to the orthogonal projection used in previous studies (Bourke et al., 2006; Theeuwes & Pratt, 2003), we used perspective projection, which has higher ecological validity, as reported by Casagrande et al. (2012), and isolated the object component of IOR to examine the presence of object-based IOR in 3D space. The experiments are described in two sections. In section 1 (experiments 1 and 2), we investigated object-based IOR under simple drawing representations. In experiment 1, we examined whether there is object-based IOR in the depth dimension by manipulating targets with different cue validities in near and far space. To exclude the effects of confounding factors, such as retinal projection size, we removed binocular parallax in experiment 2. The difference in attentional resources between far and near spaces may lead to faster attentional shifting from far space to near space (Wang et al., 2016), and attentional spreading theory holds that attention shifts more effectively within an object (Egly et al., 1994; Richard et al., 2008). This suggests that attention may spread more easily from far to near within an object. Therefore, we hypothesized that the object-based IOR would be larger in near space in experiment 1 and that no difference in object-based IOR would be observed between the upper and lower visual fields (corresponding to near and far spaces in experiment 1) in experiment 2. 
In our real lives, differences exist in how real objects are stored, represented, and processed in 3D space compared to simple drawings (Collegio, Nah, Scotti, & Shomstein, 2019; Gerhard, Culham, & Schwarzer, 2016; Gomez, Skiba, & Snow, 2018; Korisky & Mudrik, 2021; Marini, Breeding, & Snow, 2019; Snow et al., 2011; Snow, Skiba, Coleman, & Berryhill, 2014). Participants show a greater attentional bias toward real objects (Gerhard et al., 2016; Gomez et al., 2018; Snow et al., 2014), which may be because real objects provide richer visual information and because the actions they afford activate motor programs (Gerhard et al., 2016; Gomez et al., 2018; Korisky & Mudrik, 2021). Compared to simple drawings, real objects carry both high-level semantic properties and low-level feature properties (Malcolm & Shomstein, 2015), which independently guide the allocation of attention to the object (Hu, Liu, Song, Wang, & Zhao, 2020). Real objects may be more easily perceived as two parts of a single object in terms of both perceptual and semantic properties; this high-level information is derived in the parahippocampal and retrosplenial cortices (Livne & Bar, 2016) and affects the spatially organized attentional priority map in the inferior parietal sulcus (IPS; Malcolm, Rattinger, & Shomstein, 2016; Sheremata & Silver, 2015). 
Because section 1 (experiments 1 and 2) examined object-based IOR under simple drawing representations and excluded confounding variables, we examined object-based IOR under real object representations in section 2 (experiments 3, 4, and 5). Experiment 3 altered the object representations to examine object-based IOR under real object representations. Previous real-object studies have supported attentional prioritization theory and suggest that the effect may be modulated by object similarity (Hu et al., 2020; Song et al., 2020). We therefore hypothesized that object-based IOR would not be observed in either far or near space in experiment 3. Experiment 4 increased object salience and strengthened cue-target-object relations to test the attentional spreading account. Although object salience and cue-target-object relations can change object-based IOR by influencing attentional spreading (Jordan & Tipper, 1999), previous studies on real-object attention have supported attentional prioritization rather than attentional spreading (Hu et al., 2020; Song et al., 2020). Thus, we hypothesized that experiment 4 would replicate the results of experiment 3. Experiment 5 reduced object similarity to test the attentional prioritization account. When two objects no longer share the same features, attention is no longer preferentially distributed within the uncued object, and attentional prioritization shows the classic pattern, that is, the highest attentional priority at the cued location, followed by the cued object and then the uncued object. We hypothesized that object-based IOR would appear in both far and near spaces in experiment 5. 
Section 1: Object-based IOR under simple drawing representations
Experiment 1
Methods
Participants
G*Power 3.1.9.2 (Faul, Erdfelder, Lang, & Buchner, 2007) was used to estimate the sample size for a 2 × 3 two-way repeated-measures analysis of variance (ANOVA; estimated effect size f = 0.25, alpha = 0.05, power = 0.95), suggesting a sample of 28 participants. Thirty-three subjects participated in experiment 1; one was excluded due to difficulty focusing attention, leaving 32 valid participants (16 women and 16 men). Participants were free to report their gender, which the experimenter recorded. Participants were between 18 and 26 years old (M = 20.06, standard deviation = 1.93). All participants were right-handed and had normal or corrected-to-normal vision. None had a history of neurological or psychiatric disorders, and each provided written informed consent before participating. No other information (such as race or ethnicity) was collected. After the experiment, all participants were paid. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Soochow University, China. 
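For readers without G*Power, the underlying computation (power of an F test via the noncentral F distribution, with noncentrality λ = f²N) can be sketched in Python. This is a minimal sketch of the simple one-way ANOVA case only; G*Power's repeated-measures procedure additionally incorporates the number of measurements and their assumed correlation, which is why its repeated-measures estimate (28 participants here) is far smaller than the one-way total-N shown below. The function names are illustrative, not part of any published tool.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n_total, k_groups, alpha=0.05):
    """Power of a one-way ANOVA F test for Cohen's f at total sample n_total."""
    lam = f_effect ** 2 * n_total            # noncentrality parameter, lambda = f^2 * N
    df1, df2 = k_groups - 1, n_total - k_groups
    crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F value under H0
    return 1.0 - ncf.cdf(crit, df1, df2, lam)

def min_n(f_effect, k_groups, alpha=0.05, target=0.95):
    """Smallest total N that reaches the target power."""
    n = k_groups + 2
    while anova_power(f_effect, n, k_groups, alpha) < target:
        n += 1
    return n
```

For f = 0.25, alpha = 0.05, and power = 0.95 with three groups, this one-way calculation lands near the conventional G*Power value of about 250 total participants, illustrating how strongly the repeated-measures design reduces the required sample.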
Apparatus, stimuli, and experimental setup
All stimuli in experiment 1 were presented on a 27-inch screen. The ASUS 3D monitor was driven by an NVIDIA GeForce GT730 graphics card. NVIDIA 3D shutter glasses were synchronized with the display to provide a stereoscopic image with a resolution of 960 (horizontal) × 540 (vertical) and a refresh rate of 60 Hz per eye. All 3D objects were presented on a black background, and stimulus materials were generated using Adobe Photoshop CC. The experimental procedure was written using MATLAB Psychtoolbox 3.0.16. 
In experiment 1, white (red, green, blue [RGB] = 255 255 255) rectangles measured 21.2 cm × 4.6 cm (to describe the 3D experiments clearly, we used cm instead of visual angle), and the double rectangles were 16.6 cm apart on the same side. The diameter of the white central fixation point was 1.0 cm. The white circular central re-orienting cue had an inner diameter of 1.5 cm and an outer diameter of 2.6 cm. The gray (RGB = 190 190 190) square peripheral cue had a side length of 3.0 cm. The diameter of the red (RGB = 255 0 0) circular target was 1.7 cm. The double rectangles were placed at an inclination of 45 degrees, and the inclination direction was balanced between blocks. In addition, the distances from the cue and the target to the central fixation point were the same. To avoid occlusion while producing a better stereoscopic effect, the angle between the plane of the double rectangles and the plane of the screen was 63.5 degrees (see Figure 1). The participant was 100 cm away from the screen; the proximal end of the double rectangles (near space) was perceptually 86.6 cm away from the participant (binocular parallax < 0), and the distal end (far space) was perceptually 113.4 cm away (binocular parallax > 0). 
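For readers unfamiliar with stereoscopic presentation, the mapping between perceived distance and on-screen disparity follows from similar triangles. The Python sketch below illustrates the geometry only; the 6.3-cm interpupillary distance and the simple pinhole model are assumptions for illustration, not values reported in the study.

```python
def screen_disparity(z_cm, screen_cm=100.0, ipd_cm=6.3):
    """Horizontal on-screen disparity (cm) that places a fused point at
    perceived distance z_cm from the observer, for a screen at screen_cm.
    Negative values are crossed disparity (point in front of the screen),
    matching the sign convention in the text (near end < 0, far end > 0)."""
    return ipd_cm * (z_cm - screen_cm) / z_cm

near = screen_disparity(86.6)   # ~ -0.97 cm (crossed, in front of the screen)
far = screen_disparity(113.4)   # ~ +0.74 cm (uncrossed, behind the screen)
```

Note that the two perceived depths, although symmetric in distance from the screen (±13.4 cm), require asymmetric disparities, because disparity scales inversely with the point's distance from the observer.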
Figure 1.
 
Diagram of a double rectangle. (A) Left view. (B) Top view.
Experimental procedures and design
The experiment used a 4 (target position: far space versus near space versus left side versus right side) × 4 (cue validity: valid versus invalid within versus invalid between versus invalid diagonal) within-group design. The independent variables were target position and cue validity, and the dependent variable was the participant's reaction time (the low difficulty detection task resulted in at least 99% accuracy for participants, so we did not analyze the accuracy). 
Studies show that top-down factors, such as target-location probability, can have an effect on object-based attention (Zhao, Wang, Liu, Zhao, & Liu, 2015). Thus, we increased the trials in which the target appeared on the left and right sides to balance the target-location probability, although it was not of theoretical importance. In addition, in the invalid diagonal condition, the target appeared on the diagonal of the cue. This condition confounds the spatial distance and object-based effects and is not of theoretical importance. Therefore, we excluded these balancing trials from the statistical analysis (see Figure 2). 
Figure 2.
 
Left panel: Front view of the exemplary trial in the experimental paradigm. All participants reported the same stimulus size in far and near spaces given the size–distance constancy effect (Boring, 1964). The gray figures are the cues, and the red figures are the targets. Far indicates that the target appears in the far space. Near indicates that the target appears in the near space. A valid condition is noted when the cue and target are in the same location; an invalid within condition is noted when the cue and target are in the same object but in different locations; an invalid between condition is noted when the cue and target are in different objects but their distances are equal to the distance of the invalid within condition; an invalid diagonal condition is noted when the cue and target are in diagonal locations. Right panel: (A) The target appears on the left side. (B) The target appears on the right side. (C) The target appears in the near space under the invalid diagonal condition. (D) The target appears in far space under the invalid diagonal condition.
Before the experiment, we first assessed participants’ stereo vision using Stereoscopic Test Charts (Yan & Zheng, 1985), and all of them passed the test. Participants wore NVIDIA 3D shutter glasses and used NVIDIA's official 3D vision test program, and all participants were able to report that they saw a stereoscopic image. Each participant was then given a stereoscopic projection of the stimulus material to acclimate them to the stereoscopic sensation. The experiment began when the participants could accurately report that the top end of the double rectangles was recessed into the screen, the bottom end was protruding from the screen, and the remaining two ends and the central fixation point were at the same depth. 
The experimental procedure is shown in Figure 2. At the beginning of each trial, the screen showed the double rectangles and a fixation point for 1500 ms, followed by a gray square (peripheral cue) at a random end of the double rectangles for 200 ms. Then an interstimulus interval (ISI) of 50 ms occurred. Afterward, a white circle (central re-orienting cue) appeared around the fixation point for 100 ms. Then a 750-ms or 850-ms ISI (randomized to prevent an attentional temporal set) occurred, and finally a red circle (target) appeared at a random end of the double rectangles for 2500 ms. The intertrial interval (ITI) was 1500 ms. Participants pressed a key as quickly and accurately as possible to detect the target within 2500 ms. The cue validity was 25%, and participants were asked to keep their gaze on the fixation point (binocular parallax = 0). Each participant completed 832 trials, including 64 catch trials. The experiment lasted approximately 100 minutes. Before the formal experiment, participants completed 20 practice trials to become familiar with the task. The formal experiment was divided into eight blocks of 104 trials each. Participants rested for 1 minute after each block. 
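As a consistency check on the reported session length, the event durations above can simply be summed. This minimal Python sketch assumes the mean 800-ms ISI, the worst case in which the full 2500-ms response window elapses on every trial, and a 1-minute rest after each of the eight blocks:

```python
# Durations (ms) for each event in a single trial, as described above.
TRIAL_EVENTS = {
    "rectangles_and_fixation": 1500,
    "peripheral_cue": 200,
    "isi_1": 50,
    "central_reorienting_cue": 100,
    "isi_2": 800,            # randomized 750 or 850 ms; 800 is the mean
    "target_window": 2500,   # upper bound: the full response window elapses
    "iti": 1500,
}

trial_ms = sum(TRIAL_EVENTS.values())                 # 6650 ms per trial
n_trials = 832                                        # including 64 catch trials
rest_ms = 8 * 60 * 1000                               # 1-minute rest after each of 8 blocks
total_min = (n_trials * trial_ms + rest_ms) / 60_000  # session length in minutes
```

With these assumptions the total comes to roughly 100 minutes, in line with the duration reported above; actual sessions would run somewhat shorter because trials end at the keypress rather than at the end of the response window.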
Results and discussion
The data were preprocessed using MATLAB R2016a, and the processed data were analyzed in SPSS 22. The low-difficulty detection task resulted in at least 99% accuracy (99.87% ± 0.13%) for participants (see more details in the Supplementary material), so we did not analyze accuracy. Bonferroni correction was applied to multiple comparisons involving three or more levels. 
Reaction time
Trials with incorrect responses, catch trials, trials with reaction times (RTs) less than 150 ms or greater than 1000 ms, and trials with RTs exceeding three standard deviations were discarded. The invalid diagonal condition and the left- and right-side target positions were also discarded, as they served only to balance the design at 25% cue validity (see more details in the Supplementary material). Mean RTs of correctly responded trials were then calculated and submitted to a 2 (target position: far space versus near space) × 3 (cue validity: valid versus invalid within versus invalid between) repeated-measures ANOVA (see Figure 3). 
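The RT exclusion rule can be sketched as follows. This is an illustrative Python sketch (the authors preprocessed in MATLAB), and the ordering, applying the fixed 150-1000 ms window before the 3-SD cut, is an assumption not specified in the text:

```python
import statistics

def clean_rts(rts, low=150, high=1000, n_sd=3):
    """Drop RTs (ms) outside [low, high], then drop RTs more than n_sd
    standard deviations from the mean of the remaining trials."""
    kept = [rt for rt in rts if low <= rt <= high]
    m, sd = statistics.mean(kept), statistics.stdev(kept)
    return [rt for rt in kept if abs(rt - m) <= n_sd * sd]
```

The two-stage rule matters: an anticipatory 100-ms keypress is removed by the fixed window before it can inflate the sample SD and mask slower outliers.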
Figure 3.
 
(A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR means the combination of location- and object-based IOR. Object IOR means the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
The results showed a significant main effect of target position (F(1, 31) = 48.53, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.61) with longer reaction times for far space targets (332 ms) than near space targets (320 ms). The main effect of cue validity was significant (F(1.650, 51.164) = 66.89, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.68), and further multiple comparisons showed that the reaction time for the valid condition (336 ms) was significantly longer than that for the invalid within condition (324 ms) (t(31) = 6.19, pbonf < 0.001, Cohen's d = 1.09, 95% confidence interval [CI] = 6.65 to 15.86). The reaction time for the valid condition (336 ms) was significantly longer than that for the invalid between condition (318 ms; t(31) = 12.30, pbonf < 0.001, Cohen's d = 2.18, 95% CI = 13.66 to 20.74), and significantly longer reaction times were noted for the invalid within condition (324 ms) than for the invalid between condition (318 ms) (t(31) = 4.73, pbonf < 0.001, Cohen's d = 0.84, 95% CI = 2.76 to 9.13). The interaction between target position and cue validity was significant (F(2, 62) = 3.83, p = 0.027, \({\rm{\eta }}_{\rm{p}}^2\) = 0.11). Given differences in attentional resources and attentional orienting/re-orienting in far and near spaces, we conducted one-way repeated-measures ANOVA for reaction times in far and near spaces separately. 
For targets appearing in far space, the main effect of cue validity was significant (F(2, 62) = 44.34, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.59), and multiple comparisons indicated that the reaction time for the valid condition (343 ms) was significantly longer than that for the invalid within condition (328 ms; t(31) = 6.51, pbonf < 0.001, Cohen's d = 1.15, 95% CI = 9.16 to 20.82). The reaction time for the valid condition (343 ms) was significantly longer than that for the invalid between condition (325 ms; t(31) = 8.69, pbonf < 0.001, Cohen's d = 1.54, 95% CI = 12.70 to 23.15). The difference in reaction times between the invalid within condition (328 ms) and the invalid between condition (325 ms) was not significant (t(31) = 1.71, pbonf = 0.291). 
For targets appearing in near space, the main effect of cue validity was significant (F(2, 62) = 30.29, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.49), and multiple comparisons indicated that the reaction time for the valid condition (328 ms) was significantly longer than that for the invalid within condition (320 ms; t(31) = 3.19, pbonf = 0.010, Cohen's d = 0.56, 95% CI = 1.55 to 13.49). The reaction time for the valid condition (328 ms) was significantly longer than that for the invalid between condition (312 ms; t(31) = 8.19, pbonf < 0.001, Cohen's d = 1.45, 95% CI = 11.38 to 21.58), and the reaction time was significantly longer for the invalid within condition (320 ms) than for the invalid between condition (312 ms; t(31) = 4.55, pbonf < 0.001, Cohen's d = 0.81, 95% CI = 3.98 to 13.93). 
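The Bonferroni-corrected pairwise comparisons reported above can be sketched as follows. This is an illustrative Python sketch using scipy (the original analysis was run in SPSS); the function name and the per-participant condition-mean arrays in the usage example are hypothetical.

```python
from itertools import combinations
from scipy.stats import ttest_rel

def pairwise_bonferroni(conditions):
    """Paired t-tests between every pair of named conditions (dict of
    equal-length per-participant RT lists). Returns, per pair, the t value
    and the Bonferroni-corrected p (raw p times the number of tests,
    capped at 1)."""
    pairs = list(combinations(conditions, 2))
    results = {}
    for a, b in pairs:
        t, p = ttest_rel(conditions[a], conditions[b])
        results[(a, b)] = (t, min(p * len(pairs), 1.0))
    return results
```

With three cue-validity levels there are three pairwise tests, so each raw p-value is multiplied by 3, which is the correction applied to the pbonf values reported above.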
IOR effect size
To further investigate the relationship between target position and IOR, the IOR effect size was calculated and submitted to a 2 (target position: far space versus near space) × 2 (IOR: space versus object) repeated-measures ANOVA (see Figure 3). For each depth (far and near space), the space IOR effect size was computed as the RT in the valid condition minus the RT in the invalid between condition, and the object IOR effect size was computed as the RT in the invalid within condition minus the RT in the invalid between condition. See more details of location-based IOR in the Supplementary material. 
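The effect-size definitions reduce to two subtractions per depth. As an illustrative check (in Python; the authors' analysis used MATLAB and SPSS, and the function name is hypothetical), plugging in the far-space mean RTs reported above reproduces the far-space effect sizes:

```python
def ior_effects(rt_valid, rt_within, rt_between):
    """Space IOR = valid - between; object IOR = within - between (ms)."""
    return {"space": rt_valid - rt_between, "object": rt_within - rt_between}

# Far-space condition means from the results above (343, 328, and 325 ms)
far = ior_effects(rt_valid=343, rt_within=328, rt_between=325)
# far["space"] == 18 and far["object"] == 3, matching the far-space
# space IOR (18 ms) and object IOR (3 ms) reported in the text
```

The object IOR isolates the purely object-based component: both the invalid within and invalid between targets are uncued and equidistant from the cue, so any RT difference between them must come from sharing (or not sharing) an object with the cue.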
The results showed that the main effect of target position was not significant (F(1, 31) = 0.89, p = 0.354), the main effect of IOR was significant (F(1, 31) = 38.28, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.55), and the space IOR (17 ms) was larger than the object IOR (6 ms). The interaction between target position and IOR was significant (F(1, 31) = 6.56, p = 0.016, \({\rm{\eta }}_{\rm{p}}^2\) = 0.18), and further simple effects analysis showed that the difference in space IOR between the far space (18 ms) and near space (16 ms) was not significant (F(1,31) = 0.24, p = 0.630), and the object IOR was larger in near space (9 ms) than in far space (3 ms; F(1,31) = 4.97, p = 0.033, \({\rm{\eta }}_{\rm{p}}^2\) = 0.14). 
The present study found an object-based IOR only in near space, which may be due to differences in attentional spreading in depth. Previous studies have shown that object-based representations encode depth information in viewer-centered coordinates and that attention can spread across such representations (Reppa et al., 2010). We believe that the greater attentional resources in near space may make attentional spreading from far to near space easier, whereas the fewer attentional resources in far space (Andersen, 1990; Andersen & Kramer, 1993; Liu et al., 2021; Plewan & Rinkenauer, 2020) may make attentional spreading from near to far space more difficult. This leads to a greater object-based IOR in near space than in far space. 
Experiment 2
The results of experiment 1 may, on the one hand, reflect a more concentrated distribution of attentional resources in near space. On the other hand, 2D factors, such as the larger retinal projection size of near-space stimuli, the dominance effect of the lower visual field (He, Cavanagh, & Intriligator, 1996), and the larger retinal eccentricity in near space, may also have influenced the IOR in both far and near spaces. Adjusting the retinal projection size of stimuli in far and near spaces to subtend the same visual angle may seem to solve the problem; however, due to the size-distance constancy effect (Boring, 1964), the stimuli in far space would then appear perceptually larger than normal. In addition, the processing of the upper and lower visual fields has ecological importance (Previc, 1990). Swapping the correspondence between the far and near spaces and the upper and lower visual fields may seem to solve the dominance effect of the lower visual field, but it is not in accordance with the ecological view (objects in near space usually appear in the lower visual field). Therefore, experiment 2 removed binocular parallax and examined whether these 2D factors affect the object-based IOR. 
Within 2 m, observers perceive depth primarily on the basis of occlusion and binocular parallax (Cutting & Vishton, 1995; Dong, Chen, Zhang, & Zhang, 2021). Therefore, in experiment 2, we hypothesized that, with binocular parallax removed, participants would have difficulty perceiving depth; as a result, no difference in object-based IOR between the upper and lower visual fields should be observed. 
Methods
Participants
Thirty-two new participants (16 women and 16 men) took part in experiment 2. The participants were between 18 and 24 years old (M = 20.50, standard deviation = 2.00). All other details were the same as in experiment 1. 
Apparatus, stimuli, and experimental setup
The stereo display function of the ASUS 3D monitor was turned off, and participants no longer wore 3D glasses and could no longer perceive the stereo effect. The other apparatus, stimuli, and experimental setup were the same as described in experiment 1. 
Experimental procedures and design
Given that binocular parallax was removed in experiment 2, there was no longer a stereo effect, and far and near spaces were replaced with upper and lower visual fields. Other experimental procedures and designs were the same as those in experiment 1. 
Results and discussion
Because the detection task was easy, all participants achieved at least 99% accuracy (99.88% ± 0.11%), so accuracy was not analyzed further. Because some of the variables did not meet the assumption of normal distribution, we log-transformed the data using the function f(x) = ln (x + 90) before analysis, such that the absolute value of the z-score of skewness did not exceed 1.96, satisfying the normality criterion (Field, 2009). 
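One way this normality check could be implemented (a sketch, not the authors' analysis code; the skewness measure is the simple sample skewness, and SE ≈ √(6/N) is the common approximation cited by Field, 2009; the RT data are invented):

```python
import math

def skewness(data):
    """Simple sample skewness: m3 / m2**1.5 (population moments)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

def skew_z(data):
    """z-score of skewness, using the approximation SE = sqrt(6/N)."""
    return skewness(data) / math.sqrt(6 / len(data))

# Hypothetical right-skewed RT distribution (ms).
rts = [250] * 50 + [300] * 30 + [400] * 15 + [600] * 5

# The shifted log transform used in the paper: f(x) = ln(x + 90).
transformed = [math.log(x + 90) for x in rts]

# The transform compresses the right tail and so reduces skewness; in the
# study the transformed |z| fell within +/-1.96, the normality criterion.
print(abs(skew_z(transformed)) < abs(skew_z(rts)))  # -> True
```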
Reaction time
Trials with incorrect responses, catch trials, trials with RTs less than 150 ms or greater than 1000 ms, and trials with RTs exceeding three standard deviations from the mean were discarded. Trials from the invalid diagonal condition and from the conditions to the left and right of the target position, which served only to balance the design at 25% cue validity, were also excluded from analysis. Mean RTs of the remaining correct trials were then calculated and submitted to a 2 (target position: upper visual field versus lower visual field) × 3 (cue validity: valid versus invalid within versus invalid between) repeated-measures ANOVA (see Figure 4). 
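These exclusion rules can be sketched as a filter over trial records (hypothetical records and field names, not the study's data structures; the per-participant 3-SD pass is noted but not shown):

```python
# Hypothetical trial records: (rt_ms, correct, is_catch, condition).
trials = [
    (320, True, False, "valid"),
    (95, True, False, "valid"),              # RT < 150 ms: excluded
    (1200, True, False, "invalid_within"),   # RT > 1000 ms: excluded
    (350, False, False, "invalid_between"),  # incorrect response: excluded
    (400, True, True, "valid"),              # catch trial: excluded
    (310, True, False, "invalid_diagonal"),  # balancing condition: excluded
    (330, True, False, "invalid_between"),
]

# Only these conditions enter the 2 x 3 ANOVA.
ANALYZED = {"valid", "invalid_within", "invalid_between"}

kept = [
    rt
    for rt, correct, catch, cond in trials
    if correct and not catch and 150 <= rt <= 1000 and cond in ANALYZED
]
# A further pass would drop RTs beyond 3 SDs before averaging per condition.
print(kept)  # -> [320, 330]
```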
Figure 4.
 
(A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appeared in the upper and lower visual fields, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
The results showed a significant main effect of target position (F(1, 31) = 25.31, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.45) with longer reaction times for upper targets (304 ms) than lower (297 ms). The main effect of cue validity was significant (F(2, 62) = 50.10, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.62), and further multiple comparisons showed that the reaction time for the valid condition (309 ms) was significantly longer than that for the invalid within condition (299 ms; t(31) = 6.01, pbonf < 0.001, Cohen's d = 0.27, 95% CI = 0.01 to 0.04). The reaction time for the valid condition (309 ms) was significantly longer than that for the invalid between condition (294 ms; t(31) = 9.67, pbonf < 0.001, Cohen's d = 0.39, 95% CI = 0.03 to 0.05), and significantly longer reaction times were noted for the invalid within condition (299 ms) than for the invalid between condition (294 ms; t(31) = 3.61, pbonf = 0.003, Cohen's d = 0.12, 95% CI = 0.00 to 0.02). The interaction between target position and cue validity was significant (F(2, 62) = 10.13, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.25). Given differences in upper and lower visual field processing, we conducted one-way repeated-measures ANOVA for reaction times in upper and lower visual fields separately. 
For targets appearing in the upper visual field, the main effect of cue validity was significant (F(2, 62) = 60.72, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.66), and multiple comparisons indicated that the reaction time for the valid condition (315 ms) was significantly longer than that for the invalid within condition (300 ms; t(31) = 7.37, pbonf < 0.001, Cohen's d = 0.39, 95% CI = 0.02 to 0.05). The reaction time for the valid condition (315 ms) was significantly longer than that for the invalid between condition (296 ms; t(31) = 10.14, pbonf < 0.001, Cohen's d = 0.51, 95% CI = 0.04 to 0.06), and the reaction time was significantly longer for the invalid within condition (300 ms) compared with the invalid between condition (296 ms; t(31) = 2.90, pbonf = 0.020, Cohen's d = 0.12, 95% CI = 0.00 to 0.02). 
For targets appearing in the lower visual field, the main effect of cue validity was significant (F(2, 62) = 11.44, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.27), and multiple comparisons indicated that the difference in reaction times between the valid condition (302 ms) and the invalid within condition (297 ms) was not significant (t(31) = 2.16, pbonf = 0.116). The reaction time for the valid condition (302 ms) was significantly longer than that for the invalid between condition (293 ms; t(31) = 4.84, pbonf < 0.001, Cohen's d = 0.27, 95% CI = 0.01 to 0.04), and the reaction time was significantly longer for the invalid within condition (297 ms) compared with the invalid between condition (293 ms; t(31) = 2.77, pbonf = 0.028, Cohen's d = 0.13, 95% CI = 0.00 to 0.02). 
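These follow-up one-way repeated-measures ANOVAs amount to partitioning the RT variance into condition, subject, and residual components. A self-contained sketch with hypothetical data (four participants rather than 32, and invented RTs):

```python
# Hypothetical mean RTs (ms): 4 participants x 3 cue-validity conditions.
data = {
    "S1": {"valid": 310, "invalid_within": 300, "invalid_between": 295},
    "S2": {"valid": 320, "invalid_within": 305, "invalid_between": 300},
    "S3": {"valid": 305, "invalid_within": 295, "invalid_between": 290},
    "S4": {"valid": 325, "invalid_within": 312, "invalid_between": 303},
}

subjects = list(data)
conds = ["valid", "invalid_within", "invalid_between"]
n, k = len(subjects), len(conds)

grand = sum(data[s][c] for s in subjects for c in conds) / (n * k)
cond_mean = {c: sum(data[s][c] for s in subjects) / n for c in conds}
subj_mean = {s: sum(data[s][c] for c in conds) / k for s in subjects}

ss_cond = n * sum((cond_mean[c] - grand) ** 2 for c in conds)
ss_subj = k * sum((subj_mean[s] - grand) ** 2 for s in subjects)
ss_total = sum((data[s][c] - grand) ** 2 for s in subjects for c in conds)
ss_error = ss_total - ss_cond - ss_subj  # subject x condition residual

df_cond, df_error = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(df_cond, df_error, round(F, 2))  # -> 2 6 88.94
```

With the 32 participants of the study, the same partition yields the reported F(2, 62) statistics.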
IOR effect size
The IOR effect size was calculated and submitted to a 2 (target position: upper visual field versus lower visual field) × 2 (IOR: space versus object) repeated-measures ANOVA (see Figure 4). Space IOR effect size in upper visual field = RT for valid condition (upper visual field target) minus RT for invalid between condition (upper visual field target). Object IOR effect size in upper visual field = RT for invalid within condition (upper visual field target) minus RT for invalid between condition (upper visual field target). Space IOR effect size in lower visual field = RT for valid condition (lower visual field target) minus RT for invalid between condition (lower visual field target). Object IOR effect size in lower visual field = RT for invalid within condition (lower visual field target) minus RT for invalid between condition (lower visual field target). 
The results showed that the main effect of target position was significant (F(1, 31) = 7.38, p = 0.011, \({\rm{\eta }}_{\rm{p}}^2\) = 0.19), and the IOR in the upper visual field (12 ms) was significantly larger than that in the lower visual field (7 ms). The main effect of IOR was significant (F(1, 31) = 33.71, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.52), and the space IOR (15 ms) was larger than the object IOR (4 ms). The interaction between target position and IOR was significant (F(1, 31) = 10.57, p = 0.003, \({\rm{\eta }}_{\rm{p}}^2\) = 0.25), and further simple effects analysis showed that space IOR was larger in the upper visual field (19 ms) compared with the lower visual field (10 ms; F(1, 31) = 15.09, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.33). The difference in object IOR between the upper visual field (4 ms) and lower visual field (4 ms) was not significant (F(1, 31) = 0.0002, p = 0.988). 
In contrast to experiment 1, after binocular parallax was removed in experiment 2, the object-based IOR no longer differed significantly between the upper and lower visual fields; that is, we found no influence of 2D factors on the object-based IOR in 3D space. Combining experiments 1 and 2, we conclude that the larger object-based IOR in near space than in far space is due to binocular parallax. 
Section 2: Object-based IOR under real object representations
Experiment 3
Previous studies have typically adopted simple drawings rather than 3D real objects. However, real objects contain complex perceptual and conceptual information, in contrast to meaningless simple drawings for which participants have no background knowledge or expectations (Brady et al., 2016). Real objects are also stored, represented, and processed differently from simple drawings in 3D space (Collegio et al., 2019; Gerhard et al., 2016; Gomez et al., 2018; Korisky & Mudrik, 2021; Marini et al., 2019; Snow et al., 2011; Snow et al., 2014). In addition, other studies suggest that object representation interacts with featural and semantic factors to influence attentional distribution (Malcolm & Shomstein, 2015), which may indicate that the feature information of real objects, unlike that of simple drawings, influences the prioritization of attentional resources (attentional prioritization). Furthermore, attentional resources in 3D space are distributed in a viewer-centered manner (Andersen, 1990; Andersen & Kramer, 1993; Liu et al., 2021; Plewan & Rinkenauer, 2020), and objects closer to the participant have greater behavioral urgency (Franconeri & Simons, 2003; Plewan & Rinkenauer, 2021), which may affect participants' attentional distribution over a 3D object. Therefore, in experiment 3, we used 3D real objects, which provide richer information than simple drawings, to further explore the object-based IOR under real object representations in 3D space. 
We suggest that object similarity modulates attentional prioritization (Hu et al., 2020). If two objects are very similar, more attentional resources may be distributed to the similar features on the uncued object rather than to the other features on the cued object. Therefore, we hypothesized that, because of attentional prioritization under real object representations, we might not find an object-based IOR in either far or near space in experiment 3. 
Methods
Participants
Thirty-two new participants (29 women and 3 men) took part in experiment 3. The participants were between 17 and 26 years old (M = 20.84, standard deviation = 2.20). All other details were the same as in experiment 1. 
Apparatus, stimuli, and experimental setup
We used Unreal Engine 4.26 to build a virtual 3D scene with two benches in place of the rectangles. The green bench (rendered with the WoodRough_a_Mat and BenchLeather_a_Mat materials from the Edith Finch: House and Common Areas environment in Unreal Engine 4.26) measured 21.2 cm (length) × 4.6 cm (width) × 4.5 cm (height), the white (RGB = 255 255 255) lamp measured 1.6 cm (diameter) × 1.8 cm (height), and the black (RGB = 0 0 0) table under the lamp measured 2 cm (diameter) × 3.1 cm (height). To achieve greater ecological validity, the main stimuli were adjusted as follows (see Figure 5): (1) the central fixation point was changed from the original white point to a lamp; (2) the central re-orienting cue was changed from the white circle to white light emitted from the lamp; (3) the peripheral cue was changed from a gray square to white light shining on the bench; and (4) the target was changed from a red circle to red light shining on the bench. The other apparatus, stimuli, and experimental setup were the same as those described in experiment 1. 
Figure 5.
 
Left panel: Front view of the exemplary trial in the experimental paradigm. All participants reported the same stimulus size in far and near spaces because of the size–distance constancy effect (Boring, 1964). The white light on the bench is the cue, and the red light is the target. Far indicates that the target appears in the far space. Near indicates that the target appears in the near space. A valid condition is noted when the cue and target are in the same location; an invalid within condition is noted when the cue and target are in the same object but in different locations; an invalid between condition is noted when the cue and target are in different objects but their distances are equal to the distance of the invalid within condition; an invalid diagonal condition is noted when the cue and target are in diagonal locations. Right panel: (A) Two benches replace rectangles. (B) The central reorienting cue is changed to white light emitted from the lamp. (C) The peripheral cue is changed to white light shining on the bench. (D) The target is changed to red light shining on the bench.
Experimental procedures and design
The experimental procedures and design were the same as those described in experiment 1. A diagram of experiment 3 is shown in Figure 5. 
Results and discussion
Because the detection task was easy, all participants achieved at least 99% accuracy (99.89% ± 0.13%), so accuracy was not analyzed further. Because some of the variables did not meet the assumption of normal distribution, we log-transformed the data using the function f(x) = ln (x + 90) before analysis, such that the absolute value of the z-score of skewness did not exceed 1.96, satisfying the normality criterion (Field, 2009). 
Reaction time
Trials with incorrect responses, catch trials, trials with RTs less than 150 ms or greater than 1000 ms, and trials with RTs exceeding three standard deviations from the mean were discarded. Trials from the invalid diagonal condition and from the conditions to the left and right of the target position, which served only to balance the design at 25% cue validity, were also excluded from analysis. Mean RTs of the remaining correct trials were then calculated and submitted to a 2 (target position: far space versus near space) × 3 (cue validity: valid versus invalid within versus invalid between) repeated-measures ANOVA (see Figure 6). 
Figure 6.
 
(A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR indicates the object-based IOR. The error bars correspond to the standard error of the mean. **p < 0.01 and ***p < 0.001.
The results showed a significant main effect of target position (F(1, 31) = 40.25, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.57) with longer reaction times for far space targets (344 ms) than near space targets (334 ms). The main effect of cue validity was significant (F(2, 62) = 65.32, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.68), and further multiple comparisons showed that the reaction time for the valid condition (350 ms) was significantly longer than that for the invalid within condition (336 ms; t(31) = 8.45, pbonf < 0.001, Cohen's d = 0.28, 95% CI = 0.02 to 0.04). The reaction time for the valid condition (350 ms) was significantly longer than that for the invalid between condition (331 ms; t(31) = 9.80, pbonf < 0.001, Cohen's d = 0.37, 95% CI = 0.03 to 0.06), and significantly longer reaction times were noted for the invalid within condition (336 ms) than for the invalid between condition (331 ms; t(31) = 2.97, pbonf = 0.017, Cohen's d = 0.09, 95% CI = 0.00 to 0.02). The interaction between target position and cue validity was significant (F(2, 62) = 4.07, p = 0.022, \({\rm{\eta }}_{\rm{p}}^2\) = 0.12). Given differences in attentional resources and attentional orienting/reorienting in far and near spaces, we conducted one-way repeated-measures ANOVA for reaction times in far and near spaces separately. 
For targets appearing in far space, the main effect of cue validity was significant (F(1.690, 52.395) = 55.74, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.64), and multiple comparisons indicated that the reaction time for the valid condition (358 ms) was significantly longer than that for the invalid within condition (340 ms; t(31) = 6.69, pbonf < 0.001, Cohen's d = 0.34, 95% CI = 0.03 to 0.06). The reaction time for the valid condition (358 ms) was significantly longer than that for the invalid between condition (334 ms; t(31) = 9.27, pbonf < 0.001, Cohen's d = 0.47, 95% CI = 0.04 to 0.07), and the reaction time for the invalid within condition (340 ms) was significantly longer than that for the invalid between condition (334 ms; t(31) = 3.82, pbonf = 0.002, Cohen's d = 0.13, 95% CI = 0.01 to 0.03). 
For targets appearing in near space, the main effect of cue validity was significant (F(2, 62) = 13.98, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.31), and multiple comparisons indicated that the reaction time for the valid condition (342 ms) was significantly longer than that for the invalid within condition (331 ms; t(31) = 4.40, pbonf < 0.001, Cohen's d = 0.22, 95% CI = 0.01 to 0.04). The reaction time for the valid condition (342 ms) was significantly longer than that for the invalid between condition (329 ms; t(31) = 4.68, pbonf < 0.001, Cohen's d = 0.27, 95% CI = 0.02 to 0.05). The difference in reaction times between the invalid within condition (331 ms) and the invalid between condition (329 ms) was not significant (t(31) = 0.81, pbonf = 1.000). 
IOR effect size
The IOR effect size was calculated and submitted to a 2 (target position: far space versus near space) × 2 (IOR: space versus object) repeated-measures ANOVA (see Figure 6). 
The results showed that the main effect of target position was significant (F(1, 31) = 6.44, p = 0.016, \({\rm{\eta }}_{\rm{p}}^2\) = 0.17), and the IOR in far space (16 ms) was significantly larger than that in near space (8 ms). The main effect of IOR was significant (F(1, 31) = 74.13, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.71), and the space IOR (19 ms) was larger than the object IOR (4 ms). The interaction between target position and IOR was not significant (F(1, 31) = 1.15, p = 0.292). 
In experiment 3, an object-based IOR was found only in far space and not in near space, which is opposite to the results of experiment 1 and contrary to our hypothesis. On the one hand, the real objects were more salient as objects (i.e., easier to perceive as objects) than the simple drawings, and stronger salience can increase attentional spreading (Jordan & Tipper, 1999), especially in far space, where the effect was not limited by a ceiling. We speculate that the stronger object representations of real objects may have allowed attentional spreading to break through the depth limit. On the other hand, participants were more likely to attend to near space because of the asymmetry of attentional resources and the tendency to seek benefits and avoid harm (Andersen, 1990; Andersen & Kramer, 1993; Franconeri & Simons, 2003; Liu et al., 2021; Plewan & Rinkenauer, 2020; Plewan & Rinkenauer, 2021). In addition, we suggest that cueing a real object may also entail attentional inhibition of the cued features (of both the cued and uncued objects). These factors could explain why the results differ from the hypothesis. 
Experiment 4
In experiment 3, we speculated that a combination of attentional spreading and attentional prioritization influenced the object-based IOR under real object representations, but the extent to which each factor dominates had not been examined. Previous studies have found that the salience of the object (whether it is easy to perceive as an object) and cue-target-object relations (whether the cue and target are related to the objects) change the object-based IOR by influencing attentional spreading (Jordan & Tipper, 1999). Thus, in experiment 4, we improved the salience of the object (a wooden bench on a black background is easily perceived as an object) and enhanced the cue-target-object relation (the cue and target were grooves in the object). These manipulations examined whether increased attentional spreading affects the object-based IOR under real object representations. We hypothesized that modulating attentional spreading would not affect the object-based IOR under real object representations, and we expected to find the same results as in experiment 3. 
Methods
Participants
Thirty-two new participants (21 women and 11 men) took part in experiment 4. The participants were between 18 and 26 years old (M = 20.31, standard deviation = 2.22). All other details were the same as in experiment 1. 
Apparatus, stimuli, and experimental setup
We changed the main stimuli as follows (see Figure 7): (1) the material of the bench was changed to wood (the WoodRough_a_Mat material from the Edith Finch: House and Common Areas environment in Unreal Engine 4.26), which was more salient against the black background; (2) the peripheral cue was changed from white light to a gray (50% transparency) groove (0.5 cm deep) in the bench to increase the cue-object relation, so that participants were more likely to perceive the cue as part of the object; and (3) the target was changed from red light to a red groove in the bench to increase the cue-target-object relation, so that participants were more likely to perceive the target as part of the object. The other apparatus, stimuli, and experimental setup were the same as described in experiment 3. 
Figure 7.
 
(A) Two benches replace rectangles. (B) The central reorienting cue is white light emitted from the lamp. (C) The peripheral cue is changed to the gray groove on the bench. (D) The target is changed to the red groove on the bench.
Experimental procedures and design
The experimental procedures and design were the same as those described in experiment 3. 
Results and discussion
Because the detection task was easy, all participants achieved at least 99% accuracy (99.89% ± 0.13%), so accuracy was not analyzed further. Because some of the variables did not meet the assumption of normal distribution, we transformed the data using the function \(f( x ) = \sqrt {( {x + 200} )} \) before analysis, such that the absolute value of the z-score of skewness did not exceed 1.96, satisfying the normality criterion (Field, 2009). 
Reaction time
Trials with incorrect responses, catch trials, trials with RTs less than 150 ms or greater than 1000 ms, and trials with RTs exceeding three standard deviations from the mean were discarded. Trials from the invalid diagonal condition and from the conditions to the left and right of the target position, which served only to balance the design at 25% cue validity, were also excluded from analysis. Mean RTs of the remaining correct trials were then calculated and submitted to a 2 (target position: far space versus near space) × 3 (cue validity: valid versus invalid within versus invalid between) repeated-measures ANOVA (see Figure 8). 
Figure 8.
 
(A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR indicates the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
The results showed a significant main effect of target position (F(1, 31) = 78.37, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.72) with longer reaction times for far space targets (346 ms) than near space targets (330 ms). The main effect of cue validity was significant (F(2, 62) = 55.58, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.64), and further multiple comparisons showed that the reaction time for the valid condition (348 ms) was significantly longer than that for the invalid within condition (335 ms; t(31) = 7.72, pbonf < 0.001, Cohen's d = 0.29, 95% CI = 0.19 to 0.38). The reaction time for the valid condition (348 ms) was significantly longer than that for the invalid between condition (331 ms; t(31) = 9.02, pbonf < 0.001, Cohen's d = 0.40, 95% CI = 0.28 to 0.49), and significantly longer reaction times for the invalid within condition (335 ms) were noted compared with the invalid between condition (331 ms; t(31) = 2.99, pbonf = 0.016, Cohen's d = 0.10, 95% CI = 0.02 to 0.19). The interaction between target position and cue validity was significant (F(2, 62) = 9.99, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.24). Given differences in attentional resources and attentional orienting/reorienting in far and near spaces, we conducted one-way repeated-measures ANOVA for reaction times in far and near spaces separately. 
For targets appearing in far space, the main effect of cue validity was significant (F(2, 62) = 55.81, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.64), and multiple comparisons indicated that the reaction time for the valid condition (360 ms) was significantly longer than that for the invalid within condition (342 ms; t(31) = 6.65, pbonf < 0.001, Cohen's d = 0.39, 95% CI = 0.24 to 0.54). The reaction time for the valid condition (360 ms) was significantly longer than that for the invalid between condition (336 ms; t(31) = 10.49, pbonf < 0.001, Cohen's d = 0.53, 95% CI = 0.40 to 0.66), and the reaction time for the invalid within condition (342 ms) was significantly longer than that for the invalid between condition (336 ms; t(31) = 3.02, pbonf = 0.015, Cohen's d = 0.14, 95% CI = 0.02 to 0.25). 
For targets appearing in near space, the main effect of cue validity was significant (F(2, 62) = 13.14, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.30), and multiple comparisons indicated that the reaction time for the valid condition (336 ms) was significantly longer than that for the invalid within condition (328 ms; t(31) = 4.22, pbonf < 0.001, Cohen's d = 0.19, 95% CI = 0.07 to 0.29). The reaction time for the valid condition (336 ms) was significantly longer than that noted for the invalid between condition (325 ms; t(31) = 4.49, pbonf < 0.001, Cohen's d = 0.26, 95% CI = 0.11 to 0.38). The difference in reaction times between the invalid within condition (328 ms) and the invalid between condition (325 ms) was not significant (t(31) = 1.25, pbonf = 0.663). 
IOR effect size
The IOR effect size was calculated and submitted to a 2 (target position: far space versus near space) × 2 (IOR: space versus object) repeated-measures ANOVA (see Figure 8). 
The results showed that the main effect of target position was significant (F(1, 31) = 11.24, p = 0.002, \({\rm{\eta }}_{\rm{p}}^2\) = 0.27), and the IOR in far space (16 ms) was significantly larger than that in near space (7 ms). The main effect of IOR was significant (F(1, 31) = 59.48, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.66), and the space IOR (18 ms) was larger than the object IOR (5 ms). The interaction between target position and IOR was significant (F(1, 31) = 8.43, p = 0.007, \({\rm{\eta }}_{\rm{p}}^2\) = 0.21), and further simple effects analysis showed that the space IOR was larger in far space (25 ms) than in near space (11 ms; F(1, 31) = 22.79, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.42). The difference in object IOR between far space (6 ms) and near space (3 ms) was not significant (F(1, 31) = 1.31, p = 0.259). 
In experiment 4, an object-based IOR was found under real object representations in far space, whereas no object-based IOR was found in near space, the same pattern of results as in experiment 3. Although we increased the salience and the cue-target-object relations by changing the material of the real object, this did not affect the object-based IOR under real object representations. This finding is consistent with the hypothesis that attentional spreading does not dominate the process (Hu et al., 2020; Song et al., 2020). 
Experiment 5
Thus far, only the dominant influence of attentional spreading has been excluded; the dominance of attentional prioritization under real object representations remains to be confirmed. Based on the results of experiments 3 and 4, it is reasonable to assume that, if attentional prioritization plays an important role in the object-based IOR under real object representations, it should be possible to modulate it by reducing the similarity of the real objects in experiment 5: the cued feature on the uncued object would then no longer have higher attentional prioritization than the cued object. Therefore, in experiment 5, we hypothesized that, owing to the lower object similarity, attentional prioritization would follow the classic pattern. Specifically, we hypothesized that attentional prioritization would be highest in the valid condition, second highest in the invalid within condition, and lowest in the invalid between condition, so that both far and near spaces would show an object-based IOR under real object representations. 
Methods
Participants
Thirty-two new participants (21 women and 11 men) took part in experiment 5. The participants were between 18 and 25 years old (M = 20.44, standard deviation = 2.23). All other details were the same as in experiment 1. 
Apparatus, stimuli, and experimental setup
We changed the object similarity in experiment 5 (see Figure 9). One of the two objects was changed from a wooden bench to a stone bench (M_Brick_Clay_Beveled material in Unreal Engine 4.26 was used). The other apparatus, stimuli, and experimental setup were the same as those described in experiment 4. 
Figure 9.
 
(A) Wooden and stone benches replacing rectangles. (B) The central reorienting cue is white light emitted from the lamp. (C) The peripheral cue is the gray groove on the bench. (D) The target is the red groove on the bench.
Experimental procedures and design
The placement of the wooden and stone benches was counterbalanced between participants. Here, half of the participants performed the experiment where the wooden bench was near (relative to the stone bench), and the other half of the participants performed the experiment where the wooden bench was far (relative to the stone bench). Other experimental procedures and designs were the same as those noted in experiment 4. 
Results and discussion
The low difficulty of the detection task resulted in accuracies of at least 99% (99.80% ± 0.26%), so accuracy was not analyzed further. Because some of the variables did not meet the assumption of normality, we log-transformed the data using the function f(x) = ln(x + 90) before analysis. After transformation, the absolute values of the z-scores of skewness did not exceed 1.96, satisfying the normality assumption (Field, 2009). 
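The normality check and transform described above can be sketched as follows. This is a minimal sketch, not the authors' actual analysis code: the function names are our own, RTs are assumed to be in milliseconds in a NumPy array, and the |z| < 1.96 criterion follows Field (2009).

```python
import numpy as np
from scipy.stats import skew

def skew_z(x):
    """z-score of sample skewness: skewness divided by its standard error."""
    n = len(x)
    se = np.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    return skew(x, bias=False) / se

def log_transform(rt_ms):
    """The paper's transform f(x) = ln(x + 90), applied element-wise."""
    return np.log(np.asarray(rt_ms, dtype=float) + 90.0)
```

In use, one would verify that `abs(skew_z(log_transform(rts))) <= 1.96` for each variable before running the parametric ANOVA.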
Reaction time
Trials with incorrect responses, catch trials, trials with RTs less than 150 ms or greater than 1000 ms, and trials with RTs exceeding three standard deviations from the mean were discarded. Invalid diagonal conditions and conditions to the left and right of the target position, which served as filler conditions balancing cue validity at 25%, were also discarded. Mean RTs of correctly answered trials were then calculated and submitted to a 2 (target position: far space versus near space) × 3 (cue validity: valid versus invalid within versus invalid between) repeated-measures ANOVA (see Figure 10). 
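The trial-exclusion steps above can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' actual code: the data frame layout and the column names 'rt' (ms), 'correct', and 'catch' are hypothetical.

```python
import pandas as pd

def trim_rts(df):
    """Apply the exclusion criteria described above.

    Assumes columns: 'rt' (reaction time in ms), 'correct' (bool),
    'catch' (bool, True for catch trials).
    """
    # Keep only correct responses on non-catch trials.
    kept = df[df["correct"] & ~df["catch"]]
    # Discard anticipations (< 150 ms) and lapses (> 1000 ms).
    kept = kept[(kept["rt"] >= 150) & (kept["rt"] <= 1000)]
    # Discard RTs beyond three standard deviations from the mean.
    mu, sd = kept["rt"].mean(), kept["rt"].std()
    kept = kept[(kept["rt"] - mu).abs() <= 3 * sd]
    return kept
```

The surviving trials would then be averaged per participant and condition before the repeated-measures ANOVA.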
Figure 10.
 
(A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR indicates the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
The results showed a significant main effect of target position (F(1, 31) = 39.69, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.56), with longer reaction times for far space targets (352 ms) than near space targets (333 ms). The main effect of cue validity was significant (F(1.578, 48.928) = 89.86, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.74), and further multiple comparisons showed that the reaction time for the valid condition (356 ms) was significantly longer than that for the invalid within condition (339 ms; t(31) = 9.43, pbonf < 0.001, Cohen's d = 0.32, 95% CI = 0.03 to 0.05). The reaction time for the valid condition (356 ms) was significantly longer than that for the invalid between condition (333 ms; t(31) = 10.79, pbonf < 0.001, Cohen's d = 0.43, 95% CI = 0.04 to 0.06), and reaction times were significantly longer for the invalid within condition (339 ms) than for the invalid between condition (333 ms; t(31) = 4.60, pbonf < 0.001, Cohen's d = 0.11, 95% CI = 0.01 to 0.02). The interaction between target position and cue validity was not significant (F(1.538, 47.684) = 2.33, p = 0.070). Given the differences in attentional resources and attentional orienting/reorienting between far and near spaces, we conducted one-way repeated-measures ANOVAs on reaction times in far and near space separately. 
For targets appearing in far space, the main effect of cue validity was significant (F(1.499, 46.479) = 77.37, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.71), and multiple comparisons indicated that the reaction time for the valid condition (367 ms) was significantly longer than that for the invalid within condition (347 ms; t(31) = 10.53, pbonf < 0.001, Cohen's d = 0.37, 95% CI = 0.03 to 0.06). The reaction time for the valid condition (367 ms) was significantly longer than that for the invalid between condition (343 ms; t(31) = 9.31, pbonf < 0.001, Cohen's d = 0.45, 95% CI = 0.04 to 0.07), and the reaction time for the invalid within condition (347 ms) was significantly longer than that for the invalid between condition (343 ms; t(31) = 2.58, pbonf = 0.045, Cohen's d = 0.08, 95% CI = 0.00 to 0.02). 
For targets appearing in near space, the main effect of cue validity was significant (F(1.647, 51.070) = 40.49, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.57), and multiple comparisons indicated that the reaction time for the valid condition (345 ms) was significantly longer than that for the invalid within condition (331 ms; t(31) = 5.80, pbonf < 0.001, Cohen's d = 0.26, 95% CI = 0.02 to 0.05). The reaction time for the valid condition (345 ms) was significantly longer than that for the invalid between condition (324 ms; t(31) = 7.46, pbonf < 0.001, Cohen's d = 0.41, 95% CI = 0.03 to 0.07), and the reaction time for the invalid within condition (331 ms) was significantly longer than that for the invalid between condition (324 ms; t(31) = 4.07, pbonf < 0.001, Cohen's d = 0.15, 95% CI = 0.01 to 0.03). 
IOR effect size
The IOR effect size was calculated and submitted to a 2 (target position: far space versus near space) × 2 (IOR: space versus object) repeated-measures ANOVA (see Figure 10). 
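Given the definitions used throughout (space-based IOR is the RT in the valid condition minus the RT in the invalid between condition; object-based IOR is the RT in the invalid within condition minus the RT in the invalid between condition), the effect sizes can be computed per participant as, for example (function name is our own, not from the paper):

```python
def ior_effects(rt_valid, rt_invalid_within, rt_invalid_between):
    """Space IOR combines location- and object-based IOR;
    object IOR isolates the object-based component."""
    space_ior = rt_valid - rt_invalid_between
    object_ior = rt_invalid_within - rt_invalid_between
    return space_ior, object_ior

# Near-space condition means from experiment 5 (ms):
print(ior_effects(345, 331, 324))  # -> (21, 7)
```

Plugging in the far-space means (367, 347, 343 ms) likewise recovers the reported 24-ms space IOR and 4-ms object IOR.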
The results showed that the main effect of target position was not significant (F(1, 31) = 0.0006, p = 0.980). The main effect of IOR was significant (F(1, 31) = 87.58, p < 0.001, \({\rm{\eta }}_{\rm{p}}^2\) = 0.74), and the space IOR (23 ms) was larger than the object IOR (6 ms). The interaction between target position and IOR was significant (F(1, 31) = 9.25, p = 0.005, \({\rm{\eta }}_{\rm{p}}^2\) = 0.23), and further simple effects analysis showed that the difference in space IOR between far space (24 ms) and near space (21 ms) was not significant (F(1, 31) = 1.16, p = 0.291). The difference in object IOR between far space (4 ms) and near space (7 ms) was not significant (F(1, 31) = 1.88, p = 0.180). 
In experiment 5, we found object-based IOR under real object representations in both far and near spaces, which is consistent with the hypothesis. Due to the decrease in object similarity, the cued feature on the uncued object no longer has a higher attentional prioritization (faster reaction time at long SOA), but the cued object has a higher attentional prioritization (slower reaction time at long SOA). These factors make attentional prioritization dominate the object-based IOR under real object representations. Despite the lack of a significant difference in object-based IOR between near space and far space, the patterns in mean values were the same as those noted in experiment 1 (the mean value of object-based IOR effect size in near space was larger than in far space). Whether the difference remains significant as object similarity decreases can be explored in future studies. 
General discussion
The present study used modeling software and virtual reality techniques to extend the double-rectangle paradigm to 3D space and explored object-based IOR. Section 1 (experiments 1 and 2) examined object-based IOR under simple drawing representations. Experiment 1 found that the object-based IOR effect size in near space was larger than that in far space, and no object-based IOR was found in far space. Experiment 2 suggested that object-based IOR in 3D space is not "depth blind." Section 2 (experiments 3, 4, and 5) examined object-based IOR under real object representations. Experiment 3 found object-based IOR in far space but not in near space. Experiment 4 replicated the results of experiment 3 while increasing attentional spreading. Experiment 5 changed attentional prioritization by decreasing the similarity of the objects and found object-based IOR in both far and near spaces. These results showed that object-based IOR appeared only in far space when the two real objects were identical, whereas it appeared in both far and near spaces as the similarity of the real objects decreased. The object-based IOR results under simple drawing and real object representations reflect a difference in object-based attention mechanisms in 3D space: under simple drawing representations, the effect was mainly attributable to attentional spreading, whereas under real object representations, it was influenced by a combination of attentional spreading and attentional prioritization. 
Simple drawing representation: 3D attentional spreading
The results of experiment 1 suggest that the object-based IOR in 3D space is due to attentional spreading under object-based representations. Reppa et al. (2010) showed that the object-based representation encodes depth information in viewer-centered coordinates and that attention can spread across the representation. When attention is drawn to a location within an object, it automatically spreads from the cued location to the entire object (Ernst et al., 2013; Jordan & Tipper, 1999; Richard et al., 2008; Singh et al., 2018; Zhao, Kong, & Wang, 2013). According to attentional spreading theory, attending to a location in an object constructs not only a spatial gradient that decays with distance but also an automatic gradient constrained by the object's boundaries (Shomstein, 2012). Consequently, attentional inhibition within the object was affected by both spatial and automatic gradients, whereas inhibition outside the object was affected only by a spatial gradient. Given that the target in the valid condition is at the attentional focus and within the cued object, attention to the target location is affected by large spatial and automatic gradients, which leads to the strongest attentional inhibition and the slowest responses in the valid condition. The target in the invalid within condition is not at the attentional focus but is still within the cued object. Thus, attention to the target location is affected by small spatial and automatic gradients, which makes the attentional inhibition at other locations within the cued object the second strongest and responses in the invalid within condition slower. In contrast, the target in the invalid between condition is neither at the attentional focus nor within the cued object, and attention to the target location is affected only by a small spatial gradient. 
Thus, attentional inhibition at other locations outside the cued object is weakest and participants' responses in the invalid between condition are fastest. In the present study, the object-based IOR (RTinvalid within minus RTinvalid between) reflects the effect of an automatic gradient on attentional inhibition, and the space-based IOR (RTvalid minus RTinvalid between) is the overlap of the object- and location-based IOR, reflecting the combined effect of automatic and spatial gradients on attentional inhibition. These factors led to the object-based IOR in 3D space, and the object-based IOR effect size was smaller than the space-based IOR effect size. 
Significant differences in object-based IOR effect sizes between far and near spaces were noted before removing the binocular parallax. This finding is mainly due to participants being affected by a viewer-centered gradient distribution of attentional resources (Andersen, 1990; Andersen & Kramer, 1993; Liu et al., 2021; Plewan & Rinkenauer, 2020). According to this distribution, more attentional resources in near space lead to easier attentional spreading from far to near space. When the target appears in near space under the invalid within condition (cue and target on the same object but at different locations), attention can better spread from cue to target. This situation makes it difficult for attention to return to the previously attended location (Klein, 2000; Klein & MacInnes, 1999; Posner & Cohen, 1984; Redden et al., 2021; Satel et al., 2019), thus producing the object-based IOR in near space. However, fewer attentional resources in far space make attentional spreading from near to far space more difficult. When the target appears in far space under the invalid within condition, it is difficult for attention to spread from the cue to the target. The target is treated as occupying a new, unsearched location, drawing attention there and making object-based IOR hard to observe in far space. 
After removing the binocular parallax in experiment 2, the participants were no longer affected by the asymmetry in the depth distribution of attentional resources. Attentional distribution concentrates on the 2D plane of the attentional focus, with a gradient that decreases with distance (Bennett & Pratt, 2001; Henderson & Macquistan, 1993). This allows the object-based IOR effect to appear in both the upper and lower visual fields. Combining experiments 1 and 2, it is reasonable to conclude that object-based IOR is larger in near space than in far space under simple drawing representations. This finding is related to the asymmetry of attentional spreading in 3D space rather than to the influence of 2D factors. Previous studies have found that attention can spread in object-based representations (Reppa et al., 2010). We show that attention spreads more easily along the object from far to near space than vice versa, which causes object-based IOR to appear only in near space and to be larger there than in far space. 
Real object representation: 3D attentional prioritization
Studies of visual attention to real objects have found that the attention mechanisms for real objects are more flexible and complex (Hu et al., 2020). Additional information provided by real objects may influence cognition and behavior (Snow et al., 2014), and object representations of real objects interact with features, semantics, and other factors to influence attentional distribution (Malcolm & Shomstein, 2015). Compared to simple drawing stimuli, attention to real objects may involve the prioritization of participants' attentional distribution, and attentional prioritization theory explains object-based attention under real object representations more flexibly and effectively (Hu et al., 2020). 
The results of experiment 3 revealed that the object-based IOR under real object and simple drawing representations was reversed across far and near spaces: there was an object-based IOR effect in far space but not in near space under real object representations. This occurred because attentional spreading was stronger in both far and near spaces, while attentional prioritization of near-space cued features, driven by feature information, offset the object-based IOR in near space. In more detail, using real objects instead of rectangles strengthened the object-based IOR effect, as the high salience of the objects allowed attention to spread across the object surface in both spaces. However, the additional information of real objects also influenced how attention was prioritized. We believe that under real object representations, cued locations receive the most attention, followed by features identical to the cue (parts outside the cued object with the same features), then features within the cued object, and finally the uncued object (parts with different features). This real object attentional prioritization made participants attend more to the invalid between location than to the invalid within location when the target was near, reducing object-based IOR in near space. Lower attentional prioritization in far space did not make participants attend more to the invalid between location, so it had little impact on object-based IOR in far space. 
In real life, the attentional prioritization theory of real objects is more consistent with our experience. If there were two cats on the ground, we would likely notice the salient component (the first cat's face) first and then be more likely to notice the other cat's face rather than the first cat's tail. If the two objects were no longer similar (e.g. one is a cat and the other is a dog), we would be more likely to prioritize the whole object. Similarity can limit the deployment of object-based attention in a grouped way, with similar real objects more likely to be seen as two parts of one object and dissimilar objects more likely to be seen as two different objects (Hu et al., 2020). This results in less attention being deployed within the dissimilar unattended object, producing a smaller object-based attention effect (Hu et al., 2020). In other words, if there are two cats, we are more likely to think of them as one group, and if there is one cat and one dog, we are likely to think of them as two groups. Studies of object-based attention have found that attention can be rapidly allocated not only within objects defined by contours but also to disconnected members of a perceptual group, as demonstrated in feature-based attention, where blinking red cues facilitated the processing of red targets in other locations (Lin, Hubert-Walander, Murray, & Boynton, 2011). Although the contours of most objects are closed, object-based attention allows for the selection of multiple-region objects (Matsukura & Vecera, 2006), such as grouping based on similarity (Driver & Baylis, 1998). This means that two similar real objects, after being considered as two parts of one object, may be grouped on the basis of shared features instead of closure. 
Going back to our previous example, for a group of cats, we could group them based on the same features among them, for example, blue eyes in a group and pink noses in a group, rather than all the features of each cat in a group. When a cat with blue eyes blinks, we may be more likely to notice the eyes of the other cat next to it than the nose of that cat. In the current study, the cueing of features may have led to a higher attentional prioritization for the same features outside the attended object (invalid between condition) than for different features within the attended object (invalid within condition). Similar to a previous study on object representations affecting attentional distribution, the object-based effect could be reduced if a nearby object has similar visual qualities compared to a situation with two visually different objects (Malcolm & Shomstein, 2015). It is important to note that our attentional prioritization theory of real objects is slightly different from classic attention prioritization (Shomstein & Yantis, 2002), but we supported our hypothesis about the attentional prioritization of the real object in the following experiments. 
Therefore, in experiment 3 and experiment 4, for two similar real objects, we speculated that they might no longer be grouped according to Gestalt's closure principle, but rather that similar features were grouped together. Cueing a feature within an object makes similar features elsewhere (invalid between condition) more prioritized than dissimilar features within the object (invalid within condition), which with attentional spreading by the object's closed structure cancels out any effect in near space. When experiment 5 changed the color and texture of the rectangles to reduce similarity, they may have been grouped according to the Gestalt's closure principle rather than according to similar features. Thus, the attentional prioritization within the object was higher than outside the object, showing the object-based IOR in near space. 
Simple drawings versus real objects
In object-based attention studies using the double-rectangle paradigm, as in many studies in psychology, simple drawings are typically used instead of real objects, and the results are generalized to reality. However, real objects contain complex perceptual and conceptual information, as opposed to meaningless simple drawings for which participants have no background knowledge or expectations (Brady et al., 2016). Other studies have shown differences in recognition and memory between real objects and simple drawings. Specifically, real-world graspable objects are stored, represented, and processed differently than simple drawings (Snow et al., 2011; Snow et al., 2014), because real objects provide participants with information about geometric structure, size, distance, and location, which can influence the way participants perceive stimuli and thus alter the neural processing of cognition, behavior, and memory (Snow et al., 2014). This notion was also demonstrated in studies of visual attentional distribution, where researchers found that the inferred size of real objects influenced visual attentional shifts and was moderated by top-down cognition: visual attention moved more slowly over objects perceived as larger (Collegio et al., 2019). These findings demonstrate the specificity of real objects. 
Compared to simple drawings, studies using real objects not only aim to improve ecological validity but also, more importantly, reflect the object-based attentional process that differs from simple drawings. The richer feature and semantic information of real objects lead to greater variation in different locations within the object. When two real objects are similar, attention is more likely to be preferentially allocated to the same feature location of both objects. This finding demonstrates the need to conduct real object studies. From our perspective, additional information on real objects can influence the object-based IOR, shown by stronger attentional spreading (in depth) and higher attentional prioritization for the cued feature, especially in 3D space with asymmetric attentional resources. 
Attentional spreading versus attentional prioritization
We speculated that attentional spreading and attentional prioritization play roles in the attention of real objects. The differences between attentional spreading and attentional prioritization are: (1) according to attentional spreading theory, the structure of an object affects the shifting of attention, and the shifting of attention is more effective within the object than between objects, that is, attention spreads more easily within the object (Chou & Yeh, 2011; Ernst et al., 2013; Jordan & Tipper, 1999; Richard et al., 2008; Singh et al., 2018). Because attention is more easily shifted from far to near (Chen et al., 2012; Gawryszewski et al., 1987; Plewan & Rinkenauer, 2017), attentional spreading predicts larger object-based IOR in near space and no object-based IOR in far space. (2) Attentional prioritization is the priority of attentional allocation, as reflected in the order of visual search (Chen, 2012; Richard et al., 2008; Shomstein & Yantis, 2002; Shomstein & Yantis, 2004; Yantis & Johnson, 1990). This predicts the facilitation effect of the real object in near space and no effect in far space. For far space, fewer attentional resources (Andersen, 1990; Andersen & Kramer, 1993; Liu et al., 2021; Plewan & Rinkenauer, 2020) and lower behavioral urgency (Franconeri & Simons, 2003; Plewan & Rinkenauer, 2021) lead to less allocation of attention in far space, that is, lower prioritization. Thus, when the target appears in far space, there may be no difference between invalid within condition and invalid between condition. For near space, since the two real objects are identical and the feature at the cued location is processed, the same feature at the uncued location (invalid between condition) is also considered as a cue and also has a high prioritization. This prioritization may exceed that of the uncued feature within the object (invalid within condition). With a short SOA, the higher the prioritization, the faster the response. 
However, IOR, a mechanism that avoids processing duplicated irrelevant information, prevents attention from returning to previously attended locations under a longer SOA and helps search for unattended locations (Klein, 2000), so the higher the original prioritization, the slower the response. Thus, attentional prioritization may predict a facilitation effect in near space. 
In experiment 3, we expected that attentional spreading and attentional prioritization may have jointly influenced the IOR of real objects. As stated above, the facilitation effect predicted by attentional prioritization and the inhibition effect predicted by attentional spreading may have combined to produce no effect in near space. The results of experiment 3 supported this hypothesis in near space. However, we found object-based IOR in far space, which differed from our assumption for far space. We speculated that the stronger object representations of real objects may have allowed attentional spreading to break through the depth limit, as previous research has found that stronger object representations enhance attentional spreading (Jordan & Tipper, 1999). Next, we examined the contributions by enhancing attentional spreading (experiment 4) and adjusting attentional prioritization (experiment 5). Increasing object salience can enhance attentional spreading (Jordan & Tipper, 1999), but previous studies have suggested that attentional prioritization, rather than attentional spreading, explains object-based attention for real objects (Hu et al., 2020; Song et al., 2020); that is, attentional spreading may not play a major role for real objects. Thus, we hypothesized that the results of experiment 4 would be similar to those of experiment 3. The results of experiment 4 replicated experiment 3, which demonstrated the stability of the findings and supported the hypothesis. In experiment 5, we made the two real objects different, which caused the cued feature to appear only at the valid location, with higher attentional prioritization within the object than outside it (Drummond & Shomstein, 2010; Hu et al., 2021; Shomstein & Yantis, 2002). The combined effects of attentional prioritization and attentional spreading might then lead to object-based IOR in both far and near spaces. 
We suggest that attentional spreading and attentional prioritization are likely to influence object-based attention together and that the complex interaction process changes with shifts in object representations. 
Limitation and outlook
The present study has some limitations. First, we examined the hypotheses of attentional spreading and attentional prioritization based on RT results. A recent study cleverly measured attentional spreading using the pupillary light response (PLR): researchers found that attention spreads to uncued areas of an object only when cue-target contingencies strongly encourage such a strategy (Luzardo, Einhäuser, Michl, & Yeshurun, 2023). Because allocating attention covertly to brighter areas results in pupil constriction (Luzardo et al., 2023), changing the luminance of different locations of the object allows attentional spreading to be measured by pupil size. In the future, this method could be used to quantify the contribution of attentional spreading for real objects. 
Second, we changed only the texture of the real object in experiment 5, so the objects may still be similar at some level. Using two objects with different appearances but the same dimensions to further explore the link between real object similarity and object-based IOR would be interesting and meaningful. It would also be meaningful to manipulate similarity at more complex levels (e.g. semantic, affordance) to explore its connection to object-based IOR in 3D space. Previous studies have found that apart from Gestalt representations, there are also lexical-based representations (Li & Logan, 2008; Liu, Wang, & Zhou, 2011; Yuan & Fu, 2014; Zhao et al., 2015). Examining lexical-based IOR in 3D space could provide a theoretical basis for the design of traffic signs. 
Finally, we detected object-based IOR only in one specific task. To perform well in different tasks, the attentional system is flexible. Shomstein, Zhang, and Dubbelde (2023) suggested considering attentional allocation as a state in a dynamic priority map, which takes as input various factors that are known to influence attentional allocation (Bisley & Goldberg, 2010; Koch & Ullman, 1987; see also reviews by Ptak, 2012; Shomstein & Gottlieb, 2016; Todd & Manaligod, 2018). The state changes dynamically as a function of the multiple attentional factors that influence it, with corresponding weights determining the degree of influence of each particular factor. The weights are constrained by external factors (task, salience, etc.) as well as internal factors (prior experience, current fluctuations in the system, etc.; Shomstein et al., 2023). Although we have tried to find the "default mode of operation" of the system experimentally, it is difficult to generalize in the absence of natural tasks. As an example of a natural task, when scanning objects on an assembly line, object-based IOR will prevent workers from repeatedly searching previously attended objects. This enhances the efficiency of visual search and makes it easier to detect subsequent defective parts. In the current study, we found that object similarity influences the IOR of real objects. For real objects, there may be more grouping possibilities. For example, for two screws and nuts, we might divide each connected screw and nut into one group based on the principle of closure, or we might divide all the screws into one group and all the nuts into another. We speculated that two similar objects are more likely to be seen as two parts of one object (Hu et al., 2020) and that object-based attention can select multiple-region objects (Matsukura & Vecera, 2006), for example, by grouping them based on similarity (Driver & Baylis, 1998). 
Once grouped based on similarity, after workers search for a defective nut, object-based IOR would suppress attention to other similar nuts; that is, workers might ignore the nut in the next part and instead repeat the search on the previous screw. It seems that reducing the similarity of the objects might improve the accuracy of defect detection. However, the current study can only inform other researchers who aim to generalize the understanding of IOR to more realistic conditions. The results remain to be tested in different tasks and scenarios. 
Acknowledgments
Supported by the National Natural Science Foundation of China (31871092, to M.Z.), the Japan Society for the Promotion of Science KAKENHI (20K04381, to M.Z.), the Humanities and Social Sciences Research Project of Soochow University (22XM0017, to A.W.), the Interdisciplinary Research Team of Humanities and Social Sciences of Soochow University (2022, to A.W.) and JST FOREST Program (JPM-JFR2041, to J.Y.). 
Data availability: All data, materials, and code are available at https://osf.io/amzn2/ (Qian, 2023, October 25). 
Commercial relationships: none. 
Corresponding authors: Aijun Wang, Ming Zhang. 
Emails: ajwang@suda.edu.cn, psyzm@suda.edu.cn. 
Address: Department of Psychology, Soochow University, Suzhou, P. R. China. 
References
Andersen, G. J. (1990). Focused attention in three-dimensional space. Perception & Psychophysics, 47(2), 112–120. [PubMed]
Andersen, G. J., & Kramer, A. F. (1993). Limits of focused attention in three-dimensional space. Attention, Perception, & Psychophysics, 53(6), 658–667.
Bennett, P. J., & Pratt, J. (2001). The spatial distribution of inhibition of return. Psychological Science, 12(1), 76–80. [PubMed]
Bisley, J. W., & Goldberg, M. E. (2010). Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience, 33, 1–21. [PubMed]
Boring, E. G. (1964). Size-constancy in a picture. The American Journal of Psychology, 77, 494–498. [PubMed]
Bourke, P. A., Partridge, H., & Pollux, P. M. J. (2006). Additive effects of inhibiting attention to objects and locations in three-dimensional displays. Visual Cognition, 13(5), 643–654.
Brady, T. F., Störmer, V. S., & Alvarez, G. A. (2016). Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proceedings of the National Academy of Sciences of the United States of America, 113(27), 7459–7464. [PubMed]
Casagrande, M., Barbato, B., Mereu, S., Martella, D., Marotta, A., Theeuwes, J., & Collinson, S. L. (2012). Inhibition of return: A “depth-blind” mechanism? Acta Psychologica, 140(1), 75–80. [PubMed]
Chen, Q., Weidner, R., Vossel, S., Weiss, P. H., & Fink, G. R. (2012). Neural mechanisms of attentional reorienting in three-dimensional space. Journal of Neuroscience, 32(39), 13352–13362. [PubMed]
Chen, Z. (2012). Object-based attention: A tutorial review. Attention, Perception, & Psychophysics, 74(5), 784–802. [PubMed]
Chou, W. L., & Yeh, S. L. (2011). Subliminal spatial cues capture attention and strengthen between-object link. Consciousness and Cognition, 20(4), 1265–1271. [PubMed]
Collegio, A. J., Nah, J. C., Scotti, P. S., & Shomstein, S. (2019). Attention scales according to inferred real-world object size. Nature Human Behaviour, 3(1), 40–47. [PubMed]
Corbetta, M., Kincade, J. M., Ollinger, J. M., McAvoy, M. P., & Shulman, G. L. (2000). Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nature Neuroscience, 3(3), 292–297. [PubMed]
Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In Epstein, W., & Rogers, S. (Eds.), Handbook of perception and cognition, Vol. 5: Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.
Dong, B., Chen, A., Zhang, T., & Zhang, M. (2021). Egocentric distance perception disorder in amblyopia. Psychologica Belgica, 61(1), 173–185. [PubMed]
Driver, J., & Baylis, G. C. (1998). Attention and visual object segmentation. In Parasuraman, R. (Ed.), The attentive brain (pp. 299–325). Cambridge, MA: MIT Press.
Drummond, L., & Shomstein, S. (2010). Object-based attention: Shifting or uncertainty? Attention, Perception, & Psychophysics, 72(7), 1743–1755. [PubMed]
Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. [PubMed]
Erlikhman, G., Lytchenko, T., Heller, N. H., Maechler, M. R., & Caplovitz, G. P. (2020). Object-based attention generalizes to multisurface objects. Attention, Perception, & Psychophysics, 82(4), 1599–1612. [PubMed]
Ernst, Z. R., Boynton, G. M., & Jazayeri, M. (2013). The spread of attention across features of a surface. Journal of Neurophysiology, 110(10), 2426–2439. [PubMed]
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. [PubMed]
Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Newbury Park, CA: Sage Publications.
Franconeri, S. L., & Simons, D. J. (2003). Moving and looming stimuli capture attention. Perception & Psychophysics, 65(7), 999–1010. [PubMed]
Gawryszewski, L. G., Riggio, L., Rizzolatti, G., & Umiltá, C. (1987). Movements of attention in the three spatial dimensions and the meaning of “neutral” cues. Neuropsychologia, 25(1), 19–29. [PubMed]
Gerhard, T. M., Culham, J. C., & Schwarzer, G. (2016). Distinct visual processing of real objects and pictures of those objects in 7- to 9-month-old infants. Frontiers in Psychology, 7, 827. [PubMed]
Gomez, M. A., Skiba, R. M., & Snow, J. C. (2018). Graspable objects grab attention more than images do. Psychological Science, 29(2), 206–218. [PubMed]
Henderson, J. M., & Macquistan, A. D. (1993). The spatial distribution of attention following an exogenous cue. Perception & Psychophysics, 53, 221–230. [PubMed]
He, S., Cavanagh, P., & Intriligator, J. (1996). Attentional resolution and the locus of visual awareness. Nature, 383(6598), 334–337. [PubMed]
Hu, S., Liu, D., Song, F., Wang, Y., & Zhao, J. (2020). The influence of object similarity on real object-based attention: The disassociation of perceptual and semantic similarity. Acta Psychologica, 205, 103046. [PubMed]
Hu, S., Zhang, T., Wang, Y., Song, F., Zhao, J., & Wang, Y. (2021). The modulation of object-based attentional selection by facial expressions. Quarterly Journal of Experimental Psychology, 74(7), 1244–1256.
Jordan, H., & Tipper, S. P. (1999). Spread of inhibition across an object's surface. British Journal of Psychology, 90(4), 495–507.
Klein, R. M. (1988). Inhibitory tagging system facilitates visual search. Nature, 334(6181), 430–431. [PubMed]
Klein, R. M. (2000). Inhibition of return. Trends in Cognitive Sciences, 4(4), 138–147. [PubMed]
Klein, R. M., & MacInnes, W. J. (1999). Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10(4), 346–352.
Koch, C., & Ullman, S. (1987). Shifts in selective visual attention: Towards the underlying neural circuitry. In Vaina, L. M. (Ed.), Matters of Intelligence: Conceptual Structures in Cognitive Neuroscience (pp. 115–141). New York, NY: Springer.
Korisky, U., & Mudrik, L. (2021). Dimensions of perception: 3D real-life objects are more readily detected than their 2D images. Psychological Science, 32(10), 1636–1648. [PubMed]
Kramer, A. F., & Jacobson, A. (1991). Perceptual organization and focused attention: The role of objects and proximity in visual processing. Perception & Psychophysics, 50(3), 267–284. [PubMed]
Li, X., & Logan, G. D. (2008). Object-based attention in Chinese readers of Chinese words: Beyond Gestalt principles. Psychonomic Bulletin & Review, 15, 945–949. [PubMed]
Lin, J. Y., Hubert-Wallander, B., Murray, S. O., & Boynton, G. M. (2011). Rapid and reflexive feature-based attention. Journal of Vision, 11(12), 12. [PubMed]
List, A., & Robertson, L. C. (2007). Inhibition of return and object-based attentional selection. Journal of Experimental Psychology: Human Perception and Performance, 33(6), 1322–1334. [PubMed]
Liu, D., Wang, Y., & Zhou, X. (2011). Lexical- and perceptual-based object effects in the two-rectangle cueing paradigm. Acta Psychologica, 138(3), 397–404. [PubMed]
Liu, X., Qian, Q., Wang, L., Wang, A., & Zhang, M. (2021). Spatial inhibition of return affected by self-prioritization effect in three-dimensional space. Perception, 50(3), 231–248. [PubMed]
Livne, T., & Bar, M. (2016). Cortical integration of contextual information across objects. Journal of Cognitive Neuroscience, 28(7), 948–958. [PubMed]
Luzardo, F., Einhäuser, W., Michl, M., & Yeshurun, Y. (2023). Attention does not spread automatically along objects: Evidence from the pupillary light response. Journal of Experimental Psychology: General, doi:10.1037/xge0001383. Advance online publication.
Malcolm, G. L., Rattinger, M., & Shomstein, S. (2016). Intrusive effects of semantic information on visual selective attention. Attention, Perception, & Psychophysics, 78(7), 2066–2078. [PubMed]
Malcolm, G. L., & Shomstein, S. (2015). Object-based attention in real-world scenes. Journal of Experimental Psychology: General, 144(2), 257–263. [PubMed]
Marini, F., Breeding, K. A., & Snow, J. C. (2019). Distinct visuo-motor brain dynamics for real-world objects versus planar images. NeuroImage, 195, 232–242. [PubMed]
Marino, A. C., & Scholl, B. J. (2005). The role of closure in defining the “objects” of object-based attention. Perception & Psychophysics, 67(7), 1140–1149. [PubMed]
Matsukura, M., & Vecera, S. P. (2006). The return of object-based attention: Selection of multiple-region objects. Perception & Psychophysics, 68(7), 1163–1175. [PubMed]
Mozer, M. C. (2002). Frames of reference in unilateral neglect and visual perception: A computational perspective. Psychological Review, 109(1), 156–185. [PubMed]
Plewan, T., & Rinkenauer, G. (2017). Simple reaction time and size-distance integration in virtual 3D space. Psychological Research, 81(3), 653–663. [PubMed]
Plewan, T., & Rinkenauer, G. (2020). Allocation of attention in 3D space is adaptively modulated by relative position of target and distractor stimuli. Attention, Perception, & Psychophysics, 82(3), 1063–1073. [PubMed]
Plewan, T., & Rinkenauer, G. (2021). Visual search in virtual 3D space: The relation of multiple targets and distractors. Psychological Research, 85(6), 2151–2162. [PubMed]
Posner, M. I., & Cohen, Y. (1984). Components of visual orienting. In Bouma, H., & Bouwhuis, D. G. (Eds.), Attention and performance X: Control of language processes (pp. 531–556). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Previc, F. (1990). Functional specialization in the lower and upper visual fields in humans: Its ecological origins and neurophysiological implications. Behavioral and Brain Sciences, 13(3), 519–542.
Ptak, R. (2012). The frontoparietal attention network of the human brain: Action, saliency, and a priority map of the environment. The Neuroscientist, 18(5), 502–515. [PubMed]
Qian, Q. (2023, October 25). qian2023_3dobior. Retrieved from https://osf.io/amzn2
Redden, R. S., MacInnes, W. J., & Klein, R. M. (2021). Inhibition of return: An information processing theory of its natures and significance. Cortex, 135, 30–48. [PubMed]
Reppa, I., Fougnie, D., & Schmidt, W.C. (2010). How does attention spread across objects oriented in depth? Attention, Perception, & Psychophysics, 72, 912–925. [PubMed]
Richard, A. M., Lee, H., & Vecera, S. P. (2008). Attentional spreading in object-based attention. Journal of Experimental Psychology: Human Perception and Performance, 34(4), 842–853. [PubMed]
Rizzolatti, G., & Camarda, R. (1987). Neural circuits for spatial attention and unilateral neglect. Advances in Psychology, 45, 289–313.
Satel, J., Wilson, N. R., & Klein, R. M. (2019). What neuroscientific studies tell us about inhibition of return. Vision, 3(4), 58. [PubMed]
Sheremata, S. L., & Silver, M. A. (2015). Hemisphere-dependent attentional modulation of human parietal visual field representations. Journal of Neuroscience, 35(2), 508–517. [PubMed]
Shomstein, S. (2012). Object-based attention: Strategy versus automaticity. WIREs Cognitive Science, 3(2), 163–169. [PubMed]
Shomstein, S., & Gottlieb, J. (2016). Spatial and non-spatial aspects of visual attention: Interactive cognitive mechanisms and neural underpinnings. Neuropsychologia, 92, 9–19. [PubMed]
Shomstein, S., & Yantis, S. (2002). Object-based attention: Sensory modulation or priority setting? Perception & Psychophysics, 64(1), 41–51. [PubMed]
Shomstein, S., & Yantis, S. (2004). Configural and contextual prioritization in object-based attention. Psychonomic Bulletin & Review, 11(2), 247–253. [PubMed]
Shomstein, S., Zhang, X., & Dubbelde, D. (2023). Attention and platypuses. WIREs Cognitive Science, 14(1), e1600. [PubMed]
Singh, K., Kalash, M., & Bruce, N. (2018). Capturing real-world gaze behaviour: Live and unplugged. In Spencer, S. N. (Ed.), Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. Association for Computing Machinery.
Smith, D. T., Ball, K., Swalwell, R., & Schenk, T. (2016). Object-based attentional facilitation and inhibition are neuropsychologically dissociated. Neuropsychologia, 80, 9–16. [PubMed]
Snow, J. C., Pettypiece, C. E., McAdam, T. D., McLean, A. D., Stroman, P. W., Goodale, M. A., & Culham, J. C. (2011). Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects. Scientific Reports, 1, 130. [PubMed]
Snow, J. C., Skiba, R. M., Coleman, T. L., & Berryhill, M. E. (2014). Real-world objects are more memorable than photographs of objects. Frontiers in Human Neuroscience, 8, 837. [PubMed]
Song, F., Zhou, S., Gao, Y., Hu, S., Kong, F., & Zhao, J. (2020). Different temporal dynamics of object-based attentional allocation for reward and non-reward objects. Journal of Vision, 20(9), 17. [PubMed]
Theeuwes, J., & Pratt, J. (2003). Inhibition of return spreads across 3-D space. Psychonomic Bulletin & Review, 10(3), 616–620. [PubMed]
Thiel, C. M., Zilles, K., & Fink, G. R. (2004). Cerebral correlates of alerting, orienting and reorienting of visuospatial attention: An event-related fMRI study. NeuroImage, 21(1), 318–328. [PubMed]
Tipper, S. P., Driver, J., & Weaver, B. (1991). Short report: Object-centred inhibition of return of visual attention. The Quarterly Journal of Experimental Psychology Section A, 43(2), 289–298.
Todd, R. M., & Manaligod, M. G. M. (2018). Implicit guidance of attention: The priority state space framework. Cortex, 102, 121–138. [PubMed]
Vecera, S. P. (1994). Grouped locations and object-based attention: Comment on Egly, Driver, and Rafal (1994). Journal of Experimental Psychology: General, 123(3), 316–320.
Wang, A., Liu, X., Chen, Q., & Zhang, M. (2016). Effect of different directions of attentional shift on inhibition of return in three-dimensional space. Attention, Perception, & Psychophysics, 78(3), 838–847. [PubMed]
Yan, S., & Zheng, Z. (1985). Stereoscopic test charts (1st ed.). Beijing, China: People's Medical Publishing House.
Yantis, S., & Johnson, D. N. (1990). Mechanisms of attentional priority. Journal of Experimental Psychology: Human Perception and Performance, 16(4), 812–825. [PubMed]
Yuan, J., & Fu, S. (2014). Attention can operate on semantic objects defined by individual Chinese characters. Visual Cognition, 22, 770–788.
Zhao, J., Kong, F., & Wang, Y. (2013). Attentional spreading in object-based attention: The roles of target-object integration and target presentation time. Attention, Perception, & Psychophysics, 75(5), 876–887. [PubMed]
Zhao, J., Wang, Y., Liu, D., Zhao, L., & Liu, P. (2015). Strength of object representation: Its key role in object-based attention for determining the competition result between Gestalt and top-down objects. Attention, Perception, & Psychophysics, 77, 2284–2292. [PubMed]
Figure 1. Diagram of a double rectangle. (A) Left view. (B) Top view.
Figure 2. Left panel: Front view of the exemplary trial in the experimental paradigm. All participants reported the same stimulus size in far and near spaces given the size–distance constancy effect (Boring, 1964). The gray figures are the cues, and the red figures are the targets. Far indicates that the target appears in the far space. Near indicates that the target appears in the near space. A valid condition is noted when the cue and target are in the same location; an invalid within condition is noted when the cue and target are in the same object but in different locations; an invalid between condition is noted when the cue and target are in different objects but their distances are equal to the distance of the invalid within condition; an invalid diagonal condition is noted when the cue and target are in diagonal locations. Right panel: (A) The target appears on the left side. (B) The target appears on the right side. (C) The target appears in the near space under the invalid diagonal condition. (D) The target appears in the far space under the invalid diagonal condition.
Figure 3. (A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
Figure 4. (A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in the upper and lower visual fields, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
Figure 5. Left panel: Front view of the exemplary trial in the experimental paradigm. All participants reported the same stimulus size in far and near spaces because of the size–distance constancy effect (Boring, 1964). The white light on the bench is the cue, and the red light is the target. Far indicates that the target appears in the far space. Near indicates that the target appears in the near space. A valid condition is noted when the cue and target are in the same location; an invalid within condition is noted when the cue and target are in the same object but in different locations; an invalid between condition is noted when the cue and target are in different objects but their distances are equal to the distance of the invalid within condition; an invalid diagonal condition is noted when the cue and target are in diagonal locations. Right panel: (A) Two benches replace rectangles. (B) The central reorienting cue is changed to white light emitted from the lamp. (C) The peripheral cue is changed to white light shining on the bench. (D) The target is changed to red light shining on the bench.
Figure 6. (A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. **p < 0.01 and ***p < 0.001.
Figure 7. (A) Two benches replace rectangles. (B) The central reorienting cue is white light emitted from the lamp. (C) The peripheral cue is changed to the gray groove on the bench. (D) The target is changed to the red groove on the bench.
Figure 8. (A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.
Figure 9. (A) Wooden and stone benches replace rectangles. (B) The central reorienting cue is white light emitted from the lamp. (C) The peripheral cue is the gray groove on the bench. (D) The target is the red groove on the bench.
Figure 10. (A) RT for each condition. (B) IOR effect size for each condition. Far and near indicate that targets appear in far and near spaces, respectively. Space IOR refers to the combination of location- and object-based IOR. Object IOR refers to the object-based IOR. The error bars correspond to the standard error of the mean. *p < 0.05 and ***p < 0.001.