January 2017 | Volume 17, Issue 1 | Open Access Article
Asymmetric representations of upper and lower visual fields in egocentric and allocentric references
Author Affiliations & Notes
  • Yang Zhou
    State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
    Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai, China
  • Gongchen Yu
    Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai, China
  • Xuefei Yu
    Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Shanghai, China
  • Si Wu
    State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
  • Mingsha Zhang
    State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
  • Footnotes
    *  These authors contributed equally to this work.
Journal of Vision January 2017, Vol.17, 9. doi:10.1167/17.1.9
Yang Zhou, Gongchen Yu, Xuefei Yu, Si Wu, Mingsha Zhang; Asymmetric representations of upper and lower visual fields in egocentric and allocentric references. Journal of Vision 2017;17(1):9. doi: 10.1167/17.1.9.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Two spatial reference systems, the observer-centered (egocentric) and the object-centered (allocentric) reference, are most commonly used to locate the positions of external objects in space. Although we sense the world as a unified entity, visual processing is asymmetric between the upper and lower visual fields (VFs). For example, goal-directed reaching responses are more efficient in the lower VF. Such asymmetry suggests that visual space might be composed of different realms with respect to perception and action. Since the peripersonal realm includes the space that one can reach, mostly in the lower VF, it is highly likely that the peripersonal realm is mainly represented in the egocentric reference for visuomotor operation. In contrast, the extrapersonal realm lies away from the observer, mostly in the upper VF, and is presumably represented in the allocentric reference for orientation in topographically defined space. This theory, however, has not been thoroughly tested experimentally. In the present study, we assessed the contributions of the egocentric and allocentric reference systems to visual discrimination in the upper and lower VFs by measuring the manual reaction times (RTs) of human subjects. We found that (a) the influence of a target's egocentric location on visual discrimination was stronger in the lower VF, and (b) the influence of a target's allocentric location on visual discrimination was stronger in the upper VF. These results support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively.

Introduction
The spatial location of an object in the visual world can be represented in both an observer-centered (egocentric) reference (Goodale & Milner, 1992; Halligan, Fink, Marshall, & Vallar, 2003; Mendoza & Thomas, 1975) and an object-centered (allocentric) reference (Burgess, 2006; Dean & Platt, 2006; Moorman & Olson, 2007; Olson, 2003; Ward & Arend, 2007). Spatial information encoded in one reference system can strongly affect information processing in the other (Bridgeman, Peery, & Anand, 1997; Neggers, Scholvinck, van der Lubbe, & Postma, 2005; Roelofs, 1935). It has been found that the egocentric and allocentric reference systems are asymmetrically distributed between the left and right visual fields (VFs; Jewell & McCourt, 2000; Zhou, Liu, Zhang, & Zhang, 2013). Such an asymmetric distribution might be caused by the unbalanced representation of ipsilateral visual information between the left and right hemispheres: while each hemisphere receives equal visual input from the contralateral visual field (Tootell, Silverman, Switkes, & De Valois, 1982), the right hemisphere receives more ipsilateral visual input than the left hemisphere (Ffytche, Howseman, Edwards, Sandeman, & Zeki, 2000; Zhou et al., 2013). 
In fact, the representations of the upper and lower VFs in the primate visual system are asymmetric, too. In humans and macaques, the densities of cone photoreceptors and retinal ganglion cells are significantly greater in the nasal and superior retinal quadrants (Curcio & Allen, 1990; Curcio, Sloan, Packer, Hendrickson, & Kalina, 1987), which suggests that visual spatial resolution is higher in the lower VF than in the upper VF. Unsurprisingly, a preferred representation of objects in the lower visual field has been found in the dorsal lateral geniculate nucleus (Schein & de Monasterio, 1987), V1 (Van Essen, Newsome, & Maunsell, 1984), and extrastriate visual cortex (Rossit, McAdam, McLean, Goodale, & Culham, 2012; Van Essen, Newsome, Maunsell, & Bixby, 1986). Moreover, a line of psychophysical studies has found that visually guided actions are enhanced in the lower VF compared with the upper VF (Amenedo, Pazo-Alvarez, & Cadaveira, 2007; Danckert & Goodale, 2001; Genzano, Di Nocera, & Ferlazzo, 2001; Payne, 1967; Rubin, Nakayama, & Shapley, 1996; Thomas & Elias, 2011). Such behavioral asymmetry might be due to the asymmetric representations of the upper and lower VFs in the dorsal and ventral visual pathways, respectively (Curcio & Allen, 1990; Curcio et al., 1987; Galletti, Fattori, Kutz, & Gamberini, 1999; Gamberini, Galletti, Bosco, Breveglieri, & Fattori, 2011; Previc, 1990; Rossit et al., 2012). That is, the lower VF is predominantly represented along the dorsal visual pathway, which is important for visually guided actions, whereas the upper VF is predominantly represented in the ventral pathway, which is important for perceptual identification of objects (Goodale & Milner, 1992). 
Since behavioral actions use various egocentric reference frames centered on each motor effector (Andersen & Buneo, 2002; Graziano, 2006; Pesaran, Nelson, & Andersen, 2006), the behavioral asymmetry between the upper and lower VFs suggests that the lower VF is represented more strongly in egocentric reference frames than the upper VF is. 
Based on this assumption, a theoretical model proposed that the representation of space in the brain is not uniform but rather divided into different behavioral realms (Previc, 1990, 1998). The peripersonal realm, which is used to guide our daily actions, is biased toward the lower VF; thus, objects in the lower VF might be represented primarily in the egocentric reference frame. In contrast, the ambient extrapersonal realm, which is far from the observer and largely overlaps with the upper VF, is used for behavioral orientation in topographically defined space; thus, objects in the upper VF are likely to be represented primarily in the allocentric reference frame. To date, however, no convincing experimental evidence has been reported to support this theory, mainly because spatial information is always encoded simultaneously in the egocentric and allocentric reference systems, which makes it very difficult to confidently dissociate the effects of one reference system from the other. 
To deal with this problem, we designed two visual discrimination tasks in which the egocentric information of the visual target was the same while the allocentric information differed. During the experiments, subjects were instructed to make a manual response depending on either the color of a visual target (color discrimination task) or the allocentric location of a visual target (allocentric discrimination task), irrespective of the target's egocentric location. Experiments were performed in a completely dark environment, eliminating unwanted allocentric referents. 
In the color discrimination task, the spatial information of the visual target was primarily encoded in the egocentric reference frame. Even though the egocentric location of the target was task irrelevant, it still strongly influenced the manual reaction time (RT). This is known as the Simon effect (Simon, 1969) or stimulus–response compatibility (SRC; Baddeley, 1961; Fitts & Deininger, 1954; Fitts & Seeger, 1953). Our working hypothesis was that if the egocentric reference systems were distributed symmetrically between the upper and lower VFs, then the Simon effect should be similar between the upper and lower VFs as well; otherwise, any asymmetric distribution of the egocentric reference frame across the vertical dimension of the VF would lead to differing Simon effects between the upper and lower VFs. 
However, we could not ignore the fact that the Simon effect in our allocentric discrimination task could originate from both the egocentric and allocentric reference systems (Lu & Proctor, 1995). To dissociate their combined effect on target discrimination in the allocentric discrimination task, we propose a simple subtraction model: subtracting the RTs of the color discrimination task (egocentric effect) from the RTs of the allocentric discrimination task (combined egocentric and allocentric effect). We believe the subtracted results reflect the contribution of allocentric information to visual target discrimination. Whether any difference in Simon effects between the upper and lower VFs remains after the subtraction indicates whether the distribution of the allocentric reference frame is asymmetric or symmetric. 
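The subtraction model amounts to a per-location difference of mean RTs between the two tasks. A minimal sketch follows; the function name and the dict-based data layout are our assumptions for illustration, not the authors' code:

```python
def allocentric_effect(rt_allo, rt_color):
    """Subtraction model: RT(allocentric task) minus RT(color task),
    location by location, to isolate the allocentric contribution.

    rt_allo, rt_color: dicts mapping each egocentric location to the
    mean RT (ms) in the allocentric and color discrimination tasks.
    """
    # Egocentric target positions and manual responses are matched
    # across the two tasks, so the difference cancels the egocentric
    # (Simon) component and leaves the allocentric component.
    return {loc: rt_allo[loc] - rt_color[loc] for loc in rt_allo}

# Hypothetical means at two eccentricities (+7 deg, -7 deg):
effect = allocentric_effect({'+7': 520, '-7': 500},
                            {'+7': 470, '-7': 480})
# effect == {'+7': 50, '-7': 20}
```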
Our data show that the influence of a target's egocentric location on RT is stronger in the lower VF than in the upper VF; in contrast, the influence of a target's allocentric location on RT is stronger in the upper VF than in the lower VF. These results are unaffected by the responding hand (left vs. right). Thus, we provide convincing evidence to support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively. 
Methods
Sixteen naive subjects (23–27 years; seven male, nine female) participated in the present study. All subjects were right-handed and had normal or corrected-to-normal vision. All experiments were performed in a completely dark environment. Informed consent was obtained before the experiments in accordance with procedures approved by the Research Ethics Board of the Shanghai Institutes for Biological Sciences and Beijing Normal University. All participants were reimbursed for their time. 
All visual stimuli were presented on a 21-inch fast-phosphor CRT monitor (Sony Multiscan G520, 1280 × 960 pixels, 100 Hz vertical refresh rate; Sony, Tokyo, Japan) at a viewing distance of 60 cm from the subjects' eyes. The visual stimuli appeared on a black background. We used an infrared eye tracker (Eye-Link 2000 Desktop Mount, SR Research, Ontario, Canada) to monitor the subjects' eye positions. The response keys were modified from a computer keyboard and were located on the sagittal midline of the subject's body, with the up key oriented farther away from the subject. Subjects were asked to use the index and middle fingers of either the left or right hand to press the up and down keys as quickly as possible, according to the stimuli. To avoid fatigue, experiments were separated into four sessions per day. Each session took a subject about 10–12 minutes to finish, and there was a minimum 8-minute intersession interval for rest. 
Behavior tasks
Color discrimination task (Figure 1A)
The task began with a red spot appearing at the center of the CRT screen. The subjects were instructed to fixate on the spot for a random interval of 600–1000 ms, after which an isoluminant red or blue dot (17.64 ± 0.3 cd/m2, measured with a Konica Minolta LS-110; 1.2°) appeared for 200 ms at one of eight possible egocentric locations along a vertical line 6° to the left or right of the fixation point. The eight locations were arranged from 7° above to 7° below the horizontal meridian, with adjacent locations 2° apart (Figure 1B). Within a session, the subjects were instructed to use either the right or left hand to press the up key if the target was red and the down key if the target was blue. Two types of trials were classified according to the stimulus–response compatibility (SRC) in the egocentric reference: the COMP condition (target in the upper VF and up key pressed, UU; or target in the lower VF and down key pressed, LD) and the INCOMP condition (target in the upper VF and down key pressed, UD; or target in the lower VF and up key pressed, LU) (Figure 1C). In the color discrimination task, the spatial information of the visual target was primarily encoded in the retinotopic coordinate frame, which is one of the egocentric reference systems. 
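The COMP/INCOMP classification described above can be sketched as a small lookup. The function and label names are ours, not from the paper:

```python
def classify_color_trial(target_field: str, response_key: str) -> str:
    """Classify a color-discrimination trial by stimulus-response
    compatibility in the egocentric (retinotopic) reference.

    target_field: 'upper' or 'lower' visual field of the target.
    response_key: 'up' or 'down' key pressed by the subject.
    """
    compatible = ((target_field == 'upper' and response_key == 'up') or
                  (target_field == 'lower' and response_key == 'down'))
    return 'COMP' if compatible else 'INCOMP'

# The four cases from Figure 1C:
# UU -> COMP, LD -> COMP, UD -> INCOMP, LU -> INCOMP
```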
Figure 1
 
Behavior tasks and experiment conditions. (A, D) Two behavioral tasks: color discrimination and allocentric discrimination tasks. (B, E) The possible positions of stimulus in two tasks. (C, F) Two types of trials in color discrimination and allocentric discrimination tasks: compatible and incompatible condition. The shaded fingers denote the pressed key.
Allocentric discrimination task (Figure 1D)
The task sequence of the allocentric discrimination task was similar to that of the color discrimination task. However, the visual stimuli were a pair of vertically aligned green dots (1.5° and 0.6° in diameter, 2° apart), and the midpoint between the centers of the two dots appeared at the same egocentric locations as in the color discrimination task (Figure 1E). The visual stimuli appeared for 200 ms. Subjects pressed the up or down key based on the location of the larger dot (target) relative to the smaller one (allocentric reference). Two types of trials were classified according to the spatial relationship between the target's egocentric and allocentric positions, as well as the subject's response (ego-allo-response compatibility): COMP conditions (egocentric upper, allocentric upper, and response up, UU; egocentric lower, allocentric lower, and response down, LL) and INCOMP conditions (egocentric upper, allocentric lower, and response down, UL; egocentric lower, allocentric upper, and response up, LU) (Figure 1F). Previous studies have found that the spatial position of an object in one reference system affects the judgment of the object's location in another reference system (Bridgeman et al., 1997; Neggers et al., 2005; Roelofs, 1935). Therefore, in the allocentric discrimination task, the judgment of the target's allocentric location was influenced not only by the target's allocentric location but also by the target's egocentric location. Because the egocentric positions of the target were identical and the manual response was exactly the same between the color and allocentric discrimination tasks, we subtracted the RTs in the color discrimination task (egocentric effect) from the RTs in the allocentric discrimination task (combined ego- and allocentric effects) to assess the effect of the allocentric reference on target discrimination. 
Data analysis
We calculated the manual RT and normalized the data using the same criteria as reported previously (Zhou et al., 2013). In brief, we collected 126,208 trials in total and excluded 9,875 trials (7.8%; error trials, fixation-break trials, and trials with RTs exceeding three standard deviations) from the data analysis. The excluded trials were almost evenly distributed among subjects. We employed a generalized extreme value (GEV) distribution model to fit the RT distribution and calculated the mean RT for each experimental condition (Guan, Liu, Xia, & Zhang, 2012). 
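A minimal sketch of this step (outlier exclusion followed by a GEV fit) using SciPy's `genextreme` parameterization; the function name, the synthetic data, and the exact exclusion rule are our assumptions, not the authors' code:

```python
import numpy as np
from scipy.stats import genextreme

def mean_rt_from_gev(rts):
    """Exclude RTs more than 3 SD from the mean, fit a generalized
    extreme value (GEV) distribution, and return the fitted mean."""
    rts = np.asarray(rts, dtype=float)
    mu, sd = rts.mean(), rts.std()
    kept = rts[np.abs(rts - mu) <= 3 * sd]           # outlier exclusion
    c, loc, scale = genextreme.fit(kept)             # ML fit of the GEV
    return genextreme.mean(c, loc=loc, scale=scale)  # mean of fitted model

# Hypothetical right-skewed RT sample (ms):
rng = np.random.default_rng(0)
rts = 300 + rng.gamma(2.0, 40.0, size=2000)
fitted_mean = mean_rt_from_gev(rts)
```

Fitting a skewed parametric form such as the GEV gives a mean estimate that is less sensitive to the long right tail typical of RT distributions than a raw sample mean.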
To diminish the influence on the data analysis of the intrinsic RT difference between the two fingers of each subject and of the RT variations among subjects, we applied two data transformations before further analysis. First, the RT difference between the two fingers had at least two causes: the Simon effect and the intrinsic difference between the two digits' responses. To diminish the latter, we defined a baseline condition in which the stimuli were closest to the horizontal meridian; in these cases, the RTs were minimally affected by the target's egocentric location. For each subject, the baseline RT difference between the two fingers was calculated by subtracting the mean RT of the up responses from the mean RT of the down responses in the baseline condition. The postadjusted RTs for the down responses were then calculated by subtracting 50% of the baseline RT difference from the RTs of the down responses, whereas the postadjusted RTs for the up responses were calculated by adding 50% of the baseline RT difference to the RTs of the up responses. The following equations denote the calculation of the postadjusted RT:

dRT(baseline) = RTd(baseline) − RTu(baseline)  (1)

pRT(i) = RT(i) − dRT(baseline)/2 for down responses; pRT(i) = RT(i) + dRT(baseline)/2 for up responses  (2)

where dRT(baseline) is the baseline RT difference between the two fingers, RTd(baseline) is the mean RT of the down responses in the baseline condition, RTu(baseline) is the mean RT of the up responses in the baseline condition, pRT(i) is the postadjusted RT at the i-th location of the VF, and RT(i) is the raw RT at the i-th location of the VF. 
Second, the postadjusted RTs varied among subjects and between tasks. Such variations confounded the comparisons between the COMP and INCOMP conditions within and across tasks. To minimize this confound, we normalized the postadjusted RTs of each subject in each behavior task as denoted in the following equation:

nRT(i) = pRT(i) / pRT(mean)  (3)

where nRT(i) is the normalized RT at the i-th location of the VF, pRT(i) is the postadjusted RT at the i-th location of the VF (calculated from Equations 1 and 2), and pRT(mean) is the average postadjusted RT over all tested locations. 
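The two transformations above can be sketched in a few lines of NumPy; the function names and array layout are our assumptions for illustration:

```python
import numpy as np

def post_adjust(rt, response, rt_d_base, rt_u_base):
    """Equations 1-2: remove the intrinsic finger (up/down) difference.

    rt        : array of raw RTs, one per tested location.
    response  : matching array of 'up'/'down' response labels.
    rt_d_base, rt_u_base : mean baseline RTs (locations nearest the
        horizontal meridian) for down and up responses, respectively.
    """
    d_base = rt_d_base - rt_u_base                 # Equation 1
    rt = np.asarray(rt, dtype=float).copy()
    response = np.asarray(response)
    rt[response == 'down'] -= 0.5 * d_base         # Equation 2, down
    rt[response == 'up'] += 0.5 * d_base           # Equation 2, up
    return rt

def normalize(prt):
    """Equation 3: divide by the mean post-adjusted RT over locations."""
    prt = np.asarray(prt, dtype=float)
    return prt / prt.mean()

# Hypothetical example: a 20 ms baseline finger difference is split
# evenly between the up and down responses.
prt = post_adjust([400, 420], ['up', 'down'], rt_d_base=410, rt_u_base=390)
# prt -> [410., 410.]
```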
Results
In the earlier phase of our experiments, we collected data from the left and right hands of eight subjects. All eight subjects showed similar ego- and allocentric effects between the left- and right-hand responses in the two tasks (Table 1). Since the objective of our study was to assess allocentric and egocentric spatial representation in the upper and lower VFs rather than the response difference between the two hands, we collected only right-hand response data from the other eight subjects in the later phase of the experiments. We first present data that combine RTs from the left and right hands, and then show data for each hand separately to demonstrate that the ego- and allocentric effects on RT are unaffected by the responding hand. 
Table 1
 
Postadjusted RTs of the left and right hands show similar patterns in both tasks. Notes: COMP = compatible condition; INCOMP = incompatible condition. Results reported as mean ± SEM.
Consistent with previous studies, the spatial compatibility between the target's location (upper VF, lower VF) and the response pattern (up key, down key) markedly affected RT, a phenomenon known as stimulus–response compatibility (SRC; Baddeley, 1961; Vallesi, Mapelli, Schiff, Amodio, & Umilta, 2005) or the Simon effect (Simon, 1969). 
Effect of egocentric location on RT stronger in the lower VF
We employed a color discrimination task to explore the representation of the VF in egocentric references. In this task, even though the egocentric position of the visual target was task irrelevant, it strongly affected the subjects' target discrimination, as reflected in the manual RT. The RTs of an example subject for each egocentric position are shown in Figure 2A. In five of the six off-horizontal-meridian locations, the RT in the compatible (COMP) condition was significantly shorter than the RT in the incompatible (INCOMP) condition (Wilcoxon test, Z value = −1.97, rank sum = 12,884, maximum p = 0.049). The averaged population RT results (Figure 2B and Table 2) were consistent with those of the example subject. In both the upper and lower VFs, there were significant RT differences between the COMP and INCOMP conditions for all egocentric locations [two-tailed paired t test, t(15) = −3.71, maximum p = 0.002]. Additionally, the RT differences between the COMP and INCOMP conditions gradually increased with the target's egocentric eccentricity in the upper and lower VFs. More importantly, the RT difference between the COMP and INCOMP conditions in the lower VF was significantly larger than that in the upper VF for both the sample subject and the averaged population data [example subject: Wilcoxon test, Z value = −6.85, rank sum = 100,531, p < 0.001; population: two-tailed paired t test, t(15) = −3.90, p = 0.001]. A detailed comparison of the RT differences between the upper and lower VFs for each subject is shown in Figure 2C; this comparison further verified the difference between the upper and lower VFs [two-tailed paired t test, t(15) = 12.18, p = 0.001]. 
Overall, RTs in the lower VF tended to be shorter in the COMP condition [mean normalized RT: 0.9768 in the upper VF vs. 0.9626 in the lower VF; two-tailed paired t test, t(15) = 1.95, p = 0.070] but were significantly longer in the INCOMP condition [mean-upper = 1.0107; mean-lower = 1.0447; two-tailed paired t test, t(15) = −6.40, p < 0.001] (Figure 2D). Taken together, these results indicate that the influence of egocentric spatial information on visual discrimination was stronger in the lower VF than in the upper VF. 
Figure 2
 
The effect of egocentric locations is stronger in the lower VF than in the upper VF. (A) The normalized RTs of an example subject. Dots and short horizontal bars represent the average normalized mean RTs and the standard error of the mean, respectively. RTs are plotted in different colors: black for pressing the up key; gray for pressing the down key. (B) The normalized mean RTs of 16 subjects. Asterisks denote whether the difference is statistically significant: *p < 0.05, **p < 0.01, ***p < 0.001, two-tailed paired t test. (C) Comparison of the normalized RT differences (INCOMP − COMP) between upper and lower VFs. (D) Comparison of the normalized RTs in INCOMP condition between upper and lower VFs for each data set of all subjects.
Table 2
 
Population postadjusted RT data in both tasks. Notes: COMP = compatible condition; INCOMP = incompatible condition. Results reported as mean ± SEM.
Effect of allocentric location on RT stronger in the upper VF
To assess the contribution of allocentric references to the representation of space in the upper versus the lower VF, we employed an allocentric discrimination task. The normalized RTs in the allocentric discrimination task for the sample subject (same subject as in Figure 2A) are shown in Figure 3A, and those for the averaged population data are shown in Figure 3B (see Table 2 for the postadjusted RT data). Similar to the color discrimination task, the RTs were significantly shorter in the ego-allo-response COMP conditions than in the ego-allo-response INCOMP conditions for all off-center egocentric locations in the upper and lower VFs [example subject: Wilcoxon test, Z value = −3.03, rank sum = 21,555, maximum p = 0.002; population: two-tailed paired t test, t(15) = 4.97, maximum p < 0.001]. In contrast, when comparing the RTs in the same ego-allo-response compatibility condition (COMP or INCOMP) between the upper and lower VFs, the RTs were similar for both the sample subject and the population data [example subject: Wilcoxon test, Z value = −1.07, rank sum = 187,738, p = 0.283; population: two-tailed paired t test, t(15) = 0.02, p = 0.982]. To determine the effects of allocentric locations on RT, we subtracted the RTs in the color discrimination task from the RTs in the allocentric discrimination task in conditions in which the visual target was at the same location in the visual field. With this method, we attempted to dissociate the effect of target representation in the allocentric reference system from that in the egocentric reference system. After subtraction, significant RT differences between the COMP and INCOMP conditions were observed only in the upper VF for the example subject (Figure 3C) and the averaged population data [Figure 3D; population: two-tailed paired t test, t(15) = 5.29, p = 0.001], but not in the lower VF [population: two-tailed paired t test, t(15) = 0.87, p = 0.397]. 
The comparison of each individual subject's postsubtracted RT difference in the lower versus upper VF showed a significant bias toward the upper VF [Figure 3E; population: two-tailed paired t test, t(15) = 2.45, p = 0.027]. These results indicate that allocentric references influence target discrimination more strongly in the upper VF than in the lower VF. 
Figure 3
 
The effect of allocentric locations is stronger in the upper VF than in the lower VF. (A) The normalized RTs from the same subject as in Figure 2A. (B) The averaged RTs of 16 subjects. Same as in Figure 2, each symbol represents data from each individual subject. (C, D) The differential RTs after subtracting RTs of color discrimination task from RTs of allocentric discrimination task (C for example subject, D for population data of 16 subjects). Asterisks denote whether the difference is statistically significant: *p < 0.05, ** p < 0.01, ***p < 0.001, two-tailed paired t test. (E) Comparison of the differential RT difference (INCOMP − COMP) between upper and lower VFs for each individual subject.
Asymmetric effects of egocentric and allocentric reference frames on RT unaffected by responding hand
As illustrated in Table 1, the eight subjects showed similar ego- and allocentric effects on RTs when responding with either the left or right hand in the two discrimination tasks. These results imply that the asymmetric effects of the ego- and allocentric reference frames on RT are not affected by the motor effector. To illustrate this more clearly, we present the detailed RT data of each hand in Figure 4. The population-normalized RTs of the right hand (Figure 4A-C) from the 16 subjects show a distribution pattern similar to the population-normalized RTs of the left hand (Figure 4D-F) from the eight subjects in both tasks, as do the differential RTs. RTs from both hands show a stronger egocentric effect in the lower VF and a stronger allocentric effect in the upper VF; these results are consistent with those in Figures 2 and 3. 
Figure 4
 
The asymmetric effects of ego- and allocentric reference frames on RT are unaffected by the responding hand. (A, B, and C) Data of right-hand responses: the normalized population RTs in the two tasks (A, color discrimination task; B, allocentric discrimination task) and the differential RTs between the two tasks (C). (D, E, and F) Data of left-hand responses, plotted in the same format as panels (A, B, C). Asterisks denote whether the difference is statistically significant: *p < 0.05, **p < 0.01, ***p < 0.001, two-tailed paired t test. Data of the left and right hands show consistent results.
Discussion
The distributions of the ego- and allocentric references in the upper and lower VFs have rarely been studied experimentally. To the best of our knowledge, only one psychophysical study has approached this question. In that study, subjects were asked to judge either the egocentric or the allocentric position of a colored dot within a white circle (Sdoia, Couyoumdjian, & Ferlazzo, 2004). The combined visual stimuli appeared in either the upper or lower VF. Although this study found RT facilitation for allocentric discrimination in the upper VF and RT facilitation for egocentric discrimination in the lower VF, there were two complications. First, the outline of the circle was a more salient allocentric reference than the center of the circle; under such experimental conditions, it was not clear whether the center or the outline of the circle served as the allocentric reference. Since the colored dot (target) lay between the center and the outline of the circle, the different allocentric references would lead to opposite allocentric judgments (left vs. right). Second, the authors did not explicitly describe the egocentric locations of the visual stimuli (circle and colored dot), which provided little evidence about the egocentric effect between the upper and lower VFs. In the present study, we used paired dots (one big and one small) as allocentric visual stimuli for which the allocentric reference was clearly defined (the small dot). We also presented the visual stimuli at eight egocentric locations along a vertical axis of the left or right VF, so we were able to systematically examine the effects of the ego- and allocentric reference systems. Here, we provide clear evidence for the asymmetric effects of the allocentric and egocentric reference frames on RT between the upper and lower VFs, which is partially consistent with the findings of the previous study (Sdoia et al., 2004). 
Moreover, we show that these asymmetric effects are unaffected by the responding hand (Figure 4), which indicates that the asymmetric distributions of the allocentric and egocentric reference frames between the upper and lower VFs mainly affect spatial perception rather than motor control.
Theoretically, the Simon effect can originate from multiple spatial reference systems (Lu & Proctor, 1995). To eliminate the possibility of surrounding objects serving as allocentric referents, all experiments in the present study were conducted in a completely dark environment. Since there was only a single visual target (a unicolor dot) on the screen in the color discrimination task, there was no allocentric referent other than the fixation point. Even if the fixation point were taken as the center of an allocentric reference frame, that frame would coincide completely with the retinotopic reference frame (one of the most important egocentric reference frames). Given that spatial information from the environment is first encoded by the sensory receptors in inherently egocentric reference systems, we believe that the vertical Simon effect in our color discrimination task was mainly caused by the target's location in the egocentric reference frame. Although we did not reverse the color-response pairing (e.g., blue meaning "press the upper button" and red meaning "press the lower button"), it is very unlikely that the reversed pairing would reverse the Simon effect. In fact, previous studies have found that the Simon effect depends on the stimulus position and not on the nature of the visual stimuli (e.g., words and arrows; Whitaker, 1982).
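The logic of the vertical Simon effect described above hinges on whether the task-irrelevant stimulus elevation (upper/lower VF) corresponds to the required response key (up/down). As a minimal illustrative sketch (not the authors' analysis code; the trial records and RT values below are invented), trials can be classified as compatible or incompatible and the effect estimated as the RT cost of incompatibility:

```python
from statistics import mean

# Hypothetical trials: stimulus elevation ("vf"), required response key
# ("key"), and manual reaction time in ms. All values are invented
# for illustration only.
trials = [
    {"vf": "upper", "key": "up",   "rt": 410.0},
    {"vf": "upper", "key": "down", "rt": 442.0},
    {"vf": "lower", "key": "down", "rt": 405.0},
    {"vf": "lower", "key": "up",   "rt": 450.0},
    {"vf": "upper", "key": "up",   "rt": 418.0},
    {"vf": "lower", "key": "up",   "rt": 446.0},
]

def is_compatible(trial):
    # Compatible: stimulus in the upper VF with an "up" response,
    # or stimulus in the lower VF with a "down" response.
    return (trial["vf"] == "upper") == (trial["key"] == "up")

comp = [t["rt"] for t in trials if is_compatible(t)]
incomp = [t["rt"] for t in trials if not is_compatible(t)]

# The Simon effect is the mean RT cost of spatial incompatibility.
simon_effect = mean(incomp) - mean(comp)
print(f"Simon effect: {simon_effect:.1f} ms")  # → 35.0 ms here
```

In this toy example the effect is computed collapsed across VFs; the asymmetry reported in the present study corresponds to this quantity being larger for lower-VF trials than for upper-VF trials.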
It has been reported that spatial information represented in one reference system can strongly influence judgments of object location in another reference system (Bridgeman et al., 1997; Neggers et al., 2005; Roelofs, 1935). In a previous study, we reported that the task-irrelevant egocentric location asymmetrically influenced allocentric position discrimination between the left and right VFs along the horizontal meridian (Zhou et al., 2012). These results indicated that the distribution of the egocentric reference system is not uniform between the left and right VFs. In the present study, we found that the influence of the egocentric location on target discrimination was more dominant in the lower VF, whereas the influence of the allocentric location was more dominant in the upper VF. Taken together, the findings of our studies support the hypothesis that external space is represented asymmetrically in the egocentric and allocentric reference systems (Previc, 1998).
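The core between-field comparison reported here can be summarized as follows: compute each subject's compatibility effect (INCOMP − COMP) separately for the upper and lower VFs, then compare the two effects across subjects with a two-tailed paired t test. The sketch below illustrates this pipeline on synthetic per-subject mean RTs (all numbers are invented and merely mimic the reported direction of the asymmetry; this is not the authors' analysis code):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects = 16

# Hypothetical per-subject mean RTs (ms) by VF and compatibility.
# The lower VF is given a larger compatibility effect, mimicking the
# reported egocentric asymmetry.
comp_upper = rng.normal(420, 15, n_subjects)
incomp_upper = comp_upper + rng.normal(10, 5, n_subjects)
comp_lower = rng.normal(420, 15, n_subjects)
incomp_lower = comp_lower + rng.normal(30, 5, n_subjects)

# Compatibility effect (INCOMP - COMP) per subject, per VF.
effect_upper = incomp_upper - comp_upper
effect_lower = incomp_lower - comp_lower

# Two-tailed paired t test: is the effect larger in the lower VF?
t, p = ttest_rel(effect_lower, effect_upper)
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4g}")
```

The same paired comparison, applied to the differential RTs between the allocentric and color discrimination tasks, isolates the allocentric contribution reported for the upper VF.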
Acknowledgments
This study was supported by the following foundations: the 973 Program (2100CBA00400), Ministry of Science and Technology of the People's Republic of China; and the National Natural Science Foundation of China (31471069; 91432109; 31261160495). Y. Z. and G. Y. contributed equally to this study. Y. Z. and G. Y. designed the experiments. Y. Z., G. Y., and X. Y. collected and analyzed the data. M. Z. and S. W. supervised the experiments and wrote the paper. The authors declare no competing financial interests.
Commercial relationships: none. 
Corresponding author: Mingsha Zhang. 
Address: Beijing Normal University, Beijing, China. 
References
Amenedo E., Pazo-Alvarez P., & Cadaveira F. (2007). Vertical asymmetries in pre-attentive detection of changes in motion direction. International Journal of Psychophysiology, 64 (2), 184–189.
Andersen R. A., & Buneo C. A. (2002). Intentional maps in posterior parietal cortex. Annual Review of Neuroscience, 25, 189–220.
Baddeley A. D. (1961). Stimulus-response compatibility in the paired-associate learning of nonsense syllables. Nature, 191, 1327–1328.
Bridgeman B., Peery S., & Anand S. (1997). Interaction of cognitive and sensorimotor maps of visual space. Perception & Psychophysics, 59 (3), 456–469.
Burgess N. (2006). Spatial memory: how egocentric and allocentric combine. Trends in Cognitive Sciences, 10 (12), 551–557.
Curcio C. A., & Allen K. A. (1990). Topography of ganglion cells in human retina. Journal of Comparative Neurology, 300 (1), 5–25.
Curcio C. A., Sloan K. R., Packer O., Hendrickson A. E., & Kalina R. E. (1987). Distribution of cones in human and monkey retina: Individual variability and radial asymmetry. Science, 236 (4801), 579–582.
Danckert J., & Goodale M. A. (2001). Superior performance for visually guided pointing in the lower visual field. Experimental Brain Research, 137 (3-4), 303–308.
Dean H. L., & Platt M. L. (2006). Allocentric spatial referencing of neuronal activity in macaque posterior cingulate cortex. Journal of Neuroscience, 26 (4), 1117–1127.
Ffytche D. H., Howseman A., Edwards R., Sandeman D. R., & Zeki S. (2000). Human area V5 and motion in the ipsilateral visual field. European Journal of Neuroscience, 12 (8), 3015–3025.
Fitts P. M., & Deininger R. L. (1954). S-R compatibility: Correspondence among paired elements within stimulus and response codes. Journal of Experimental Psychology, 48 (6), 483–492.
Fitts P. M., & Seeger C. M. (1953). S-R compatibility: spatial characteristics of stimulus and response codes. Journal of Experimental Psychology, 46 (3), 199–210.
Galletti C., Fattori P., Kutz D. F., & Gamberini M. (1999). Brain location and visual topography of cortical area V6A in the macaque monkey. European Journal of Neuroscience, 11 (2), 575–582.
Gamberini M., Galletti C., Bosco A., Breveglieri R., & Fattori P. (2011). Is the medial posterior parietal area V6A a single functional area? Journal of Neuroscience, 31 (13), 5145–5157.
Genzano V. R., Di Nocera F., & Ferlazzo F. (2001). Upper/lower visual field asymmetry on a spatial relocation memory task. NeuroReport, 12 (6), 1227–1230.
Goodale M. A., & Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15 (1), 20–25.
Graziano M. S. (2006). Progress in understanding spatial coordinate systems in the primate brain. Neuron, 51 (1), 7–9.
Guan S., Liu Y., Xia R., & Zhang M. (2012). Covert attention regulates saccadic reaction time by routing between different visual-oculomotor pathways. Journal of Neurophysiology, 107 (6), 1748–1755.
Halligan P. W., Fink G. R., Marshall J. C., & Vallar G. (2003). Spatial cognition: Evidence from visual neglect. Trends in Cognitive Sciences, 7 (3), 125–133.
Jewell G., & McCourt M. E. (2000). Pseudoneglect: a review and meta-analysis of performance factors in line bisection tasks. Neuropsychologia, 38 (1), 93–110.
Lu C. H., & Proctor R. W. (1995). The influence of irrelevant location information on performance: A review of the Simon and spatial Stroop effects. Psychonomic Bulletin & Review, 2 (2), 174–207.
Mendoza J. E., & Thomas R. K., Jr. (1975). Effects of posterior parietal and frontal neocortical lesions in the squirrel monkey. Journal of Comparative and Physiological Psychology, 89 (2), 170–182.
Moorman D. E., & Olson C. R. (2007). Impact of experience on the representation of object-centered space in the macaque supplementary eye field. Journal of Neurophysiology, 97 (3), 2159–2173.
Neggers S. F., Scholvinck M. L., van der Lubbe R. H., & Postma A. (2005). Quantifying the interactions between allo- and egocentric representations of space. Acta Psychologica (Amsterdam), 118 (1-2), 25–45.
Olson C. R. (2003). Brain representation of object-centered space in monkeys and humans. Annual Review of Neuroscience, 26, 331–354.
Payne W. H. (1967). Visual reaction times on a circle about the fovea. Science, 155 (3761), 481–482.
Pesaran B., Nelson M. J., & Andersen R. A. (2006). Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron, 51 (1), 125–134.
Previc F. H. (1990). Functional specialization in the lower and upper visual fields in humans: Its ecological origins and neurophysiological implications. Behavioral and Brain Sciences, 13 (3), 519–541.
Previc F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124 (2), 123–164.
Roelofs C. (1935). Optische Lokalisation. Archiv für Augenheilkunde, 109, 395–415.
Rossit S., McAdam T., McLean D. A., Goodale M. A., & Culham J. C. (2012). fMRI reveals a lower visual field preference for hand actions in human superior parieto-occipital cortex (SPOC) and precuneus. Cortex, 49 (9), 2525–2541.
Rubin N., Nakayama K., & Shapley R. (1996). Enhanced perception of illusory contours in the lower versus upper visual hemifields. Science, 271 (5249), 651–653.
Schein S. J., & de Monasterio F. M. (1987). Mapping of retinal and geniculate neurons onto striate cortex of macaque. Journal of Neuroscience, 7 (4), 996–1009.
Sdoia S., Couyoumdjian A., & Ferlazzo F. (2004). Opposite visual field asymmetries for egocentric and allocentric spatial judgments. NeuroReport, 15 (8), 1303–1305.
Simon J. R. (1969). Reactions toward the source of stimulation. Journal of Experimental Psychology, 81 (1), 174–176.
Thomas N. A., & Elias L. J. (2011). Upper and lower visual field differences in perceptual asymmetries. Brain Research, 1387, 108–115.
Tootell R. B., Silverman M. S., Switkes E., & De Valois R. L. (1982). Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science, 218 (4575), 902–904.
Vallesi A., Mapelli D., Schiff S., Amodio P., & Umilta C. (2005). Horizontal and vertical Simon effect: different underlying mechanisms? Cognition, 96 (1), B33–43.
Van Essen D. C., Newsome W. T., & Maunsell J. H. (1984). The visual field representation in striate cortex of the macaque monkey: Asymmetries, anisotropies, and individual variability. Vision Research, 24 (5), 429–448.
Van Essen D. C., Newsome W. T., Maunsell J. H., & Bixby J. L. (1986). The projections from striate cortex (V1) to areas V2 and V3 in the macaque monkey: Asymmetries, areal boundaries, and patchy connections. Journal of Comparative Neurology, 244 (4), 451–480.
Ward R., & Arend I. (2007). An object-based frame of reference within the human pulvinar. Brain, 130 (Pt 9), 2462–2469.
Whitaker L. A. (1982). Stimulus-response compatibility for left-right discriminations as a function of stimulus position. Journal of Experimental Psychology: Human Perception & Performance, 8 (6), 865–874.
Zhou Y., Liu Y., Zhang W., & Zhang M. (2012). Asymmetric influence of egocentric representation onto allocentric perception. Journal of Neuroscience, 32 (24), 8354–8360.
Figure 1
 
Behavioral tasks and experimental conditions. (A, D) The two behavioral tasks: the color discrimination and allocentric discrimination tasks. (B, E) The possible stimulus positions in the two tasks. (C, F) The two trial types in each task: the compatible and incompatible conditions. The shaded fingers denote the pressed key.
Figure 2
 
The effect of egocentric locations is stronger in the lower VF than in the upper VF. (A) The normalized RTs of an example subject. Dots and short horizontal bars represent the average normalized mean RTs and the standard error of the mean, respectively. RTs are plotted in different colors: black for pressing the up key; gray for pressing the down key. (B) The normalized mean RTs of 16 subjects. Asterisks denote whether the difference is statistically significant: *p < 0.05, **p < 0.01, ***p < 0.001, two-tailed paired t test. (C) Comparison of the normalized RT differences (INCOMP − COMP) between upper and lower VFs. (D) Comparison of the normalized RTs in INCOMP condition between upper and lower VFs for each data set of all subjects.
Figure 3
 
The effect of allocentric locations is stronger in the upper VF than in the lower VF. (A) The normalized RTs from the same subject as in Figure 2A. (B) The averaged RTs of 16 subjects. As in Figure 2, each symbol represents data from an individual subject. (C, D) The differential RTs obtained by subtracting the RTs of the color discrimination task from those of the allocentric discrimination task (C, example subject; D, population data of 16 subjects). Asterisks denote statistical significance: *p < 0.05, **p < 0.01, ***p < 0.001, two-tailed paired t test. (E) Comparison of the differential RT difference (INCOMP − COMP) between the upper and lower VFs for each individual subject.
Table 1
 
Postadjusted RTs of the left and right hands show a similar pattern in both tasks. Notes: COMP = compatibility; INCOMP = incompatibility. Results reported as mean ± SEM.
Table 2
 
Population postadjusted RT data in both tasks. Notes: COMP = compatibility; INCOMP = incompatibility. Results reported as mean ± SEM.