Research Article | December 2009

Pattern matching is assessed in retinotopic coordinates

Ayelet McKyton, Yoni Pertzov, Ehud Zohary

Journal of Vision December 2009, Vol. 9(13):19. https://doi.org/10.1167/9.13.19
Abstract

We typically examine scenes by performing multiple saccades to different objects of interest within the image. Therefore, an extra-retinotopic representation, invariant to the changes in the retinal image caused by eye movements, might be useful for high-level visual processing. We investigate here, using a matching task, whether the representation of complex natural images is retinotopic or screen-based. Subjects observed two simultaneously presented images, made a saccadic eye movement to a new fixation point, and viewed a third image. Their task was to judge whether the third image was identical to one of the two earlier images or different. Identical images could appear either in the same retinotopic position, in the same screen position, or in totally different locations. Performance was best when the identical images appeared in the same retinotopic position and worst when they appeared in the opposite hemifield. Counter to commonplace intuition, no advantage was conferred by presenting the identical images in the same screen position. This, together with performance sensitivity to image translation of a few degrees, suggests that image matching, which can often be judged without overall recognition of the scene, is mostly determined by neuronal activity in earlier brain areas containing a strictly retinotopic representation and small receptive fields.

Introduction
The visual image is heavily blurred in the periphery. One of our means to compensate for this limitation is to constantly scan the visual scene, thereby generating a different retinal image with every new eye movement. Incredibly, our brain seamlessly generates a stable representation of the visual scene in spite of this jerky and incomplete visual information (Hayhoe, Shrivastava, Mruczek, & Pelz, 2003; Melcher, 2005) and also stores some task-relevant information across saccades (Van Eccelpoel, Germeys, De Graef, & Verfaillie, 2008). How this stability is achieved is a matter of debate: one idea is that an efference copy signal, indicating the upcoming eye position, is the most critical component (Sommer & Wurtz, 2002; Von Holst & Mittelstaedt, 1950). According to this theory, this efference copy signal, originating in oculomotor areas of the brain, is sent to visual areas, thereby allowing for a remapping of the new visual input (Bays & Husain, 2007). Such an eye position signal can potentially lead to an extra-retinal spatial representation (i.e., a representation that is not retinotopic, but rather in head or screen coordinate frames).
The search for such an extra-retinal representation has focused mainly on the dorsal stream. It has been known for some time that the eye's position in the orbit can modulate the activity of a sizeable fraction of neurons in the parietal cortex (Andersen, Bracewell, Barash, Gnadt, & Fogassi, 1990; Andersen, Essick, & Siegel, 1985; Andersen & Mountcastle, 1983; Andersen, Snyder, Bradley, & Xing, 1997) as well as in occipital areas such as V3A (Galletti & Battaglini, 1989; Nakamura & Colby, 2000). Furthermore, the receptive field of neurons in the lateral intraparietal area (LIP) often shifts, before an impending saccade, from the original retinal location to the future retinal location, thereby taking into account the upcoming change in eye position (Duhamel, Colby, & Goldberg, 1992). In fact, some populations of neurons in the ventral intraparietal area (VIP) represent spatial location explicitly in a head-based reference frame, independently of the direction in which the eyes are looking (Duhamel, Bremmer, BenHamed, & Graf, 1997). Similar evidence for the updating of receptive fields in the parietal cortex was also found in humans using fMRI (Medendorp, Goltz, Vilis, & Crawford, 2003; Merriam, Genovese, & Colby, 2003).
There is also some evidence that ventral regions in the monkey visual cortex are not strictly retinotopic. For example, eye position modulates the activity of retinotopic neurons in V4/V8 (Bremmer, 2000; Dobbins, Jeo, Fiser, & Allman, 1998) and in the inferotemporal cortex (IT; Nowicka & Ringo, 2000). Indeed, Bremmer (2000) found that almost half of the neurons in area V4 are modulated by eye position in the awake, behaving monkey. Similar evidence for an eye position signal or an extra-retinal representation was found in the human ventral areas, using fMRI (DeSouza, Dukelow, & Vilis, 2002; McKyton & Zohary, 2007; Merriam, Genovese, & Colby, 2007). We found that the fMRI signal in the lateral occipital complex (LOC) showed clear signs of adaptation when objects remained in the same screen position. It also displayed a recovery from adaptation when the screen position was changed even though the objects remained in the same retinotopic position (McKyton & Zohary, 2007). 
Surprisingly, and somewhat counter to our intuition that our world is represented in an allocentric reference frame, most behavioral tests until now have suggested that visual information is encoded in strictly retinotopic coordinates. Numerous perceptual learning studies have shown that performance gains are often location specific (Ahissar & Hochstein, 1997; Crist, Kapadia, Westheimer, & Gilbert, 1997; Fahle, Edelman, & Poggio, 1995; Karni & Sagi, 1991; Schoups, Vogels, & Orban, 1995; Shiu & Pashler, 1992). However, very few studies actually tested whether the improvement is specific to the location on the retina or to the location relative to the screen or the head. The few studies designed to tackle this issue usually indicated that simple shapes (like Gabor images) are represented retinotopically (Irwin, 1991; Jonides, Irwin, & Yantis, 1983; McKyton & Zohary, 2008). Psychophysical evidence for an extra-retinal representation has been found for motion perception (Melcher & Morrone, 2003), for face adaptation (but not for simple contrast adaptation; Melcher, 2005), and in cases where an extra-retinal representation was essential for completing the task (Golomb, Chun, & Mazer, 2008; Hayhoe, Lachter, & Feldman, 1991).
In this study, we try to infer what spatial representation is used by subjects while performing a matching task with natural scene images. By using relatively complex images, combined with a matching task that does not require mapping in any specific coordinate frame, one can test whether the retinotopic representation of information that characterized previous experiments was due to the choice of stimuli (e.g., matching the preference of early visual areas; see also Melcher, 2005). To that end, we manipulate eye and visual stimulus positions so that identical pictures can appear in the same location on the screen (but not on the retina), as opposed to conditions where they appear in the same retinal location (but in different screen positions). Another condition, in which both the retinal and screen positions were changed, served as a baseline. Our paradigm was therefore suited to detect even a small advantage conferred by presenting the target at the same screen position, and yet we found none. This suggests that image matching is determined in retinotopic coordinates, even when using complex natural images (which preferentially activate higher order visual areas).
Methods
Subjects
Twenty-eight subjects participated in Experiment 1 and an additional 20 subjects participated in Experiment 2. Subjects were 19–31 years old, naive to the purpose of the experiments, and had normal or corrected-to-normal eyesight. All experiments were undertaken with the understanding and written consent of each subject.
Stimuli and procedure
In all experiments, the stimuli were square pictures, 4.7 × 4.7 degrees in size, of animals (birds, mammals, fish) embedded in scenery and of non-animals (scenery, flowers, trees; Figure 1). All images had the same mean intensity. No feedback on performance was given to the subjects in any of the experiments.
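The paper does not describe how mean intensity was equated across images; the following is a minimal sketch of one common approach (shifting each grayscale image toward a shared target mean), where the 0–255 range, the target value, and the shift-and-clip scheme are all assumptions.

```python
# A minimal sketch of equating mean intensity across grayscale images.
# The 0-255 range and the shift-and-clip approach are assumptions; the
# paper does not specify its actual procedure.
import numpy as np

def equate_mean(images, target_mean=128.0):
    """Shift each image so that its mean intensity equals target_mean."""
    out = []
    for img in images:
        shifted = img.astype(np.float64) + (target_mean - img.mean())
        out.append(np.clip(shifted, 0, 255).astype(np.uint8))
    return out
```

Note that clipping can nudge the resulting mean slightly off target, so in practice one might iterate the shift or verify the means afterward.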
Figure 1
 
Experiment 1: design and results. (a) Four examples of base and test images used in the experiment. (b) Procedure: at the start of every trial, subjects fixated on a fixation point in the middle of the screen. After pressing a start button, a base picture appeared 2.8 degrees to the left or to the right of the fixation point for a duration of 25 ms. After another 500 ms, a test picture appeared, also for a duration of 25 ms. The images could appear either in the same position (“same position”) or in opposite positions (“different positions”). Subjects had to report whether the two images were the same (“match”) or different (“non-match”). (c) Results from the original experiment and from the experiment with the inverted images. Asterisks denote significant differences between “same position” and “different position” performance levels (p < 0.001).
Experiment 1: Each trial started with a fixation point in the middle of the screen. Subjects pressed the ready key, leading to the appearance of the base stimulus (Figure 1b) after 12.5 to 25 ms. The base stimulus appeared 2.8 degrees to the left or to the right of fixation and was shown for 25 ms. After an additional 500 ms, a second picture (the “test stimulus”) appeared for 25 ms, either in the same position (“same position”) or in the opposite position (“different position”) relative to the fixation point. Subjects were instructed to report whether the second picture was identical to the first (“match”) or a different picture (“non-match”).
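To make the timing concrete, here is a minimal sketch of the Experiment 1 trial sequence as a list of timed events; the durations and the 2.8-degree offset come from the text above, while the event-list representation and the function name are our own.

```python
# A minimal sketch of the Experiment 1 trial timeline as (onset_ms,
# duration_ms, event) tuples, measured from the ready key press.
import random

def make_exp1_trial(same_position):
    pre = random.choice([12.5, 25.0])      # pre-stimulus delay (12.5-25 ms)
    base_x = random.choice([-2.8, 2.8])    # base: left or right of fixation
    test_x = base_x if same_position else -base_x
    return [
        (pre, 25.0, f"base picture at {base_x:+.1f} deg"),
        (pre + 25.0 + 500.0, 25.0, f"test picture at {test_x:+.1f} deg"),
        # the subject then reports "match" or "non-match"
    ]

for onset, dur, event in make_exp1_trial(same_position=False):
    print(f"t = {onset:6.1f} ms ({dur:.0f} ms): {event}")
```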
Per subject, each image appeared in two trials, once in the “same position” condition and once in the “different position” condition, so that differences in recognition performance between conditions could not simply be due to variance in image salience. For a similar reason, the specific images in the “match” and “non-match” conditions were counterbalanced across subjects. There were 300 trials, half of which were “match” trials (constructed from one image) and the other half “non-match” trials (constructed from two different images). Altogether, there were 450 images.
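One simple way to implement this kind of counterbalancing is to rotate the image-to-condition assignment with the subject index; the paper states only that assignments were counterbalanced, so the particular scheme below is an assumption.

```python
# A minimal sketch of counterbalancing image-to-condition assignment
# across subjects by rotation; the exact scheme used in the paper is
# not specified.
def assign_conditions(image_ids, conditions, subject_index):
    """Rotate the mapping so that, across subjects, each image serves
    equally often in each condition."""
    k = subject_index % len(conditions)
    rotated = conditions[k:] + conditions[:k]
    return {img: rotated[i % len(rotated)] for i, img in enumerate(image_ids)}

# An image assigned to "match" for subject 0 becomes "non-match" for subject 1:
print(assign_conditions(["img_001", "img_002"], ["match", "non-match"], 0))
print(assign_conditions(["img_001", "img_002"], ["match", "non-match"], 1))
```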
Categorization experiment: 24 subjects from Experiment 1 participated in a categorization experiment. This experiment was identical to Experiment 1 in all temporal aspects, except that subjects were asked to report (by a key press) whether they saw an animal in the pictures (“animal”) or scenery without an animal (“non-animal”). The aim of this experiment was to check whether subjects were able to recognize the gist of the scene under such brief viewing conditions. For 12 of the subjects, the two images shown in a trial were identical, while for the other 12 the two images were different but from the same category.
Inversion experiment: 16 subjects from Experiment 1 participated in the inversion experiment. The experiment was identical to Experiment 1, apart from the fact that the pictures were inverted (upside down). 
Experiment 2: Each trial started with a fixation point on the right side of the screen (Figure 2). Subjects pressed the ready key, leading to the appearance of the base stimulus after 300 to 312.5 ms. The base stimulus consisted of two simultaneously presented images, 2.8 and 8.4 degrees to the left of the fixation point, and was shown for 150 ms. After another 100 ms, the fixation point moved to the center of the screen, 11.2 degrees away from its original position, signaling the subject to make a saccade to the new fixation location. After another 612.5 ms, the test stimulus appeared for 150 ms in one of four positions: 2.8 or 8.4 degrees to the left or right of the central fixation point. Subjects were instructed to report whether the third picture was identical to one of the two previous pictures (“match”) or a new picture (“non-match”). After they answered (using the mouse), the fixation point returned to the right side of the screen, and subjects initiated the next trial at their own pace by pressing the ready key. To avoid any motor bias, half of the subjects pressed the right mouse key to indicate “match” and the left mouse key for “non-match,” while the other half had the opposite key-press assignment. The experiment was divided into 4 blocks: two as described above and two consisting of mirror-image trials, starting with the fixation point on the left.
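Here, likewise, is a minimal sketch of the Experiment 2 timeline in screen coordinates (0 degrees at the screen center, positive to the right, so the initial fixation sits at +11.2 degrees); the offsets and durations come from the text, while the representation is ours.

```python
# A minimal sketch of the Experiment 2 trial timeline as (onset_ms,
# duration_ms, event) tuples, measured from the ready key press.
import random

INITIAL_FIX_X = 11.2   # initial fixation, 11.2 deg right of screen center

def make_exp2_trial(test_x):
    pre = random.uniform(300.0, 312.5)   # pre-stimulus delay
    t_base = pre
    t_cue = t_base + 150.0 + 100.0       # fixation jump, 100 ms after base offset
    t_test = t_cue + 612.5
    return [
        (t_base, 150.0, f"base pictures at {INITIAL_FIX_X - 2.8:+.1f} "
                        f"and {INITIAL_FIX_X - 8.4:+.1f} deg"),
        (t_cue, 0.0, "fixation point jumps to 0.0 deg (saccade cue)"),
        (t_test, 150.0, f"test picture at {test_x:+.1f} deg"),
        # the subject then reports "match" or "non-match" by mouse press
    ]

for onset, dur, event in make_exp2_trial(test_x=-2.8):
    print(f"t = {onset:6.1f} ms ({dur:.0f} ms): {event}")
```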
Figure 2
 
Procedure of Experiment 2: at the start of every trial, subjects fixated on a fixation point on the right side of the screen (depicted here as a black dot, inside a rectangle illustrating the screen). After pressing a start button, a base stimulus appeared, constructed from two pictures 2.8 and 8.4 degrees to the left of the fixation point, for a duration of 150 ms. One hundred milliseconds after the base stimulus disappeared, the fixation point was relocated to the middle of the screen, instructing the subjects to make a saccade to its new location; 612.5 ms after that, the test stimulus (one picture 2.8 or 8.4 degrees to the left or right of the fixation point) appeared for a duration of 150 ms. Subjects had to report whether the test image was identical to one of the base images (“match”) or a different third image (“non-match”).
In this experiment, there were 640 trials: 448 “match” trials (constructed from 2 images per trial) and 192 “non-match” trials (3 images per trial). For every subject, each image from the “match” trials was repeated 4 times during the experiment, such that it appeared once in each of the 4 “match” conditions (“same retina,” “same hemifield,” “opposite hemifield,” and “same screen”). Each image from the “non-match” trials was repeated twice, appearing once in each of the 2 non-match conditions (“same hemifield,” “opposite hemifield”). The assignment of a specific image to a specific condition was counterbalanced among the different subjects. Thus, there were 512 images: 288 appeared in the “non-match” trials and 224 in the “match” trials. The specific images to be shown as “match” or “non-match” were randomized among subjects.
The results of all experiments were analyzed using t-tests, corrected for multiple comparisons with the Bonferroni correction.
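For concreteness, a minimal sketch of this analysis (a paired t-test with a Bonferroni-adjusted p-value), where the per-subject fraction-correct arrays and the function name are hypothetical:

```python
# A minimal sketch of a paired t-test with Bonferroni correction, as
# described above; inputs are per-subject fraction-correct arrays.
from scipy import stats

def paired_test_bonferroni(scores_a, scores_b, n_comparisons, alpha=0.05):
    """Compare two conditions within subjects, multiplying the p-value by
    the number of comparisons performed (capped at 1)."""
    t, p = stats.ttest_rel(scores_a, scores_b)
    p_corrected = min(p * n_comparisons, 1.0)   # Bonferroni adjustment
    return t, p_corrected, p_corrected < alpha
```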
Eye tracking
Experiment 2 was conducted in a dimly lit room. The subjects sat in a chair with back support, used a chin rest, and faced a monitor. Throughout the experiment, the observer's right eye position was recorded at 1000 Hz using an Eyelink-1000 (SR Research, Osgoode, ON, Canada) non-invasive infrared eye-tracking system. We used the manufacturer's software for calibration, validation, drift correction, and determining periods of fixation. Only trials in which subjects maintained correct fixation during the appearance of the base and test stimuli and made the proper saccade to the new fixation point location between their appearances were considered (63%, STD = 15%). Note that a significant proportion of the trials was disqualified offline as a result of ineligible eye movements, probably because no online feedback was given. Still, the results rely on about 400 trials per subject.
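A minimal sketch of this offline screening logic is given below; the gaze-sample format and the 1.5-degree fixation tolerance are assumptions, since the paper relied on the manufacturer's fixation detection rather than a hand-rolled criterion.

```python
# A minimal sketch of offline trial screening: keep a trial only if gaze
# stayed on the first fixation point during the base stimulus and on the
# second during the test stimulus. Gaze samples are assumed to be
# (time_ms, x_deg) tuples; the tolerance is illustrative.
def trial_is_valid(gaze, base_window, test_window, fix1_x, fix2_x,
                   tol_deg=1.5):
    def fixated(window, fx):
        t0, t1 = window
        xs = [x for t, x in gaze if t0 <= t <= t1]
        return bool(xs) and all(abs(x - fx) <= tol_deg for x in xs)
    # a fuller check would also verify that the saccade to fix2_x landed
    # between the two stimulus windows
    return fixated(base_window, fix1_x) and fixated(test_window, fix2_x)
```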
Results
Experiment 1: Testing for position specificity
The first experiment was designed to reveal whether a task requiring matching of complex images shows position specificity. For this purpose, we used grayscale pictures of animals and scenery with an equal mean intensity (Figure 1a). At the beginning of every trial, subjects fixated on a fixation point and pressed a start button to begin. After a variable period, the first picture (base) briefly appeared either to the left or right of the fixation point (Figure 1b). Half a second later, the second picture (test) appeared for the same duration either in the same location (“same position”) or in the opposite position (at an equal eccentricity; “different position”). Subjects were asked to report whether the two images were identical (“match”) or different from one another (“non-match”). The images were presented for such a brief duration to ensure that any saccade to the peripheral stimulus would occur long after the stimulus was extinguished. Since there was no mask, stimuli were fairly visible (see the recognition task below for details).
When the two images were identical, performance was clearly better if they appeared in the same position than if they were shown in different positions (Figure 1c, “original match” condition, p < 0.001 corrected). One concern is that subjects may simply report “match” for repetition of any image at the same location and “non-match” for image presentation at different locations. This should lead to a tendency to falsely report “match” (on the basis of location) also in the “non-match”/“same position” condition. Importantly, no significant difference was observed between the two “non-match” conditions (Figure 1c, “original non-match” condition, p = 0.66). This indicates that matching was based on the identity of the image and not on its location. Furthermore, the severe reduction in matching performance when the matching images were presented in opposite hemifields (at less than 3 degrees eccentricity) suggests that performance in this task is largely dependent on an image representation in early visual areas whose receptive fields map the contralateral visual field and do not invade deep into the ipsilateral side.
Image matching in our test could be done through explicit image recognition (even at the semantic level) or, in its absence, by comparison of lower level features (e.g., the presence of a certain “blob” in a specific position). To check whether subjects recognized the images, 24 subjects out of the 28 from the original experiment observed two images (from the same category, either “animal” or “scene”) under the same spatiotemporal presentation conditions as in Experiment 1 (25 ms, 2.8 degrees eccentricity) and were asked to report whether they saw an animal or scenery without an animal. The images could appear in the same or in opposite positions. Performance was significantly above chance but far from perfect (74% correct, p < 0.001). This suggests that explicit knowledge about the image identity could be extracted in some cases, despite the brief stimulus presentation. Presumably, cortical areas involved in extracting the gist of the scene were still activated.
We reasoned that if image recognition is useful for matching in our brief presentation conditions, task performance for inverted images is expected to be much lower, as it is generally much harder to recognize inverted images than upright ones (i.e., the inversion effect; see Epstein, Higgins, Parker, Aguirre, & Cooperman, 2006). We therefore repeated Experiment 1 (with 16 subjects out of the original 28) using inverted images. The results for the inverted images (shown in Figure 1c, inverted conditions) were very similar to the original upright experiment; no significant differences were observed when comparing the same conditions in the two experiments (p > 0.15 in all cases). 
Furthermore, there was no correlation between an individual's performance in image category identification (“animal” or “non-animal”) and performance in the matching task (“match” or “non-match”) across subjects who performed both tasks. Together, these results suggest that in our experimental conditions, matching did not rely on image identification. Therefore, the results are likely to generalize to any pattern matching task and do not involve sophisticated object extraction or detailed scene analysis.
Experiment 2: The coordinate frame for matching
Having found clear evidence for position specificity in image matching, we set out to study the coordinate frame in which matching is performed. Two principal representations were tested: a retinotopic one and a spatiotopic one (head or screen based). To achieve this, we created conditions in which the image position changed in one coordinate frame while remaining the same in the other. In passive viewing conditions (as in Experiment 1), translation of the image results in a change in both retinal and spatiotopic coordinates. To distinguish between the two, one must introduce a change in gaze position (by performing a saccade to a new fixation position) between the presentations of the base and test images. Such a design can expose whether performance is sensitive to the position of the image on the retina or to its position on the screen.
Therefore, in this experiment, after the presentation of the base stimulus, the fixation point jumped to the middle of the screen, instructing the subjects to fixate on the new position (Figure 2); 612.5 ms later (well after fixation was established at the new position), the third picture (test) appeared (at 2.8 or 8.4 degrees to the left or right of the new fixation point) and subjects were asked to report whether it was identical to one of the base pictures (“match”) or not (“non-match”). Finally, the fixation point was repositioned back to the side of the screen, signaling that the next trial could be started (by mouse press). Note that in this experiment the base and test stimuli were presented for 150 ms (rather than the 25 ms of Experiment 1). This was done to ensure that performance would remain above chance level, since the intervening guided saccade and the multiple pictures in the base stimulus of Experiment 2 substantially increased the difficulty of the task. Trials in which subjects broke fixation during the presentation of the base or test images or failed to execute a proper saccade at the right time (see Methods section) were excluded (offline) from further analysis.
We split the experimental conditions into six categories with an equal number of trials, according to the position and identity of the test image. These are numbered below (1–6, Figure 3): When the test image was identical to one of the base images (i.e., “match”), it could appear in 4 positions relative to the location of its matched image in the base: (1) “same retina”—same retinotopic position but different screen position; (2) “same hemifield”—5.6 degrees away from its retinotopic position (but in the same hemifield) and in a different screen position; (3) “opposite hemifield”—5.6 or 16.8 degrees away from its retinotopic position (in the opposite hemifield) and in a different screen position; and (4) “same screen”—11.2 degrees away from its retinotopic position (in the opposite hemifield) but in the same screen position. If the test image was a different third image (“non-match”), it could appear in 2 positions relative to the location of the base images: (5) “same hemifield”—in the same hemifield as the two base images; and (6) “opposite hemifield”—in the opposite hemifield from the two base images. Notice that each of the categories (1–6) above contains data from two types of trials: one in which the test was presented at the close eccentricity (“option 1,” 2.8 degrees) and one in which it was shown farther from the new fixation point (“option 2,” 8.4 degrees). The results of these two types of trials were pooled to generate the average performance per category. This was done in order to factor out the effects of eccentricity and lateral masking, which clearly affect behavioral performance (see the following section).
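Because the fixation shift was exactly 11.2 degrees, each “match” category follows directly from the base and test screen positions; below is a minimal sketch of that bookkeeping (the coordinate convention, 0 degrees at the post-saccade fixation with positive to the right, is ours).

```python
# A minimal sketch of assigning "match" trials to the four categories above.
# Screen coordinates: 0 deg = post-saccade fixation at screen center,
# positive = right; the initial fixation sits at +11.2 deg.
SACCADE_DEG = 11.2

def match_category(base_screen_x, test_screen_x):
    base_retinal = base_screen_x - SACCADE_DEG  # relative to initial fixation
    test_retinal = test_screen_x                # relative to new fixation
    if test_retinal == base_retinal:
        return "same retina"        # same retinal spot, new screen spot
    if (test_retinal < 0) == (base_retinal < 0):
        return "same hemifield"     # 5.6 deg retinal shift
    if test_screen_x == base_screen_x:
        return "same screen"        # 11.2 deg retinal shift, same screen spot
    return "opposite hemifield"     # 5.6 or 16.8 deg retinal shift

# A base picture at +2.8 deg on screen sat 8.4 deg left of the old fixation:
assert match_category(2.8, -8.4) == "same retina"
assert match_category(2.8, 2.8) == "same screen"
assert match_category(8.4, 2.8) == "opposite hemifield"
```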
Figure 3
 
Testing the coordinate frame for scene matching. Experimental conditions and results: (a) For any given base stimulus, there could be 6 different conditions, 4 match and 2 non-match, according to the test stimulus identity and position. The screen is depicted by the black elongated rectangular frame, and the locations of the fixation point and the images on the screen are illustrated by their relative positions inside the rectangle. In all conditions, the test stimulus could appear at one of two eccentricities (near: “option 1”, or far: “option 2”). The conditions were: (1) “same retina”—the test (match) picture appeared in the same retinotopic position but in a different screen position; (2) “same hemifield”—the test (match) picture appeared in the same hemifield (5.6 degrees from its original retinotopic position) but in a different screen position; (3) “opposite hemifield”—the test (match) picture appeared in the opposite hemifield and in a different screen position; (4) “same screen”—the test (match) picture appeared in the opposite hemifield but in the same screen position. Two non-match conditions: (5) “same hemifield”—the (non-match) test picture appeared in the same hemifield as the base pictures but in a different screen position; (6) “opposite hemifield”—the (non-match) test picture appeared in the opposite hemifield but in the same screen position as the base pictures. (b) Performance levels for the various “match” and “non-match” conditions. Asterisks denote significant differences between marked “match” performance levels (* p < 0.05; ** p < 0.005).
Results are shown in Figure 3b. As in Experiment 1, performance was high in the “non-match” conditions (i.e., subjects tended to report “non-match”), and there was no difference in performance when a novel (non-matching) third image appeared in the opposite hemifield or in the same hemifield (p = 0.55). In the “match” conditions, performance when the third picture appeared in the same retinotopic position (“same retina” condition) was significantly better than when the picture appeared in the other retinal hemifield (“opposite hemifield” condition, p < 0.005 corrected), even when the screen position was fixed (“same screen” condition, p < 0.05 corrected). Note that the “same screen” condition and the “opposite hemifield” condition shared the same retinotopic displacement of the image on average (11.2 degrees), but the test target in the “same screen” condition maintained its original screen position. Therefore, an advantage conferred by presenting the test image in the same screen position (as its matched base image) should be seen in the difference between these two conditions. Our results, however, show no such difference (p = 0.84). In fact, there was a small advantage for the “opposite hemifield” condition (0.546 fraction correct vs. 0.542 fraction correct in the “same screen” condition). While we acknowledge that a null result can never be conclusive proof of the lack of an effect, the two statistical tests used to detect the effects of retinotopic and spatiotopic mapping had equal power. We conclude that image matching under brief presentation conditions relies primarily on a retinotopic reference frame (rather than on a spatiotopic one).
To confirm that these results do not simply arise from a speed/accuracy tradeoff, we computed the average reaction time for each condition. The results show no such tradeoff (see Supplementary Figure 1). In fact, responses in the “same retina” condition, in which performance was best, were also the fastest.
Stimulus eccentricity and lateral masking effects
Stimulus eccentricity and lateral masking are both known to affect performance. Here we find evidence for strong interactions between these two factors. Figure 4a shows the eight possible “match” conditions, sorted into two categories according to the test stimulus eccentricity. Note also that the base picture to be repeated could be either close to or far from fixation. The effects of both factors (the eccentricity of the base image to be repeated and the eccentricity of the test stimulus) were assessed. Somewhat counterintuitively, performance levels were the same for the two test stimulus eccentricities (Figure 4b, left column; p = 0.23). Thus, under our experimental conditions, image matching can be done quite efficiently for a lone target, independent of its retinal eccentricity. However, subjects could judge the familiarity of the test image much better if it had previously appeared as the more central of the two pictures shown in the base. This can be explained by a lateral masking effect, a phenomenon in which perception of a peripheral stimulus is impaired when other stimuli are presented in its adjacent surroundings. Importantly, this lateral masking is asymmetric: the central image masks the peripheral one to a much greater extent than vice versa.
Figure 4
 
Eccentricity and lateral masking effects in the “match” conditions. (a) The base stimulus contained two images at two eccentricities (“far” and “close”) on the same side of the fixation point. The test stimulus could appear at one of two eccentricities (“close” and “far”) to the left or to the right of the fixation point. (b) Results for the “match” conditions divided by test and base eccentricities.
Lateral masking could also explain the large difference in performance levels between the “match” and “non-match” conditions in Experiment 2. Lateral masking may degrade the stored image used for later comparison, such that subjects tend to report that the base and test images are different even when one is repeated (resulting in overall low performance levels in the “match” conditions). This would also lead to high “correct rejection” rates when the images are indeed different (i.e., in the “non-match” conditions).
To summarize, we find (in Experiment 1) that image matching, which does not require explicit object recognition, is position dependent. Importantly, the image's retinal location, not its position on the screen, is the factor affecting performance level (see Experiment 2). In addition, matching capabilities are degraded by asymmetric lateral masking when two pictures are simultaneously presented but are largely insensitive to eccentricity in the case of a single picture. 
Discussion
Our experiment was specifically designed to reveal any advantage for image comparison conferred by presenting an image in the same spatiotopic coordinates. Somewhat counter to common intuition, our results show that performance on this task depends only on the retinotopic position of the image. This may imply that the matching task is based on the activity of neuronal populations in various areas, which represent the visual world in strictly retinotopic coordinates. There could be, however, other reasons for this finding. 
(1) The advantage of repetition in retinal coordinates is merely due to a retinotopic after-image. 
One disturbing possibility is that the superior performance when the target image and base image share the same retinotopic position may merely be due to the presence of a powerful positive retinal after-image, which is maintained long after the base stimulus is no longer present. To test this, we conducted a supporting experiment (see Supplementary data) in which the time between a high-contrast stimulus (serving as an after-image inducer) and a test stimulus was varied. We chose the inducing stimulus contrast to be the highest found among our natural scenery images and used a square-wave stimulus, which is a powerful inducer due to its clear borderlines. Nevertheless, we found a very small effect, tenfold weaker than the initial after-image strength. Furthermore, the after-image we found was negative, not positive. If anything, a negative after-image is expected to cancel the effect of a test stimulus, since it has the opposite contrast polarity. We conclude that the superior matching performance obtained when presenting the base and test stimuli in the same retinal coordinates does not stem from a retinotopic after-image.
(2) A change to spatiotopic coordinates may require time. 
Recent findings suggest that the frame of representation of visual information may change from a retinotopic to a spatiotopic one if enough time has elapsed since the intervening saccade. Thus, even when the task explicitly requires memory of the target position in space (spatial memory), a spatial cue initially improves performance if it is presented in the same retinal coordinates as the target (Golomb et al., 2008). Only from about 200 ms after the intervening saccade is performance better if the target is in the same spatiotopic coordinates as the cue (rather than the same retinotopic coordinates), suggesting a shift to a spatiotopic representation. In our case, the cue to make a saccade appeared about 600 ms before the test stimulus presentation, and the average saccadic reaction time was 118 ± 40 ms (there were many express saccades, since the base stimulus offset was a reliable predictor of the fixation point change that occurred 100 ms later). This leaves plenty of time for a putative coordinate change. However, to test this more carefully, we split the data into two sets: trials in which the saccade occurred early (saccadic reaction time <150 ms) and trials with late saccades (saccades occurring later than 150 ms after the cue but before the test stimulus presentation). We reasoned that if the coordinate frame change (from a retinal representation to a screen-based one) is slow, one might expect better performance in the “same retina” condition when the saccade occurred late, since this allows little time for the change in coordinate frame before the test stimulus appeared. Conversely, the opposite should hold when the base and test stimuli appeared in the “same screen” condition.
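A minimal sketch of this split; the trial-record field names are hypothetical.

```python
# A minimal sketch of splitting trials by saccadic reaction time and
# computing the fraction correct in each subset; 'srt_ms' and 'correct'
# are hypothetical field names.
def split_by_saccade_latency(trials, cutoff_ms=150.0):
    early = [t for t in trials if t["srt_ms"] < cutoff_ms]
    late = [t for t in trials if t["srt_ms"] >= cutoff_ms]
    frac = lambda ts: sum(t["correct"] for t in ts) / len(ts) if ts else None
    return frac(early), frac(late)
```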
Performance in the early saccade trials was better than in the trials in which the saccades occurred later, in all conditions (possibly because early saccades are indicative of higher vigilance; see Supplementary Figure 2). However, there was no greater gain from making early saccades in one condition compared to another. This result diminishes the likelihood that we failed to see evidence for a spatiotopic representation simply because of our choice of temporal parameters.
(3) A change to spatiotopic coordinates may require explicit attention to spatial position. 
One way to explain the difference between our results and previous behavioral studies providing evidence for a spatiotopic representation is that the default visual representation is the retinotopic one, unless one is explicitly required to attend to the position of the object in world coordinates. This was the conclusion from a series of experiments done by Golomb et al. (2008). Similarly, object-based representation was found for the adaptation after-effect only when subjects attended to the object's location and not to a distractor (Melcher, 2008). Thus, if one fixates while an adaptor, placed inside a disk, moves on the screen (and on the retina), powerful adaptation effects will still be seen, as long as the test stimulus is presented on the same disk. 
In our case, subjects had to decide whether the two stimuli matched on the basis of their visual features (rather than whether they appeared in the same position). Thus, whether the images were presented in the same screen position or not was entirely irrelevant to our visual matching task. This may account for our purely retinotopic effect. Furthermore, our task required simple image matching, which could be completed in the absence of high-level recognition. This task may therefore give greater weight to the activity in low-level visual areas, which are known to have a retinal representation of the image at the finest resolution. Neuronal receptive fields in these areas are strictly restricted to one hemifield. This may explain why there is a significant decrease in performance in the “opposite hemifield” condition compared to the “same retina” condition. There is also a drop in performance when the matching image is presented in the same hemifield but at a different retinal position (though this trend does not reach significance). This may be a clue to the typical receptive field size in the areas most relevant for this “retain and compare” task. A reasonable candidate area is V4 (though this is admittedly speculative): its neurons' receptive fields are restricted to the contralateral visual field and span about 5 degrees in diameter at the target eccentricities used in this experiment (Gattass, Sousa, & Gross, 1988; Smith, Singh, Williams, & Greenlee, 2001). It is possible that tasks requiring further image analysis, such as categorization, which requires explicit recognition, might well generalize across retinal positions. This would suggest that such tasks are based on object-related visual areas (in which receptive fields are much larger and include the ipsilateral side).
Visual remapping may be irrelevant for visual stability
Behavioral evidence for trans-saccadic integration and an extra-retinal representation in perceptual tasks is limited, at best. This has led researchers to posit that, contrary to our intuitive expectation, visual remapping does not take part in maintaining visual stability. Instead, spatial remapping may be important for action control, spatial memory, and sensorimotor adaptation (Bays & Husain, 2007). Indeed, spatial remapping of neuronal receptive fields is found mainly in the parietal cortex and rarely in the ventral stream (though see McKyton & Zohary, 2007). Furthermore, performance in spatial memory or action-oriented tasks is dramatically affected by lesions in the parietal cortex (e.g., optic ataxia). If visual remapping is indeed the mechanism allowing for the creation of a spatiotopic representation, and this process does not take place in strictly visual tasks, this may explain why performance in our task did not show any spatiotopic effects.
Conclusions
We found no evidence that image matching performance benefits from repeating the image in the same screen position when eye position was changed before presentation of the matching image. Performance was also found to be sensitive to image translation of a few degrees within the contralateral visual field. We note that image matching can often be performed quite effectively without overall recognition of the scene (as was often the case in our experiment). Our results suggest that performance in this task is mostly determined by neuronal activity in earlier brain areas containing a relatively accurate representation of the visual scene in strictly retinotopic coordinates. We note, however, that our experimental conditions were somewhat artificial in requiring subjects to make a saccade to a new fixation position and maintain fixation there while the relevant image for the task was presented elsewhere. One cannot rule out the possibility that mapping in spatiotopic coordinates may be relevant for perception in more natural circumstances.
Supplementary Materials
Supplementary Figure 1
Supplementary Figure 1. Reaction times for the various conditions in Experiment 2. Bars are as in Figure 3b. Note that responses in the “same retina” condition in which performance was best (in terms of percent correct) were also the fastest. Thus, there was no evidence for a speed/accuracy tradeoff that might explain the results. 
Supplementary Figure 2
Supplementary Figure 2. Matching performance for the four “match” conditions in Experiment 2, split according to the time of refixation into early and late saccades. Light bars represent the fraction correct when subjects performed the saccade 150 ms or earlier after the fixation point moved to the new location. Dark bars represent the fraction correct for saccades made after 150 ms but before the test stimulus appearance. Note that if subjects fixated the new location earlier, their overall performance was better. However, this effect was not specific to any particular condition (an ANOVA shows no interactions, p = 0.99).
Supplementary Figure 3
Supplementary Figure 3. After-image experimental design and results. (a) Design: after subjects pressed a ready key, a base stimulus appeared with a light stripe in the middle. After a short (50 ms) or long (712.5 ms) interval, the test stimulus appeared with either a bright or a dark stripe in the middle, at various contrast levels. Subjects had to report whether the test stimulus had a light or a dark stripe in its middle. (b) Results for 6 subjects. The x-axis represents the contrast level of the test stimulus: negative contrast values indicate a dark stripe in the middle of the test stimulus, positive values a bright stripe. The y-axis represents the fraction of trials in which subjects reported the stripe in the middle of the test stimulus to be brighter than its neighboring stripes. Gray squares are for the 50-ms interval and black diamond markers for the 712.5-ms interval. Lines represent the best-fit logistic regression curves.
Acknowledgments
We wish to thank Dr. Galia Avidan for her kind generosity, allowing us to use her eye tracking system in Ben Gurion University for Experiment 2. This work was supported by the National Institute for Psychobiology in Israel (NIPI) and the Binational USA–Israel Science Foundation (BSF #39/09) to Ehud Zohary. 
Commercial relationships: none. 
Corresponding author: Ayelet McKyton. 
Email: ayelet.mckyton@mail.huji.ac.il. 
Address: Silverman Building, Neurobiology Department, Hebrew University Givat Ram, Jerusalem, Israel, 96551. 
References
Ahissar, M., & Hochstein, S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387, 401–406.
Andersen, R. A., Bracewell, R. M., Barash, S., Gnadt, J. W., & Fogassi, L. (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10, 1176–1196.
Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456–458.
Andersen, R. A., & Mountcastle, V. B. (1983). The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. Journal of Neuroscience, 3, 532–548.
Andersen, R. A., Snyder, L. H., Bradley, D. C., & Xing, J. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience, 20, 303–330.
Bays, P. M., & Husain, M. (2007). Spatial remapping of the visual world across saccades. Neuroreport, 18, 1207–1213.
Bremmer, F. (2000). Eye position effects in macaque area V4. Neuroreport, 11, 1277–1283.
Crist, R. E., Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (1997). Perceptual learning of spatial localization: Specificity for orientation, position, and context. Journal of Neurophysiology, 78, 2889–2894.
DeSouza, J. F., Dukelow, S. P., & Vilis, T. (2002). Eye position signals modulate early dorsal and ventral visual areas. Cerebral Cortex, 12, 991–997.
Dobbins, A. C., Jeo, R. M., Fiser, J., & Allman, J. M. (1998). Distance modulation of neural activity in the visual cortex. Science, 281, 552–555.
Duhamel, J. R., Bremmer, F., BenHamed, S., & Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389, 845–848.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Epstein, R. A., Higgins, J. S., Parker, W., Aguirre, G. K., & Cooperman, S. (2006). Cortical correlates of face and scene inversion: A comparison. Neuropsychologia, 44, 1145–1158.
Fahle, M., Edelman, S., & Poggio, T. (1995). Fast perceptual learning in hyperacuity. Vision Research, 35, 3003–3013.
Galletti, C., & Battaglini, P. P. (1989). Gaze-dependent visual neurons in area V3A of monkey prestriate cortex. Journal of Neuroscience, 9, 1112–1125.
Gattass, R., Sousa, A. P., & Gross, C. G. (1988). Visuotopic organization and extent of V3 and V4 of the macaque. Journal of Neuroscience, 8, 1831–1845.
Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience, 28, 10654–10662.
Hayhoe, M., Lachter, J., & Feldman, J. (1991). Integration of form across saccadic eye movements. Perception, 20, 393–402.
Hayhoe, M. M., Shrivastava, A., Mruczek, R., & Pelz, J. B. (2003). Visual memory and motor planning in a natural task. Journal of Vision, 3(1):6, 49–63, http://journalofvision.org/3/1/6/, doi:10.1167/3.1.6.
Irwin, D. E. (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23, 420–456.
Jonides, J., Irwin, D. E., & Yantis, S. (1983). Failure to integrate information from successive fixations. Science, 222, 188.
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences of the United States of America, 88, 4966–4970.
McKyton, A., & Zohary, E. (2007). Beyond retinotopic mapping: The spatial representation of objects in the human lateral occipital complex. Cerebral Cortex, 17, 1164–1172.
McKyton, A., & Zohary, E. (2008). The coordinate frame of pop-out learning. Vision Research, 48, 1014–1017.
Medendorp, W. P., Goltz, H. C., Vilis, T., & Crawford, J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23, 6209–6214.
Melcher, D. (2005). Spatiotopic transfer of visual-form adaptation across saccadic eye movements. Current Biology, 15, 1745–1748.
Melcher, D. (2008). Dynamic, object-based remapping of visual features in trans-saccadic perception. Journal of Vision, 8(14):2, 1–17, http://journalofvision.org/8/14/2/, doi:10.1167/8.14.2.
Melcher, D., & Morrone, M. C. (2003). Spatiotopic temporal integration of visual motion across saccadic eye movements. Nature Neuroscience, 6, 877–881.
Merriam, E. P., Genovese, C. R., & Colby, C. L. (2003). Spatial updating in human parietal cortex. Neuron, 39, 361–373.
Merriam, E. P., Genovese, C. R., & Colby, C. L. (2007). Remapping in human visual cortex. Journal of Neurophysiology, 97, 1738–1755.
Nakamura, K., & Colby, C. L. (2000). Visual, saccade-related, and cognitive activation of single neurons in monkey extrastriate area V3A. Journal of Neurophysiology, 84, 677–692.
Nowicka, A., & Ringo, J. L. (2000). Eye position-sensitive units in hippocampal formation and in inferotemporal cortex of the macaque monkey. European Journal of Neuroscience, 12, 751–759.
Schoups, A. A., Vogels, R., & Orban, G. A. (1995). Human perceptual learning in identifying the oblique orientation: Retinotopy, orientation specificity and monocularity. The Journal of Physiology, 483, 797–810.
Shiu, L. P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception & Psychophysics, 52, 582–588.
Smith, A. T., Singh, K. D., Williams, A. L., & Greenlee, M. W. (2001). Estimating receptive field size from fMRI data in human striate and extrastriate visual cortex. Cerebral Cortex, 11, 1182–1190.
Sommer, M. A., & Wurtz, R. H. (2002). A pathway in primate brain for internal monitoring of movements. Science, 296, 1480–1482.
Van Eccelpoel, C., Germeys, F., De Graef, P., & Verfaillie, K. (2008). Coding of identity-diagnostic information in transsaccadic object perception. Journal of Vision, 8(14):29, 1–16, http://journalofvision.org/8/14/29/, doi:10.1167/8.14.29.
Von Holst, E., & Mittelstaedt, H. (1950). Das Reafferenzprincip. Naturwissenschaften, 37, 464–476.