Research Article  |   June 2009
Behavioral effects of visual field location on processing motion- and luminance-defined form
Journal of Vision June 2009, Vol.9, 24. doi:10.1167/9.6.24
Patricia A. McMullen, Lesley E. MacSween, Charles A. Collin; Behavioral effects of visual field location on processing motion- and luminance-defined form. Journal of Vision 2009;9(6):24. doi:10.1167/9.6.24.

Abstract

Traditional theories posit a ventral cortical visual pathway subserving object recognition regardless of the information defining the contour. However, functional magnetic resonance imaging (fMRI) studies have shown dorsal cortical activity during visual processing of static luminance-defined (SL) and motion-defined form (MDF). It is unknown whether this activity is supported behaviorally, or whether it depends on central or peripheral vision. The present study compared behavioral performance with two types of MDF [one without translational motion (MDF) and another with it (TM)] and SL shapes in a shape-matching task where shape pairs appeared in the upper or lower visual fields or along the horizontal meridian of central or peripheral vision. MDF matching was superior to the other contour types regardless of location in central vision. Both MDF and TM matching were superior to SL matching for presentations in peripheral vision. Importantly, there was an advantage for MDF and TM matching in the lower peripheral visual field that was not present for SL forms. These results are consistent with previous behavioral findings that show no field advantage for static form processing and a lower field advantage for motion processing. They also suggest greater dorsal cortical involvement in the processing of shapes defined by motion than by luminance. 

Introduction
The capacity of the human visual system to identify shapes or forms of objects and the neural substrate underlying this function is of fundamental interest to visual neuroscientists (Grill-Spector & Malach, 2004; Logothetis & Sheinberg, 1996). These processes are core to our ability to perceive and recognize objects as food, friend, or foe. Traditionally, investigations into these processes have focused on the identification of shapes delineated from the background by static contours that emerge from differences in texture, color, depth, and/or luminance. 
However, processing of moving objects is at least as important because virtually all objects move visually relative to an observer. Object movement in space or a change in position relative to an observer is referred to as translational motion. Here, surfaces of the object remain coherent and fixed relative to each other while the locations of these surfaces change with respect to the background. 
A second type of movement that is relevant to shape processing is motion-defined form (MDF). This term describes stimuli in which the segregation of object form from background is apparent only on the basis of motion. The form and background values for luminance, color, texture, and depth are equivalent and so provide no basis for form perception. MDF stimuli may be stationary with respect to their locations in space, in which case they are delineated by object surfaces that move in directions or at velocities different from those of surfaces perceived as background (Regan, 2000). MDF stimuli may also translate their locations in space, thereby defining their contours relative to the background (see Ferber, Humphrey, & Vilis, 2003), or they may have both moving object surfaces and translating locations in space. In all cases, object contours are perceived on the basis of differences in the velocity of the surfaces of objects relative to the velocity of the surfaces of the background. Non-translational or stationary MDF stimuli have been used to determine the psychophysics of MDF, such as detection thresholds, contrast sensitivity, orientation and aspect-ratio discrimination, and Vernier acuity (Regan, 2000). Although the percepts generated by the two types of MDF described above are often different, their processing may be similar; for instance, they yield the same Vernier acuity. Overall, it has been shown that our visual system is at least as sensitive to MDF as it is to static luminance-defined form (SL) (Regan, 2000, p. 314). 
Traditional object recognition theories, based on results from object processing experiments in which contours are static and defined by luminance differences (SL), posit a ventral cortical visual pathway that subserves object processing (Goodale & Milner, 1992; Mishkin, 1982). However, many neuroimaging studies of MDF processing in normal observers have demonstrated activation of both ventral and dorsal visual cortices (Dupont et al., 1997; Ferber et al., 2003; Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Gulyas, Heywood, Popplewell, Roland, & Cowey, 1994; Large, Aldcroft, & Vilis, 2005; Murray, Olshausen, & Woods, 2003; Van Oostende, Sunaert, Van Hecke, Marchal, & Orban, 1997; Wang et al., 1999). Chief among the areas of activity are the lateral occipital complex (LOC) of the ventral cortex, and hMT+/V5 and parts of posterior parietal cortex of the dorsal cortex. 
In general, the LOC is more active during the processing of intact objects than scrambled objects regardless of the means by which object contours are defined, including motion, and as such, in keeping with traditional theories, is considered a generalized module for object processing (Ferber et al., 2003; Grill-Spector, Kourtzi, & Kanwisher, 2001; Grill-Spector, Kushnir, Edelman, et al., 1998; Large et al., 2005). hMT+/V5 and parts of posterior parietal cortex are most active during basic motion processing, such as the movement of dots that do not comprise a form, and hence are considered part of a generalized motion-processing module (e.g., De Jong, Shipp, Skidmore, Frackowiak, & Zeki, 1994; Dupont et al., 1997; Tootell & Taylor, 1995). The brain areas active in these studies imply that both the basic-motion and the form aspects of MDF stimuli are processed. Using magnetoencephalography (MEG), Schoenfeld et al. (2003) have further demonstrated a timeline for processing these two components of MDF in which dorsal cortical processing precedes ventral cortical processing. 
Recently, fMRI has also shown that SL objects show object-specific adaptation effects in dorsal cortical structures including V3A, hMT+/V5, V7, IPS1, and IPS2 (Konen & Kastner, 2008; see also Grill-Spector, Kushnir, Edelman, et al., 1998; Kourtzi & Kanwisher, 2000). These effects were in addition to those found in ventral cortex (V4 and LOC). The authors suggested that ventral and dorsal streams are activated in parallel to allow for integration of object and location information. They concluded that these findings were contrary to traditional theories of cortical object processing that posit strictly ventral cortical processing during object recognition. Hence, even static shapes activate hMT+ to some extent (see Kourtzi, Bulthoff, Erb, & Grodd, 2002). 
Notably, studies indicating dorsal cortical involvement in object processing have focused on neuroimaging and have not tested for behavioral support for this activity. One goal of the present study is to find such behavioral support. A second goal is to determine if this dorsal cortical involvement during object processing depends on central or peripheral vision. The human cortical visual system is sensitive to central and peripheral retinotopy, especially during early cortical processing stages (Grill-Spector & Malach, 2004; Tyler et al., 2005; Wandell, Dumoulin, & Brewer, 2007). 
To do so, behavioral responses to MDF processing were compared with those to SL processing as a function of placement of the visual stimuli in different areas of the visual field. Presentation of stimuli in the lower visual field has been thought to engage dorsal visual cortex to a greater extent than does presentation in the upper visual field. 
Two theories are consistent with this idea. First, Goodale and Milner's (1992) duplex theory of visual processing postulates that the dorsal visual cortex processes objects for action, while the ventral visual cortex processes objects for recognition (see Culham & Valyear, 2006; Danckert & Goodale, 2001). On the basis of results from such studies, they argued that the lower visual field has an advantage in processing vision for action and, since vision for action is a dorsal cortical function, that the lower visual field is functionally associated with the dorsal visual cortex (for a review, see Danckert & Goodale, 2003). A similar distinction between dorsal and ventral cortical processing has been made by Previc (1990). 
These two cortical processing pathways may exist because, neuroanatomically, dorsal striate cortex responds optimally to lower visual field input and projects primarily to dorsal extrastriate cortex, while ventral striate cortex responds optimally to upper visual field input and projects primarily to ventral extrastriate cortex (Grill-Spector, Kushnir, Hendler, et al., 1998; Maunsell & Newsome, 1987; see also Wandell et al., 2007). In sum, upper field input tends to activate ventral extrastriate cortex and lower field input tends to activate dorsal extrastriate cortex. 
Based on these theories, presenting objects in the upper visual field should result in more efficient object recognition than lower field presentation. In general, human behavioral results investigating upper and lower field effects on pattern recognition have been mixed (Christman & Niebauer, 1997) and have often been confounded by the use of letters as stimuli which may have engaged mechanisms specific to reading that are unrelated to objects in general. 
Importantly, motion perception is more consistently superior in the lower visual field (Amenedo, Pazo-Alvarez, & Cadaveira, 2007; Christman & Niebauer, 1997), in line with the specialization of the dorsal cortical pathway for action (Goodale & Milner, 1992; Previc, 1990), presumably because motion is linked to the guidance of locomotion and to visuomotor coordination. Further support for this notion comes from neurophysiological studies indicating a strong bias toward the lower visual field in the receptive fields of MT cells (Maunsell & Van Essen, 1987). 
In sum, two theories suggest that lower visual field placement activates dorsal visual cortex more than ventral visual cortex. Consistent with these theories, advantages for perception for action and for motion perception have been shown for stimuli presented in the lower visual field. Inconsistent with these theories is a failure to find an advantage for object recognition for stimuli presented in the upper visual field, perhaps due to both ventral and dorsal cortical involvement (Kourtzi et al., 2002). 
In the current experiment, two issues were investigated by looking for lower visual field advantages: (1) behavioral support for dorsal visual cortical processing during object recognition and (2) whether this relationship (if found) depends on central or peripheral vision. Shapes were presented in the upper and lower visual fields, centrally and peripherally, with form defined on the basis of luminance or one of two types of motion. One type of MDF stimulus had static contours defined by textured surfaces that moved in a direction different from that of the background textured surfaces (MDF). A second type of MDF had an additional translation component, in which the textured surfaces of forms moved on a static background and translated their position in space (TM). 
In both of the motion conditions, the shapes and their backgrounds were filled with random grayscale pixels having the same mean luminance. The shapes were thus invisible except when moving. The translational stimuli involved a shape moving coherently across the visual field, as a real-world object generally does. Thus, this condition represented the most common and basic sort of motion cue that would be encountered in real-world situations. However, one difficulty with translational stimuli is that they do not provide a completely isolated motion cue to shape (Regan, 2000). Parts of the visual system that are insensitive to motion could still extract shape information from a translating stimulus by integrating static signals across time. Thus, it is difficult to ascertain if differences between static (SL) and translating stimuli arise from motion or from some other characteristic of the stimuli. For this reason, a motion-defined form (MDF) condition was included in which shapes were defined purely by motion cues. In this case, the shape did not translate across the screen; instead, the pixels within the shape's boundaries scrolled in one direction while the background pixels scrolled in the opposite direction. Integration of the visual stimulus across time cannot help to define the shape in this case (Regan, 2000). Thus, the MDF condition was a pure test of the effects of motion. The MDF condition was also included to aid in interpreting the results of the translational motion stimuli. If similar effects were found for translational and MDF stimuli, we could have greater confidence that motion per se was responsible for any differences between results from the translational (TM) and static-luminance conditions (SL). 
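The opposed-scrolling logic described above can be sketched in a few lines. This is an illustrative toy, not the authors' MATLAB/Psychtoolbox code: the square mask and tiny image size are placeholders for the Attneave polygons and 256 × 256 stimuli used in the experiments.

```python
import random

random.seed(0)
SIZE = 8  # toy image size; the actual stimuli were 256 x 256 pixels

# Independent luminance-noise fields for shape interior and exterior.
interior = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
exterior = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

# Toy shape mask: a centered square stands in for an Attneave polygon.
mask = [[2 <= r < 6 and 2 <= c < 6 for c in range(SIZE)] for r in range(SIZE)]

def frame(t):
    """Frame t: interior noise scrolled up by t rows, exterior noise
    scrolled down by t rows (with wrap-around)."""
    return [
        [interior[(r + t) % SIZE][c] if mask[r][c]
         else exterior[(r - t) % SIZE][c]
         for c in range(SIZE)]
        for r in range(SIZE)
    ]

# Any single frame is uniform noise; only the opposed frame-to-frame
# motion of interior and exterior pixels reveals the shape.
f0, f1 = frame(0), frame(1)
```

Because interior and exterior pixels move in opposite directions but share the same distribution, integrating any one frame over time yields no shape information, which is the property that makes the MDF condition a pure motion cue.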
Possible outcomes of this experiment and their implications follow. 
Upper field advantages: An upper field advantage for shape processing in general would support Previc's (1990) and Goodale and Milner's (1992) theories that shape processing is predominantly a ventral process. However, any sort of upper field advantage seems unlikely given previous failures from behavioral studies to consistently show an effect of upper/lower field placement on pattern recognition. 
Lower field advantages: A lower field advantage for shape processing in general would contradict the theories of Previc and Goodale and Milner because it would provide some support for dorsal cortical shape processing. However, contrary to these classic notions, fMRI studies of SL (Konen & Kastner, 2008; Kourtzi & Kanwisher, 2000) and SL and MDF (Grill-Spector, Kushnir, Edelman, et al., 1998; Kourtzi et al., 2002) processing have indicated some dorsal cortical activity. A greater lower field advantage for MDF relative to SL processing would support the importance of processing basic motion in hMT+/V5 and posterior parietal cortex during MDF processing (Schoenfeld et al., 2003). 
Our contention that an LVF advantage for a function is indicative of dorsal activation requires some elaboration. As previously mentioned, behavioral studies of dorsal versus ventral visual functions have found that some dorsal functions are consistently associated with an LVF advantage (Amenedo et al., 2007; Christman & Niebauer, 1997). However, this does not necessarily imply the reverse: that a function showing an LVF advantage has a dorsal substrate. Such an assertion would only be valid if ventral functions never (or rarely) showed LVF advantages. While it is difficult to prove the absence of a phenomenon, no study of ventral functions that we are aware of has shown a behavioral advantage for presentation in the LVF. Instead, studies of ventral functions have shown either no advantage for either hemifield or a weak UVF advantage. Chambers, McBeath, Schiano, and Metz (1999) showed that the tops of objects are more salient than their bottoms; however, this effect was not tested as a function of the location of objects in the visual field. We therefore contend that an LVF advantage for processing motion-based forms would suggest that this function has a dorsal substrate. 
Peripheral versus central vision processing: Predictions with respect to retinal eccentricity within the contour types presented in this experiment are difficult to make. The 10° eccentricity used for peripheral presentations in Experiment 1 should generate more dorsal cortical activity, since presentation in the visual periphery is known to activate dorsal cortex (hMT+ and intraparietal cortex) more quickly and in a more sustained manner than central vision (Stephen et al., 2002). Since MDF involves activation of both of these dorsal cortical areas, it is predicted that stimuli involving motion will be matched with greater accuracy than SL-defined shapes in the periphery. 
Experiment 1
Pairs of nonsense shapes with contours defined by luminance or motion were presented for matching on the basis of shape at different locations in the visual periphery. 
Method
Participants
Forty-two undergraduates from Dalhousie University (30 female), ranging in age from 17 to 31 years, participated. All had normal or corrected-to-normal vision. Participation was voluntary, and students received credit toward a class mark for their participation. On average, the experiment took an hour and thirty minutes to complete. The experimental procedure was cleared by the Social Sciences and Humanities Research Ethics Board of Dalhousie University, and participants gave informed consent prior to being tested. 
Stimuli
Figure 1 illustrates examples of the types of stimuli used in this experiment. Stimuli consisted of pairs of Attneave-style random polygons (Attneave, 1957; Attneave & Arnoult, 1956; Collin & McMullen, 2002), where both the interior and the exterior of the shapes were filled with luminance noise having a Gaussian profile. That is, each pixel had a randomly chosen gray level drawn from a normal distribution. Three different cues were used to define the shapes. In the luminance-defined form (SL) condition, the interior of the shapes had a mean luminance of 11.5 cd/m² while the exterior had a mean luminance of 3.6 cd/m² (Figure 1a). In both of the other conditions, the mean luminance inside and outside of the shape was equal, at 11.5 cd/m². Thus, there were no luminance-based cues to form in the two MDF conditions (Figures 1b and 1c). 
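The Gaussian luminance noise just described can be sketched as follows. This is illustrative only; the patch size and the noise standard deviation are assumptions, since the original standard deviation is not reported here.

```python
import random

random.seed(1)
SIZE = 16       # toy size; the real stimuli were 256 x 256 pixels
NOISE_SD = 1.0  # assumed; the original noise SD is not reported here

def noise_patch(mean_cdm2, sd=NOISE_SD):
    """SIZE x SIZE patch of Gaussian luminance noise (cd/m^2):
    each pixel's gray level is drawn from a normal distribution."""
    return [[random.gauss(mean_cdm2, sd) for _ in range(SIZE)]
            for _ in range(SIZE)]

sl_interior = noise_patch(11.5)   # SL: brighter interior ...
sl_exterior = noise_patch(3.6)    # ... dimmer exterior -> luminance cue
mdf_interior = noise_patch(11.5)  # MDF/TM: equal means inside and out,
mdf_exterior = noise_patch(11.5)  # so no luminance cue to form
```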
Figure 1
 
Sample stimuli. (a) Static, luminance-defined form (SL), (b) stationary, motion-defined form (MDF), and (c) translational, motion-defined form (TM).
In the stationary MDF condition, all of the dots making up the interior of the shape scrolled upwards while all of the dots making up the exterior scrolled downwards at 4.8°/s, producing a cue to shape that was purely motion based (Regan, 2000; see Figure 1b). In the translational MDF (TM) condition, the pixels within the shape moved up a total distance of 0.65° (i.e., 6.5 mm) before reversing to a downward movement of an equivalent distance against a static background. This alternating, translational movement occurred four times at a velocity of 9.7°/s (see Figure 1c). Although the cues to shape in this case were also motion based, in that only movement of the pixels belonging to the form against a background of static pixels defined the shapes, the translation component allowed for integration across time to determine some information about the shape, which was not present with the static MDF stimuli (see Regan, 2000). TM stimuli also had a static background to more closely mimic natural objects translating their location in space. Although MDF stimuli had a moving background, it was thought that a difference in velocity between foreground and background surfaces was the key to MDF contour definition, and there was no a priori reason to believe that the magnitude of such a difference would result in different processing. Therefore, it was believed that MDF and TM stimuli were comparable except for the translation difference. 
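As a check on these figures, degrees of visual angle convert to on-screen extent via the viewing distance (57 cm, per the Procedure section). A minimal sketch of the conversion:

```python
import math

VIEW_CM = 57.0  # viewing distance reported in the Procedure section

def deg_to_cm(deg):
    """On-screen extent (cm) subtending `deg` degrees of visual angle
    at a viewing distance of VIEW_CM centimeters."""
    return 2 * VIEW_CM * math.tan(math.radians(deg) / 2)

# At 57 cm, 1 deg of visual angle spans roughly 1 cm of screen, so the
# 0.65 deg translation amplitude is about 6.5 mm, as stated above.
translation_mm = deg_to_cm(0.65) * 10
```

A 57-cm viewing distance is a common psychophysics convention precisely because it makes 1° of visual angle correspond to approximately 1 cm on screen.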
To provide a range of difficulty for the shape-matching task, three difficulty levels were created based on Attneave's concepts of shape complexity and shape family (Attneave, 1957; Attneave & Arnoult, 1956; Collin & McMullen, 2002). The levels varied the matching task as follows: (1) different shape complexities (non-matching shapes had different numbers of sides), the easiest level; (2) different shape families with the same complexity (non-matching pairs had the same number of sides but different general shapes), the intermediate level; or (3) same shape family (non-matching shapes had the same number of sides and the same general shape), the hardest level. In the conditions with the same shape family and the same shape complexity, all shapes had 6 sides. In the conditions with different complexity, shapes could have either 6 or 12 sides. 
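The three non-match difficulty levels amount to a simple classification over shape attributes. This is a hypothetical sketch: the `sides`/`family`/`exemplar` labels are ours, and actual shape generation is not reproduced.

```python
import itertools

# Hypothetical shape labels: sides (complexity), family, and exemplar.
shapes = [
    {"sides": s, "family": f, "exemplar": e}
    for s, f, e in itertools.product((6, 12), "AB", (1, 2))
]

def difficulty(a, b):
    """Classify a non-matching pair by the cue distinguishing it."""
    if a["sides"] != b["sides"]:
        return "easy"          # (1) different complexity
    if a["family"] != b["family"]:
        return "intermediate"  # (2) same complexity, different family
    return "hard"              # (3) same complexity and family

# All three levels arise from even this small shape set.
levels = {difficulty(a, b) for a, b in itertools.combinations(shapes, 2)}
```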
In all conditions, each shape plus its background was 256 × 256 pixels in size and subtended about 10° of visual angle. The shapes plus background were presented against a neutral gray field, with the background edges at a concentric eccentricity of 10° from fixation (see Figure 2). A fixation point (a small cross) appeared at the center of the screen throughout all conditions. 
Figure 2
 
Layout of the placement of stimulus pairs in the visual field. L = lower field, U = upper field, and H = horizontal meridian.
Apparatus
An Apple Macintosh G3/233 desktop was used to control stimulus presentation on a 17-in. Studio Display CRT monitor. The experiment was programmed in MATLAB (The MathWorks, Inc., Natick, MA; www.mathworks.com) using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997; see psychtoolbox.org for a free download). Eye movements were monitored with an eye tracker (EyeLink II, SR Research, Ottawa, ON; www.sr-research.com) to ensure fixation. 
Procedure
Participants were positioned at a viewing distance of 57 cm from the computer screen. The eye tracker was then calibrated for each participant. Once a stable calibration was achieved, experimental trials began. 
For each experimental trial, participants were instructed to focus on a central fixation point (a small cross) and to press the spacebar when ready. Two shapes then appeared 10° from fixation with the fixation cross remaining present throughout the trial. Pairs of shapes were presented either: (a) one to the left and one to the right of fixation on the horizontal meridian, the horizontal condition (HM); (b) one to the upper left and one to the upper right of fixation, the upper field condition (UVF); or (c) one to the lower left and one to the lower right of fixation, the lower field condition (LVF). Figure 2 illustrates these locations. Stimuli were also presented along the vertical meridian with one shape in the upper field and the other in the lower field. See the Results section regarding treatment of data from this condition. 
Participants were instructed to press a “yes” key if the shapes matched and a “no” key if they did not. These response keys were marked on the computer keyboard. Participants were instructed that accuracy, and not reaction times, was being recorded. Hence, the emphasis was on answering correctly as opposed to rapidly. Accordingly, the dependent measure for this study was accuracy. 
The stimuli were displayed for 250 ms. The EyeLink II eye tracker was used to ensure that participants remained focused on the fixation cross throughout presentation of the stimuli. Although the 250-ms stimulus exposure duration was long enough to allow saccades, trials in which fixation was lost according to the eye tracker were discarded prior to analysis. In this way, there was assurance that only data from trials in which the shapes were processed at the locations in which they were presented relative to fixation were analyzed. 
All factors were fully blocked (Contour (3) × Location (4) × Difficulty Level (3) = 36 different blocks), and blocks were presented in random order. Participants were presented 20 trials for each possible combination of factors (10 match, 10 non-match) for a total of 720 trials. 
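The block structure can be enumerated directly; the condition names below are our shorthand, not the authors' labels.

```python
import itertools
import random

contours = ("SL", "MDF", "TM")
locations = ("upper", "lower", "horizontal", "vertical")
difficulties = ("complexity", "family", "exemplar")  # shorthand labels

# Fully blocked design: 3 x 4 x 3 = 36 blocks, presented in random order.
blocks = list(itertools.product(contours, locations, difficulties))
random.shuffle(blocks)

# 10 match + 10 non-match trials per block -> 720 trials in total.
trials = [
    (contour, location, difficulty, is_match)
    for (contour, location, difficulty) in blocks
    for is_match in [True] * 10 + [False] * 10
]
```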
Results
Results of this experiment are shown in Figure 3 as mean proportion accuracy for match trials in all conditions. Reaction times were collected as well. However, since instructions emphasized accuracy over speed, these data were not analyzed. 
Figure 3
 
Mean proportion correct for matching shapes as a function of presentation location and contour for 42 participants in the peripheral eccentricity (Experiment 1).
Only responses from match trials were analyzed because matches require processing the entire shape of both members of a pair. Matches are processed differently from mismatches (Farell, 1985): during mismatch trials, responses can be based on processing parts of shapes, because only one part of the two shapes needs to mismatch for a correct response. Match trial performance was therefore a better indicator of processing shapes in their entirety. Responses from the vertical location condition were also excluded from analysis because they were not of theoretical interest for the purposes of this study. A further 0.2% of responses were excluded due to lost fixation during a trial. Accuracy across all conditions for each of the 42 individual participants ranged from 44% to 98.5%. 
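The screening rules described above amount to a simple filter; the field names are illustrative, not taken from the authors' data files.

```python
# Toy trial records standing in for the real data set.
trials = [
    {"match": True,  "fix_lost": False, "correct": True},
    {"match": True,  "fix_lost": True,  "correct": True},   # discarded: lost fixation
    {"match": False, "fix_lost": False, "correct": True},   # excluded: non-match trial
    {"match": True,  "fix_lost": False, "correct": False},
]

# Keep only match trials on which fixation was maintained,
# then compute accuracy over the retained trials.
kept = [t for t in trials if t["match"] and not t["fix_lost"]]
accuracy = sum(t["correct"] for t in kept) / len(kept)
```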
Analysis of variance (ANOVA) of the accuracy data for match trials, using a 3 × 3 × 3 (Difficulty × Contour × Position) within-subjects design, showed a main effect of Difficulty, F(2, 82) = 13.8, p < 0.001, with mean proportions correct for Different Shapes, Different Families, and Same Families of 0.77, 0.81, and 0.76, respectively. Post hoc analyses using Fisher's PLSD (alpha = .05) indicated that different family trials were more accurate than the other two conditions. Notably, the difficulty variable did not interact with any other variables in this analysis and, since its main purpose was to ensure that task difficulty was controlled across conditions, the data were collapsed across the difficulty variable in subsequent analyses. 
An omnibus ANOVA indicated main effects of Position, F(2, 82) = 15.6, p < 0.0001, and Contour, F(2, 82) = 18.6, p < 0.0001. Post hoc testing of the effect of Position with Fisher's PLSD (alpha = .05) revealed that accuracy in every location differed from every other, with the order of performance from best to worst being: horizontal (0.83), lower (0.80), and upper (0.76). Post hoc testing of the effect of Contour revealed that the two motion conditions [static MDF (0.83) and translational TM (0.83)] did not differ and were performed more accurately than the SL condition (0.74). 
Separate one-way ANOVAs were conducted within each contour type to further examine the effect of placing the stimulus pairs in different locations. Analyses of the effects of Position within each of the two motion conditions (static MDF and translational TM) revealed nearly identical results. For both MDF and TM, accuracies in the lower (0.84) and horizontal (0.86) locations were highest, and performance in these two locations did not differ. Importantly, performance in each of these positions was better than in the upper field (0.78). Hence, shapes whose contours were defined by either type of motion were processed very similarly and least accurately when they were presented in the upper visual field. 
In contrast, performance at matching SL shapes in the upper (0.71) and lower (0.73) visual fields was not different and was inferior to performance from placement along the horizontal meridian (0.79). 
Discussion
The accuracy of matching shapes that were presented in the visual periphery was highly affected by the presence of motion in their contour definition. Overall, matching shapes whose contours were defined by motion and were static (MDF) or translated in space (TM) were more accurate than matching shapes whose contours were defined by luminance (SL). Furthermore, the pattern of performance was essentially identical for the two types of motion-defined contours across different locations in space. Accuracy was highest when pairs of shapes were presented straddling fixation along the horizontal meridian and when they were presented in the lower visual field. Performance was inferior to both of these locations when motion-defined shapes were presented in the upper visual field. Importantly, these data support a lower field advantage for shapes defined by contours that moved that was not found with shapes whose contours were static and defined by luminance. This pattern of results is consistent with behavioral results from motion processing alone (Amenedo et al., 2007; Christman & Niebauer, 1997) and so may suggest involvement of the dorsal visual cortex in motion-defined shape processing. 
In contrast, performance with SL shapes showed no bias with respect to visual field. This result is consistent with previous behavioral studies (Christman & Niebauer, 1997) and may reflect processing in both dorsal and ventral cortices. Like the moving shapes, an advantage was found for placement along the horizontal meridian. 
Experiment 2
Results of Experiment 1 indicated a processing advantage in the lower visual field relative to the upper field for motion-defined form stimuli presented in the visual periphery (10 deg of eccentricity). To determine if this advantage was specific to the visual periphery, the same procedure was repeated with shape presentations that were closer to central vision (3 deg of eccentricity). 
Methods
Thirty-six participants (21 females) were tested following the same procedure as outlined in Experiment 1. Only the eccentricity of the stimuli in visual space differed. 
Results
Data from 35 of the 36 participants were analyzed in the same manner as in Experiment 1. Data from several conditions were not recorded for one participant, so none of that participant's responses were included in the analysis. Performance on a participant-by-participant basis across all conditions ranged from 39.5% to 95% correct (Figure 4). 
Figure 4
 
Mean proportion correct for matching shapes as a function of presentation location and contour for 35 participants in the central eccentricity (Experiment 2).
The omnibus ANOVA indicated main effects of Position, F(2, 68) = 10.1, p < 0.0001, and Contour, F(2, 68) = 10.3, p < 0.0001, and an interaction between these two variables, F(4, 136) = 4.1, p < 0.01. Post hoc analyses revealed that accuracy in the horizontal position (0.84) was superior to that in the lower (0.79) and upper (0.79) visual fields. Unlike in Experiment 1, results from the two motion conditions differed: accuracy with MDF shapes (0.85) was superior to accuracy with TM (0.78) and SL (0.78) shapes. 
To explore the interaction found in the omnibus analysis, and for consistency with the analysis of Experiment 1, separate one-way ANOVAs were conducted on the effects of Position for each type of shape contour. For MDF, accuracy was equivalent across all positions. For SL, there was an effect of Position, F(2, 68) = 4.2, p < 0.05, with accuracy for horizontal placement higher than that for either upper or lower visual field placement, which did not differ from each other. A similar pattern was found for TM shapes, F(2, 68) = 13.7, p < 0.0001, whereby horizontal placements resulted in more accurate matching than placements in the upper or lower fields, which again did not differ from each other. These individual analyses of contour type clarified that the omnibus interaction reflected the absence of a position effect for MDF forms combined with superior performance in the horizontal position for SL and TM shapes. 
Discussion
Placement of shapes closer to central vision resulted in a different pattern of effects of Contour type and Position relative to placement further in the periphery. Unlike results from Experiment 1, effects of Position on MDF and TM shape matching differed. In fact, the effects of position on TM and SL shapes were more similar to each other than those on TM and MDF. MDF accuracy was highest and was insensitive to position of shape placement in the visual field. In contrast, both TM and SL shape matching showed an advantage for placement along the horizontal meridian with lower and equivalent accuracy for placements in the upper and lower visual fields. These results indicate that the lower field advantage for processing shapes whose contours were defined by motion found in Experiment 1 was specific to peripheral vision. 
General discussion
Pairs of nonsense shapes whose contours were defined by one of three different types of information [(1) static, motion-defined (MDF), (2) translational, motion-defined (TM), or (3) static, luminance-defined (SL)] were presented in six different locations. In two separate experiments, eccentricities of 3 or 10 deg from fixation were tested. In each experiment, the locations for shape placement were in the upper or lower visual fields or along the horizontal meridian, equidistant from fixation. Participants made decisions about whether the shapes matched or mismatched while eye movements were monitored in order to detect lost fixation. Match accuracy was the dependent measure with Position and Contour as independent variables. 
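The design just described is a full within-subject crossing of Contour and Position, with match and mismatch trials, run at a single eccentricity per experiment. A minimal sketch of enumerating those conditions (hypothetical code, not the authors' experiment script; all names are illustrative):

```python
from itertools import product

CONTOURS = ("MDF", "TM", "SL")            # contour-defining information
POSITIONS = ("upper", "lower", "horizontal")
TRIAL_TYPES = ("match", "mismatch")

def build_conditions(eccentricity_deg):
    """Return the full crossing of within-subject conditions at one eccentricity."""
    return [
        {"contour": c, "position": p, "trial": t, "ecc": eccentricity_deg}
        for c, p, t in product(CONTOURS, POSITIONS, TRIAL_TYPES)
    ]

# Experiment 1 used 10 deg eccentricity; Experiment 2 used 3 deg.
conditions = build_conditions(10)  # 3 x 3 x 2 = 18 unique conditions
```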
It was argued in the Introduction that superior shape matching in the lower visual field would provide evidence of dorsal cortical processing. As noted there, this assertion rests on the assumption that ventral processes never produce LVF advantages while dorsal functions frequently do, an assumption supported by the evidence to date (Amenedo et al., 2007; Christman & Niebauer, 1997). These experiments tested the hypothesis that a lower visual field advantage for shape matching would be found for shapes whose contours were defined by motion. It was further suggested that this advantage would depend on peripheral visual presentations. These predictions were borne out.
Shape processing in the visual periphery
Shape matching in the visual periphery benefited from contours defined by motion differences between foreground and background (MDF and TM), in keeping with the notion that the visual periphery activates motion-sensitive dorsal cortex more quickly and for a more sustained period of time (Stephen et al., 2002). As evidenced by identical performance patterns across locations for the two motion contour types, the peripheral system was not sensitive to differences between them. Hence, the addition of a translational component to the stimuli (TM) did not alter the basic processing of static, form-from-motion (MDF). As noted in the Introduction, this supports effects on MDF as being due to motion per se and not to the integration of static images over time and location. Identical performance for MDF and TM was not the case for central vision, as discussed below. Most notably, a lower field advantage was present for both types of motion stimuli. Relative to matching of motion-defined shapes, matching of luminance-defined shapes (SL) was performed more poorly and failed to show differences between upper and lower field placement.
There are several implications of the effects of upper and lower field placement from Experiment 1. First, the lower field advantage with motion-defined shapes contradicts prominent theories (Goodale & Milner, 1992; Previc, 1990) holding that the ventral cortex is responsible for all shape processing. It is, however, consistent with behavioral findings of a lower field advantage for perceiving motion (Amenedo et al., 2007; Christman & Niebauer, 1997), with imaging studies indicating that motion processing involves dorsal visual cortex (e.g., De Jong et al., 1994; Dupont et al., 1997; Tootell & Taylor, 1995), and with MEG findings that have supported an early motion processing component of MDF (Schoenfeld et al., 2003). The fact that SL stimuli did not show any performance differences when placed in the lower or upper fields may be due to dorsal and ventral activation both contributing to their processing (Konen & Kastner, 2008; see also Grill-Spector, Kushnir, Edelman, et al., 1998; Kourtzi et al., 2002; Kourtzi & Kanwisher, 2000). The absence of any field advantage for SL stimuli is also consistent with other behavioral findings using static luminance-defined forms (Christman & Niebauer, 1997).
The basic motion processor, hMT+, codes for retinal eccentricity such that dorsal parts are active during peripheral stimulation and ventral parts are active during central stimulation (Tyler et al., 2005; Wandell et al., 2007). Hence, dorsal hMT+ was most active during Experiment 1 and processed MDF and TM equivalently. Additionally, fMRI has indicated greater activation of LO, a dorsal, caudal part of LOC, in response to lower field presentations (Grill-Spector et al., 1999; Large, Culham, Kuchinad, Aldcroft, & Vilis, 2008; Niemeier, Goltz, Kuchinad, Tweed, & Vilis, 2005; Sayres & Grill-Spector, 2008). However, it should be noted that LO responds to both upper and lower field presentations, in contrast to a distinctly retinotopic area such as V4v (Large et al., 2008). Shapes with a motion component may activate LO to a greater extent than SL shapes and so contribute to the lower field advantage. Consistent with this notion, other studies have found that the superior LOC is particularly involved in the processing of motion-defined form (Murray et al., 2003; Zhuang, Peltier, He, LaConte, & Hu, 2008).
The lower field advantage for motion-defined shapes could arguably result from the fact that moving patterns in the lower field were directed centripetally while those in the upper field were directed centrifugally with respect to fixation (Edwards & Badcock, 1993). Additionally, stimuli of this type are known to cause an illusion whereby the position of the form can appear shifted in the direction of the internal motion (DeValois & DeValois, 1991). Either of these factors could have contributed to the lower field advantage. However, this seems unlikely given that the lower field advantage was not present with central presentations: a priori, there was no reason to assume that effects caused by the direction of motion should differ between central and peripheral presentations. The lower field advantage was also not altered by translational motion, which might be assumed to alter effects of motion direction. Given that dorsal cortical processing has been associated with peripheral presentations (Stephen et al., 2002), and that the effect was found only in this condition, it seems plausible that the lower field advantage in peripheral vision was due to activation of the dorsal visual cortex.
Shape processing in central vision
Overall, shape matching in central vision showed a different pattern of effects of contour type and location of stimuli from that found in peripheral vision. Most notably, unlike the results from Experiment 1, processing differences were found between stimuli defined by static, form-from-motion (MDF) and those with a translation component (TM). Specifically, MDF shapes were matched more accurately and showed no sensitivity to location. This pattern suggests that MDF shapes were processed in a non-retinotopic manner. Although there is some debate about this issue, LOC is non-retinotopic (Grill-Spector, Kushnir, Hendler, et al., 1998; Large et al., 2008; Tootell, Mendola, Hadjikhani, Liu, & Dale, 1998; but for a competing view, see Larsson & Heeger, 2006) and is active during MDF processing (Murray et al., 2003; Schoenfeld et al., 2003; Zhuang et al., 2008). 
In contrast, TM shapes were matched in a way more similar to SL shapes: both were matched more poorly than MDF, and equally so. For these contour types, shapes presented on the horizontal meridian were matched most accurately, with poorer and undifferentiated performance in the lower and upper fields. Notably, unlike with peripheral presentations, no sign of a lower field advantage was found with motion-defined stimuli.
These results suggest that when shapes are presented in central vision, the dorsal and ventral cortices are involved in a more equivalent manner, even when the shapes have a motion component. Interestingly, mechanisms engaged by central vision did differentiate between MDF and TM stimuli. In particular, the translation component diminished performance when shapes were presented in the upper and lower visual fields. This effect could be due to engagement during central vision of a processing module that cannot integrate shapes over different locations and times. Conceivably, all three shape types were processed by the same ventral module, LOC, in a non-retinotopic manner, with the motion component of MDF boosting its processing and the translational component of TM impairing it.
MDF and TM performances were not differentiated in peripheral vision but were differentiated in central vision. We speculate that hMT+ activity drove the lower field advantage in peripheral vision because it processed MDF and translational motion similarly and because basic motion processing shows a lower field advantage. Conversely, the differentiated performance in central vision was likely due to activity in the LOC: superior LOC is responsive to MDF (Murray et al., 2003; Zhuang et al., 2008) but not to translational motion. LOC is also non-retinotopic, consistent with performance for all contour types in central vision.
One of the strongest and most consistent effects was superior performance for shapes placed along the horizontal meridian. Similar effects have been described with pairs of static shapes (Desjardins & Braun, 2006; Sereno & Kosslyn, 1991). Here the advantage refers to the fact that shape pairs were matched more efficiently when presented along the horizontal meridian, straddling the vertical meridian, than when they were presented within a single, lateralized visual field or in heterotopic locations. Generally, pairs of shapes presented in homotopic fields are matched more quickly than pairs presented in heterotopic fields because inter-hemispheric callosal connections representing homotopic fields are more numerous than those representing heterotopic fields (Desjardins & Braun, 2006). Large et al. (2008) have also shown that integration of information across hemifields is possible within later processing modules such as LOC, because LOC preferentially responds to stimuli in the contralateral visual field, whereas integration within a hemifield occurs prior to LOC processing. Perhaps most relevant is Desjardins and Braun's (2006) demonstration that shape pairs in homotopic locations that straddle fixation are processed more quickly than those that do not. This effect is consistent with the superiority in the present experiment of the HM condition, because only in that condition did shape pairs straddle fixation (see Figure 2).
In summary, shapes with motion-defined contours were matched more accurately in the lower than the upper peripheral visual field. This effect was absent for shapes with contours defined by luminance or when shapes were presented centrally. We conclude that processing shapes involving motion, when presented in the lower periphery, may engage dorsal cortical functioning to a greater extent than when they are presented centrally, or than shapes that do not have a motion component. When shapes involving motion are presented centrally, they likely engage dorsal cortex to a lesser extent, or more equivalently to the engagement of ventral cortex. Since even static shapes engage hMT+ to some extent (Kourtzi et al., 2002), this activity could be sufficient to equal ventral activity in LOC and so account for the lack of an upper or lower field bias for these shapes. Given the superior processing of MDF when presented in central and lower peripheral vision, it may be a useful format for shape presentation under conditions of impaired vision.
Acknowledgments
These studies were funded by an Operating Grant from the Natural Sciences and Engineering Research Council of Canada to PM and by a Human Frontiers Science Program Grant RG 161/1999-B to PM. 
Commercial relationships: none. 
Corresponding author: Patricia A. McMullen. 
Email: Patricia.McMullen@dal.ca. 
Address: Department of Psychology, Dalhousie University, 1355 Oxford Street, Halifax, Nova Scotia, Canada B3H 4J1. 
References
Amenedo, E., Pazo-Alvarez, P., & Cadaveira, F. (2007). Vertical asymmetries in pre-attentive detection of changes in motion direction. International Journal of Psychophysiology, 64, 184–189.
Attneave, F. (1957). Physical determinants of the judged complexity of shapes. Journal of Experimental Psychology, 53, 221–227.
Attneave, F., & Arnoult, M. D. (1956). The quantitative study of shape and pattern perception. Psychological Bulletin, 53, 452–471.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Chambers, K. W., McBeath, M. K., Schiano, D. J., & Metz, E. G. (1999). Tops are more salient than bottoms. Perception & Psychophysics, 61, 625–635.
Christman, S. D., & Niebauer, C. L. (1997). The relation between left-right and upper-lower visual field asymmetries. In S. Christman (Ed.), Cerebral asymmetries in sensory and perceptual processing (pp. 263–296). Amsterdam: Elsevier Science.
Collin, C. A., & McMullen, P. A. (2002). Using Matlab to generate families of similar Attneave shapes. Behavior Research Methods, Instruments, & Computers, 34, 55–68.
Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16, 205–212.
Danckert, J., & Goodale, M. A. (2001). Superior performance for visually guided pointing in the lower visual field. Experimental Brain Research, 137, 303–308.
Danckert, J., & Goodale, M. A. (2003). Ups and downs in the visual control of action. In S. H. Johnson-Frey (Ed.), Taking action: Cognitive neuroscience perspectives on intentional acts (pp. 29–64). Cambridge, MA: MIT Press.
De Jong, B. M., Shipp, S., Skidmore, B., Frackowiak, R. S., & Zeki, S. (1994). The cerebral activity related to the visual perception of forward motion in depth. Brain, 117, 1039–1054.
Desjardins, S., & Braun, C. M. J. (2006). Homotopy and heterotopy and the bilateral field advantage in the Dimond paradigm. Acta Psychologica, 121, 125–136.
DeValois, R. L., & DeValois, K. K. (1991). Vernier acuity with stationary moving Gabors. Vision Research, 31, 1619–1626.
Dupont, P., De Bruyn, B., Vandenberghe, R., Rosier, A.-M., Michiels, J., & Marchal, G. (1997). The kinetic occipital region in human visual cortex. Cerebral Cortex, 7, 283–292.
Edwards, M., & Badcock, D. R. (1993). Asymmetries in the sensitivity to motion in depth: A centripetal bias. Perception, 22, 1013–1023.
Farell, B. (1985). Same-different judgments: A review of current controversies in perceptual comparisons. Psychological Bulletin, 98, 419–456.
Ferber, S., Humphrey, G. K., & Vilis, T. (2003). The lateral occipital complex subserves the perceptual persistence of motion-defined groupings. Cerebral Cortex, 13, 716–721.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Grill-Spector, K., Kourtzi, Z., & Kanwisher, N. (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41, 1409–1422.
Grill-Spector, K., Kushnir, T., Edelman, S., Avidan, G., Itzchak, Y., & Malach, R. (1999). Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron, 24, 187–203.
Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202.
Grill-Spector, K., Kushnir, T., Hendler, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). A sequence of object-processing stages revealed by fMRI in human occipital lobe. Human Brain Mapping, 6, 316–328.
Grill-Spector, K., & Malach, R. (2004). The human visual cortex. Annual Review of Neuroscience, 27, 649–677.
Gulyas, B., Heywood, C. A., Popplewell, D. A., Roland, P. E., & Cowey, A. (1994). Visual form discrimination from color or motion cues: Functional anatomy by positron emission tomography. Proceedings of the National Academy of Sciences of the United States of America, 91, 9965–9969.
Konen, C. S., & Kastner, S. (2008). Two hierarchically organized neural systems for object information in human visual cortex. Nature Neuroscience, 11, 224–231.
Kourtzi, Z., Bülthoff, H. H., Erb, M., & Grodd, W. (2002). Object-selective responses in the human motion area MT/MST. Nature Neuroscience, 5, 17–18.
Kourtzi, Z., & Kanwisher, N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20, 3310–3318.
Large, M. E., Aldcroft, A., & Vilis, T. (2005). Perceptual continuity and the emergence of perceptual persistence in the ventral visual pathway. Journal of Neurophysiology, 93, 3453–3462.
Large, M. E., Culham, J., Kuchinad, A., Aldcroft, A., & Vilis, T. (2008). fMRI reveals greater within- than between-hemifield integration in the human lateral occipital cortex. European Journal of Neuroscience, 27, 3299–3309.
Larsson, J., & Heeger, D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. Journal of Neuroscience, 26, 13128–13142.
Logothetis, N. K., & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
Maunsell, J. H., & Newsome, W. T. (1987). Visual processing in monkey extrastriate cortex. Annual Review of Neuroscience, 10, 363–401.
Maunsell, J. H., & Van Essen, D. (1987). Topographic organization of the middle temporal visual area in the macaque monkey: Representational biases and the relationship to callosal connections and myeloarchitectonic boundaries. Journal of Comparative Neurology, 266, 535–555.
Mishkin, M. (1982). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
Murray, S. O., Olshausen, B. A., & Woods, D. L. (2003). Processing shape, motion and three-dimensional shape-from-motion in human cortex. Cerebral Cortex, 13, 508–516.
Niemeier, M., Goltz, H. C., Kuchinad, A., Tweed, D. B., & Vilis, T. (2005). A contralateral preference in the lateral occipital area: Sensory and attentional mechanisms. Cerebral Cortex, 15, 325–331.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Previc, F. (1990). Functional specialization in the lower and upper visual fields of humans: Its ecological origins and neurophysiological implications. Behavioral and Brain Sciences, 13, 519–575.
Regan, D. (2000). Human perception of objects: Early visual processing of spatial form defined by luminance, color, texture, motion and binocular disparity. Sunderland, MA: Sinauer Associates.
Sayres, R., & Grill-Spector, K. (2008). Relating retinotopic and object-selective responses in human lateral occipital cortex. Journal of Neurophysiology, 100, 249–267.
Schoenfeld, M. A., Woldorff, M., Duzel, E., Scheich, H., Heinze, H., & Mangun, G. R. (2003). Form-from-motion: MEG evidence for time course and processing sequence. Journal of Cognitive Neuroscience, 15, 157–172.
Sereno, A. B., & Kosslyn, S. M. (1991). Discrimination within and between hemifields: A new constraint on theories of attention. Neuropsychologia, 29, 659–675.
Stephen, J. M., Aine, C. J., Christner, R. F., Ranken, D., Huang, M., & Best, E. (2002). Central versus peripheral visual field stimulation results in timing differences in dorsal stream sources as measured with MEG. Vision Research, 42, 3059–3074.
Tootell, R. B., Mendola, J. D., Hadjikhani, N. K., Liu, A. K., & Dale, A. M. (1998). The representation of the ipsilateral visual field in human cerebral cortex. Proceedings of the National Academy of Sciences of the United States of America, 95, 818–824.
Tootell, R. B., & Taylor, J. B. (1995). Anatomical evidence for MT and additional cortical visual areas in humans. Cerebral Cortex, 5, 39–55.
Tyler, C. W., Likova, L. T., Chen, C.-C., Kontsevich, L. L., Schira, M. M., & Wade, A. R. (2005). Extended concepts of occipital retinotopy. Current Medical Imaging Reviews, 1, 319–329.
Van Oostende, S., Sunaert, S., Van Hecke, P., Marchal, G., & Orban, G. A. (1997). The kinetic occipital (KO) region in man: An fMRI study. Cerebral Cortex, 7, 690–701.
Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56, 366–383.
Wang, J., Zhou, T., Qiu, M., Du, A., Cai, K., & Wang, Z. (1999). Relationship between ventral stream for object vision and dorsal stream for spatial vision: An fMRI+ERP study. Human Brain Mapping, 8, 170–181.
Zhuang, J., Peltier, S., He, S., LaConte, S., & Hu, X. (2008). Mapping the connectivity with structural equation modeling in an fMRI study of shape-from-motion. Neuroimage, 42, 799–806.
Figure 1
 
Sample stimuli. (a) Static, luminance-defined form (SL), (b) stationary, motion-defined form (MDF), and (c) translational, motion-defined form (TM).
Figure 2
 
Layout of the placement of stimulus pairs in the visual field. L = lower field, U = upper field, and H = horizontal meridian.
Figure 3
 
Mean proportion correct for matching shapes as a function of presentation location and contour for 42 participants in the peripheral eccentricity ( Experiment 1).