Open Access
Article | April 2016
Networks extending across dorsal and ventral visual pathways correlate with trajectory perception
Journal of Vision April 2016, Vol.16, 21. doi:10.1167/16.6.21
Ryosuke Tanaka, Yuko Yotsumoto; Networks extending across dorsal and ventral visual pathways correlate with trajectory perception. Journal of Vision 2016;16(6):21. doi: 10.1167/16.6.21.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Despite its apparent importance for understanding the visual environment, the neural mechanisms underlying the perception of motion trajectories have been investigated less thoroughly than those of many other aspects of visual motion. In the present functional magnetic resonance imaging (fMRI) experiment, we focused on a recently reported visual illusion called the wriggling motion trajectory illusion (WMTI), in which dots that are actually moving straight induce the perception of curved trajectories. The use of this illusion enabled us to bypass confounding associations between trajectories and various other local motion features. Thus, the aim of the present study was to locate the brain areas that differentiate between qualitatively distinct motion trajectories, such as straight versus curved. At the same time, we aimed to obtain further insights into the mechanisms of the illusion. Areas whose activation correlated with perceived wriggling trajectories were scattered across the dorsal and ventral visual pathways, including the superior parietal lobule (SPL) and the fusiform gyrus. These patterns of activity indicate that motion information is integrated into trajectories in the ventral visual pathway in a manner similar to the integration of spatially continuous orientations into static contours. The present result is also in line with a previously suggested hypothetical mechanism of the illusion, which involves visual grouping.

Introduction
Perceiving visual motion trajectories can be crucial to understanding visual scenes, including the interpretation of biological motion (Johansson, 1973), and to the successful coordination of actions in the environment. Despite this apparent importance, however, the detailed mechanisms and neural underpinnings of trajectory perception have attracted relatively little attention, especially when compared with other aspects of visual motion processing such as direction (Snowden, Treue, Erickson, & Andersen, 1991), speed (Pack, Conway, Born, & Livingstone, 2006), or global optic flow (Smith, Wall, Williams, & Singh, 2006). Rather, the trajectories of moving objects have received somewhat more consideration in the context of object tracking (Cavanagh & Alvarez, 2005; Narasimhan, Tripathy, & Barrett, 2009; Pylyshyn & Storm, 1988; Shooner, Tripathy, Bedell, & Ogmen, 2010; Tripathy, Narasimhan, & Barrett, 2007). For example, Pylyshyn and Storm (1988) first reported that one can simultaneously track up to four or five objects moving along random trajectories. Interactions between tracking capacity and trajectory deviation have also been studied in detail (Tripathy & Barrett, 2004; Tripathy et al., 2007). This line of studies has focused on the roles of visual attention and sensory memory, especially in relation to tracking capacity (Narasimhan et al., 2009; Shooner et al., 2010). 
However, these observations do not necessarily mean that discrimination between different kinds of trajectories requires continuous attentive tracking of targets. It has been confirmed repeatedly that the human visual system has heightened sensitivity to elongated motion trajectories (Grzywacz, Watamaniuk, & McKee, 1995; Verghese, Watamaniuk, McKee, & Grzywacz, 1999; Watamaniuk, McKee, & Grzywacz, 1995) and to radial deformations of circular trajectories (Or, Wilkinson, & Wilson, 2011), neither of which can be explained by a mere linear summation of responses to their constituent parts. These observations imply the existence of perceptual, preattentive mechanisms that integrate motion vectors across time and space into trajectories (Grzywacz et al., 1995; Verghese et al., 1999), in a manner similar to the integration of orientations into static contours (Field, Hayes, & Hess, 1993). 
If so, what part of the brain is responsible for such motion integration? One plausible scenario is that areas within the early visual cortices are responsible for trajectory perception. For example, area V2 is believed to contain cells that selectively respond to contours with specific angles or curvatures (Hegdé & Van Essen, 2003; Ito & Komatsu, 2004), possibly by spatially integrating orientation information provided by V1. Additionally, it was recently reported that V2 contains a motion direction map (Lu, Chen, Tanigawa, & Roe, 2010). These observations make V2 a likely candidate for the neural substrate of trajectory perception, in which motion direction information would be spatiotemporally integrated into trajectories with specific curvatures, in a manner similar to that underlying contour selectivity. 
Recent functional magnetic resonance imaging (fMRI) studies using periodic trajectories have reported that activation patterns in areas V2, V3, and MT encode certain information about visual motion trajectories (Gorbet, Wilkinson, & Wilson, 2012, 2014), which seems to partly support our hypothesis that V2 underlies object motion trajectory perception. A problem with investigating the neural substrates of trajectory perception, which has not been addressed thus far, is the confounding association between trajectories and other features of visual motion. That is, given that many visual areas are sensitive to features such as position, speed, motion direction, and motion coherence, different trajectories are likely to elicit different patterns of activity in areas that are sensitive to those features but not trajectory-specific. To bypass these confounds, in the present research we utilized a recently reported visual illusion called the wriggling motion trajectory illusion (WMTI; Kuwahara, Sato, & Yotsumoto, 2012). The illusion consists of dots moving straight in random directions (yet never colliding with each other), and it elicits a robust perception of wriggling trajectories rather than the veridical, straight trajectories. The advantage of using this illusion is that one can easily create two sets of stimuli that elicit strikingly different percepts of trajectories yet are well equalized in global motion features such as coherence, simply by manipulating whether the dots are allowed to collide. Thus, the illusion enabled us to compare brain activity while subjects observed different trajectories, while essentially ruling out confounding associations with other motion features. 
The illusion is also reported to occur without overt tracking of the dots' trajectories (Kuwahara et al., 2012), and thus was suitable for our focus on nonattentive mechanisms of trajectory perception. 
In the present fMRI study, we aimed to elucidate the potential neural substrates of trajectory perception by using the WMTI as stimuli and comparing brain activity while subjects perceived qualitatively different trajectories. At the same time, we expected to obtain further insights into the mechanisms of the WMTI itself, which are not yet fully understood. In order to identify areas whose activity correlated with the perception of wriggling trajectories, we first conducted exploratory whole-brain analyses. Additionally, we identified functionally defined regions of interest (ROIs; V1, V2, V3, MT, and MST) and examined the time courses of their activity to test our hypothesis more directly. 
Methods
Subjects
Sixteen subjects with normal or corrected-to-normal vision participated in the experiment (14 males, 2 females; mean age: 23.3 years; SD = 3.5; range: 21–33). All subjects gave written informed consent for their participation in the experimental protocol, which was approved by the institutional review board of The University of Tokyo. 
Stimuli and procedure
The visual stimuli were presented on an MRI-compatible flat-panel LCD display (NNL-LCD, NordicNeuroLab, Bergen, Norway). Subjects viewed the stimuli through an oblique mirror mounted on the head coil. The spatial resolution of the display was 1920 × 1080 pixels and the refresh rate was 60 frames/s. The viewing distance was 144 cm and the size of the display was 70.0 cm × 39.5 cm. Subjects who needed vision correction used plastic correction lenses in the scanner. 
Each subject underwent one high-resolution anatomical scan and 16 functional scans. Functional sessions consisted of eight experimental sessions and eight localizer sessions. 
MRI data acquisition
All MRI images were acquired using a 3T MRI scanner (Magnetom Prisma, Siemens, Munich, Germany) equipped with a 64-channel head coil. Prior to the functional sessions, a high-resolution anatomical image of each subject's whole brain was acquired with an MPRAGE protocol (TR = 2000 ms, TE = 2.9 ms, TI = 900 ms, flip angle = 9°). The slices were aligned parallel to the AC-PC line and the spatial resolution of the volume was 1.0 × 1.0 × 1.0 mm3. 
In all functional sessions, BOLD signals were acquired with echo-planar imaging (EPI) sequences. In the experimental sessions, 39 slices aligned parallel to the AC-PC line were acquired without any gap in each run to cover the whole brain. The thickness of each slice was 3.5 mm (TR = 2000 ms, TE = 30 ms, flip angle = 80°, in-plane resolution = 3 × 3 mm2, FOV = 64 × 64 pixels). In the localizer sessions, functional volumes covering only the occipital part of the brain were obtained. Slices were aligned perpendicular to the calcarine sulcus. The number of slices was 25; the slice thickness was 3 mm with no gaps between slices (TR = 2000 ms, TE = 30 ms, flip angle = 80°, in-plane resolution = 2 × 2 mm2, FOV = 94 × 94 pixels). 
Experimental session
In the experimental sessions, we employed a rapid event-related design with four conditions: ILLUSION, CONTROL, STATIC1, and STATIC2 (Figure 1). In the ILLUSION condition, we used the same set of WMTI stimuli as Kuwahara et al. (2012). The WMTI stimuli consisted of 300 dots moving straight in random directions, arranged so as not to overlap with each other. The diameter of each dot was 0.3° and the speed was constant at 2.2°/s. The dots were presented inside a square aperture of 14.6° × 14.6°. The CONTROL condition was identical to the ILLUSION condition except that the dots were allowed to overlap. The STATIC1 and STATIC2 conditions consisted of randomly extracted frames from the ILLUSION and CONTROL stimuli, respectively. Based on the nature of the stimuli, we categorized the ILLUSION and CONTROL conditions as motion conditions, and the STATIC1 and STATIC2 conditions as no-motion conditions. Likewise, we categorized the ILLUSION and STATIC1 conditions as no-overlap conditions, and the CONTROL and STATIC2 conditions as overlap conditions. 
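As a rough illustration of the collision constraint that distinguishes the conditions, the following Python sketch (not the authors' OpenGL/MATLAB stimulus code; all function names and parameters are illustrative) generates straight dot trajectories and checks whether any two dots ever overlap, i.e., come closer than one dot diameter:

```python
import math

def straight_paths(pos, ang, speed, n_frames):
    """Frame-by-frame positions of dots moving along fixed straight lines."""
    return [[(x + t * speed * math.cos(a), y + t * speed * math.sin(a))
             for (x, y), a in zip(pos, ang)]
            for t in range(1, n_frames + 1)]

def any_overlap(frame, diameter):
    """True if any two dot centres come closer than one dot diameter."""
    return any(math.hypot(ax - bx, ay - by) < diameter
               for i, (ax, ay) in enumerate(frame)
               for bx, by in frame[i + 1:])
```

An ILLUSION-style stimulus would correspond to resampling starting positions and directions until no frame contains an overlap, whereas a CONTROL-style stimulus accepts overlapping paths.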
Figure 1
 
Stimuli presented in the experiment. (a) ILLUSION stimuli contained 300 dots moving straight in random directions without overlapping. The dots were perceived to have curved trajectories. (b) CONTROL condition; dots were allowed to overlap. The dots were perceived to have straight trajectories. (c, d) STATIC1 and STATIC2 stimuli consisted of frames randomly extracted from the latter half (i.e., frames 31 to 60) of the ILLUSION and CONTROL videos, respectively.
The duration of each stimulus was 2 s regardless of the condition. Stimuli were presented according to predetermined paradigms generated using the optseq2 program. Each of the four conditions was presented 16 times in a single session, and the overall duration of a session was 240 s. The stimulus presentations were interleaved with a FIXATION condition, in which a fixation point was presented against a black background. Eight experimental sessions were conducted for each subject. 
The stimulus images were generated beforehand using OpenGL and presented using MATLAB (2012b, The MathWorks Inc., Natick, MA) with the Psychtoolbox 3 extension (Brainard, 1997; Kleiner, Brainard, & Pelli, 2007; Pelli, 1997). 
V1, V2, and V3 localizer
Of the eight localizer sessions, two were conducted to identify the retinotopic organization of the medial occipital cortex and thereby localize the early visual areas (V1, V2, and V3). The visual stimuli were static, flickering wedges along either the vertical or horizontal meridian (Choe, Blake, & Lee, 2014; Fischer, Spotswood, & Whitney, 2011) with a black-and-white checkerboard pattern, presented against a gray background in a block design. The wedges extended for 15° and their central angles were 22.5° each. The pattern of the wedges reversed at a frequency of 1 Hz. Each block lasted 20 s, and six vertical and six horizontal blocks were presented in alternation. The total duration of a session was 240 s. 
MT+ and MST localizer
Of the remaining six localizer sessions, two were conducted to localize MT+ and four to localize MST. In both sets of sessions, white dots moving radially (diameter = 0.25°; speed = 8°/s) against a black background were employed (Huk, Dougherty, & Heeger, 2002). Dots were presented within a circular aperture (diameter = 15°) and travelled toward and away from its center. The direction of the dots was reversed once per second. In the MT+ localizer sessions, the aperture was presented at the center of the display and subjects fixated on the center of the aperture. In the MST localizer sessions, the center of the aperture was placed 10° to the side of the fixation point (right in two sessions, left in two sessions). Both the MT+ and MST localizer sessions used a block design. In the MT+ localizer sessions, static-dot blocks (24 s), in which the dots did not move, and moving-dot blocks (12 s) were presented in turn seven times each. In the MST localizer sessions, static-dot blocks (18 s) and moving-dot blocks (18 s) were presented in turn seven times each. Thus, the duration of each session was 252 s for both the MT+ and MST localizer sessions. The MT ROI was obtained by subtracting the MST ROI from the MT+ ROI. 
Fixation task
During the scans, subjects performed a fixation task. The aim of the task was twofold. First, we wanted subjects not to attend to the dot trajectories, since the aim of the present study was to locate neural correlates of the preattentive integration of motion information into trajectories. Second, we wanted to stabilize subjects' attention, since the BOLD signal is known to fluctuate considerably with attentional state (Huk, Ress, & Heeger, 2001). Subjects were asked to fixate on the fixation point and respond to changes in its color by pushing a button with their left hand. The point was red and occasionally turned blue; the change occurred once every 9 s on average. 
Behavioral measures
After the fMRI sessions, the subjects participated in a brief behavioral experiment outside the scanner in order to confirm that they actually perceived the illusory stimuli as "wriggling." The experiment was conducted with a laptop (13-in MacBook Air, Apple, Cupertino, CA). Subjects were instructed to maintain a viewing distance of about 70 cm. In each trial, a red fixation point was presented for 500 ms against a black background, followed by a 2-s stimulus movie chosen from the same set of motion stimuli (ILLUSION and CONTROL conditions) used in the fMRI experiment. The sizes of the fixation point and the movies were the same as in the fMRI experimental sessions. After the presentation of the movie, a response panel, 3.0° in height and 15.9° in width, was displayed on the screen. The words "Straight" and "Wriggling" were placed at the left and right ends of the panel, respectively. The subjects were asked to rate their perception of the dots' motion by clicking a point on the panel. The rating score was calculated as the relative distance between the clicked location and the left end of the panel, within a range of 0–100; that is, the higher the score, the more wriggling the movie appeared to contain. The subjects were told that the points on the panel correspond to rating scores and were encouraged to use the full range of the response panel. The experiment consisted of 60 trials, with the first 20 used for practice. The subjects were instructed to form an internal criterion for the kinds of percepts they would rate as straight or wriggling. After practice, the ILLUSION and CONTROL conditions were presented 20 times each in random order. 
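The score computation amounts to a linear mapping of the click's horizontal position onto the 0–100 range. A minimal Python sketch (hypothetical function and parameter names; the experiment itself ran in MATLAB):

```python
def rating_score(click_x, panel_left, panel_width):
    """Relative distance of the click from the panel's left ('Straight')
    end, mapped to a 0-100 'wriggling' rating and clipped to the panel."""
    frac = (click_x - panel_left) / panel_width
    return 100.0 * min(max(frac, 0.0), 1.0)
```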
Data analysis
All imaging data were preprocessed and analyzed using FreeSurfer and FS-FAST software, which is documented and freely available at http://surfer.nmr.mgh.harvard.edu/. For each subject, the cortical surface was extracted from the anatomical image and converted into a wire-frame model (Dale, Fischl, & Sereno, 1999; Dale & Sereno, 1993; Fischl & Dale, 2000; Fischl, Liu, & Dale, 2001; Ségonne, Pacheco, & Fischl, 2007). The individual wire-frame cortical surfaces were then spatially aligned to a common surface for later analyses (Fischl, Sereno, & Dale, 1999; Fischl, Sereno, Tootell, & Dale, 1999). The functional images were motion and slice-time corrected, aligned onto the individual wire-frame cortical surfaces, and spatially smoothed with a Gaussian kernel of 8 mm full width at half maximum. 
The functional data first underwent single-subject general linear model (GLM) analyses. For each subject, five of the eight functional scans were chosen based on fixation task performance. This rather strict exclusion criterion was intended to exclude scans during which subjects became drowsy, as well as scans during which subjects tracked the dots with their eyes. The hemodynamic response function was approximated as a gamma function. Slow signal drift was fitted and removed quadratically, and head motion parameters were included in the model as nuisance variables and removed. Each functional image was then aligned onto the common wire-frame surface for later group-level analysis. The four experimental conditions and the fixation condition were modeled separately, and contrast images (each condition [ILLUSION, CONTROL, STATIC1, and STATIC2] > FIXATION, ILLUSION > CONTROL, and STATIC1 > STATIC2) were generated for each subject. 
In the group-level GLM analyses, the group mean of the individual contrast images for each contrast was tested with a one-sample t test. For multiple-comparison correction, cluster thresholding was used instead of the conventional Bonferroni correction, which can be overly conservative (Hagler, Saygin, & Sereno, 2006). The vertex-wise significance threshold used to define clusters, i.e., groups of adjacent vertices with significant differences between conditions, was set at p < 0.01. The cluster-size threshold needed to achieve p < 0.05 was determined based on a previous simulation (Hagler et al., 2006), as implemented in the FreeSurfer software. 
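The cluster-forming step can be pictured as a connected-components pass over the vertex-wise significance map. The sketch below is a simplified Python illustration on a 2D grid with 4-connectivity, not FreeSurfer's surface-based implementation, and it takes the size threshold as a given parameter rather than deriving it from the Hagler et al. simulation:

```python
from collections import deque

def surviving_clusters(sig, min_size):
    """sig: 2D boolean grid of vertex-wise significance (e.g., p < .01).
    Return the sizes of 4-connected clusters that meet the size threshold."""
    rows, cols = len(sig), len(sig[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if sig[r][c] and not seen[r][c]:
                # Breadth-first flood fill over adjacent significant vertices.
                queue, size = deque([(r, c)]), 0
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and sig[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                if size >= min_size:
                    sizes.append(size)
    return sizes
```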
In order to further assess the differences among conditions in the experimental sessions, the time course of the fMRI signal change for each condition was calculated within the ROIs (V1, V2, V3, MT, and MST), which were manually defined based on the individual functional images acquired in the localizer sessions. The time courses of the BOLD signal changes within the localized ROIs were computed and averaged across subjects for each condition. Peak signal changes were then calculated individually by temporally averaging the time-course signal changes within a window of 4 to 7 s from stimulus onset. The data from the two hemispheres were averaged within each subject. A repeated-measures two-way analysis of variance (ANOVA) was applied to compare the peak signal changes across conditions. 
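The peak signal change described above reduces to averaging the samples whose acquisition times fall inside the 4–7 s window. A minimal Python sketch assuming the experimental TR of 2 s (illustrative function name, not the study's pipeline):

```python
def peak_signal_change(timecourse, tr=2.0, window=(4.0, 7.0)):
    """Average the percent-signal-change samples whose acquisition time
    (volume index * TR, relative to stimulus onset) falls inside `window`."""
    lo, hi = window
    samples = [v for i, v in enumerate(timecourse) if lo <= i * tr <= hi]
    return sum(samples) / len(samples)
```

With TR = 2 s, the 4–7 s window captures the volumes acquired at 4 s and 6 s after onset.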
Results
Behavioral ratings
The average rating score was 63.2 for the ILLUSION condition and 36.9 for the CONTROL condition. A paired-sample t test confirmed a significant difference between the conditions (t = 5.79; p = 3.71 × 10−4). Note that the subjects were asked to use the entire range of the panel and to rate their perception of visual motion in a relative manner. Therefore, the behavioral results do not necessarily mean that the CONTROL condition was always perceived as straight. Nonetheless, the result confirmed that the ILLUSION condition evoked the perception of wriggling trajectories at least to a greater extent than the CONTROL condition. 
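For reference, the paired-sample t statistic used here is simply the mean within-subject rating difference divided by its standard error. A Python sketch (illustrative, not the analysis code used in the study):

```python
import math
import statistics

def paired_t(x, y):
    """Paired-sample t statistic: mean of the per-subject differences
    divided by its standard error (sample SD over sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))
```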
Whole brain analysis
Before directly comparing the ILLUSION and CONTROL conditions, we first examined the "each condition > FIXATION" contrasts, which illustrate the networks responsive to each of the conditions (Figure 2). Every stimulus condition activated a large portion of the posterior part of the brain, extending from the dorsal to the ventral visual pathway. Consistent with many previous reports, the motion conditions (ILLUSION and CONTROL) activated much broader regions within the dorsal visual pathway than the STATIC conditions did, including MT+ and IPS, which are known to be motion sensitive. Right precentral areas were also activated only in the motion conditions, especially in the ILLUSION condition. In all conditions, the anterior region of the medial occipital cortex up to the parieto-occipital sulcus showed reduced activity relative to baseline, most likely because of attentional suppression in the peripheral visual field (Smith, Williams, & Singh, 2004). Reduced activity was also scattered around the insula and the cingulate sulcus, while some positive cingulate activity was seen only in the ILLUSION condition. 
Figure 2
 
Activation maps contrasting each condition with the FIXATION condition.
The contrasts between the ILLUSION and CONTROL conditions are shown in Figure 3 and Table 1. Several significant clusters were identified across the whole brain. The largest clusters were in the bilateral pericalcarine areas, i.e., the early visual cortices. The ILLUSION condition, relative to the CONTROL, also activated several other visual areas within the dorsal visual pathway (L Superior parietal; X = −32.5; Y = −45.5; Z = 40.9) and the ventral visual pathway (R Fusiform; X = 37.8; Y = −74.6; Z = −14.0). Another cluster on the medial surface of the right hemisphere (R Paracentral; X = 17.9; Y = −31.7; Z = 40.6), which extended from the paracentral gyrus to the cingulate sulcus, largely corresponded to the cingulate sulcus visual area (CSv) reported in previous studies (Antal, Baudewig, Paulus, & Dechent, 2008; Fischer, Bülthoff, Logothetis, & Bartels, 2012). The other three clusters that responded more to the ILLUSION condition than to the CONTROL were located around the left central sulcus. Only one cluster showed greater activity to the CONTROL condition than to the ILLUSION, located in the anterior part of the left middle temporal gyrus (L Middle temporal; X = −59.2; Y = −9.2; Z = −22.7). 
Figure 3
 
Significant clusters revealed in the ILLUSION > CONTROL (positive) and ILLUSION < CONTROL (negative) contrasts. The positive clusters were located within the bilateral medial occipital cortices, right fusiform and CSv, left superior parietal lobule (SPL), and pre- and postcentral gyri and the negative one was located within the anterior part of the middle temporal gyrus.
Table 1
 
Significant clusters in the contrast images. Notes: Anatomical labels were assigned by the automatic segmentation implemented in the FreeSurfer software (Desikan et al., 2006; Fischl et al., 2004). The p values were corrected according to the cluster-size simulation (Hagler et al., 2006).
The contrast between the two STATIC conditions produced only a single occipital cluster (Table 1). Note that no significant cluster responded more to STATIC2 than to STATIC1. 
ROI analysis
We further assessed differences between the conditions based on the predetermined ROIs (V1, V2, V3, MT, and MST). Repeated-measures two-way ANOVAs with overlap/no-overlap and motion/no-motion as within-subject factors were conducted separately on each ROI (Figure 4). The analyses revealed significant main effects of motion/no-motion in all ROIs (V1: F(1, 15) = 53.51, p = 2.55 × 10−6; V2: F(1, 15) = 95.86, p = 1.66 × 10−7; V3: F(1, 15) = 120.8, p = 1.42 × 10−8; MT: F(1, 15) = 72.47, p = 3.97 × 10−7; MST: F(1, 15) = 158, p = 2.29 × 10−9) and of overlap/no-overlap in V1, V2, and V3 (V1: F(1, 15) = 53.51, p = 6.25 × 10−6; V2: F(1, 15) = 45.91, p = 1.17 × 10−4; V3: F(1, 15) = 6.266, p = 0.0244), while no significant interaction was observed. 
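For a two-level within-subject factor in a 2 × 2 repeated-measures design, the main-effect F with df = (1, n − 1) equals the squared paired t statistic computed on each subject's difference between the two factor levels, averaged over the other factor. A minimal Python sketch of this equivalence (illustrative names, not the analysis code used here):

```python
import math
import statistics

def main_effect_F(cells_a1, cells_a2):
    """Main effect of a two-level within-subject factor in a 2x2
    repeated-measures design.  cells_a1[s] / cells_a2[s] hold subject s's
    two cell values (one per level of the other factor) at each level of
    the tested factor.  With two levels, F = t^2 of the paired t test on
    the per-subject level means; df = (1, n - 1)."""
    d = [statistics.mean(a) - statistics.mean(b)
         for a, b in zip(cells_a1, cells_a2)]
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))
    return t * t
```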
Figure 4
 
Results of additional ROI analyses. ***p < 0.001, **p < 0.01; *p < 0.05. The red crosses correspond to the mean signal change of each condition. The blue dotted lines indicate the average signal changes across conditions. The black broken lines show the point of zero signal change. Significant main effects of motion/nonmotion were revealed in all ROIs. The main effects of overlap/nonoverlap were significant in V1, V2, and V3. No interactions were significant.
Additional analyses on whole brain significant clusters
Additionally, we conducted the same repeated-measures two-way ANOVAs as in the ROI analyses on the peak signal changes (4 to 7 s from stimulus onset) within the clusters identified in the contrast between the ILLUSION and CONTROL conditions. This was to ensure that the significantly different activities detected in that contrast specifically corresponded to the perceived trajectories, rather than to differences between the no-overlap (ILLUSION and STATIC1) and overlap (CONTROL and STATIC2) conditions in general. As expected from the ROI analyses, the bilateral occipital clusters exhibited significant differences between the overlap/no-overlap conditions (L Pericalcarine: F(1, 15) = 83.73, p = 1.59 × 10−7; R Pericalcarine: F(1, 15) = 45.92, p = 6.23 × 10−6). A significant main effect of overlap/no-overlap was also found in the left temporal cluster (L Middle temporal: F(1, 15) = 10.68, p = 0.00519). Several clusters showed significant main effects of motion/no-motion (L Pericalcarine: F(1, 15) = 105.3, p = 3.56 × 10−8; L Precentral (dorsal one): F(1, 15) = 7.40, p = 0.0158; L Precentral (ventral one): F(1, 15) = 10.61, p = 0.0053; L Superior parietal: F(1, 15) = 32.32, p = 4.33 × 10−5; R Pericalcarine: F(1, 15) = 63.92, p = 8.68 × 10−7; R Paracentral: F(1, 15) = 4.71, p = 0.0464). No cluster showed significant interactions. 
Discussion
In the present study, we aimed to elucidate the potential neural substrates of trajectory curvature representation, and, at the same time, clarify the mechanisms of the WMTI. Our initial hypothesis was that areas within earlier visual cortices, especially V2, should dissociate trajectories with different curvatures. This expectation was based on previous observations that V2 somehow integrates orientation or motion direction information to represent higher order visual representations such as contours (Hegdé & Van Essen, 2003; Ito & Komatsu, 2004). 
ROI analyses
To test our hypothesis directly, we conducted Overlap-by-Motion two-way ANOVAs on the peak amplitudes of activity within the functionally defined ROIs (V1, V2, V3, MT, and MST). In V1 to V3, this analysis revealed significant main effects of both motion/no-motion ([ILLUSION & CONTROL] vs. [STATIC1 & STATIC2]) and overlap/no-overlap ([CONTROL & STATIC2] vs. [ILLUSION & STATIC1]), but no interaction between these factors. The main effect of overlap/no-overlap without an interaction indicates that the occipital cortices responded to some motion-irrelevant difference between ILLUSION and CONTROL, rather than to the perceived trajectories. In the stimuli employed in the experiment, the same number of dots was presented in all conditions, while the dots were allowed to overlap only in the CONTROL and STATIC2 conditions. Thus, the ILLUSION and STATIC1 conditions were slightly brighter on average and contained more luminance-defined edges than the others. Considering the responsivity of the early visual areas to luminance and edges (Goodyear & Menon, 1998), it is plausible that the main effect of overlap/no-overlap in V1 to V3 was due to the difference in luminance and/or the number of edges between the conditions. Lastly, neither subdivision of the MT+ complex exhibited a significant difference between the overlap/no-overlap conditions, indicating that those areas do not discern trajectories with different perceived curvatures. 
Ventral visual pathway
Our exploratory whole-brain analysis comparing the two motion conditions (ILLUSION vs. CONTROL) yielded several significant clusters within the visual cortices, including both dorsal and ventral areas. One such cluster was found within the fusiform gyrus, which is known for its selectivity to human faces (Kanwisher, McDermott, & Chun, 1997) and letters (Nobre, Allison, & McCarthy, 1994; Starrfelt & Gerlach, 2007). This might sound surprising, considering that the ventral visual areas are widely accepted to be dedicated to processing stable visual information, such as object quality (Kravitz, Saleem, Baker, Ungerleider, & Mishkin, 2013; Mishkin, Ungerleider, & Macko, 1983), and that our stimuli did not include different colors, shapes, or complex objects. However, some studies have ascribed to the fusiform gyrus roles in motion processing, such as the integration of local motion cues in transparent motion (Muckli, Singer, Zanella, & Goebel, 2002) and biological motion processing (Grossman & Blake, 2002; Grossman, Blake, & Kim, 2004; Peelen, Wiggett, & Downing, 2006; Santi, Servos, Vatikiotis-Bateson, Kuratate, & Munhall, 2003). Also, a recent study reported that the ventral visual pathway, of which the fusiform gyrus is part, has patches selective to curvature in images, rather than to specific kinds of objects (Yue, Pourladian, Tootell, & Ungerleider, 2014). Considering these observations, the fusiform gyrus may play an active role in integrating motion vectors into curved trajectories, much as it processes static, curved contours. This finding accords with recent converging evidence for substantial functional (Zanon, Busan, Monti, Pizzolato, & Battaglini, 2010) and anatomical connections (Takemura et al., 2015; Yeatman et al., 2014) between the dorsal and ventral visual streams. 
Dorsal visual pathway
Another region, the SPL, this time within the dorsal visual stream, also showed a significant preference for the perception of curved trajectories. The SPL has been demonstrated to be involved in top-down spatial attention. However, considering that our subjects performed a fixation task, and that the WMTI is independent of attentive tracking, it is unlikely that covert top-down attention toward the trajectories produced the observed difference in the SPL. 
More recently, another line of research has accumulated evidence that parietal areas around the intraparietal sulcus, including the SPL, play an important role in spontaneous visual grouping of target objects (Fukuda, Bahrami, Kanai, & Rees, 2015; Kanai, Carmel, Bahrami, & Rees, 2011; Romei, Driver, Schyns, & Thut, 2011; Xu & Chun, 2007; Zaretskaya, Anstis, & Bartels, 2013; Zeki & Stutters, 2013). For example, the SPL shows pronounced activity when observers perceive multiple rotating dots as a dynamic illusory Gestalt rather than as ungrouped individual elements (Zaretskaya et al., 2013). In light of these observations, the SPL activity observed in the present experiment is consistent with the grouping account of the WMTI, which is explained in the following section. 
Grouping hypothesis of WMTI
As noted in the Introduction, the WMTI is a relatively novel illusion and its detailed mechanisms are still unclear. Thus, one understandable concern over the use of the WMTI in the present study is that the illusion might not be a misperception of trajectories per se, but rather a percept of something else that requires higher-order cognition. For example, there remains the possibility that the conscious recognition that the dots are somehow avoiding collisions is the cause of the perceived wriggling. To exclude this possibility, we created a novel demo showing that even a much smaller number of dots elicits robust misperception of trajectories. The three straight-moving dots in Movie 1b are identical to the dots shown in yellow in Movie 1a, a clip actually used as an ILLUSION-condition stimulus. These dots appear to rotate synchronously as a group along curved trajectories, even though they do not appear to be avoiding each other. Movies 1c and 1d show that this misperception of trajectories is much weaker, or even absent, in configurations where the dots are arranged to collide. 
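Such non-colliding configurations of straight-moving dots can be produced, for example, by rejection-sampling trajectories until no pair of dots ever comes closer than one dot diameter. The following is a minimal sketch, not the authors' actual generation procedure; the field size, dot radius, and speed are hypothetical placeholders (the published stimulus used 300 dots over 60 frames).

```python
import numpy as np

def make_nonoverlapping_dots(n_dots=8, n_frames=60, field=1.0,
                             radius=0.02, speed=0.01, seed=0):
    """Sample straight, constant-velocity trajectories such that no two
    dots ever come within 2 * radius of each other (hypothetical units:
    the field is a unit square; speed is displacement per frame)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_frames)[:, None]          # frame index column vector
    pos = np.empty((n_dots, 2))
    vel = np.empty((n_dots, 2))
    placed = 0
    while placed < n_dots:
        p = rng.uniform(0, field, 2)
        theta = rng.uniform(0, 2 * np.pi)
        v = speed * np.array([np.cos(theta), np.sin(theta)])
        traj = p + t * v                      # candidate (n_frames, 2) path
        # reject the candidate if it ever gets too close to a placed dot
        clear = all(
            np.min(np.linalg.norm(traj - (pos[k] + t * vel[k]), axis=1))
            >= 2 * radius
            for k in range(placed)
        )
        if clear:
            pos[placed], vel[placed] = p, v
            placed += 1
    return pos, vel
```

Under this sketch, a CONTROL-like stimulus would simply skip the proximity check while keeping everything else identical, mirroring the minimal physical difference between the two motion conditions.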
 
Movie 1.
 
Demos of the WMTI with small numbers of dots. The yellow dots in (b) are extracted from (a); (a) is identical to a clip actually used as an ILLUSION-condition stimulus, except for the colors. Even three dots elicit robust perception of illusory curved trajectories. (c) and (d) are almost identical, except that the dots in (d) are arranged to collide. The illusory perception of trajectories disappears in (d).
 
 
 
The first paper to report the illusion (Kuwahara et al., 2012) offered a speculative explanation: virtual boundaries defined by adjacent dots change shape or imply rotation, and thus evoke the illusory perception of curved trajectories. Taking into account the activation observed in the present experiment in parietal areas known to subserve perceptual grouping, this grouping account now seems quite plausible. 
An obvious problem with this account is that it does not clearly explain why the illusion is weaker for overlapping dots, as Kuwahara et al. (2012) also admit (note that the perception of wriggling trajectories also occurs to some, albeit a much lesser, extent in the control videos). One persuasive explanation is that preventing the dots from colliding with each other incidentally increased the frequency with which motion patterns similar to Movie 1c emerged in the display, and in turn reduced the frequency of patterns like Movie 1d, thus leading to a stronger perception of illusory wriggling. 
Other neural correlates and limitations
The other significant clusters of activity were located around the left central sulcus, within the left anterior middle temporal gyrus, and around the right cingulate sulcus. The locations of some of these clusters seem to correspond to areas such as the cingulate sulcus visual area (CSv) and the frontal eye field (FEF), which are responsive to certain forms of visual stimuli. However, the large discrepancy between the experimental configuration of the present study and those of the previous studies that investigated the functions of these areas makes it difficult to offer a persuasive account of these results. 
Although we narrowed down the possible neural mechanisms underlying trajectory representation and the WMTI, the present study has several limitations. First, although we assumed that the subjects' attention levels were consistent across test conditions thanks to the fixation task and our exclusion criterion, a more elaborate design will be needed to clearly isolate the neural underpinnings of the illusion and of trajectory perception from the secondary effects of voluntary attention. For example, parametrically modulating task difficulty is one way to dissociate the two. Second, the possibility that the confounding effects of luminance and edges extended beyond the occipital visual areas cannot be strictly excluded on the basis of the present data alone, although our post hoc analyses did not detect any such confounding effects in the clusters other than the occipital and anterior middle temporal ones. Using a version of the illusion with unfilled dots (also reported to induce the illusory perception of wriggling trajectories to some extent; Kuwahara et al., 2012) may be one way to bypass this problem. Lastly, future research should test the mechanistic explanations of trajectory perception and the WMTI provided herein. Notably, further investigation is merited to clarify the causal interactions among the clusters identified in the current experiment. Techniques such as transcranial magnetic stimulation in tandem with fMRI, as well as dynamic causal modeling (Friston, Harrison, & Penny, 2003), could be helpful. 
Conclusions
The visual system can preattentively process the trajectories of moving objects; however, the neural mechanisms underpinning this ability have remained largely unknown. We compared neural activity in subjects observing WMTI and control stimuli, which elicit perceptions of strikingly different trajectories despite their similar physical configurations, in order to clarify (a) the neural correlates of trajectory perception and (b) the mechanism of the illusion. The resulting patterns of activity extended across the ventral and dorsal visual pathways, suggesting that motion trajectories are processed in a manner similar to static contours, and supporting the previously posited grouping account of the illusion. 
Acknowledgments
This work was supported by Grants-in-Aid for Scientific Research (KAKENHI-25119003, 16H03749, 24330208) to YY. The authors declare no competing financial interests. 
Commercial relationships: none. 
Corresponding author: Yuko Yotsumoto. 
Email: cyuko@mail.ecc.u-tokyo.ac.jp. 
Address: Department of Life Sciences, University of Tokyo, Komaba 3-8-1, Meguro-ku, Tokyo, Japan 153-8902. 
References
Antal A., Baudewig J., Paulus W., Dechent P. (2008). The posterior cingulate cortex and planum temporale/parietal operculum are activated by coherent visual motion. Visual Neuroscience, 25 (1), 17–26, doi:10.1017/S0952523808080024.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436, doi:10.1163/156856897X00357.
Cavanagh P., Alvarez G. A. (2005). Tracking multiple targets with multifocal attention. Trends in Cognitive Sciences, 9 (7), 349–354, doi:10.1016/j.tics.2005.05.009.
Choe K. W., Blake R., Lee S.-H. (2014). Dissociation between neural signatures of stimulus and choice in population activity of human V1 during perceptual decision-making. Journal of Neuroscience, 34 (7), 2725–2743, doi:10.1523/JNEUROSCI.1606-13.2014.
Dale A. M., Fischl B., Sereno M. I. (1999). Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage, 9, 179–194, doi:10.1006/nimg.1998.0395.
Dale A. M., Sereno M. I. (1993). Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: A linear approach. Journal of Cognitive Neuroscience, 5, 162–176, doi:10.1162/jocn.1993.5.2.162.
Desikan R. S., Ségonne F., Fischl B., Quinn B. T., Dickerson B. C., Blacker D., Killiany R. J. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage, 31, 968–980, doi:10.1016/j.neuroimage.2006.01.021.
Field D. J., Hayes A., Hess R. F. (1993). Contour integration by the human visual system: Evidence for a local “association field.” Vision Research, 33 (2), 173–193, doi:10.1016/0042-6989(93)90156-Q.
Fischer E., Bülthoff H. H., Logothetis N. K., Bartels A. (2012). Human areas V3A and V6 compensate for self-Induced planar visual motion. Neuron, 73, 1228–1240, doi:10.1016/j.neuron.2012.01.022.
Fischer J., Spotswood N., Whitney D. (2011). The emergence of perceived position in the visual system. Journal of Cognitive Neuroscience, 23 (1), 119–136, doi:10.1162/jocn.2010.21417.
Fischl B., Dale A. M. (2000). Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proceedings of the National Academy of Sciences, USA, 97 (20), 11050–11055, doi:10.1073/pnas.200033797.
Fischl B., Liu A. K., Dale A. M. (2001). Automated manifold surgery: Constructing geometrically accurate and topologically correct models of the human cerebral cortex. IEEE Transactions on Medical Imaging, 20 (1), 70–80, doi:10.1109/42.906426.
Fischl B., Sereno M. I., Dale A. M. (1999). Cortical surface-based analysis. II: Inflation, flattening, and a surface-based coordinate system. NeuroImage, 9, 195–207, doi:10.1006/nimg.1998.0396.
Fischl B., Sereno M. I., Tootell R. B. H., Dale A. M. (1999). High-resolution inter-subject averaging and a surface-based coordinate system. Human Brain Mapping, 8, 272–284.
Fischl B., Van Der Kouwe A. J. W., Destrieux C., Halgren E., Ségonne F., Salat D. H., Dale A. M. (2004). Automatically parcellating the human cerebral cortex. Cerebral Cortex, 14 (1), 11–22, doi:10.1093/cercor/bhg087.
Friston K. J., Harrison L., Penny W. (2003). Dynamic causal modelling. NeuroImage, 19 (4), 1273–1302, doi:10.1016/S1053-8119(03)00202-7.
Fukuda M., Bahrami B., Kanai R., Rees G. (2015). Brain activity dynamics in parietal regions during spontaneous switch in bistable perception. NeuroImage, 107, 190–197, doi:10.1016/j.neuroimage.2014.12.018.
Goodyear B. G., Menon R. S. (1998). Effect of luminance contrast on BOLD fMRI response in human primary visual areas. Journal of Neurophysiology, 79, 2204–2207.
Gorbet D. J., Wilkinson F., Wilson H. R. (2012). An fMRI examination of the neural processing of periodic motion trajectories. Journal of Vision, 12 (11): 5, 1–25, doi:10.1167/12.11.5. [PubMed] [Article]
Gorbet D. J., Wilkinson F., Wilson H. R. (2014). Neural correlates of radial frequency trajectory perception in the human brain. Journal of Vision, 14 (1): 11, 1–19, doi:10.1167/14.1.11. [PubMed] [Article]
Grossman E. D., Blake R. (2002). Brain areas active during visual perception of biological motion. Neuron, 35, 1167–1175.
Grossman E. D., Blake R., Kim C. (2004). Learning to see biological motion: Brain activity parallels behavior. Journal of Cognitive Neuroscience, 16 (9), 1669–1679.
Grzywacz N. M., Watamaniuk S. N. J., McKee S. P. (1995). Temporal coherence theory for the detection and measurement of visual motion. Vision Research, 35 (22), 3183–3203, doi:10.1016/0042-6989(95)00102-6.
Hagler D. J., Saygin A. P., Sereno M. I. (2006). Smoothing and cluster thresholding for cortical surface-based group analysis of fMRI data. NeuroImage, 33 (4), 1093–1103, doi:10.1016/j.neuroimage.2006.07.036.
Hegdé J., Van Essen D. C. (2003). Strategies of shape representation in macaque visual area V2. Visual Neuroscience, 20 (3), 313–328, doi:10.1017/S0952523803203102.
Huk A. C., Dougherty R. F., Heeger D. J. (2002). Retinotopy and functional subdivision of human areas MT and MST. Journal of Neuroscience, 22 (16), 7195–7205, doi:20026661.
Huk A. C., Ress D., Heeger D. J. (2001). Neuronal basis of the motion aftereffect reconsidered. Neuron, 32 (1), 161–172. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11604147
Ito M., Komatsu H. (2004). Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. Journal of Neuroscience, 24 (13), 3313–3324, doi:10.1523/JNEUROSCI.4364-03.2004.
Johansson G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14 (2), 201–211.
Kanai R., Carmel D., Bahrami B., Rees G. (2011). Structural and functional fractionation of right superior parietal cortex in bistable perception. Current Biology, 21 (3), R106–R107, doi:10.1016/j.cub.2010.12.009.
Kanwisher N., McDermott J., Chun M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17 (11), 4302–4311.
Kleiner M., Brainard D., Pelli D. (2007). “What's new in Psychtoolbox-3?” Perception 36 ECVP Abstract Supplement.
Kravitz D. J., Saleem K. S., Baker C. I., Ungerleider L. G., Mishkin M. (2013). The ventral visual pathway: An expanded neural framework for the processing of object quality. Trends in Cognitive Sciences, 17 (1), 26–49, doi:10.1016/j.tics.2012.10.011.
Kuwahara M., Sato T., Yotsumoto Y. (2012). Wriggling motion trajectory illusion. Journal of Vision, 12 (12): 4, 1–14, doi:10.1167/12.12.4. [PubMed] [Article]
Lu H. D., Chen G., Tanigawa H., Roe A. W. (2010). A motion direction map in macaque V2. Neuron, 68 (5), 1002–1013, doi:10.1016/j.neuron.2010.11.020.
Mishkin M., Ungerleider L. G., Macko K. A. (1983). Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417, doi:10.1016/0166-2236(83)90190-X.
Muckli L., Singer W., Zanella F. E., Goebel R. (2002). Integration of multiple motion vectors over space: An fMRI study of transparent motion perception. NeuroImage, 16 (4), 843–856, doi:10.1006/nimg.2002.1085.
Narasimhan S., Tripathy S. P., Barrett B. T. (2009). Loss of positional information when tracking multiple moving dots: The role of visual memory. Vision Research, 49 (1), 10–27, doi:10.1016/j.visres.2008.09.023.
Nobre A. C., Allison T., McCarthy G. (1994). Word recognition in the human inferior temporal lobe. Nature, 372, 260–263.
Or C. C., Wilkinson F., Wilson H. R. (2011). Discrimination and identification of periodic motion trajectories. Journal of Vision, 11 (8): 7, 1–11, doi:10.1167/11.8.7. [PubMed] [Article]
Pack C. C., Conway B. R., Born R. T., Livingstone M. S. (2006). Spatiotemporal structure of nonlinear subunits in macaque visual cortex. Journal of Neuroscience, 26 (3), 893–907, doi:10.1523/JNEUROSCI.3226-05.2006.
Peelen M. V., Wiggett A. J., Downing P. E. (2006). Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron, 49, 815–822, doi:10.1016/j.neuron.2006.02.004.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442, doi:10.1163/156856897X00366.
Pylyshyn Z. W., Storm R. W. (1988). Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision, 3, 179–197, doi:10.1163/156856888X00122.
Romei V., Driver J., Schyns P. G., Thut G. (2011). Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Current Biology, 21 (4), 334–337, doi:10.1016/j.cub.2011.01.035.
Santi A., Servos P., Vatikiotis-Bateson E., Kuratate T., Munhall K. (2003). Perceiving biological motion: Dissociating visible speech from walking. Journal of Cognitive Neuroscience, 15 (6), 800–809.
Ségonne F., Pacheco J., Fischl B. (2007). Geometrically accurate topology-correction of cortical surfaces using nonseparating loops. IEEE Transactions on Medical Imaging, 26 (4), 518–529, doi:10.1109/TMI.2006.887364.
Shooner C., Tripathy S. P., Bedell H. E., Ogmen H. (2010). High-capacity, transient retention of direction-of-motion information for multiple moving objects. Journal of Vision, 10 (6): 8, 1–20, doi:10.1167/10.6.8. [PubMed] [Article]
Smith A. T., Wall M. B., Williams A. L., Singh K. D. (2006). Sensitivity to optic flow in human cortical areas MT and MST. European Journal of Neuroscience, 23, 561–569, doi:10.1111/j.1460-9568.2005.04526.x.
Smith A. T., Williams A. L., Singh K. D. (2004). Negative BOLD in the visual cortex: Evidence against blood stealing. Human Brain Mapping, 21 (4), 213–220, doi:10.1002/hbm.20017.
Snowden R. J., Treue S., Erickson R. G., Andersen R. A. (1991). The response of area MT and V1 neurons to transparent motion. Journal of Neuroscience, 11 (9), 2768–2785.
Starrfelt R., Gerlach C. (2007). The Visual What For Area: Words and pictures in the left fusiform gyrus. NeuroImage, 35 (1), 334–342, doi:10.1016/j.neuroimage.2006.12.003.
Takemura H., Rokem A., Winawer J., Yeatman J. D., Wandell B. A., Pestilli F. (2015). A major human white matter pathway between dorsal and ventral visual cortex. Cerebral Cortex, 1–10, doi:10.1093/cercor/bhv064.
Tripathy S. P., Barrett B. T. (2004). Severe loss of positional information when detecting deviations in multiple trajectories. Journal of Vision, 4 (12): 4, 1020–1043, doi:10.1167/4.12.4. [PubMed] [Article]
Tripathy S. P., Narasimhan S., Barrett B. T. (2007). On the effective number of tracked trajectories in amblyopic human vision. Journal of Vision, 7 (6): 2, 1–18, doi:10.1167/7.6.2. [PubMed] [Article]
Verghese P., Watamaniuk S. N. J., McKee S. P., Grzywacz N. M. (1999). Local motion detectors cannot account for the detectability of an extended trajectory in noise. Vision Research, 39 (1), 19–30, doi:10.1016/S0042-6989(98)00033-9.
Watamaniuk S. N. J., McKee S. P., Grzywacz N. M. (1995). Detecting a trajectory in random-direction motion noise. Vision Research, 35 (1), 65–77.
Xu Y., Chun M. M. (2007). Visual grouping in human parietal cortex. Proceedings of the National Academy of Sciences, USA, 104 (47), 18766–18771, doi:10.1073/pnas.0705618104.
Yeatman J. D., Weiner K. S., Pestilli F., Rokem A., Mezer A., Wandell B. A. (2014). The vertical occipital fasciculus: A century of controversy resolved by in vivo measurements. Proceedings of the National Academy of Sciences, USA, 111 (48), E5214–E5223, doi:10.1073/pnas.1418503111.
Yue X., Pourladian I. S., Tootell R. B. H., Ungerleider L. G. (2014). Curvature-processing network in macaque visual cortex. Proceedings of the National Academy of Sciences, USA, 111 (33), E3467–E3475, doi:10.1073/pnas.1412616111.
Zanon M., Busan P., Monti F., Pizzolato G., Battaglini P. P. (2010). Cortical connections between dorsal and ventral visual streams in humans: Evidence by TMS/EEG co-registration. Brain Topography, 22 (4), 307–317, doi:10.1007/s10548-009-0103-8.
Zaretskaya N., Anstis S., Bartels A. (2013). Parietal cortex mediates conscious perception of illusory gestalt. Journal of Neuroscience, 33 (2), 523–531, doi:10.1523/JNEUROSCI.2905-12.2013.
Zeki S., Stutters J. (2013). Functional specialization and generalization for grouping of stimuli based on colour and motion. NeuroImage, 73, 156–166, doi:10.1016/j.neuroimage.2013.02.001.
Figure 1
 
Stimuli presented in the experiment. (a) ILLUSION stimuli contained 300 dots moving straight in random directions without overlapping. The dots were perceived to have curved trajectories. (b) CONTROL condition; the dots were allowed to overlap and were perceived to have straight trajectories. (c, d) STATIC1 and STATIC2 stimuli consisted of frames randomly extracted from the latter half (i.e., frames 31 to 60) of the ILLUSION and CONTROL videos, respectively.
Figure 2
 
Activation maps contrasting each condition with the FIXATION condition.
Figure 3
 
Significant clusters revealed by the ILLUSION > CONTROL (positive) and ILLUSION < CONTROL (negative) contrasts. The positive clusters were located within the bilateral medial occipital cortices, right fusiform gyrus and CSv, left superior parietal lobule (SPL), and pre- and postcentral gyri; the negative cluster was located within the anterior part of the middle temporal gyrus.
Figure 4
 
Results of additional ROI analyses. ***p < 0.001; **p < 0.01; *p < 0.05. The red crosses correspond to the mean signal change of each condition. The blue dotted lines indicate the average signal changes across conditions. The black broken lines show the point of zero signal change. Significant main effects of motion/nonmotion were revealed in all ROIs. The main effects of overlap/nonoverlap were significant in V1, V2, and V3. No interactions were significant.
Table 1
 
Significant clusters in the contrast images. Notes: Anatomical labels were assigned by the automatic segmentation implemented in the FreeSurfer software (Desikan et al., 2006; Fischl et al., 2004). The p values were corrected by cluster-size simulation (Hagler et al., 2006).