**Peripherally presented stimuli evoke stronger activity in scene-processing regions than foveally presented stimuli, suggesting that scene understanding is driven largely by peripheral information. We used functional MRI to investigate whether functional connectivity evoked during natural perception of audiovisual movies reflects this peripheral bias. For each scene-sensitive region—the parahippocampal place area (PPA), retrosplenial cortex, and occipital place area—we computed two measures: the extent to which its activity could be predicted by V1 activity (connectivity strength) and the eccentricities within V1 to which it was most closely related (connectivity profile). Scene regions were most related to peripheral voxels in V1, but the detailed nature of this connectivity varied within and between these regions. The retrosplenial cortex showed the most consistent peripheral bias but was less predictable from V1 activity, while the occipital place area was related to a wider range of eccentricities and was strongly coupled to V1. We divided the PPA along its posterior–anterior axis into retinotopic maps PHC1, PHC2, and anterior PPA, and found that a peripheral bias was detectable throughout all subregions, though the anterior PPA showed a less consistent relationship to eccentricity and a substantially weaker overall relationship to V1. We also observed an opposite foveal bias in object-perception regions including the lateral occipital complex and fusiform face area. These results show a fine-scale relationship between eccentricity biases and functional correlation during natural perception, giving new insight into the structure of the scene-perception network.**

*Dog Day Afternoon* (including audio), as described in a previous publication (Arcaro et al., 2015); note that we used free-viewing runs (rather than the fixation runs analyzed previously). Eight of these subjects watched a 5-min clip six times: twice unaltered from the original movie, twice with coarse temporal reordering (randomly ordered movie segments of 7–20 s), and twice with fine temporal reordering (randomly ordered movie segments of 0.5–1.5 s). One subject viewed only the two unaltered runs of the clip, and one viewed the clip once in each condition. The remaining five subjects watched other clips also taken from popular movies and television shows: one watched a 25-min episode of *The Twilight Zone* titled "The Lateness of the Hour" (1960; Chen, Honey, Simony, Arcaro, Norman, & Hasson, 2015); two watched a 50-min segment from the episode "A Study in Pink" of BBC's *Sherlock* (2010; Chen, Leong, Norman, & Hasson, 2016); one watched a 26-min segment from an episode of BBC's *Merlin* (2008); and one watched a 4-min clip from Charlie Chaplin's 1921 *The Kid* twice. We note that the variety of movies used ensures that our connectivity measures are not driven by a specific stimulus, and instead reflect a more general pattern of connectivity.

*connectivity strength* between V1 and the seed ROI, since it measures the degree to which the seed ROI is functionally correlated with signals within V1. If a seed region is totally unrelated to V1, it will have a connectivity strength near zero, since it will not be possible to predict its activity from V1 activity. We note that with this analysis, it is possible to observe a strong bias (i.e., clear relative differences in the peripheral versus central weights) but weak connectivity strength (i.e., the V1 model captures little of the variance in the seed area overall), or vice versa.
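The objective whose terms are defined below can be reconstructed from those definitions (a sketch of the regularized regression of Baldassano et al., 2012, assembled from the surrounding text; the exact notation is assumed):

```latex
\hat{w} = \operatorname*{arg\,min}_{w}\; \lVert Vw - s \rVert^{2}
        + \lambda \sum_{i=1}^{N} \sum_{j \in n_i} \left( w_i - w_j \right)^{2}
```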

*w* is the V1 connectivity weight vector, *V* is the matrix of V1 time courses, *s* is the seed time course, *N* is the number of V1 vertices, and *n_i* is the set of spatial neighbors of vertex *i*. The first term implements a standard least-squares multiple regression, while the second term penalizes weight differences for neighboring vertices; see our full paper (Baldassano et al., 2012) for additional description.

*λ*, which interpolates between no smoothing (*λ* = 0) and a constant weight map over the whole ROI (*λ* = ∞). However, note that even for a particular choice of *λ*, the strength of the spatial smoothing can vary locally over V1, with weights changing faster in some regions than others depending on the underlying signals' similarity to the searchlight time course. This is a major advantage over presmoothing the data with a fixed Gaussian kernel, which would force us to pick a fixed amount of spatial smoothing that is constant throughout V1. The *λ* parameter therefore only sets a rough spatial scale over which we would like the weight maps to vary. In all experiments in this article, we use *λ* = 1000, in order to yield maps that are smooth on the scale of about 5–10 mm (Supplementary Figure S1). A precise setting of *λ* is not required to obtain the eccentricity preference results we report, as shown in Supplementary Figure S2.
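Concretely, this smoothness-regularized regression can be sketched as follows; this is a minimal illustration under assumed array shapes, and the function and variable names are ours, not from the paper's code:

```python
import numpy as np

def smooth_connectivity_weights(V, s, neighbors, lam):
    """Regress a seed time course on V1 vertex time courses while
    penalizing weight differences between spatially neighboring vertices.

    Minimizes ||V w - s||^2 + lam * sum_i sum_{j in neighbors[i]} (w_i - w_j)^2
    (up to a constant factor on the penalty when neighbor lists are symmetric).

    V         : (T, N) array of V1 time courses (T time points, N vertices)
    s         : (T,) seed time course
    neighbors : length-N list; neighbors[i] holds indices of vertex i's
                spatial neighbors (assumed symmetric)
    lam       : spatial smoothness parameter (lambda)
    """
    N = V.shape[1]
    # Graph Laplacian: w^T L w sums squared weight differences over
    # neighboring vertex pairs.
    L = np.zeros((N, N))
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            L[i, i] += 1.0
            L[i, j] -= 1.0
    # Setting the gradient of the objective to zero gives the normal
    # equations (V^T V + lam * L) w = V^T s.
    return np.linalg.solve(V.T @ V + lam * L, V.T @ s)
```

With `lam = 0` this reduces to ordinary least squares; as `lam` grows, the weight map is driven toward a constant over the ROI, matching the two limits described above.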

*connectivity profile* over V1, describing which eccentricities in V1 are most related to the time course of this ROI in this subject. For example, the connectivity profile in Figure 1b (in orange) shows that weights generally increase with eccentricity, indicating a bias toward peripheral connectivity. We can test these eccentricity profiles for foveal versus peripheral preference by computing the Pearson correlation between weights and eccentricity; positive values indicate a peripheral bias (weights increasing with eccentricity), while negative values indicate a foveal bias (weights decreasing with eccentricity).
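This foveal-versus-peripheral test reduces to a single correlation; a minimal sketch, with `weights` and `eccentricities` standing in for the per-vertex weight map and V1 eccentricity values (names are ours):

```python
import numpy as np

def eccentricity_bias(weights, eccentricities):
    """Pearson correlation between connectivity weights and eccentricity:
    positive values indicate a peripheral bias, negative values a foveal bias."""
    return np.corrcoef(weights, eccentricities)[0, 1]

ecc = np.array([1.0, 2.0, 4.0, 8.0, 15.0])
eccentricity_bias(np.array([0.1, 0.2, 0.4, 0.6, 0.9]), ecc)  # > 0: peripheral bias
eccentricity_bias(np.array([0.9, 0.6, 0.4, 0.2, 0.1]), ecc)  # < 0: foveal bias
```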

*p* value as the fraction of null consistencies that were at least as large. This *p* value therefore represents the probability that a consistency value could have been generated by a random draw of 15 eccentricity profiles. For detecting differences between profiles, we also constructed a null distribution by computing all differences between consistencies in the null map, and defined a *p* value as the fraction of null differences whose absolute consistency difference was at least as large as the true profile difference. Linear correlations of weight versus eccentricity were Fisher transformed and then subjected to a *t* test; one-sided *t* tests were used in the ROIs (based on previous work identifying LOC and FFA as foveal and OPA, RSC, and PPA as peripheral), whereas two-sided *t* tests were used in the searchlight. For the binning analysis, the correlation in each bin was compared to the mean correlation in the other two bins with a one-sided *t* test. Searchlight *p* values were corrected for multiple comparisons using the false discovery rate (*q*), calculated in the same way as AFNI's 3dFDR (Cox, 1996).
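The permutation *p* values described above are simply tail fractions of a null distribution; a minimal sketch (`null_consistencies` below is an assumed name for the null statistics, not from the paper's code):

```python
import numpy as np

def permutation_p_value(observed, null_values):
    """One-sided permutation p value: the fraction of null statistics
    at least as large as the observed statistic."""
    return float(np.mean(np.asarray(null_values) >= observed))

# e.g. p = permutation_p_value(true_consistency, null_consistencies)
```

For the difference tests, the same function would be applied to absolute consistency differences, as described in the text.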

*p*s < 0.001 by permutation test) and show a marked connectivity preference for portions of V1 beyond 5° of eccentricity. This peripheral bias is revealed by a highly significant and positive linear trend of weight versus eccentricity in each ROI—OPA: *t*(7) = 3.59, *p* = 0.004; PPA: *t*(13) = 6.49, *p* < 0.001; RSC: *t*(7) = 6.69, *p* < 0.001 (one-tailed *t* test). The results also reveal some differences among the regions. OPA shows peak connectivity around 10° of visual angle, while PPA and RSC maintain high weights out to the maximum eccentricity measured (15°), and hence OPA shows a weaker overall linear trend between weight and eccentricity—PPA > OPA: *t*(8) = 5.22, *p* = 0.001; RSC > OPA: *t*(8) = 4.14, *p* = 0.004 (two-tailed *t* test). In addition, RSC shows a more pronounced weight difference between foveal and peripheral eccentricities compared to PPA, and has a stronger weight–eccentricity correlation—RSC > PPA: *t*(8) = 2.74, *p* = 0.029 (two-tailed *t* test). We note that many of these subtler effects, and particularly the detailed information about peak connectivity and eccentricity, would be lost in methods that use fixed foveal versus peripheral stimulation (Hasson et al., 2002) or fixed bins across V1 (as in our comparison bin analysis later).

*t*(8) = 2.80, *p* = 0.027; PPA > RSC: *t*(8) = 7.97, *p* < 0.001 (two-tailed *t* test)—indicating that a smaller portion of RSC's activity can be predicted purely by V1 activity.

*p*< 0.001 by permutation test) and more strongly connected to peripheral than foveal eccentricities. The connectivity profile for aPPA, however, is not very correlated across subjects (

*p*> 0.1 by permutation test), indicating that this region has a much less well-defined preference for specific eccentricities; aPPA's connectivity profile is significantly less consistent than that of PHC1 (

*p*< 0.001, permutation test) and PHC2 (

*p*< 0.02, permutation test). When testing simply for a linear correlation between weights and eccentricity, it is possible to detect a peripheral bias in all three subregions—PHC1:

*t*(14) = 6.96,

*p*< 0.001; PHC2:

*t*(14) = 3.83,

*p*< 0.001; aPPA:

*t*(12) = 2.08,

*p*= 0.030 (one-tailed

*t*test)—including a weak peripheral preference in anterior PPA. There is an even larger difference between the subregions in V1 connectivity strength, with PPA subregions becoming less and less predictable from V1 activity as we move posterior to anterior—PHC1 > PHC2:

*t*(13) = 4.42,

*p*< 0.001; PHC1 > aPPA:

*t*(13) = 4.53,

*p*< 0.001; PHC2 > aPPA:

*t*(13) = 2.85,

*p*= 0.014 (two-tailed

*t*test). It is unlikely that this gradient is driven by local noise correlations, since V1 and PHC1 are separated by more than 25 mm.

*p* < 0.001 by permutation test) but had weights that were highly concentrated within 5° of eccentricity. A linear correlation of weights versus eccentricity was significantly negative in both regions—FFA: *t*(13) = 7.78, *p* < 0.001; LOC: *t*(13) = 2.81, *p* = 0.007 (one-tailed *t* test)—indicating that weights decreased at higher eccentricities. Both regions were strongly coupled to V1 activity, with connectivity strengths similar to those of OPA and PPA. Together, our ROI results indicate that examining fine-grained connectivity patterns over V1 reveals new insights compared to simply measuring overall correlations between V1 and regions of interest.

*Journal of Cognitive Neuroscience*, 20(12), 2226–2237, doi:10.1162/jocn.2008.20156.
*PLoS ONE*, 10(6), e0128840, doi:10.1371/journal.pone.0128840.
*eLife*, 4, 1–28, doi:10.7554/eLife.03952.
*The Journal of Neuroscience*, 29(34), 10638–10652, doi:10.1523/JNEUROSCI.2807-09.2009.
*NeuroImage*, 75, 228–237, doi:10.1016/j.neuroimage.2013.02.073.
*PeerJ*, 3, e784, doi:10.7717/peerj.784.
Human-object interactions are more than the sum of their parts. *Cerebral Cortex*, in press, doi:10.1093/cercor/bhw077.
*NeuroImage*, 63(3), 1099–1106, doi:10.1016/j.neuroimage.2012.07.046.
*Journal of Cognitive Neuroscience*, 25, 1711–1722, doi:10.1162/jocn_a_00422.
*The Journal of Neuroscience*, 35(36), 12366–12382, doi:10.1523/JNEUROSCI.4715-14.2015.
*Vision Research*, 86, 35–42, doi:10.1016/j.visres.2013.04.006.
*The Journal of Neuroscience*, 33(41), 16209–16219, doi:10.1523/JNEUROSCI.0363-13.2013.
*Computers and Biomedical Research*, 29(3), 162–173. http://www.ncbi.nlm.nih.gov/pubmed/8812068
*Cerebral Cortex*, in press, doi:10.1093/cercor/bhv155.
*bioRxiv*, preprint, doi:10.1101/035931.
*NeuroImage*, 9(2), 179–194, doi:10.1006/nimg.1998.0395.
*Cerebral Cortex*, 21(7), 1498–1506, doi:10.1093/cercor/bhq186.
*The Journal of Neuroscience*, 33(4), 1331–1336, doi:10.1523/JNEUROSCI.4081-12.2013.
In *Scene vision* (pp. 105–134). Cambridge, MA: MIT Press.
*NeuroImage*, 9(2), 195–207, doi:10.1006/nimg.1998.0396.
*Journal of Neurophysiology*, 101(6), 3270–3283, doi:10.1152/jn.90777.2008.
*NeuroImage*, 49(4), 3248–3256, doi:10.1016/j.neuroimage.2009.11.036.
*Cerebral Cortex*, 19(1), 72–78, doi:10.1093/cercor/bhn059.
*NeuroImage*, 66, 376–384, doi:10.1016/j.neuroimage.2012.10.037.
*Neuron*, 34(3), 479–490. http://www.ncbi.nlm.nih.gov/pubmed/11988177
*Trends in Cognitive Sciences*, 4(6), 223–233, doi:10.1016/S1364-6613(00)01482-0.
*NeuroImage*, 56(3), 1426–1436, doi:10.1016/j.neuroimage.2011.02.077.
*The Open Neuroimaging Journal*, 7(858), 58–67, doi:10.2174/1874440001307010058.
*The Journal of Neuroscience*, 36(5), 1490–1501, doi:10.1523/JNEUROSCI.2999-15.2016.
*Current Opinion in Neurobiology*, 23(2), 207–215, doi:10.1016/j.conb.2012.12.004.
*Nature Neuroscience*, 4(5), 455–456, doi:10.1038/87399.
*Journal of Neurophysiology*, 97(5), 3494–3507, doi:10.1152/jn.00010.2007.
*Proceedings of the National Academy of Sciences, USA*, 108, 3389–3394, doi:10.1073/pnas.1013760108.
*Journal of Experimental Psychology: Human Perception and Performance*, 40(2), 471–487, doi:10.1037/a0034986.
*The Journal of Neuroscience*, 26(51), 13128–13142, doi:10.1523/JNEUROSCI.1657-06.2006.
*Nature Neuroscience*, 4(5), 533–539, doi:10.1038/87490.
*Cerebral Cortex*, 25(8), 2267–2281, doi:10.1093/cercor/bhu034.
*Trends in Cognitive Sciences*, 6(4), 176–184, doi:10.1016/S1364-6613(02)01870-3.
*The Journal of Neuroscience*, 35(44), 14896–14908, doi:10.1523/JNEUROSCI.2270-15.2015.
*Journal of Neurophysiology*, 109(12), 2883–2896, doi:10.1152/jn.00658.2012.
*NeuroImage*, 44(3), 893–905, doi:10.1016/j.neuroimage.2008.09.036.
*NeuroImage*, 83, 892–900, doi:10.1016/j.neuroimage.2013.07.030.
*The Journal of Neuroscience*, 34(20), 6721–6735, doi:10.1523/JNEUROSCI.4802-13.2014.
*Cerebral Cortex*, 25(7), 1792–1805, doi:10.1093/cercor/bht418.
*Hippocampus*, 19(2), 141–151, doi:10.1002/hipo.20490.
*The Journal of Neuroscience*, 33(40), 15978–15988, doi:10.1523/JNEUROSCI.1580-13.2013.
*Cerebral Cortex*, 10(3), 284–294, doi:10.1093/cercor/10.3.284.
*The Journal of Neuroscience*, 35(34), 11921–11935, doi:10.1523/JNEUROSCI.0137-15.2015.
*The Journal of Neuroscience*, 27(20), 5326–5337, doi:10.1523/JNEUROSCI.0991-07.2007.
*Nature Reviews Neuroscience*, 10(11), 792–802, doi:10.1038/nrn2733.
*Nature*, 447(7140), 83–86, doi:10.1038/nature05758.
*Cerebral Cortex*, 25(10), 3911–3931, doi:10.1093/cercor/bhu277.
*NeuroImage*, 124, 107–117, doi:10.1016/j.neuroimage.2015.08.058.