Article  |   March 2012
The effects of display time and eccentricity on the detection of amplitude and phase degradations in textured stimuli
Alasdair D. F. Clarke, Patrick R. Green, Mike J. Chantler
Journal of Vision March 2012, Vol.12, 7. doi:10.1167/12.3.7
Alasdair D. F. Clarke, Patrick R. Green, Mike J. Chantler; The effects of display time and eccentricity on the detection of amplitude and phase degradations in textured stimuli. Journal of Vision 2012;12(3):7. doi: 10.1167/12.3.7.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The amplitude and phase spectra of an image contain important information for perception, and a large body of work has investigated the effects of manipulating these spectra on the recognition or classification of image content. Here, we use a novel means of investigating sensitivity to amplitude and phase spectra properties, testing the ability of observers to detect degradations of the spectral content of synthetic images of textured surfaces that are broadband in the frequency domain. The effects of display time and retinal eccentricity on sensitivity to these two manipulations are compared using stimuli matched for difficulty of detection. We find no difference between the time courses for the detection of degradation in the two spectra; in both cases, accuracy rises above chance when display times are greater than 80 ms. Increasing retinal eccentricity to 8.7°, however, has a significantly stronger effect on the accuracy of detecting degradations of the amplitude spectrum than of the phase spectrum. Further, sensitivity to phase randomization that is restricted to low spatial frequencies is greater in the periphery (at 8.7° eccentricity) than in the fovea. These last two results imply that the fovea and periphery are specialized for the processing of phase spectrum information in distinct spatial frequency bands.

Introduction
One of the main aims of vision science is to understand how images are coded in the early visual system. Both the amplitude and phase spectra of an image have been shown to contain important information regarding the appearance of an image (Tadmor & Tolhurst, 1993) and there is a large body of work investigating the effect of globally modifying stimuli by manipulating these spectra. Using a novel procedure that allows a direct comparison of the effects of these manipulations, this study compares how display time and viewing eccentricity affect the ability of observers to detect small amounts of degradation in the amplitude or phase spectra of synthetic, homogeneously textured images. The results reveal an unexpected superiority of peripheral vision for detecting phase relationships between low spatial frequency components. 
Effects of manipulating amplitude spectra
While both spectra contain independent information about an image's appearance, the amplitude spectrum is often regarded as the less important of the two. For example, Oppenheim and Lim (1984) demonstrated that human observers can recognize images from their phase information alone. Similarly, it is well known that images of natural environments typically have very similar power spectra, following a 1/f^β rule with β ≈ 2 (Burton & Moorhead, 1987; Field, 1987). Even so, the amplitude spectrum has been shown to contain information that is perceptually useful in some tasks: information about the illumination conditions can be extracted from an image's amplitude spectrum (Chantler & Delguste, 1997), and it has been shown to play a role in the perception of scene gist (Joubert, Rousselet, Fabre-Thorpe, & Fize, 2009; Oliva & Torralba, 2006).
Several studies have investigated how varying the amplitude spectrum affects performance in detection threshold experiments. The power spectrum is usually modeled with a 1/f^β distribution, and the spectral falloff β has been shown to influence the thresholds for detecting small changes to an image. Knill, Field, and Kersten (1990) examined how different values of β affect discrimination thresholds for fractal images. They used a 2AFC task and asked observers to identify which of two images had the lower spectral falloff. Their results showed that detection thresholds were lowest for 2.8 < β < 3.6 and increased outside this range. Knill et al. suggested that this is evidence that the visual system is optimized to discriminate images with Markov statistics (β = 3).
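As a concrete illustration of the 1/f^β regularity, a random-phase image with the desired amplitude falloff can be synthesized directly in the Fourier domain. The sketch below is our own (function name, parameter conventions, and the uniform random phase are all illustrative assumptions, not the stimulus-generation code of the cited studies):

```python
import numpy as np

def fractal_image(n=128, beta=2.0, seed=0):
    """Synthesize an n x n random-phase image whose power spectrum
    approximately follows 1/f^beta (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)                 # radial spatial frequency
    f[0, 0] = 1.0                        # avoid division by zero at DC
    amp = f ** (-beta / 2.0)             # power ~ 1/f^beta => amplitude ~ f^(-beta/2)
    amp[0, 0] = 0.0                      # zero-mean image
    phase = rng.uniform(-np.pi, np.pi, (n, n))
    # taking the real part enforces the Hermitian symmetry of a real image
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
```

Larger β concentrates power at low frequencies and produces blurrier, cloud-like images; β ≈ 2 mimics the falloff typical of natural scenes.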
Recently, some studies have explored how changes in the amplitude spectrum influence our perception of properties of images of surface textures. Padilla, Drbohlav, Green, Spence, and Chantler (2008) investigated the relationships of β and RMS height with the perceived roughness of a textured surface and developed a model based on a Gaussian filter. The relationship between perceived directionality and the shape of the amplitude spectrum has also been investigated (Shah, Chantler, & Green, 2010). 
Effects of manipulating phase spectra
The human visual system has been shown to be tolerant to the addition of noise to an image's phase spectrum. For example, Wichmann, Braun, and Gegenfurtner (2006) used the animal detection paradigm and found that observers were able to maintain accuracies of 75% even when up to 120° of phase noise had been added to the images. Emrith, Chantler, Green, Maloney, and Clarke (2010) have investigated how sensitive observers are to changes in the amount of noise in the phase spectrum of textures. Using maximum likelihood difference scaling (MLDS), they recovered the psychometric function relating the degree of phase randomization to an observer's perception of how different a pair of textures appeared. They proposed a model of perceived structure based on phase congruency (Kovesi, 2000), which accounted for the psychophysical data. 
Combined effects of manipulating phase and amplitude spectra
Very few experiments have investigated the combined effects of variation in both the phase and amplitude spectra. Thomson and Foster (1997) investigated the role of phase in the ability of observers to discriminate between images differing only in their amplitude spectra. They compared the effect of β on the Δβ threshold for phase-rich and (amplitude-matched) random-phase stimuli and found that for phase-rich images the thresholds were higher when the test β was close to the original image's value, while for random-phase images the Δβ threshold increased monotonically with increasing β. However, Hansen and Hess (2006), using stimuli with a smaller spatial extent, found no effect of phase spectrum differences on discrimination of the slope of the amplitude spectrum.
The role of the amplitude and phase spectra in the animal detection paradigm has been explored by Gaspar and Rousselet (2009) using an amplitude spectra swapping paradigm. They concluded that classification may be based on an interaction between phase and amplitude spectra and hypothesize that local phase congruency (Kovesi, 2000) might play an important role. Joubert et al. (2009) carried out a similar study, investigating the effects of equalizing amplitude spectra and increasing phase noise on “rapid visual context categorization.” They found that equalizing the amplitude spectrum of images significantly reduces observers' ability to discriminate between natural and man-made scenes. Adding up to 50% phase noise had no effect on discrimination accuracy for amplitude spectrum-equalized images, although performance dropped suddenly to chance level when more than 50% phase noise was added. 
The time courses of amplitude and phase processing
Detection of properties of an image's amplitude spectrum requires integration of the outputs of multiple spatial frequency-tuned channels. Kihara and Takeda (2010) investigated the time course of this process in an experiment that used a natural image categorization task. They found an advantage for composite low- and high-pass filtered stimuli over probability summation of separate filtered images between 83 and 100 ms after stimulus onset, implying that integration of channel outputs has begun by this time. Is the integration of phase information from separate spatial channels achieved at the same time, or does it depend on a subsequent process? Rousselet, Pernet, Bennett, and Sekuler (2008) investigated how early event-related potentials (ERPs) to images of faces are affected by the addition of phase noise and found that sensitivity to phase noise starts at around 120–130 ms. This is slightly longer than the time needed to integrate information from multiple frequencies (Kihara & Takeda, 2010). However, very different stimuli and experimental procedures were used in the two experiments. Without a more direct comparison using the same original images and the same task, a conclusion about the relative time scales of these two processes cannot be drawn. 
Effects of eccentricity on amplitude and phase processing
The dependence of contrast sensitivity on retinal eccentricity has been extensively documented. For example, Pointer and Hess (1989) measured contrast sensitivity from the fovea out to 60° eccentricity and found maximum sensitivity at the fovea for all frequencies tested (from 0.05 to 12.8 c/deg) and a linear gradient of sensitivity in the periphery when eccentricity was expressed in periods of the sinusoidal stimulus used. This conclusion holds across all studies that have made the same measurements (Garcia-Perez & Sierra-Vazquez, 1996). These studies have also defined how the contrast sensitivity gradient varies with spatial frequency and with the retinal meridian tested. Models of peripheral spatial vision based on these data have been shown to account for effects of eccentricity on the ability to discriminate degrees of image blur (Peli & Geri, 2001) and to detect objects (Kwon & Legge, 2011). The perception of image differences in the periphery cannot be accounted for entirely in terms of gradients in spatial filtering, however; changes in the orientation and color of image patches are two examples of differences that are detected less well in the periphery than filtering alone would predict (To, Gilchrist, Troscianko, & Tolhurst, 2011). These authors argue that such decrements arise from an additional effect of self-crowding between image elements, as crowding effects on pattern recognition are known to operate over greater distances with increasing eccentricity (Levi, Hariharan, & Klein, 2002).
Parraga, Troscianko, and Tolhurst (2000) investigated how well observers could discriminate between two images from a morph sequence (e.g., a photograph of a man's face morphed into a photograph of a woman's face). They found that discrimination thresholds were lower for natural amplitude spectra, and that decreasing or increasing the spectral falloff increased the discrimination thresholds. When the stimuli were presented in the periphery, the thresholds tended to be larger (though not always), and the smallest threshold was no longer found at the original value of the falloff. Instead, observers were better at discriminating images that had been blurred somewhat, implying that peripheral processing is optimized for its characteristic low-pass filtering properties.
Other known differences between foveal and peripheral vision are potentially relevant to the ability to detect properties of an image's phase spectrum. Westheimer (1982) demonstrated greater positional uncertainty in the periphery, over and above that which would be accounted for by decreased acuity, and recent evidence identifies the source of this effect in an increase in spatial disarray in post-receptoral neural processing networks (Hess & Hayes, 1994). Independently of the increase in positional uncertainty with eccentricity, there is also a step decrement in contour integration ability beyond 10° (Hess & Dakin, 1999), suggesting that a specific contour linking mechanism does not operate beyond this retinal radius. 
More directly relevant to the present study, the ability to detect phase relationships between sinusoidal components has been investigated using narrowband stimuli, such as Gabor patches and gratings, and some of these studies have explored the effect of viewing eccentricity. Rentschler and Treutwein (1985) explored the effect of relative phase (symmetric or asymmetric) on the discrimination of an f + 3f compound grating in foveal and peripheral vision and found that eccentricity had a large effect: when the stimuli were presented 2° away from fixation, discrimination approached chance level for symmetric stimuli, while eccentricity had little effect on asymmetric stimuli (as long as the stimuli were rescaled to take cortical magnification into account). Several studies from around the same time found similar results (Bennett & Banks, 1987, 1991; Chen & Tyler, 1999; Hess & Pointer, 1987).
However, Morrone, Burr, and Spinelli (1989) found that relative phase sensitivity was as good in the periphery as in central vision for broadband edges (comprising 256 cosine components) if the stimuli were scaled to take contrast sensitivity into account. They suggested that the discrepancy with earlier results was related to the narrowband, two-component stimuli used and concluded that in broadband images relative phase sensitivity decreases with eccentricity at a similar rate to grating acuity. These results imply that in a natural or synthetic broadband image, the decline with eccentricity in the ability to detect changes in the phase spectrum should match or exceed the decline in detection of changes in the amplitude spectrum, but this prediction has not so far been tested.
The present study
The aim of the experiments presented below is to investigate whether there are any differences in our ability to detect degradations in the phase and amplitude spectra with respect to display time and viewing eccentricity. In common with earlier work on the perception of properties of image spectra, we measure observers' ability to detect degradations in the amplitude or the phase spectrum of an image. In the first case, the degradation involves applying a Gaussian low-pass filter to the image in order to remove the high-frequency information. In the second case, random noise is added to the phase spectrum. However, we use a novel method to make a direct comparison between the effects of these two manipulations. We first obtain for each observer individually the parameters of the Gaussian filter and of the added random noise that result in the same threshold level of performance, in foveal vision and with a long display time. We then measure the accuracy of detection for these individually identified levels of degradation while varying display time and eccentricity. 
A second novel feature of the experiments is the use of synthetic textured images (see Figure 1a). These are periodic, so as to avoid edge effects when modifying images in the Fourier domain, and have strongly non-random phase spectra while being spatially homogeneous in appearance. They therefore have the advantage over photographic images of natural scenes that they contain no images of identifiable objects, and so effects of image content on the detectability of changes in spectra, such as those identified by Peli and Geri (2001), can be eliminated. While the identification of objects in natural scenes is one important function of vision, perception of properties of textures such as gradients, discontinuities, and local anomalies is also important for natural tasks such as object (Christensen & Todd, 2004) and scene (Renninger & Malik, 2004) recognition, depth and slant perception, camouflage (Billock, Cunningham, & Tsou, 2008), and illumination estimation (Chantler & Delguste, 1997). Moreover, the perception of texture often appears to be an automatic, effortless task (Landy & Graham, 2004), and except perhaps for the case of examining expensive consumer goods, we rarely examine textures actively using foveal vision. Therefore, it is of interest to study how well our visual system can recover information about texture with short display times and away from the fovea. 
Figure 1
 
Examples of (a) a reference texture, (b) phase randomization, and (c) amplitude spectrum smoothing. Note: These are 256 × 256 pixel crops of the 1024 × 1024 pixel images that were used as stimuli.
Experiment 1 compares the effects of display time on detection in foveal vision of amplitude and phase spectrum degradation, to test the prediction from earlier literature that phase spectrum processing is slower. Experiments 2 and 3 address the effects of eccentricity. Here, previous studies indicate clearly that detection of degradation of the amplitude spectrum will decline in the periphery. However, it is not clear how the ability to detect phase spectrum degradations will change. 
General methods
Stimuli
Synthetic surface textures were created and then rendered to yield naturalistic textures (Clarke, Chantler, & Green, 2009; Emrith et al., 2010). Height maps were generated by randomly placing 15,000 ellipsoid textons. The textons were between 10 × 10 and 20 × 20 pixels in size, with random orientation. Where the ellipsoids overlapped, they were combined using a max function. The height maps were generated to be periodic (i.e., they can be tiled seamlessly in both the horizontal and vertical directions) in order to prevent edge artifacts from appearing during phase randomization using the discrete Fourier transform. To generate an image, the height maps were rendered using the Lambertian reflectance model to simulate illumination by a directional light source with azimuth = 90° and elevation = 60°.
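A scaled-down sketch of this pipeline might look as follows, assuming hemi-ellipsoid textons placed with periodic wrap-around and simple gradient-based Lambertian shading; the texton count and sizes are reduced here, and all implementation details are our illustrative choices rather than the authors' code:

```python
import numpy as np

def texture(n=256, n_textons=500, seed=0):
    """Sketch of the height-map + Lambertian rendering pipeline
    (scaled down; details are our assumptions)."""
    rng = np.random.default_rng(seed)
    height = np.zeros((n, n))
    yy, xx = np.mgrid[0:n, 0:n]
    for _ in range(n_textons):
        cx, cy = rng.uniform(0, n, 2)
        ax, ay = rng.uniform(5, 10, 2)        # semi-axes in pixels
        th = rng.uniform(0, np.pi)            # random orientation
        # rotated, periodic coordinates (wrap-around keeps the map tileable)
        dx = (xx - cx + n / 2) % n - n / 2
        dy = (yy - cy + n / 2) % n - n / 2
        u = dx * np.cos(th) + dy * np.sin(th)
        v = -dx * np.sin(th) + dy * np.cos(th)
        r2 = (u / ax) ** 2 + (v / ay) ** 2
        blob = np.sqrt(np.clip(1 - r2, 0, None))  # upper half of an ellipsoid
        height = np.maximum(height, blob)          # combine overlaps with max
    # Lambertian shading: directional light, azimuth 90 deg, elevation 60 deg
    az, el = np.deg2rad(90), np.deg2rad(60)
    light = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    gy, gx = np.gradient(height)
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0, None)       # intensity = max(0, n . l)
```

Because texton placement wraps around the image borders, the resulting height map tiles seamlessly, which is what makes the later Fourier-domain manipulations free of edge artifacts.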
In order to measure observers' ability to detect changes in either the global phase or amplitude spectrum of textured images, the phase spectrum was modified by adding uniformly distributed noise with interval [0, x_p], while the amplitude spectrum was modified by multiplication with a low-pass Gaussian filter with σ = x_a pixels. Examples are shown in Figures 1b and 1c.
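The two manipulations can be sketched with a discrete Fourier transform as follows; the parameter conventions (noise bound in degrees, σ in cycles per image) and the use of the real part to restore a real-valued image are our assumptions, not the paper's exact implementation:

```python
import numpy as np

def degrade(img, phase_noise_deg=0.0, sigma=None, seed=0):
    """Degrade a square image's phase spectrum (uniform noise on [0, x_p])
    or amplitude spectrum (low-pass Gaussian). Sketch with assumed units."""
    rng = np.random.default_rng(seed)
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    if phase_noise_deg:
        # uniform phase noise; taking the real part below keeps the image real
        phase = phase + rng.uniform(0, np.deg2rad(phase_noise_deg), img.shape)
    if sigma is not None:
        n = img.shape[0]
        # radial frequency in cycles/image
        f = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :]) * n
        amp = amp * np.exp(-f ** 2 / (2 * sigma ** 2))  # low-pass Gaussian
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
```

With both arguments left at their defaults the function is an identity, which makes it easy to verify that any visible change comes from the intended manipulation alone.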
Stimulus presentation was controlled by MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). All stimuli were 1024 × 1024 pixels in size and displayed on a calibrated NEC LCD2090UXi monitor. The pixel dimensions were 0.255 mm × 0.255 mm, resulting in images with physical dimensions of 261 mm × 261 mm (20.5° × 20.5° at the viewing distance of 0.72 m). The monitor was linearly calibrated with a GretagMacbeth Eye-One; maximum luminance was 120 cd/m2. This made the rendered images appear as if lit under bright room lighting conditions.
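The stated angular size follows directly from the pixel pitch and viewing distance, and can be checked in a couple of lines:

```python
import math

# 1024 px x 0.255 mm/px = 261.12 mm; at 0.72 m this subtends
# 2 * atan((size / 2) / distance) of visual angle.
size_m, dist_m = 1024 * 0.255 / 1000, 0.72
deg = 2 * math.degrees(math.atan(size_m / 2 / dist_m))
assert abs(deg - 20.5) < 0.1   # matches the stated 20.5 degrees
```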
Procedure
In all three experiments, thresholds for detecting the presence of phase noise or Gaussian smoothing with 82% accuracy were found using the QUEST algorithm (Watson & Pelli, 1983) and a 2IFC task. Participants were shown two images, one after the other, and asked to indicate with a key press whether the first or second image had been degraded. They were not asked to identify which of the two modifications had been applied. Examples of the unmodified texture, the smoothed image, and the image with phase noise were shown during a short training phase before the start of the experiment, with feedback provided.
A fixation cross was shown before each trial. After each stimulus had been shown, a random noise mask was shown for 200 ms, and the interstimulus time was 750 ms, during which time the fixation cross was again displayed. There was no constraint on response time. In Experiments 1 and 2, six interleaved staircases were used (three for phase noise and three for Gaussian smoothing), and each staircase was run for 50 trials. In Experiment 3, two independent staircases were run for each of the six conditions (foveal/peripheral display and three frequencies), giving a total of twelve staircases. 
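For readers unfamiliar with adaptive procedures, the sketch below simulates a simple 3-down-1-up staircase, which converges near 79% correct. It is an illustrative stand-in only: the experiments used QUEST, which instead maintains a posterior over the threshold and places each trial at its current best estimate.

```python
import numpy as np

def run_staircase(p_detect, start=60.0, step=5.0, n_trials=50, seed=0):
    """Toy 3-down-1-up staircase driven by a simulated observer.

    p_detect(level) -> probability of a correct response at that
    degradation level (a hypothetical psychometric function).
    """
    rng = np.random.default_rng(seed)
    level, run, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < p_detect(level):        # simulated correct response
            run += 1
            if run == 3:                          # 3 correct in a row:
                level, run = max(level - step, 0.0), 0   # reduce degradation
        else:                                     # error: increase degradation
            level, run = level + step, 0
    return levels
```

The sequence of levels visited oscillates around the degradation value detected on roughly 79% of trials, analogous to the 82% target tracked by QUEST here.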
Observers
Observers were between the ages of 22 and 30 and had normal or corrected-to-normal vision. 
Experiment 1
Methods
Six observers took part, two members of the same laboratory who were not directly involved in this study and four naive observers who each received a £10 voucher for their participation. In the first stage of the experiment, each observer's threshold for detecting added phase noise and Gaussian smoothing in textured images was measured as described in the General methods section. Stimulus presentation time was 300 ms. In the second stage, the values of phase noise and smoothing were kept constant at each observer's threshold, while stimulus time was varied (t = 40, 80, 120, 160, 200, 250, 300, or 400 ms). 
Results
In the first stage, individual mean detection thresholds fell within a range of [36.2°, 55.3°] for phase randomization and [0.12, 0.18] for Gaussian smoothing. In the second stage, when stimulus presentation time was varied, detection accuracies for both types of degradation were at chance level up to 80-ms presentation time and increased from 120 ms onward (Figure 2). With a presentation time of 300 ms (the same as in the first stage, when thresholds were determined), accuracies were close to the expected 82% level, with means (std. dev.) of 83.9 (6.5)% and 77.8 (8.6)% for the phase and amplitude degradations, respectively. A two-way ANOVA on both independent variables showed that while time had a significant effect (F(7, 35) = 37.998, p < 0.001), there was no evidence of a difference between the two types of degradation (F(1, 5) = 0.034, p = 0.861). The interaction between the two variables was also non-significant (F(7, 35) = 0.796, p = 0.596).
Figure 2
 
The effect of display time on accuracy. Interpersonal means and standard errors are shown. One observer, who performed at chance level for display times greater than 200 ms, was removed.
Experiment 2
Methods
The same six observers took part as in Experiment 1. Their individual thresholds for detecting phase noise and Gaussian smoothing were measured again, and in the second stage of the experiment, their detection accuracy at these threshold values was measured while the viewing eccentricity of the stimuli was varied. A ring mask was applied to the textured stimuli so that only a limited range of eccentricities was displayed. The following mean eccentricities were used: 1.25°, 2.50°, 3.75°, 5.00°, 6.24°, 7.47°, and 8.70°. The width of the ring was 2.5°, which meant that r = 1.25° resulted in a circular patch of texture, allowing for foveal vision (examples are shown in Figure 3). A Tobii eye tracker was used to ensure that the observers were fixating within 1° of the center of the screen before stimulus onset. 
Figure 3
 
Examples of the stimuli used in Experiment 2.
Results
The range of thresholds was very similar to that reported in the previous experiment, [41.9°, 56.6°] for phase noise and [0.11, 0.12] for smoothing (note that the stimulus display time differed between the two experiments). The effect of different eccentricities on the observers' ability to detect the threshold-level degradation is shown in Figure 4. For foveal vision (r = 1.25°), accuracies are close to the expected 82% level: 83.3 (10.7)% and 86.7 (8.4)% for the phase and amplitude spectra degradations, respectively. With increasing eccentricity, accuracy declines in both tasks but more steeply for the detection of smoothing than of phase noise. A two-way repeated measures ANOVA on both independent variables shows that the effect of eccentricity is statistically significant (F(6, 30) = 9.519, p < 0.001), while the effect of degradation type approaches significance (F(1, 5) = 5.332, p = 0.069). Crucially, the interaction between the two is significant (F(6, 30) = 5.274, p = 0.001).
Figure 4
 
The effect of eccentricity on accuracy. Interpersonal means and standard errors are shown.
The effect of eccentricity on the detection of Gaussian smoothing is unsurprising, as visual acuity is well known to drop off outside the fovea and so we would expect our ability to detect the removal of an image's high-frequency information to decrease in the periphery. The weak effect of eccentricity on the detection of phase spectrum degradation is of more interest however. In the periphery, information about phase relationships of high frequencies is lost, and the results imply that this has little or no effect on our ability to detect phase randomization. The processes that support detection of the phase spectrum's properties appear to be tolerant of a marked loss of information from high frequencies, at least in the periphery. 
Experiment 3
To investigate further the effect of eccentricity on the detection of phase randomization, Experiment 3 compared detection in the fovea and at 8.7° eccentricity, using textured images in which randomization was restricted to low spatial frequencies. The experiment consisted of only one stage, in which thresholds for detecting phase randomization were measured in the same way as in the earlier experiments, while varying the cutoff frequency below which the phase spectrum was randomized. If sensitivity to phase relationships between low spatial frequency components of an image is greater in the periphery than in the fovea, then thresholds for detecting phase randomization in the fovea should increase as randomization is restricted to successively lower frequency ranges but should remain constant, or increase more slowly, in the periphery. With low cutoff frequencies, we therefore predicted better detection performance at threshold in the periphery than in the fovea.
Methods
Stimuli
The same baseline stimuli were used as in the earlier experiments. The images were degraded as follows. First, the image was low-pass filtered in the frequency domain with a Gaussian (σ = 12.2, 4.9, or 3.0 cycles/degree), and the result was subtracted from the original image to separate the low- and high-frequency information. (For comparison, the highest frequency in the original image is 25 cycles/degree.) The low-frequency component was then phase randomized and added back to the high-frequency component. Examples can be seen in Figure 5.
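This split-randomize-recombine procedure can be sketched as follows; taking σ in cycles per image and adding uniform noise only to the phase of the low-pass component are our assumptions about the implementation details:

```python
import numpy as np

def randomize_low_band(img, cutoff_sigma, noise_deg, seed=0):
    """Phase-randomize only the low-frequency band of a square image
    (sketch of the procedure described above; units are assumptions)."""
    rng = np.random.default_rng(seed)
    n = img.shape[0]
    # radial frequency in cycles/image
    f = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :]) * n
    lowpass = np.exp(-f ** 2 / (2 * cutoff_sigma ** 2))
    spec = np.fft.fft2(img)
    low = np.real(np.fft.ifft2(spec * lowpass))    # low-frequency component
    high = img - low                               # remaining high frequencies
    # add uniform noise to the phase of the low-frequency component only
    lspec = np.fft.fft2(low)
    noisy = np.angle(lspec) + rng.uniform(0, np.deg2rad(noise_deg), lspec.shape)
    low_rand = np.real(np.fft.ifft2(np.abs(lspec) * np.exp(1j * noisy)))
    return low_rand + high                         # recombine the two bands
```

Because the high-frequency component is added back untouched, fine detail and sharp edges survive while the coarse spatial layout is scrambled.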
Figure 5
 
Examples of the stimuli used in Experiment 3. Note that the same amount of phase noise has been added in all three cases. In the experiment, the aim was to find the amount of phase noise needed in each case to make image degradation equally detectable.
Procedure
A 3 × 2 interleaved staircase design was used to compare detection thresholds with 82% accuracy in foveal and peripheral vision, using the QUEST algorithm to find the amount of phase randomization needed for threshold level detection for a given value of σ (i.e., cutoff frequency for phase randomization). As in Experiment 2, a Tobii eye tracker was used to ensure that observers were fixating the center of the screen before a trial started. Stimulus presentation time was 150 ms. 
Observers
Eight observers were used, three of whom had taken part in the previous two experiments. 
Results
Detection thresholds are shown in Figure 6 and follow the pattern predicted from the results of Experiment 2. Detection thresholds for low-frequency phase noise are lower in peripheral vision than in the fovea. The effects of both eccentricity and frequency cutoff are significant (F(1) = 6.86, p = 0.03 for r; F(2) = 24.83, p < 0.01 for σ) as is the interaction r × σ (F(2,1) = 7.28, p = 0.01). 
Figure 6
 
The effects of eccentricity and cutoff frequency for phase randomization on accuracy of detecting randomization. The dotted line shows the data for full-band phase randomization and foveal viewing, obtained in Experiment 2.
Discussion
The time courses of frequency and phase processing have both previously been studied independently, using different stimuli and tasks (Kihara & Takeda, 2010; Rousselet et al., 2008). Evidence from those studies suggested that phase spectrum information becomes available for visual tasks approximately 40 ms after amplitude spectrum information. In Experiment 1, by first matching stimuli for levels of amplitude and phase degradation that were equally detectable to observers when presented for a relatively long period, we have been able to compare performance on the two tasks directly at shorter presentation times. The results do not indicate any delay in the processing of phase spectrum relative to amplitude spectrum information, within the limits of resolution imposed by the interval between successive display times. This interval was, however, sufficiently short to have detected a difference in time course in the region of 40 ms, and there is no evidence of one in the results.
The results from Experiments 2 and 3 disagree with a number of previous studies concerning the effect of viewing eccentricity on the perception of phase information. Several studies, using narrowband stimuli consisting of gratings and Gabor patches, have found large differences between foveal and peripheral processing of relative phase, in some cases amounting to insensitivity to phase relations in the periphery (Bennett & Banks, 1987, 1991; Hess & Pointer, 1987; Rentschler & Treutwein, 1985). In some respects, our results are more comparable with Morrone et al.'s (1989) conclusion, based on experiments using broadband stimuli, that the ability to detect relative phase decreases slowly with eccentricity, in line with contrast sensitivity and grating acuity. Morrone et al. suggested that the discrepancy with earlier results was due to the use of broadband stimuli, and the results of our Experiment 2 support this conclusion: the ability to detect a degradation in the amplitude spectrum of a broadband image decreased to chance performance 9° from the center of the fovea, whereas the ability to detect a degradation in the phase spectrum (matched to be equally detectable in the fovea) only fell to 70% at this eccentricity.
However, Experiments 2 and 3 also demonstrate a novel result. Not only does sensitivity to relative phase decline only slowly with increasing eccentricity in our task, but Experiment 3 also demonstrates an advantage for peripheral over foveal vision in detecting phase randomization when it is restricted to the low spatial frequency components of an image. If the processing that underlies the ability to detect phase randomization were sensitive to phase relationships across the whole spectrum available at any particular retinal eccentricity, then this result would not be obtained. There would be some decline in accuracy in the periphery, as the narrower range of frequencies available as a result of low-pass filtering would add noise to the detection of phase alignment. However, the peripheral superiority for detecting phase randomization in low spatial frequencies, shown in Experiment 3, would not be observed. Such frequencies are also visible in foveal vision, and their phase relationships should be detected with accuracy at least equal to that in the periphery. 
A possible explanation for the results of Experiment 3 is that the processing of phase that underlies performance on our task operates on distinct bands of the spectrum in the fovea and in the periphery. In the periphery, phase relations are processed within the low-frequency band imposed by spatial filtering (Garcia-Perez & Sierra-Vazquez, 1996). Foveal vision is sensitive only to phase relations within a band of high frequencies, despite having superior contrast sensitivity over vision at 9° eccentricity at frequencies down to 0.2 cycle/degree (Pointer & Hess, 1989). The results of Experiment 3 (see Figure 6) imply that the frequency bands concerned must overlap to an extent. At 9° eccentricity, the upper limit of the band is in the region of 5 cycles/degree, after which adding phase noise to higher frequencies does not affect the detection threshold. In the fovea, the threshold continues to rise as the cutoff frequency falls below 5 cycles/degree, implying that the lower limit of the band is below this value. Further evidence will be needed to define these limits more closely and to determine whether they change in a graded way with eccentricity or show a step change in the same way as contour integration (Hess & Dakin, 1999). It will also be useful to establish whether the superior detection of phase randomization in the periphery demonstrated here extends to other examples of broadband visual textures. 
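The band-limited phase randomization discussed above can be sketched with a simple FFT-based manipulation. This is an illustrative reconstruction, not the authors' stimulus-generation code; the function name, the cutoff parameterization in cycles/degree, and the handling of the DC term are all assumptions for the sketch.

```python
import numpy as np

def randomize_phase_below_cutoff(image, cutoff_cpd, pixels_per_degree, rng=None):
    """Randomize the phase of spatial frequencies below `cutoff_cpd`
    (cycles/degree), leaving the amplitude spectrum untouched.

    A hedged sketch of band-limited phase randomization; the authors'
    actual stimulus-generation procedure may differ in detail.
    """
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fft2(image)
    amplitude = np.abs(f)
    phase = np.angle(f)

    # Radial frequency of each FFT bin, in cycles/degree:
    # cycles/pixel (from fftfreq) times pixels/degree.
    n_rows, n_cols = image.shape
    fy = np.fft.fftfreq(n_rows) * pixels_per_degree
    fx = np.fft.fftfreq(n_cols) * pixels_per_degree
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)

    # Replace phases below the cutoff with uniform random values.
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    new_phase = np.where(radius < cutoff_cpd, random_phase, phase)
    new_phase[0, 0] = phase[0, 0]  # keep the DC term (mean luminance) intact

    # Rebuild the image. The random phases break Hermitian symmetry,
    # so we take the real part and discard the small imaginary residue;
    # a production implementation would randomize phases symmetrically.
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * new_phase)))
```

Raising `cutoff_cpd` degrades progressively more of the spectrum, which is how equally detectable levels of degradation could be titrated against the cutoff frequency.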
If this explanation is correct, it implies a novel functional difference between foveal and peripheral vision, which goes beyond the well-characterized variation in spatial filtering with eccentricity. Baddeley and Tatler (2006) have shown that, when correlations between different image statistics are taken into account, the feature that best predicts the location of fixations within an image during a memory task is the presence of a high spatial frequency edge. In contrast, low-frequency edges have a small inhibitory effect on fixations. As a result, they argue, luminance edges that arise from surface characteristics are more likely to be foveated than those marking effects of illumination such as shadows. The results described here suggest that the preference for fixating high-frequency edges over low-frequency ones is accompanied by a greater sensitivity of foveal processing to phase alignments marking high-frequency edges than to those marking low-frequency ones. 
Acknowledgments
Work funded by EPSRC Innovative Manufacturing Research Centre grant EP/F02553X/1. 
Commercial relationships: none. 
Corresponding author: Alasdair D. F. Clarke. 
Email: a.clarke@ed.ac.uk. 
Address: University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK. 
Footnotes
1  The amplitude spectrum is the square root of the power spectrum and hence the slope of the amplitude spectrum, α = β/2. When discussing other studies, we have kept the terminology used by the authors.
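The relation stated in the footnote follows directly from the definitions of the two spectra:

```latex
% If the power spectrum falls off as a power law with slope beta,
% the amplitude spectrum is its square root:
P(f) \propto f^{-\beta}
\quad\Longrightarrow\quad
A(f) = \sqrt{P(f)} \propto f^{-\beta/2} = f^{-\alpha},
\qquad \alpha = \frac{\beta}{2}.
```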
References
Baddeley R. J. Tatler B. W. (2006). High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research, 46, 2824–2833. [CrossRef] [PubMed]
Bennett P. J. Banks M. S. (1987). Sensitivity loss in odd-symmetric mechanisms and phase anomalies in peripheral vision. Nature, 326, 873–876. [CrossRef] [PubMed]
Bennett P. J. Banks M. S. (1991). The effects of contrast, spatial scale, and orientation on foveal and peripheral phase discrimination. Vision Research, 31, 1759–1786. [CrossRef] [PubMed]
Billock V. A. Cunningham D. W. Tsou B. H. (2008). What visual discrimination of fractal textures can tell us about discrimination of camouflaged targets. Vision Research, 48, 1374–1382. [CrossRef] [PubMed]
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Burton G. J. Moorhead I. R. (1987). Color and spatial structure in natural scenes. Applied Optics, 26, 157–170. [CrossRef] [PubMed]
Chantler M. J. Delguste G. B. (1997). Illuminant-tilt estimation from images of isotropic texture. IEE Proceedings on Vision, Image and Signal Processing, 144, 213–219. [CrossRef]
Chen C.-C. Tyler C. W. (1999). Spatial pattern summation is phase-insensitive in the fovea but not in the periphery. Spatial Vision, 12, 267–285. [CrossRef] [PubMed]
Christensen J. C. Todd J. T. (2004). The effects of texture changes on object recognition [Abstract]. Journal of Vision, 4(8):96, 96a, http://www.journalofvision.org/content/4/8/96, doi:10.1167/4.8.96. [CrossRef]
Clarke A. D. F. Chantler M. Green P. R. (2009). Modeling visual search on a rough surface. Journal of Vision, 9(4):11, 1–12, http://www.journalofvision.org/content/9/4/11, doi:10.1167/9.4.11. [PubMed] [Article] [CrossRef] [PubMed]
Emrith K. Chantler M. J. Green P. R. Maloney L. T. Clarke A. D. F. (2010). Measuring perceived differences in surface texture due to changes in higher order statistics. Journal of the Optical Society of America A, 27, 1232–1244. [CrossRef]
Field D. J. (1987). Relations between the statistics of natural images and the response profiles of cortical cells. Journal of the Optical Society of America, 4, 2379–2394. [CrossRef] [PubMed]
Garcia-Perez M. A. Sierra-Vazquez V. (1996). Do channels shift their tuning towards lower spatial frequencies in the periphery? Vision Research, 36, 3339–3372. [CrossRef] [PubMed]
Gaspar C. M. Rousselet G. A. (2009). How do amplitude spectra influence rapid animal detection? Vision Research, 49, 3001–3012. [CrossRef] [PubMed]
Hansen B. C. Hess R. F. (2006). Discrimination of amplitude spectrum slope in the fovea and parafovea and the local amplitude distributions of natural scene imagery. Journal of Vision, 6(7):3, 696–711, http://www.journalofvision.org/content/6/7/3, doi:10.1167/6.7.3. [PubMed] [Article] [CrossRef]
Hess R. F. Dakin S. C. (1999). Contour integration in the peripheral field. Vision Research, 39, 947–959. [CrossRef] [PubMed]
Hess R. F. Hayes A. (1994). The coding of spatial position by the human visual system—Effects of spatial scale and retinal eccentricity. Vision Research, 34, 625–643. [CrossRef] [PubMed]
Hess R. F. Pointer J. S. (1987). Evidence for spatially local computations underlying discrimination of periodic patterns in fovea and periphery. Vision Research, 27, 1343–1360. [CrossRef] [PubMed]
Joubert O. R. Rousselet G. A. Fabre-Thorpe M. Fize D. (2009). Rapid visual categorization of natural scene context with equalized amplitude spectrum and increasing phase noise. Journal of Vision, 9(1):2, 1–16, http://www.journalofvision.org/content/9/1/2, doi:10.1167/9.1.2. [PubMed] [Article] [CrossRef] [PubMed]
Kihara K. Takeda Y. (2010). Time course of the integration of spatial frequency-based information in natural scenes. Vision Research, 50, 2158–2162. [CrossRef] [PubMed]
Knill D. C. Field D. Kersten D. (1990). Human discrimination of fractal images. Journal of the Optical Society of America A, 7, 113–123. [CrossRef]
Kovesi P. (2000). Phase congruency: A low-level image invariant. Psychological Research, 64, 136–148. [CrossRef] [PubMed]
Kwon M. Y. Legge G. E. (2011). Spatial frequency cutoff requirements for pattern recognition in central and peripheral vision. Vision Research, 51, 1995–2007. [CrossRef] [PubMed]
Landy M. S. Graham N. (2004). Visual perception of texture. In Chalupa L. M. Werner J. S. (Eds.), The visual neurosciences (pp. 1106–1118). Cambridge, MA: MIT Press.
Levi D. M. Hariharan S. Klein S. A. (2002). Suppressive and facilitatory spatial interactions in peripheral vision: Peripheral crowding is neither size invariant nor simple contrast masking. Journal of Vision, 2(2):3, 167–177, http://www.journalofvision.org/content/2/2/3, doi:10.1167/2.2.3. [PubMed] [Article] [CrossRef]
Morrone M. C. Burr D. C. Spinelli D. (1989). Discrimination of spatial phase in central and peripheral vision. Vision Research, 29, 433–445. [CrossRef] [PubMed]
Oliva A. Torralba A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36. [PubMed]
Oppenheim A. V. Lim J. S. (1981). The importance of phase in signals. Proceedings of the IEEE, 69, 529–541. [CrossRef]
Padilla S. Drbohlav O. Green P. R. Spence A. D. Chantler M. J. (2008). Perceived roughness of 1/f^β noise surfaces. Vision Research, 48, 1791–1797. [CrossRef] [PubMed]
Parraga C. A. Troscianko T. Tolhurst D. J. (2000). The human visual system is optimised for processing the spatial information in natural visual images. Current Biology, 10, 35–38. [CrossRef] [PubMed]
Peli E. Geri G. A. (2001). Discrimination of wide-field images as a test of a peripheral-vision model. Journal of the Optical Society of America A, 18, 294–301. [CrossRef]
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [CrossRef] [PubMed]
Pointer J. S. Hess R. F. (1989). The contrast sensitivity gradient across the human visual field—With emphasis on the low spatial-frequency range. Vision Research, 29, 1133–1151. [CrossRef] [PubMed]
Renninger L. W. Malik J. (2004). When is scene identification just texture recognition? Vision Research, 44, 2301–2311. [CrossRef] [PubMed]
Rentschler I. Treutwein B. (1985). Loss of spatial phase relationships in extrafoveal vision. Nature, 313, 308–310. [CrossRef] [PubMed]
Rousselet G. A. Pernet C. R. Bennett P. J. Sekuler A. B. (2008). Parametric study of EEG sensitivity to phase noise during face processing. BMC Neuroscience, 9, 1–22. [CrossRef] [PubMed]
Shah P. J. Chantler M. J. Green P. R. (2010). Human perception of surface directionality. In Pointer M. (Ed.), 2nd CIE Expert Symposium on Appearance (pp. 77–78). Gent, Belgium.
Tadmor Y. Tolhurst D. J. (1993). Both the phase and the amplitude spectrum may determine the appearance of natural images. Vision Research, 33, 141–145. [CrossRef] [PubMed]
Thomson M. G. A. Foster D. H. (1997). Role of second- and third-order statistics in the discriminability of natural images. Journal of the Optical Society of America A, 14, 2081–2090. [CrossRef]
To M. P. Gilchrist I. D. Troscianko T. Tolhurst D. J. (2011). Discrimination of natural scenes in central and peripheral vision. Vision Research, 51, 1686–1698. [CrossRef] [PubMed]
Watson A. B. Pelli D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33, 113–120. [CrossRef] [PubMed]
Westheimer G. (1982). The spatial grain of the perifoveal visual field. Vision Research, 22, 157–162. [CrossRef] [PubMed]
Wichmann F. A. Braun D. I. Gegenfurtner K. R. (2006). Phase noise and the classification of natural images. Vision Research, 46, 1520–1529. [CrossRef] [PubMed]
Figure 1
 
Examples of (a) a reference texture, (b) phase randomization, and (c) amplitude spectrum smoothing. Note: These are 256 × 256 pixel crops of the 1024 × 1024 pixel images that were used as stimuli.
Figure 2
 
The effect of display time on accuracy. Means and standard errors across observers are shown. One observer, who performed at chance level for display times greater than 200 ms, was removed.
Figure 3
 
Examples of the stimuli used in Experiment 2.
Figure 4
 
The effect of eccentricity on accuracy. Means and standard errors across observers are shown.
Figure 5
 
Examples of the stimuli used in Experiment 3. Note that the same amount of phase noise has been added in all three cases. In the experiment, the aim was to find the amount of phase noise needed in each case to make image degradation equally detectable.
Figure 6
 
The effects of eccentricity and cutoff frequency for phase randomization on accuracy of detecting randomization. The dotted line shows the data for full-band phase randomization and foveal viewing, obtained in Experiment 2.