Research Article  |   January 2010
A biologically plausible model of human shape symmetry perception
Journal of Vision January 2010, Vol.10, 9. doi:https://doi.org/10.1167/10.1.9
Frédéric J. A. M. Poirier, Hugh R. Wilson; A biologically plausible model of human shape symmetry perception. Journal of Vision 2010;10(1):9. https://doi.org/10.1167/10.1.9.

Abstract

Symmetry is usually computationally expensive to encode reliably, and yet it is relatively effortless to perceive. Here, we extend F. J. A. M. Poirier and H. R. Wilson's (2006) model for shape perception to account for H. R. Wilson and F. Wilkinson's (2002) data on shape symmetry. Because the model already accounts for shape perception, only minimal neural circuitry is required to enable it to encode shape symmetry as well. The model is composed of three main parts: (1) recovery of object position using large-scale non-Fourier V4-like concentric units that respond at the center of concentric contour segments across orientations, (2) around that recovered object center, curvature mechanisms combine multiplicatively the responses of oriented filters to encode object-centric local shape information, with a preference for convexities, and (3) object-centric symmetry mechanisms. Model and human performances are comparable for symmetry perception of shapes. Moreover, with some improvement of edge recovery, the model can encode symmetry axes in natural images such as faces.

Introduction
Mirror symmetry, henceforth abbreviated as symmetry, is an important cue guiding mate selection and other behaviors in many species (Horridge, 1996; Møller, 1992; Swaddle & Cuthill, 1994). As Enquist and Arak (1994) argued, animals and insects may have coevolved symmetrical patterns and symmetry perception to improve recognition independent of position and orientation in the visual field. Symmetry is also useful for object processing. Indeed, along with elongation, it helps determine the reference frame for objects (Large, McMullen, & Hamm, 2003; Sekuler & Swimmer, 2000; see also Herbert, Humphrey, & Jolicoeur, 1994). 
Here we further the understanding of symmetry perception by extending an existing model of shape perception (Poirier & Wilson, 2006). This biologically plausible model is efficient, robust, and invariant with respect to shape perception over position and size. Before presenting the model, however, we review relevant aspects of research in symmetry perception (e.g., other models of symmetry, symmetry perception of the outline or shape, and biological mechanisms of symmetry perception). 
Computation of symmetry
Humans recover symmetry in shapes and random textures within 100 ms (Barlow & Reeves, 1979; Carmody, Nodine, & Locher, 1977; Tyler, Hardage, & Miller, 1995; Wilson & Wilkinson, 2002; for reviews, see Tyler, 2002; Wagemans, 1997). Symmetry perception is both sensitive and robust to several perturbations, including percent coherence and spatial jitter of elements composing the symmetrical pattern (Barlow & Reeves, 1979), shifts in midpoint collinearity (Jenkins, 1983), and phase variations (Dakin & Hess, 1997). Symmetry perception is also fairly resistant to slant or skew (i.e., rotation in depth or viewpoint; van der Vloed, Csathó, & van der Helm, 2005; Wagemans, Van Gool, & d'Ydewalle, 1991, 1992; Wagemans, Van Gool, Swinnen, & Van Horebeek, 1993), polarity changes (Mancini, Sally, & Gurnsey, 2005; Tyler & Hardage, 2002; Zhang & Gerbino, 1992), peripheral presentations (Barrett, Whitaker, McGraw, & Herbert, 1999; Julesz, 1971; Mancini et al., 2005; Poirier & Gurnsey, 2005; Sally & Gurnsey, 2001; Tyler, 1999; Wilson & Wilkinson, 2002), temporal perturbations (van der Vloed, Csathó, & van der Helm, 2007), and partial occlusion. 
Many models of symmetry employ some variant of the same basic scheme, in which luminance or contrast properties of the image are compared pointwise on either side of a potential axis of symmetry. For texture symmetry, most models are variants of filter–rectify–filter approaches (Dakin & Hess, 1997; Dakin & Watt, 1994; Gurnsey, Herbert, & Kenemy, 1998; Rainville & Kingdom, 1999, 2000, 2002), but other image-filtering approaches have been proposed (e.g., Bonneh, Reisfeld, & Yeshurun, 2002; Palmer, 1983; Royer, 1981) as well as more quantitative approaches (Dry, 2008; van der Helm & Leeuwenberg, 1996, 1999, 2004). Image properties on either side of the potential axis of symmetry are compared using a correlation, a difference, or some related measure (e.g., Dakin & Watt, 1994; Dry, 2008; Gurnsey et al., 1998; Hong & Pavel, 2002; Latimer, Joung, & Stevens, 2002; Mancini et al., 2005; Rainville & Kingdom, 2000). This operation is repeated at all visual locations and all orientations, making it computationally expensive. The problem is compounded by the robustness and flexibility of symmetry perception in natural images, where viewpoint varies, occlusions occur, and light sources vary. 
Computational efficiency and robustness can be increased by focusing the symmetry operations on salient features, such as low spatial frequencies or groups of dots and blobs (Barlow & Reeves, 1979; Julesz & Chang, 1979; Labonté, Shapira, Cohen, & Faubert, 1995) provided the fine-scale structure is not removed in the process (Csathó, van der Vloed, & van der Helm, 2003; van der Helm & Leeuwenberg, 1999, 2004). Indeed, according to ideal observer simulations, about 25% of dots in a random-dot texture are used in the task (Tapiovaara, 1990), possibly those that occur in clusters. Moreover, symmetry perception in low dot-density textures is more efficient and more robust to changes in polarity and distance from foveal center (Tyler & Hardage, 2002; Wenderoth, 1996). Robust shape and symmetry perception can also occur with greater efficiency if processing is based on higher level features, such as object parts and global shape (see Discussion section). 
Center vs. outline
Research indicates that we are more sensitive to symmetry information located in two regions: the first is located near the axis of symmetry (Barlow & Reeves, 1979; Bruce & Morgan, 1975; Jenkins, 1982; Wenderoth, 1995, 2002), and the second forms the outline of the stimulus (Barlow & Reeves, 1979; Carmody et al., 1977; Wenderoth, 1995, 2002). We are less sensitive to information contained between the midline and the outline. It is unclear whether different mechanisms are encoding these two sources of symmetry information, or whether seemingly different mechanisms might just be different expressions of one underlying mechanism. 
More detailed studies have been carried out on the properties of symmetry perception in the central region. The “receptive field” for symmetry in that region has a 2:1 size ratio elongated along the axis of symmetry it encodes (Dakin & Herbert, 1998), is less sensitive to parallel orientations (Dakin & Hess, 1997; Rainville & Kingdom, 2000), and is selective for dot density (Rainville & Kingdom, 2002; but see Dakin & Herbert, 1998). Symmetry in that region is systematically biased toward axes at given orientations (Wenderoth, 2002). Symmetry detection can be equated in peripheral vision if stimuli are increased at the same rate as would be needed for positional tasks (Sally & Gurnsey, 2001). Peripheral symmetry detection is less robust to removal of the central region of the symmetry patch, either because it has a smaller integration window (Tyler, 1999), or because it is less robust to information loss (Poirier & Gurnsey, 2005). In this region, symmetry may be encoded as the coalignment of blobs after filtering with orthogonal orientations (e.g., Dakin & Hess, 1997), or with more specialized symmetry mechanisms (see above). 
Less is known about the properties of symmetry detection based on the outline. Unlike central information, symmetry information present in the outline is independent of the symmetry axis' orientation (Wenderoth, 2002). Shape symmetry can be detected peripherally (Julesz, 1971; Wilson & Wilkinson, 2002). Similarly, symmetry perception in textured patches benefits from outline information (Gurnsey et al., 1998; Labonté et al., 1995). Shape symmetry may be evaluated by point-by-point comparisons of salient features, such as corners or convexities (Wilson & Wilkinson, 2002), which have been shown to play a more important role in object perception than edges (Attneave, 1954; Bertamini, 2001, 2004; Biederman, 1987; Habak, Wilkinson, Zakher, & Wilson, 2004; Loffler, Wilson, & Wilkinson, 2003; Poirier & Wilson, 2007; Shevelev, Kamenkovich, & Sharaev, 2003; but see Hess, Wang, & Dakin, 1999; Mullen & Beaudot, 2002). 
Shape symmetry in the brain
Human object perception is achieved via a hierarchy of processing stages: retinal inputs, simple line and edge detectors in V1 (DeValois & DeValois, 1988; Graham, 1989; Hubel & Wiesel, 1968; Wilson, 1991), curvature detectors in V1 or V2 (Dobbins, Zucker, & Cynader, 1987, 1989; Koenderink & Richards, 1988; Wilson, 1985; Wilson & Richards, 1989), intermediate processing of part and shape representations in V4 (Merigan, 1996; Pasupathy & Connor, 2002; van Essen, 1985; Young, 1992), and further object processing in IT and LOC (Desimone, 1991; Gross, 1992; Tanaka, 1996). Results from fMRI studies suggest some extrastriate areas respond preferentially to texture symmetry (Sasaki, Vanduffel, Knutsen, Tyler, & Tootell, 2005; Tyler et al., 2005), which would be consistent with symmetry based on shape. 
Here we extend our biologically plausible model of shape perception (Poirier & Wilson, 2006) such that it can also account for data collected on shape symmetry perception (Wilson & Wilkinson, 2002; see Figure 1 for sample stimuli). This model was chosen for several reasons: (1) it was built to account for shape perception of stimuli similar to those used in Wilson and Wilkinson (2002), (2) it is biologically plausible, (3) it has built-in invariance to position and a range of sizes, and (4) very little needs to be added or changed to account for symmetry perception. 
Figure 1
 
Stimuli used in Wilson and Wilkinson's (2002) experiment on shape symmetry. Shown in different columns are the different configurations used, and in different rows are the different phase differences. Symmetry is reduced as the different components are increasingly misaligned (i.e., as phase difference is increased), but the increased asymmetry is more easily detected in some patterns than in others (e.g., "2 + 3" and "face" are easiest to notice, whereas "2 + 7" and "5 + 7" are hardest). Model responses were taken from these stimuli.
Model
The first four stages of the model presented here were previously published as a model of shape perception, and readers are referred to the original paper for details (Poirier & Wilson, 2006). Except for one parameter that was allowed to vary (see Fourth stage: Population code for shape section), the model used here is identical to that of the original publication. Other discrepancies between the two publications reflect the removal of model components that are not relevant to the present issues and that have no consequences for the model's performance. Here, we present only an overview of the model's main features relevant to symmetry perception in shapes, as well as a detailed account of the additional fifth stage that encodes symmetry. 
Readers are referred to the Discussion section, where we provide a thorough treatment of several topics relevant to the current model, including (1) the robustness of the current model with regard to parameter and stimulus changes (see Model parameters section), (2) a comparison with other models of symmetry (see Other models of symmetry section), and (3) an explanation of why hierarchical organization does not imply a precedence order of task performance (see Generalizing the model section). 
Overview of the model
The model recovers shape symmetry in five stages: (1) contour information is recovered using oriented filters, (2) object center is recovered using higher order filters that respond at the center of concentric contours (Figure 2, left), (3) local curvature signals (Figure 2, middle) are recovered around the contour using a few curvature mechanisms tuned to different degrees of curvature (Figure 2, right), (4) shape is represented as curvature signal strength as a function of orientation around the object's center, and (5) this object-centric curvature information is used to evaluate symmetry (Figure 4). 
Figure 2
 
Overview of the shape perception model used to derive a population code of shape (Poirier & Wilson, 2006). (Left) Filters used in the computation of object center. Small-scale oriented filters encode the contour, and orthogonal large-scale filters positioned on either side from the center encode occurrences of concentric line elements. (Middle) Filters used in the computation of local curvature. Maximum response occurs when the contour passes through three oriented filters that are combined (multiplication shown here). (Right) Curvature information is relative to object center: curvature mechanisms scale with distance from object center, and they are oriented to prefer accentuated convexities relative to the curvature expected for a circle. Refer to the text and subsequent figures for the dimensions of the filters used.
Stimuli
Closed contours are created using radial frequency (RF) patterns, where the radius of a circle is varied as a function of polar angle (θ) using a sum of sinusoid functions of various amplitudes, phases, and frequencies (Wilkinson, Wilson, & Habak, 1998; Figures 1 and 3A): 
R(\theta) = R_0 \left( 1 + \sum_{n=1}^{m} A_n \cos(\omega_n \theta + \phi_n) \right),
(1)
where R0 is the mean radius, and ωn, An, and ϕn are the frequency, amplitude, and phase, respectively, for each radial modulation (n of m) added into the circle. RF patterns are useful to study intermediate-level shape perception because they provide controls on shape, show global shape processing properties (Hess et al., 1999; Jeffrey, Wang, & Birch, 2002; Loffler et al., 2003; Wilkinson et al., 1998), and are easily modified to create natural shapes such as faces (e.g., Wilson, Loffler, & Wilkinson, 2002; Wilson & Wilkinson, 2002). In the context of symmetry perception, phase alignment of the different components was manipulated to vary the degree of symmetry in the patterns (see Figure 1). Other stimulus parameters including the contour's luminance profile were matched to experimental conditions (see Wilson & Wilkinson, 2002). 
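As a concrete illustration, Equation 1 can be sketched in a few lines of Python; the component amplitudes and phases below are illustrative values, not the exact parameters used by Wilson and Wilkinson (2002):

```python
import numpy as np

def rf_radius(theta, r0, freqs, amps, phases):
    """Radius of a radial frequency (RF) pattern (Equation 1)."""
    modulation = sum(a * np.cos(w * theta + p)
                     for w, a, p in zip(freqs, amps, phases))
    return r0 * (1.0 + modulation)

# A "2 + 3" compound contour; amplitudes and phases are illustrative.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
r = rf_radius(theta, r0=1.0, freqs=[2, 3], amps=[0.1, 0.1], phases=[0.0, 0.0])
x, y = r * np.cos(theta), r * np.sin(theta)  # Cartesian contour points
```

Shifting the phase of one component relative to the other (e.g., `phases=[0.0, 0.3]`) produces increasingly asymmetric stimuli of the kind shown in Figure 1.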
Figure 3
 
Sample filters and responses of the model at various stages of processing. (A) Sample radial frequency contours are convolved with (B) oriented filters for each of 8 evenly spaced orientations, (C) the output of which is thresholded. For each orientation, the output is convolved with (D) a pair of filters oriented orthogonally to first stage filters, and offset along their axis of orientation, the half-wave rectified output of which (E) is summed over the 8 orientations and (F) thresholded to recover the contour's center, which is estimated as the maximum of that 2D distribution. (G) Curvature mechanisms sample the responses of oriented filters at 5 locations (5 curvature mechanisms shown, each has 3 white and 2 black dots; 3 curvature mechanisms are shown with curved lines to emphasize samples belonging to the same mechanism), the overall scale of the curvature mechanism increases with distance from the object center and is oriented to produce preferential responding to peak curvatures pointing away from the center. After (H) multiplication and inhibition of the samples, (I) the curvature mechanisms respond preferentially at the location of convex peaks. The response of curvature mechanisms (I) is sampled using 2D Gaussian profiles (K; 30 samples shown overlapped). Curvature response is thus recovered as a function of orientation around the center (J, L).
First stage: Oriented filters
Stimuli were convolved with a bank of 8 oriented filters (Wilson, 1985; Figure 3B) distributed evenly over 180°, with parameters adjusted to mimic 8 cpd selectivity (Wilson, McFarlane, & Phillips, 1983). A broader orientation selectivity (i.e., length set to 1/2 of the estimate of Phillips and Wilson, 1984) was used to reduce the number of orientations needed in the simulation to obtain a smooth response. Responses were thresholded at 1/3 of the maximum response over orientations and positions (Figure 3C), which removed sideband responses and reduced positional and response noise of curvature mechanisms. 
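The structure of this first stage can be sketched as follows. For simplicity, the sketch substitutes even-symmetric Gabor filters for the Wilson (1985) filters, and the filter size, wavelength, and bandwidth are illustrative rather than the fitted values; only the structure (8 orientations over 180°, thresholding at 1/3 of the maximum response) follows the text:

```python
import numpy as np

def gabor(size, wavelength, sigma, orientation):
    """Even-symmetric Gabor filter, a stand-in for the Wilson (1985) filters."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    return (np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / wavelength))

def conv_same(image, kernel):
    """Same-size 2D convolution with zero padding (no external dependencies)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def first_stage(image, n_orient=8, size=15, wavelength=4.0, sigma=2.0):
    """8 orientations over 180 deg, thresholded at 1/3 of the peak response."""
    responses = np.stack([
        conv_same(image, gabor(size, wavelength, sigma, k * np.pi / n_orient))
        for k in range(n_orient)])
    responses = np.abs(responses)
    responses[responses < responses.max() / 3.0] = 0.0
    return responses
```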
Second stage: Object position
Object position is needed to build position and size invariance (see Third stage: Curvature detectors section). Object position was recovered using higher order filters that have been shown to recover object center in concentric Glass patterns (Wilson & Wilkinson, 1998; Wilson, Wilkinson, & Asaad, 1997) and radial frequency patterns (Poirier & Wilson, 2006). Responses at each first stage filter orientation were convolved with a filter defined as a pair of difference of Gaussians (DOGs) oriented perpendicular to the contour, each multiplied by a Gaussian radially, and each displaced by ±y_0 from the filter's receptive field center (see Figures 2, left, and 3D–3F): 
W(x, y) = \left( 3e^{-x^2/0.4^2} - e^{-x^2/1.2^2} \right) \left( e^{-(y+y_0)^2/0.68^2} + e^{-(y-y_0)^2/0.68^2} \right),
(2)
where the terms contained in the first set of parentheses reflect the profile of the two DOGs, and the two terms contained in the second set of parentheses reflect the positions of the two DOGs. 
This second stage filter was four times larger in each dimension (and four times lower in spatial frequency) than the first stage filter. Orienting this second layer filter perpendicular to the first layer filter orientation results in extraction of contour curvature (Wilson, 1999), thus generating a strong response to the center of concentric shapes (Wilson et al., 1997). The object center was defined as the location of the peak neural response. 
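A minimal sketch of the second-stage filter, assuming the constants in Equation 2 denote squared space constants (0.4², 1.2², 0.68²) and leaving the displacement y_0, grid extent, and sampling as illustrative free parameters:

```python
import numpy as np

def center_filter(y0=2.0, extent=6.0, n=81):
    """Second-stage filter W(x, y) of Equation 2: a difference-of-Gaussians
    profile along x, times a pair of Gaussians displaced by +/- y0 along y.
    y0, extent, and n are illustrative values, not the fitted parameters."""
    xs = np.linspace(-extent, extent, n)
    x, y = np.meshgrid(xs, xs)
    dog = 3.0 * np.exp(-x**2 / 0.4**2) - np.exp(-x**2 / 1.2**2)
    pair = np.exp(-(y + y0)**2 / 0.68**2) + np.exp(-(y - y0)**2 / 0.68**2)
    return dog * pair
```

Convolving each orientation's thresholded contour response with this filter (oriented perpendicular to that orientation) and summing over orientations yields the center map of Figures 3E and 3F, whose peak defines the object center.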
Third stage: Curvature detectors
Curvature mechanisms operate in parallel over the visual field, encoding curvature for all orientations and a range of curvature amplitudes. For the purpose of providing a plausible yet parsimonious model of radial frequency pattern perception, we restricted our analysis to opponent curvature mechanisms optimized to encode deviations from circular shapes. Moreover, unpublished fMRI data from our laboratory support the presence of neurons responding optimally to deviations from circles in the human ventral pathway (Rainville, Yurganov, & Wilson, 2005). 
Curvature mechanisms were modeled as the combination of the responses of several oriented filters arranged along a curved line, where the response is optimal if the contour matches the sampled locations and orientations of the filters (Figure 2, middle). For example, the curvature responses to a circle are optimal when oriented filters are positioned around the center of the circle, with distance equal to the radius and orientation perpendicular to the radius (Figures 2, middle, and 3G). Oriented filter activity was sampled (summed) within a small positional blur, increasing bandwidth over curvature, position, and orientation. 
Samples were positioned relative to the object center, defined in polar coordinates (where (0, 0) is the object center and (R_center, θ) is the receptive field's center), taking one sample at the receptive field's center (R_center, θ), two samples at ±Δθ from the first sample and slightly farther from the object center (R_out, θ ± Δθ), and the last two samples also at ±Δθ but placed slightly inward (R_in, θ ± Δθ). The radii of the inward and outward samples were scaled with distance from the object center, thereby providing size constancy over a larger range of object sizes. 
Combining sampled contour responses (S) from the center (R_center, θ) and inward samples (R_in, θ ± Δθ) gives peak responses for accentuated convex curvatures (see Figures 3J and 3L), whereas combining responses from the center (R_center, θ) and outward samples (R_out, θ ± Δθ) gives a peak response for straighter or even concave deviations from a circle. Sampled contour responses were combined multiplicatively as 
C_R(R_{center}, \theta) = S(R_{in}, \theta + \Delta\theta) \cdot S(R_{center}, \theta) \cdot S(R_{in}, \theta - \Delta\theta) - S(R_{out}, \theta + \Delta\theta) \cdot S(R_{center}, \theta) \cdot S(R_{out}, \theta - \Delta\theta),
(3)
followed by half-wave rectification, such that curvature mechanisms respond preferentially at points of convex curvature extrema. 
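Read as an opponent difference of the two three-sample products (convex minus non-convex, consistent with the opponent description and the half-wave rectification above), Equation 3 reduces to a few lines; the sampled responses S(·) are passed in directly, so the geometry of sample placement is omitted:

```python
import numpy as np

def curvature_response(s_in_plus, s_center, s_in_minus, s_out_plus, s_out_minus):
    """Opponent curvature signal (Equation 3): the product over the center and
    inward samples (convex evidence) minus the product over the center and
    outward samples (straight/concave evidence), half-wave rectified."""
    convex = s_in_plus * s_center * s_in_minus
    non_convex = s_out_plus * s_center * s_out_minus
    return np.maximum(convex - non_convex, 0.0)
```

The multiplicative combination means the mechanism responds only when the contour passes through all three sampled locations, and the rectification keeps only responses at convex curvature extrema.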
Fourth stage: Population code for shape
We now have curvature responses that are selective for points of maximum convex curvature (see Third stage: Curvature detectors section), located around the previously determined center of the object (see Second stage: Object position section), which gives a maximum response at each peak of a radial frequency pattern. Curvature responses were sampled using 30 2D Gaussians positioned around the object center in 12° steps, with their axes of elongation aligned radially (see Figure 3K), giving 30 object-relative curvature responses (R_curv). 
Finally, the sampled responses were subjected to a Naka–Rushton non-linearity to simulate cell firing rates (Naka & Rushton, 1966), which provides an accurate description of neural firing rates (Albrecht & Hamilton, 1982; Sclar, Maunsell, & Lennie, 1990): 
R_{cell} = 100 \cdot \frac{R_{curv}^{N\gamma}}{R_{50\%}^{N} + R_{curv}^{N\gamma}},
(4)
where R50% determines the point of the function where cell firing (Rcell) is half of its maximum (100%), N determines the steepness of the function, and γ is an exponential non-linearity applied to curvature responses. These 30 firing rates, one per sampled direction from the object center, give a population code for internal representation of object shape (see Figures 3J and 3L). 
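With the exponent read as the curvature response raised to γ and then passed through the standard Naka–Rushton form, Equation 4 in code (parameter values would come from the fits described under Parameters):

```python
import numpy as np

def naka_rushton(r_curv, r50, n, gamma):
    """Naka-Rushton nonlinearity (Equation 4): saturates at 100 and reaches
    50 when the gamma-exponentiated input equals r50; n sets the steepness."""
    rg = np.asarray(r_curv, dtype=float) ** gamma
    return 100.0 * rg**n / (r50**n + rg**n)
```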
Fifth stage: Symmetry response
Figures 4 and 5 show how symmetry is computed from the 30 firing rates of the fourth stage. First, these firing rates are normalized such that the sum of R_norm equals unity (Equation 5): 
R_{norm}(\theta) = R_{cell}(\theta) / \sum_{\theta} R_{cell}(\theta).
(5)
 
Figure 4
 
Diagram of the symmetry mechanism. Curvature mechanism responses are given as a function of orientation around the object's center (see Figures 2 and 3) and are first normalized to sum to unity. Then, local symmetry is implemented on each pair of normalized responses, such that the sum of responses is inhibited by an amount proportional to its absolute difference. That is, the sum is unchanged if both responses are equal, and reduced otherwise. Finally, global symmetry equals the sum of local symmetries consistent with a given symmetry axis orientation; it equals unity for perfectly symmetrical patterns and is reduced otherwise.
Figure 5
 
Sample computations of the symmetry mechanism for three patterns, namely "2 + 3", "2 + 7", and "Face". Each column is divided into two sections, for stimuli that are symmetrical and for stimuli that are not (phase difference of 30°). For stimuli that are not symmetric, curvature responses are shown for the object (solid line) and its mirror image (dotted line), as well as some local computations of the symmetry model (i.e., solid line shows average response, dotted line shows difference, and dashed line shows curvature response after inhibition). Note that larger signals are more susceptible to inhibition when asymmetries occur. All axes are in arbitrary but common units. Global symmetry measures are 73.75%, 80.95%, and 75.75% for the "2 + 3", "2 + 7", and "face" stimuli, respectively.
Then, for each pair of signals, symmetry is defined using shunting inhibition (Michaelis & Menten, 1913): 
R_{sym}(\theta, \phi) = \frac{0.5 \left( R_{norm}(\phi + \theta) + R_{norm}(\phi - \theta) \right)}{1 + \omega_{inh} \left| R_{norm}(\phi + \theta) - R_{norm}(\phi - \theta) \right|},
(6)
where ϕ is a given possible axis of symmetry, θ is the angle around the object center from that axis of symmetry, and ω_inh is a weight regulating the mutual inhibition due to deviations from symmetry. R_sym equals R_norm for symmetrical patterns and decreases monotonically as patterns become increasingly asymmetrical (i.e., as the absolute difference between R_norm(ϕ + θ) and R_norm(ϕ − θ) increases). Finally, the degree of perceived symmetry for the pattern (S_pat) for a potential axis of symmetry (ϕ) is given by summing over orientations: 
S_{pat}(\phi) = \sum_{\theta} R_{sym}(\theta, \phi),
(7)
and an axis of symmetry will be perceived whenever Spat peaks, with symmetry strength proportional to Spat, and the axis orientation equal to the ϕ value where the peak occurred. This not only allows the derivation of the axis of symmetry's orientation and strength but also allows for multiple peaks to occur. That is, the model recovers multiple axes of symmetry when appropriate. Because of the normalization phase (see Equation 5), Spat equals unity for symmetrical patterns and decreases monotonically as symmetry is decreased. 
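The whole fifth stage can be sketched end-to-end on the population code (30 samples in the model; any equal-angle sampling works). The inhibition weight w_inh below is an arbitrary illustrative value, not the fitted ω_inh:

```python
import numpy as np

def symmetry_strength(r_cell, w_inh=5.0):
    """Fifth-stage symmetry computation on a population code sampled at
    equal angular steps around the object center. w_inh is illustrative.
    Returns S_pat for each candidate axis, indexed in sample steps."""
    r = np.asarray(r_cell, dtype=float)
    r = r / r.sum()                       # Equation 5: normalize to unit sum
    n = len(r)
    idx = np.arange(n)
    s_pat = np.empty(n)
    for axis in range(n):                 # candidate symmetry axes (phi)
        plus = r[(axis + idx) % n]        # R_norm(phi + theta)
        minus = r[(axis - idx) % n]       # R_norm(phi - theta)
        local = (0.5 * (plus + minus)
                 / (1.0 + w_inh * np.abs(plus - minus)))   # Equation 6
        s_pat[axis] = local.sum()         # Equation 7
    return s_pat
```

For a pattern that is perfectly symmetric about a sampled axis, the plus and minus responses coincide, the inhibition term vanishes, and S_pat equals unity at that axis, as stated in the text; any mismatch shrinks the corresponding local terms.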
Parameters
In the original experiment (e.g., Wilson & Wilkinson, 2002), the axis of symmetry was jittered within ±5° of the vertical axis, so participants knew to search for near-vertical symmetry axes. That is, participants were likely to ignore off-axis symmetry, such as occasionally occurs when stimuli have phase-misaligned high-RF components. Similarly, we biased the model's perceived axes of symmetry toward the vertical by multiplying S_pat with a Gaussian function (σ = 120°; neither the specific function used here nor its width is critical to the results), effectively preventing these "accidental" axes of symmetry from influencing performance. It is currently unknown what participants' responses would be if the symmetry axis' orientation were random, whether their responses would be affected by off-axis symmetry, and whether simply removing or adjusting this bias would be sufficient to account for data from such a scenario. 
We sampled S_pat values in steps of 15° phase shift and interpolated for intermediate values. We used a threshold S_pat value of 0.8 to determine what participants would classify as symmetrical or not; this parameter was not considered a free variable, as it covaries with ω_inh without significantly improving the quality of the fit. The free parameters N and ω_inh were adjusted to minimize the sum of squared differences between the human and model data, using the fminsearch function in Matlab (Mathworks). All other parameters are taken from the low-RF mechanism (see Poirier & Wilson, 2006), which was the mechanism responsible for detection and discrimination performance for the lower range of radial frequencies, as well as lateral masking data collected on RF5 patterns. 
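The fitting procedure can be sketched with SciPy's Nelder–Mead method, the simplex algorithm behind Matlab's fminsearch. Everything below is a placeholder scaffold: `model_thresholds` stands in for a full run of the five-stage model, and `human` holds illustrative numbers, not the published thresholds:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder: a real implementation would run the five-stage model for
# each stimulus condition given candidate (N, w_inh) values.
def model_thresholds(params, conditions):
    n, w_inh = params
    return np.array([w_inh / (n + c) for c in conditions])

conditions = np.array([1.0, 2.0, 3.0])
human = np.array([22.0, 15.0, 10.0])      # illustrative data, not the paper's

def sse(params):
    """Sum of squared differences between model and human thresholds."""
    return np.sum((model_thresholds(params, conditions) - human) ** 2)

# Nelder-Mead is the derivative-free simplex method used by fminsearch.
fit = minimize(sse, x0=[2.0, 30.0], method='Nelder-Mead')
```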
Natural faces
Several changes were made to the model to enable it to recover symmetry in natural faces: (1) edge coding used odd-symmetric oriented filters at a lower spatial frequency (Gabor filters, sine spatial frequency = 2.4 cpd, Gaussian σ = 0.15°), which are better suited to natural edges; (2) the object center was defined as the weighted average position of the second stage filters; and (3) curvature responses were computed within a range of ±23° of the concentric orientation in 5 equal steps, and the maximum curvature response over that range was taken, to increase the curvature mechanisms' orientation bandwidth. 
Results
Model outputs
Figure 6 shows model outputs for some sample stimuli, including a simple geometric shape, synthetic faces, natural faces, and random-dot symmetrical patterns. In all of these cases, the object center was correctly identified. Moreover, from the population code of shape, the correct axes of symmetry were identified for simple geometric shapes and front-view synthetic faces, matching those a human observer would identify. The model generalized well to natural faces and random-dot symmetrical patterns, and the correct axis of symmetry was recovered for every pattern shown. In the case of natural and synthetic faces, there was some range of symmetry axis orientations over which the symmetry signal remained high. With random-dot symmetry, the symmetry signal dropped rapidly away from the axis of symmetry. 
Figure 6
 
Model performance on a simple shape, on synthetic faces, on natural faces, and on two random-dot symmetrical patterns. Stimuli (row 1) are filtered to extract contours and object center (row 2; edges in white, center in black). Then, a population code of shape is built based on object-centric curvature signals (row 3), from which the model extracts axes of symmetry (row 4; lighter means more symmetrical). The model recovers the correct axes of symmetry for simple shapes and front-view faces but is biased toward the axis of elongation for other views of faces. This reflects a true shift of the symmetry axis rather than a preference for elongation (see Wilson & Wilkinson, 2002, for more details). In addition, the model extracts the correct symmetry axis in random-dot stimuli. See text for details.
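The final stage described above, extracting axes of symmetry from the object-centric shape code, can be sketched as follows. Given an angular shape code s(θ), each candidate axis is scored by comparing the code with its reflection about that axis; here a simple correlation stands in for the model's shunting-inhibition circuit (Equations 5-7), so this is an assumption-laden sketch, not the published mechanism:

```python
# Hedged sketch: score each candidate symmetry axis by correlating an
# object-centric angular shape code with its reflection about that axis.
import numpy as np

def axis_scores(shape_code):
    """Symmetry score for each candidate axis (one per angular sample)."""
    n = len(shape_code)
    idx = np.arange(n)
    scores = np.empty(n)
    for a in range(n):
        reflected = shape_code[(2 * a - idx) % n]  # code mirrored about axis a
        scores[a] = np.corrcoef(shape_code, reflected)[0, 1]
    return scores

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
code = 1 + 0.3 * np.cos(3 * theta)   # RF3-like shape code, axis at 0 deg
scores = axis_scores(code)
best = int(np.argmax(scores))
print(best % 60)   # 0: the symmetry axes of an RF3 shape repeat every 60 deg
```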
Symmetry thresholds
Participants in Wilson and Wilkinson's (2002) experiment had to discriminate between symmetric and asymmetric shapes, reporting which of two consecutively presented shapes was the asymmetrical one. These shapes were defined by combining two or more radial frequency components into one contour. The radial frequency components were phase-aligned to create symmetrical shapes, with increasing phase difference between the components creating increasingly asymmetrical shapes. For each participant, the threshold phase difference of the components was measured for a variety of combinations of radial frequencies (see Figure 1). Sensitivity to changes in symmetry depended on the specific combination of radial frequencies used, as shown here in Figure 7 (right). 
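The construction of these stimuli can be sketched as a two-component radial frequency contour: with the components phase-aligned the shape is mirror symmetric, and shifting one component's phase breaks the symmetry. The amplitudes, base radius, and frequency pair below are illustrative, not the published stimulus values:

```python
# Sketch of a two-component RF contour; a phase shift between components
# breaks mirror symmetry. Parameter values are illustrative only.
import numpy as np

def rf_contour(theta, n1, n2, dphi, r0=1.0, a1=0.05, a2=0.05):
    """Radius at polar angle theta for a two-component RF pattern."""
    return r0 * (1 + a1 * np.cos(n1 * theta) + a2 * np.cos(n2 * theta + dphi))

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
aligned = rf_contour(theta, 2, 3, dphi=0.0)
shifted = rf_contour(theta, 2, 3, dphi=np.pi / 6)

# Mirror symmetry about the x-axis means r(theta) == r(-theta):
print(np.allclose(aligned, rf_contour(-theta, 2, 3, 0.0)))        # True
print(np.allclose(shifted, rf_contour(-theta, 2, 3, np.pi / 6)))  # False
```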
Figure 7
 
(Left) Model symmetry signal as a function of phase difference, for the different patterns used in Wilson and Wilkinson's (2002) experiment. The thick dashed line at 0.8 shows the threshold value. (Right) Threshold phase differences for model (y-axis) and humans (x-axis).
From the simulations, it is clear that the symmetry signal drops as a function of phase difference for all stimuli (see Figure 7, left). More interestingly, it drops faster for some stimuli, indicating that asymmetries will be easier to detect in those stimuli. Threshold phase for each stimulus was set as the point where the curve crossed the threshold symmetry value of 0.8. The correlation between predicted and human thresholds is high (Figure 7, right; r(5) = 0.904, t = 4.73, p = 0.0052; or assuming slope = 1 and intercept = 0, r(7) = 0.900, t = 5.16, p = 0.0013), indicating that the model replicates human performance in this task. Replacing our neural implementation of symmetry (i.e., Equations 5–7) with the more common but biologically implausible cross-correlation operation does not alter the quality of the fit. 
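The threshold readout, sampling the symmetry signal in 15° steps of phase shift and interpolating to find where it crosses 0.8, can be sketched as a linear interpolation. The sampled signal values below are illustrative, not actual model output:

```python
# Sketch of the threshold-phase readout: linearly interpolate a sampled,
# falling symmetry signal to find its crossing of the 0.8 criterion.
import numpy as np

def threshold_phase(phases, signal, criterion=0.8):
    """Phase at which a monotonically falling signal crosses the criterion."""
    i = np.flatnonzero(signal < criterion)[0]   # first sample below criterion
    p0, p1 = phases[i - 1], phases[i]
    s0, s1 = signal[i - 1], signal[i]
    # linear interpolation between the samples straddling the criterion
    return p0 + (criterion - s0) * (p1 - p0) / (s1 - s0)

phases = np.arange(0, 91, 15)  # deg, sampled in 15-deg steps
signal = np.array([1.0, 0.97, 0.9, 0.78, 0.6, 0.45, 0.3])  # illustrative
print(round(threshold_phase(phases, signal), 2))  # 42.5
```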
Discussion
We present here the first model of shape symmetry that is based on an existing, biologically plausible model of shape perception. The model predicts perceptual data for a variety of shape-related tasks (Poirier & Wilson, 2006), now including shape symmetry perception (present study). Moreover, with small modifications to edge encoding, the model can extract symmetry in natural faces, which would be problematic for conventional symmetry models. These points are discussed in more detail below. 
Model parameters
The current model relies on opponent curvature filters. The choice of opponent-curvature mechanisms rather than simple curvature mechanisms or edge representations is warranted to avoid spurious responses that could introduce noise into the signal. The reduction of spurious responses becomes increasingly important as the stimulus becomes more naturalistic and more complex, i.e., includes other edges or textures. For example, had we used oriented edge operators instead of curvature mechanisms, a radial frequency pattern would produce an oriented-edge response at all points around the shape, thus making it difficult to encode symmetry. Similarly, opponency restricts the analysis to convex curvature, which is likely to exclude from the analysis any segment that is either concave or ambiguous with regard to direction. However, for stimuli where such ambiguity is removed, such as the artificial faces, we do not expect opponent curvature to improve the model's performance relative to simple curvature mechanisms. 
The model presented here should hold up well under changes in shape size. First, we showed in the original paper that the model follows human size invariance over a 3.5-fold size change. In addition, the normalization step (Equation 5) ensures that even in the presence of size-related changes in the shape signal used by the symmetry estimation mechanism, the symmetry computation itself would be largely invariant with respect to size. That is, our model will encode symmetry for shapes of different sizes at least as well as it can encode the shape itself. It can also encode various degrees of convex curvature, although the model would be sensitive to differences in curvature amplitude. Indeed, the model would be relatively insensitive to differences in distance between the stimulus center and the points of maximum curvature, provided these do not alter the recovered object position. The model should be more sensitive to changes in curvature amplitude and to the angular arrangement of curvature maxima. This is because the model uses the curvature's angular position and amplitude to compute symmetry, but not so much its distance to the object center. 
Figure 8 shows the response non-linearity applied to obtain the population code of shape, both for the original paper (i.e., Poirier & Wilson, 2006; N = 1.942) and for the present study (Equation 4; N = 0.288). The remaining free parameter ω_inh was 163.7. Also plotted in Figure 8 are the relevant curvature response ranges for the different data sets included in the fits. The new value of N creates a more gradual response increase, producing a more continuous code for shape, whereas the earlier parameter produced a sharper increase, which is perhaps more important for threshold detection of the patterns. Clearly, had the previous parameters been kept as is in the present simulations, the symmetry response would have been dominated by the rare cases where curvature mechanisms produced low responses. In addition, as noted in the original paper (see Poirier & Wilson, 2006), the model is robust over a 10% change in many of its parameters, N included. A new parameter fit confirms that this robustness extends to the current value of N. This fit used the value of N from this study and varied only the parameters that were not critical for the present study (e.g., parameters regulating masking or percent correct in identification tasks). The resulting quality of fit was within the range of fit qualities shown in Poirier and Wilson (2006) when parameters were changed by 10%. Thus the model remains robust when N is changed to the current value. 
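The effect of the slope parameter can be illustrated with a Naka-Rushton-style compressive non-linearity; this functional form and the midpoint value are assumptions for illustration (the exact Equation 4 and its fitted midpoint are given in Poirier & Wilson, 2006), but the qualitative contrast between the two values of N holds:

```python
# Illustrative only: a Naka-Rushton-style compressive non-linearity with
# slope N and midpoint m. The form and m = 1.0 are assumptions; only the
# two N values come from the text.
import numpy as np

def compressive(x, N, m=1.0):
    """Compressive response: x^N / (x^N + m^N)."""
    return x ** N / (x ** N + m ** N)

# A larger N (1.942) concentrates sensitivity near the midpoint, whereas a
# smaller N (0.288) spreads sensitivity over a wider response range:
steep_gain = compressive(1.5, 1.942) - compressive(0.5, 1.942)
shallow_gain = compressive(1.5, 0.288) - compressive(0.5, 0.288)
print(steep_gain > shallow_gain)  # True
```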
Figure 8
 
Comparison of compressive non-linearity parameters (slope and midpoint) from the previous publication (solid thick line) with those found to fit the symmetry data (dotted thick line). Also plotted are curvature mechanisms' response statistics for three data sets (i.e., circles, detection patterns for radial frequencies 3–6, and symmetry patterns; circles from left to right show the minimum, log-average, average, and maximum, respectively for each data set; note that the range shown for the circle represents simulation noise; data ranges are placed such that the average falls on the compressive non-linearity used in Poirier & Wilson, 2006). Clearly, the compressive non-linearity used previously appropriately increases sensitivity around threshold but compresses most of the relevant variance in the symmetry responses such that sensitivity is effectively decreased. See text for details.
Other models of symmetry
Most modeling efforts in symmetry perception have focused on texture symmetry, with stimuli consisting of randomly placed dots, random checkerboard patterns, or filtered noise. Beyond differences in the details of the computations, these models have in common that local low-level features of the image are matched, rather than higher order features. The inputs to the symmetry mechanism would come from V1/V2 cells. An implicit assumption, therefore, is that these mechanisms operate prior to much of the machinery that deals with occlusion, edge assignment, object localization, light source, shape-from-shading, or viewpoint variance. Such a mechanism's ability to encode symmetry would be seriously compromised by introducing these manipulations. This deficit could be alleviated by additional higher level symmetry mechanisms that compensate for situations that lower level symmetry mechanisms cannot encode properly, and/or by feedback from object-processing mechanisms onto the inputs of the lower level symmetry mechanisms. 
Another proposal for object shape perception and symmetry has been made (Kurbat, 2002) based on the symmetric axis transform (Blum, 1973). This model is similar in principle to grassfire or erosion models, in that the shape is successively trimmed until only a stick-figure representation remains. Alternatively, object parts could be coded separately using their centroids and axes of elongation. Either way, although such a reduction in the amount of information that enters a shape symmetry algorithm is useful in explaining the efficiency observed in symmetry perception, it is not clear how it could be used to account for psychophysical data on shape symmetry. Specifically, the steps required to generate thresholds would have to be made explicit. 
Another model of shape symmetry involves computing the effort needed to morph a shape into its nearest symmetrical counterpart (Zabrowsky & Algom, 2002). That is, the distance is calculated between each point of the asymmetric object and its corresponding point on the most similar symmetric object. This computation is performed after normalization for size, thus providing size invariance. The authors show that for a set of random shapes, perceived "goodness" of shape (defining goodness was left up to the participants) was well correlated with a combination of reflexive and rotational symmetry. It is not clear, however, what the neural implementation of such a model would be, or how well thresholds derived from it would correlate with psychophysical thresholds (e.g., Wilson & Wilkinson, 2002). 
Another model of symmetry uses the "weight of evidence" (WoE) computation to account for novel effects relating to symmetry-to-noise ratios (Csathó, van der Vloed, & van der Helm, 2004). The WoE computation is simply WoE = S/n, where S is the number of symmetry pairs and n is the total number of elements. As the bulk of their argument rests on this computation, it is worth comparing it with our own. Although the details of the implementation differ, our model also performs a computation that is conceptually similar to W = S/n. The division by n is done in our model by normalization (Equation 5), and the sum over S is done by adding (Equation 7) the normalized responses after shunting inhibition removes the responses that are asymmetrical (Equation 6). Thus, Equations 5–7 perform a computation equivalent to the WoE implementation. It follows that any conclusion derived from the WoE applies equally to our model. The only notable differences are that (1) we use shunting inhibition to classify responses as random elements or pairs, and this classification varies gradually with the difference between the signals of a pair, (2) the symmetry computation is done on a different internal representation of the stimulus, and (3) the two models were developed to account for performance with different classes of stimuli. 
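The WoE computation, as stated above, is just W = S/n. A minimal sketch for dot patterns mirrored about the vertical axis follows; the exact-match-by-rounding rule stands in for whatever positional tolerance the published model assumes:

```python
# Minimal WoE = S/n sketch: S symmetric pairs about the axis x = 0,
# n total dots. Rounding to 6 decimals stands in for a match tolerance.
def weight_of_evidence(dots):
    """W = S/n for a list of (x, y) dots, symmetry axis x = 0."""
    n = len(dots)
    mirrored = {(round(-x, 6), round(y, 6)) for (x, y) in dots}
    paired = sum(1 for (x, y) in dots if (round(x, 6), round(y, 6)) in mirrored)
    return (paired // 2) / n   # each pair counted from both members

symmetric = [(-1.0, 0.5), (1.0, 0.5), (-2.0, -1.0), (2.0, -1.0)]
noisy = symmetric + [(0.7, 3.0), (1.3, -2.2)]   # two unpaired noise dots
print(weight_of_evidence(symmetric))  # 0.5 (S = 2 pairs, n = 4)
print(weight_of_evidence(noisy))      # S = 2, n = 6 -> 0.333...
```

Note that under this definition a perfectly symmetric pattern yields W = 0.5, since each pair contributes one unit of S but two units of n.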
A recent model by Dry (2008) describes the stimulus using a Voronoi tessellation. Voronoi tessellation is in many ways similar to the grassfire models presented above, in that for each point it defines a region containing every position closest to that point. However, instead of using these regions to reduce stimulus complexity, Dry's model checks whether the same region, reflected to the opposite side of the symmetry axis, also contains exactly one dot. If symmetrically placed regions do not each contain exactly one dot, then the stimulus is perceived as less symmetrical. Its operation is thus similar to a pointwise comparison, with resistance to jitter proportional to the inverse of local dot density: as local dot density increases, the model tolerates less positional jitter. It provides a good fit to empirical data. However, it is not clear what the neural implementation of such a model would be, nor how the model would need to be adapted to encode symmetry in more natural stimuli such as shapes or faces. 
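The pointwise comparison with density-dependent tolerance can be caricatured as follows. This is a deliberate simplification of Dry's Voronoi regions (a fixed fraction of each dot's nearest-neighbor distance plays the role of the region), not the published model:

```python
# Crude sketch: a dot counts as symmetric if its mirror image (about x = 0)
# falls within a tolerance that shrinks as local dot density grows. The
# nearest-neighbor distance is a stand-in for the Voronoi region size.
import math

def symmetry_score(dots, frac=0.5):
    """Fraction of dots whose mirror image has a match within tolerance."""
    def nearest(p):
        return min(math.dist(p, q) for q in dots if q != p)
    matched = 0
    for (x, y) in dots:
        tol = frac * nearest((x, y))   # density-dependent tolerance
        if any(math.dist((-x, y), q) <= tol for q in dots):
            matched += 1
    return matched / len(dots)

exact = [(-1.0, 0.0), (1.0, 0.0), (-0.5, 1.0), (0.5, 1.0)]
jittered = [(-1.0, 0.0), (1.0, 0.0), (-0.5, 1.0), (1.0, 1.5)]  # one dot moved
print(symmetry_score(exact), symmetry_score(jittered))  # 1.0 0.75
```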
Figure 6 shows that the current model can extract the symmetry axis from random-dot patterns, even without the parameter adjustments used for natural images. However, much research is needed before any claim can be made relating our model to how humans extract symmetry from such patterns. Indeed, the model was created to extract symmetry from roughly circular shapes, and in both random patterns it extracts hints of a roughly circular shape. This emphasis differs from the usual proposal that information near the symmetry axis is more important to the task. We can speculate that this model, if involved in symmetry perception in random-dot patterns, is most likely involved in recovering the symmetry of the outline or of a shape contained within the pattern. As such, our model could be complementary to a model recovering symmetry along the midline, both operating on similar principles. 
Generalizing the model
The model presented here accounts for radial frequency (RF) pattern perception in a variety of tasks (Poirier & Wilson, 2006) including symmetry (present study). We have shown here with several examples that when the orientation filter's spatial frequency selectivity matches that of the contour, the present model robustly recovers the axes of symmetry. Evidently, model performance would improve further if other mechanisms were in place to extract a more reliable and more precise contour signal. Moreover, the shape signal would be less noisy if the outputs of several curvature mechanisms were combined, each tuned to different degrees of curvature, different orientations, and different scales, rather than the single mechanism used here. The neural representation of objects may also include information about curvature distance from the object's center, in addition to radial position and curvature amplitude used here. Object shape would then be a population code of curvature responses, mapped as a function of distance and orientation around the object's center, as proposed by Pasupathy and Connor (2002). 
The model presented, despite its hierarchical organization, is consistent with both local-to-global and global-to-local processing schemes (see Chen, 2005), where a global-to-local processing precedence is evident when information at later hierarchical stages is available to consciousness earlier than information encoded at earlier hierarchical stages (e.g., Ahissar & Hochstein, 2004). Moreover, the performance of the current model could be improved significantly by additional feedback or closed-loop processing. Global information could also be privileged if asymmetries exist between integration and segregation (see Poirier & Frost, 2005) and between low-level and high-level information (see Poirier & Wilson, 2007). 
Future directions
The modeling here provides some novel answers regarding shape processing and symmetry in human vision and is flexible enough to adapt to a range of stimuli such as synthetic and natural faces. One main contribution is the proposal of simple ways in which invariance may be achieved. The original model (Poirier & Wilson, 2006) incorporated invariance to object position and size over a given range. Conversely, the extraction of a symmetry axis (or axes) can help build orientation invariance in shape processing by providing a common frame of reference for object processing, in addition to elongation and other cues (e.g., Figure 9). As more pieces of the puzzle are put together, system synergies emerge that provide a much more robust response than would be expected from the components alone. 
Figure 9
 
Interactions of facial features and head shape in determining symmetry. We modeled shape symmetry above, but it is possible that a similar feature-based symmetry mechanism would be used to encode symmetry of facial features. The mechanism used to find head location in the image could be used to constrain the search of symmetry axes positions, and the symmetry axis found based on shape information could constrain further the orientation of possible feature-based symmetry axes. Estimates of symmetry axes based on head shape and on internal features could then be combined.
An issue that proved critical to the success of the mechanisms across stimuli (i.e., in natural and synthetic images) was whether the receptive fields encoding the contours were appropriate for the stimuli. Variability in responses at the oriented receptive field level, whether it arises from image contrast differences or from mismatches between the spatial frequencies of the image and of the receptive fields, has multiplicative effects at the level of curvature mechanisms and above. Simple modifications of spatial-frequency selectivity, either by combining responses across spatial-frequency channels or by selecting a spatial frequency more appropriate for the stimulus, considerably reduced this problem. Clearly, the present model needs to incorporate biologically plausible mechanisms that deal with contrast and spatial-frequency invariance and can account for effects of contrast modulations and occlusions (see Habak et al., 2004; Hess et al., 1999; Poirier & Wilson, 2007). 
Finally, it is still unclear whether curvature scaling arises from top-down or bottom-up mechanisms. Top-down mechanisms would select the mechanisms appropriate for the scale of processing involved. Bottom-up selection would occur as a consequence of increasing invariance over spatial frequency, scale, and curvature, where the mechanisms involved are selected simply because they are the most sensitive to the shape differences at that scale and spatial frequency. 
Acknowledgments
This research was supported by an NSERC Operating Grant to HRW (#OP227224). Portions of this paper were presented at VSS 2006 (Sarasota, Florida). 
Commercial relationships: none. 
Corresponding author: Frédéric J. A. M. Poirier. 
Address: Laboratoire de Psychophysique et Perception Visuelle, Université de Montréal, 3744 rue Jean-Brillant, Bureau 210-46, Montréal, Québec H3T 1P1, Canada. 
References
Ahissar M. Hochstein S. (2004). The reverse hierarchy theory of visual perception learning. Trends in Cognitive Sciences, 8, 457–464. [PubMed] [CrossRef] [PubMed]
Albrecht D. G. Hamilton D. G. (1982). Striate cortex of monkey and cat: Contrast response function. Journal of Neurophysiology, 48, 217–237. [PubMed] [PubMed]
Attneave F. (1954). Some information aspects of visual perception. Psychological Review, 61, 183–193. [CrossRef] [PubMed]
Barlow H. B. Reeves B. C. (1979). The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vision Research, 19, 783–793. [PubMed] [CrossRef] [PubMed]
Barrett B. T. Whitaker D. McGraw P. V. Herbert A. M. (1999). Discriminating mirror symmetry in foveal and extra-foveal vision. Vision Research, 39, 3737–3744. [PubMed] [CrossRef] [PubMed]
Bertamini M. (2001). The importance of being convex: An advantage for convexity when judging position. Perception, 30, 1295–1310. [PubMed] [CrossRef] [PubMed]
Bertamini M. (2004). Early computation of contour curvature and part structure: Evidence from holes. Perception, 33, 35–48. [PubMed] [CrossRef] [PubMed]
Biederman I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115–147. [PubMed] [CrossRef] [PubMed]
Blum H. (1973). Biological shape and visual science. Journal of Theoretical Biology, 38, 205–287. [CrossRef] [PubMed]
Bonneh Y. Reisfeld D. Yeshurun Y. Tyler C. W. (2002). Quantification of local symmetry: Application to texture discrimination. Human symmetry perception and its computational analysis. (pp. 304–319). Mahwah, NJ: Lawrence Erlbaum Associates.
Bruce V. Morgan M. (1975). Violations of symmetry and repetition in visual patterns. Perception, 4, 239–249. [CrossRef]
Carmody D. Nodine C. Locher P. (1977). Global detection of symmetry. Perceptual and Motor Skills, 45, 1267–1273. [PubMed] [CrossRef] [PubMed]
Chen L. (2005). The topological approach to perceptual organization. Visual Cognition, 12, 553–637. [CrossRef]
Csathó A. van der Vloed G. van der Helm P. A. (2003). Blobs strengthen repetition but weaken symmetry. Vision Research, 43, 993–1007. [PubMed] [CrossRef] [PubMed]
Csathó A. van der Vloed G. van der Helm P. A. (2004). The force of symmetry revisited: Symmetry-to-noise ratios regulate (a)symmetry effects. Acta Psychologica, 117, 233–250. [PubMed] [CrossRef] [PubMed]
Dakin S. C. Herbert A. M. (1998). The spatial region of integration for visual symmetry detection. Proceedings of the Royal Society of London B: Biological Sciences, 265, 659–664. [PubMed] [Article] [CrossRef]
Dakin S. C. Hess R. F. (1997). The spatial mechanisms mediating symmetry perception. Vision Research, 37, 2915–2930. [PubMed] [CrossRef] [PubMed]
Dakin S. C. Watt R. J. (1994). Detection of bilateral symmetry using spatial filters. Spatial Vision, 8, 393–413. [PubMed] [CrossRef] [PubMed]
Desimone R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1–8. [CrossRef] [PubMed]
DeValois R. L. DeValois K. K. (1988). Spatial vision. New York: Oxford University Press.
Dobbins A. Zucker S. W. Cynader M. S. (1987). Endstopped neurons in the visual cortex as a substrate for calculating curvature. Nature, 329, 438–441. [PubMed] [CrossRef] [PubMed]
Dobbins A. Zucker S. W. Cynader M. S. (1989). Endstopping and curvature. Vision Research, 29, 1371–1387. [PubMed] [CrossRef] [PubMed]
Dry M. (2008). Using relational structure to detect symmetry: A Voronoi tessellation based model of symmetry perception. Acta Psychologica, 128, 75–90. [PubMed] [CrossRef] [PubMed]
Enquist M. Arak A. (1994). Symmetry, beauty and evolution. Nature, 372, 169–172. [PubMed] [CrossRef] [PubMed]
Graham N. (1989). Visual pattern analyzers. New York: Oxford University Press.
Gross C. G. (1992). Representation of visual stimuli in inferior temporal cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 335, 3–10. [PubMed] [CrossRef]
Gurnsey R. Herbert A. M. Kenemy J. (1998). Bilateral symmetry embedded in noise is detected accurately only at fixation. Vision Research, 38, 3795–3803. [PubMed] [CrossRef] [PubMed]
Habak C. Wilkinson F. Zakher B. Wilson H. R. (2004). Curvature population coding for complex shapes in human vision. Vision Research, 44, 2815–2823. [PubMed] [CrossRef] [PubMed]
Herbert A. M. Humphrey G. K. Jolicoeur P. (1994). The detection of bilateral symmetry: Effects of surrounding frames. Canadian Journal of Experimental Psychology, 48, 140–148. [CrossRef]
Hess R. F. Wang Y. Dakin S. C. (1999). Are judgements of circularity local or global? Vision Research, 39, 4354–4360. [PubMed] [CrossRef] [PubMed]
Hong S. Pavel M. Tyler C. W. (2002). Determinants of symmetry perception. Human symmetry perception and its computational analysis. (pp. 135–156). Mahwah, NJ: Lawrence Erlbaum Associates.
Horridge G. A. (1996). The honeybee Apis mellifera detects bilateral symmetry and discriminates its axis. Journal of Insect Physiology, 42, 755–764. [PubMed] [Article] [CrossRef]
Hubel D. H. Wiesel T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195, 215–243. [PubMed] [Article] [CrossRef] [PubMed]
Jeffrey B. G. Wang Y. Birch E. E. (2002). Circular contour frequency in shape discrimination. Vision Research, 42, 2773–2779. [PubMed] [CrossRef] [PubMed]
Jenkins B. (1982). Redundancy in the perception of bilateral symmetry in dot textures. Perception & Psychophysics, 32, 443–448. [PubMed] [CrossRef] [PubMed]
Jenkins B. (1983). Component processes in the perception of bilaterally symmetric dot textures. Perception & Psychophysics, 34, 171–177. [PubMed]
Julesz B. (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press.
Julesz B. Chang J. J. (1979). Symmetry perception and spatial-frequency channels. Perception, 8, 711–718. [PubMed] [CrossRef] [PubMed]
Koenderink J. J. Richards W. (1988). Two-dimensional curvature operators. Journal of the Optical Society of America A, 5, 1136–1141. [CrossRef]
Kurbat M. A. Tyler C. W. (2002). A network model for generating differential symmetry axes of shapes via receptive fields. Human symmetry perception and its computational analysis. (pp. 227–236). Mahwah, NJ: Lawrence Erlbaum Associates.
Labonté F. Shapira Y. Cohen P. Faubert J. (1995). A model of global symmetry detection in dense images. Spatial Vision, 9, 33–55. [PubMed] [CrossRef] [PubMed]
Large M. E. McMullen P. A. Hamm J. P. (2003). The role of axes of elongation and symmetry in rotated object naming. Perception & Psychophysics, 65, 1–19. [PubMed] [Article] [CrossRef] [PubMed]
Latimer C. Joung W. Stevens C. Tyler C. W. (2002). Modelling symmetry detection with back-propagation networks. Human symmetry perception and its computational analysis. (pp. 209–226). Mahwah, NJ: Lawrence Erlbaum Associates.
Loffler G. Wilson H. R. Wilkinson F. (2003). Local and global contributions to shape discrimination. Vision Research, 43, 519–530. [PubMed] [CrossRef] [PubMed]
Mancini S. Sally S. L. Gurnsey R. (2005). Detection of symmetry and anti-symmetry. Vision Research, 45, 2145–2160. [PubMed] [CrossRef] [PubMed]
Merigan W. H. (1996). Basic visual capacities and shape discrimination after lesions of extrastriate area V4 in macaques. Visual Neuroscience, 13, 51–60. [PubMed] [CrossRef] [PubMed]
Michaelis L. Menten M. L. (1913). Die Kinetik der Invertinwirkung. Biochemische Zeitschrift, 49, 333–369.
Møller A. P. (1992). Female swallow preference for symmetrical male sexual ornaments. Nature, 357, 238–240. [PubMed] [CrossRef] [PubMed]
Mullen K. T. Beaudot W. H. (2002). Comparison of color and luminance vision on a global shape discrimination task. Vision Research, 42, 565–575. [PubMed] [CrossRef] [PubMed]
Naka K. I. Rushton W. A. (1966). S-potentials from colour units in the retina of fish. The Journal of Physiology, 185, 584–599. [PubMed] [Article]
Palmer S. E. Beck, J. Hope, B. Rosenfeld A. (1983). The psychology of perceptual organization. Human and machine vision. (pp. 269–339). New York: Academic Press.
Pasupathy A. Connor C. E. (2002). Population coding of shape in area V4. Nature Neuroscience, 5, 1332–1338. [PubMed] [CrossRef] [PubMed]
Phillips G. C. Wilson H. R. (1984). Orientation bandwidths of spatial mechanisms measured by masking. Journal of the Optical Society of America A, Optics and Image Science, 1, 226–232. [PubMed] [CrossRef] [PubMed]
Poirier F. J. A. Frost B. J. (2005). Global orientation aftereffect in multi-attribute displays: Implications for the binding problem. Vision Research, 45, 497–506. [PubMed] [CrossRef] [PubMed]
Poirier F. J. A. Gurnsey R. (2005). Non-monotonic changes in performance with eccentricity modeled by multiple eccentricity dependent limitations. Vision Research, 45, 2436–2448. [PubMed] [CrossRef] [PubMed]
Poirier F. J. A. Wilson H. R. (2006). A biologically plausible model of human radial frequency perception. Vision Research, 46, 2443–2455. [PubMed] [CrossRef] [PubMed]
Poirier F. J. A. Wilson H. R. (2007). Object perception and masking: Contributions of sides and convexities. Vision Research, 47, 3001–3011. [PubMed] [CrossRef] [PubMed]
Rainville S. J. M. Kingdom F. A. A. (1999). Spatial-scale contribution to the detection of mirror symmetry in fractal noise. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 16, 2112–2123. [PubMed] [CrossRef] [PubMed]
Rainville S. J. M. Kingdom F. A. A. (2000). The functional role of oriented spatial filters in the perception of mirror symmetry—Psychophysics and modeling. Vision Research, 40, 2621–2644. [PubMed] [CrossRef] [PubMed]
Rainville S. J. M. Kingdom F. A. A. (2002). Scale invariance is driven by stimulus density. Vision Research, 42, 351–367. [PubMed] [CrossRef] [PubMed]
Rainville S. J. M. Yurganov G. Wilson H. R. (2005). Closed-contour shapes encoded through deviations from circularity in lateral-occipital complex (LOC): An fMRI study [Abstract]. Journal of Vision, 5(8):471. [CrossRef]
Royer F. L. (1981). Detection of symmetry. Journal of Experimental Psychology: Human Perception and Performance, 7, 1186–1210. [PubMed] [CrossRef] [PubMed]
Sally S. L. Gurnsey R. (2001). Symmetry detection across the visual field. Spatial Vision, 14, 217–234. [PubMed] [CrossRef] [PubMed]
Sasaki Y. Vanduffel W. Knutsen T. Tyler C. Tootell R. (2005). Symmetry activates extrastriate visual cortex in human and nonhuman primates. Proceedings of the National Academy of Sciences of the United States of America, 102, 3159–3163. [PubMed] [Article] [CrossRef] [PubMed]
Sclar G. Maunsell J. H. R. Lennie P. (1990). Coding of image contrast in central visual pathways of the macaque monkey. Vision Research, 30, 1–10.
Sekuler A. B. Swimmer M. B. (2000). Interactions between symmetry and elongation in determining reference frames for object perception. Canadian Journal of Psychology, 54, 42–55.
Shevelev I. A. Kamenkovich V. M. Sharaev G. A. (2003). The role of lines and corners of geometric figures in recognition performance. Acta Neurobiologiae Experimentalis, 63, 361–368.
Swaddle J. P. Cuthill I. C. (1994). Preference for symmetrical males by female zebra finches. Nature, 367, 165–166.
Tanaka K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109–139.
Tapiovaara M. (1990). Ideal observer and absolute efficiency of detecting mirror symmetry in random images. Journal of the Optical Society of America A, Optics and Image Science, 7, 2245–2253.
Tyler C. W. (1999). Human symmetry detection exhibits reverse eccentricity scaling. Visual Neuroscience, 16, 919–922.
Tyler C. W. (Ed.) (2002). Human symmetry perception and its computational analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
Tyler C. W. Baseler H. A. Kontsevich L. L. Likova L. T. Wade A. R. Wandell B. A. (2005). Predominantly extra-retinotopic cortical response to pattern symmetry. NeuroImage, 24, 306–314.
Tyler C. W. Hardage L. (2002). Mirror symmetry detection: Predominance of second-order pattern processing throughout the visual field. In C. W. Tyler (Ed.), Human symmetry perception and its computational analysis (pp. 157–172). Mahwah, NJ: Lawrence Erlbaum Associates.
Tyler C. W. Hardage L. Miller R. (1995). Multiple mechanisms for the detection of mirror symmetry. Spatial Vision, 9, 79–100.
van der Helm P. A. Leeuwenberg E. L. J. (1996). Goodness of visual regularities: A nontransformational approach. Psychological Review, 103, 429–456.
van der Helm P. A. Leeuwenberg E. L. J. (1999). A better approach to goodness: Reply to Wagemans (1999). Psychological Review, 106, 622–630.
van der Helm P. A. Leeuwenberg E. L. J. (2004). Holographic goodness is not that bad: Reply to Olivers, Chater, and Watson (2004). Psychological Review, 111, 261–273.
van der Vloed G. Csathó A. van der Helm P. A. (2005). Symmetry and repetition in perspective. Acta Psychologica, 120, 74–92.
van der Vloed G. Csathó A. van der Helm P. A. (2007). Effects of asynchrony on symmetry perception. Psychological Research, 71, 170–177.
van Essen D. C. (1985). Functional organization of primate visual cortex. In A. Peters & E. G. Jones (Eds.), Cerebral cortex (Vol. 3, pp. 259–329). New York: Plenum Press.
Wagemans J. (1997). Characteristics and models of human symmetry detection. Trends in Cognitive Sciences, 1, 346–352.
Wagemans J. Van Gool L. d'Ydewalle G. (1991). Detection of symmetry in tachistoscopically presented dot patterns: Effects of multiple axes and skewing. Perception & Psychophysics, 50, 413–427.
Wagemans J. Van Gool L. d'Ydewalle G. (1992). Orientational effects and component processes in symmetry detection. Quarterly Journal of Experimental Psychology, 44, 475–508.
Wagemans J. Van Gool L. Swinnen V. Van Horebeek J. (1993). Higher order structure in regularity detection. Vision Research, 33, 1067–1088.
Wenderoth P. (1995). The role of pattern outline in bilateral symmetry detection with briefly flashed dot patterns. Spatial Vision, 9, 57–77.
Wenderoth P. (1996). The effect of contrast polarity of dot-pair partners on the detection of bilateral symmetry. Perception, 25, 757–772.
Wenderoth P. (2002). The role of pattern outline in bilateral symmetry detection with briefly flashed dot patterns. In C. W. Tyler (Ed.), Human symmetry perception and its computational analysis (pp. 49–70). Mahwah, NJ: Lawrence Erlbaum Associates.
Wilkinson F. Wilson H. R. Habak C. (1998). Detection and recognition of radial frequency patterns. Vision Research, 38, 3555–3568.
Wilson H. R. (1985). Discrimination of contour curvature: Data and theory. Journal of the Optical Society of America A, Optics and Image Science, 2, 1191–1199.
Wilson H. R. (1991). Pattern discrimination, visual filters, and spatial sampling irregularity. In M. S. Landy & J. A. Movshon (Eds.), Computational models of visual processing (pp. 153–168). Cambridge, MA: MIT Press.
Wilson H. R. (1999). Non-Fourier cortical processes in texture, form, and motion perception. In P. S. Ulinski & E. G. Jones (Eds.), Cerebral cortex: Vol. 13. Models of cortical circuitry (pp. 445–477). New York: Plenum.
Wilson H. R. Loffler G. Wilkinson F. (2002). Synthetic faces, face cubes, and the geometry of face space. Vision Research, 42, 2909–2923.
Wilson H. R. McFarlane D. K. Phillips G. C. (1983). Spatial frequency tuning of orientation selective units estimated by oblique masking. Vision Research, 23, 873–882.
Wilson H. R. Richards W. A. (1989). Mechanisms of contour curvature discrimination. Journal of the Optical Society of America A, Optics and Image Science, 6, 106–115.
Wilson H. R. Wilkinson F. (1998). Detection of global structure in Glass patterns: Implications for form vision. Vision Research, 38, 2933–2947.
Wilson H. R. Wilkinson F. (2002). Symmetry perception: A novel approach for biological shapes. Vision Research, 42, 589–597.
Wilson H. R. Wilkinson F. Asaad W. (1997). Concentric orientation summation in human form vision. Vision Research, 37, 2325–2330.
Young M. P. (1992). Objective analysis of the topological organization of the primate cortical visual system. Nature, 358, 152–155.
Zabrodsky H. Algom D. (2002). Continuous symmetry: A model for human figural perception. In C. W. Tyler (Ed.), Human symmetry perception and its computational analysis (pp. 290–303). Mahwah, NJ: Lawrence Erlbaum Associates.
Zhang L. Gerbino W. (1992). Symmetry in opposite-contrast dot patterns. Perception, 21, 95.
Figure 1
 
Stimuli used in Wilson and Wilkinson's (2002) experiment on shape symmetry. Columns show the different configurations used, and rows show the different phase differences. Symmetry is reduced as the components are increasingly misaligned (i.e., as phase difference is increased), but the resulting asymmetry is more easily detected for some patterns than for others (e.g., it is easiest to notice for “2 + 3” and “face”, and hardest for “2 + 7” and “5 + 7”). Model responses were taken from these stimuli.
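The compound radial-frequency construction described in the caption above can be sketched in a few lines. This is an illustrative sketch only: the component amplitudes, base radius, and exact phase convention of Wilson and Wilkinson's (2002) stimuli are not reproduced here, so `base_radius` and the amplitudes in `components` are placeholder values.

```python
import numpy as np

def rf_contour(n_points=512, base_radius=1.0,
               components=((2, 0.1), (3, 0.1)), phase_diff_deg=0.0):
    """Compound radial-frequency (RF) contour, e.g., a "2 + 3" pattern.

    Each (frequency, amplitude) pair modulates a base circle; components
    after the first are rotated by `phase_diff_deg` to misalign them, as
    in the phase-difference manipulation described above.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.ones_like(theta)
    for i, (freq, amp) in enumerate(components):
        phase = np.deg2rad(phase_diff_deg) if i > 0 else 0.0
        r += amp * np.cos(freq * (theta + phase))
    r *= base_radius
    return r * np.cos(theta), r * np.sin(theta)  # Cartesian coordinates

# Aligned components give a mirror-symmetric "2 + 3" shape; a 30-degree
# phase difference misaligns the two components.
x, y = rf_contour(phase_diff_deg=30.0)
```

With aligned cosine components the radius profile is an even function of polar angle, so the contour is mirror symmetric about the horizontal axis; a nonzero phase difference shifts or destroys that axis.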
Figure 2
 
Overview of the shape perception model used to derive a population code of shape (Poirier & Wilson, 2006). (Left) Filters used in the computation of object center. Small-scale oriented filters encode the contour, and orthogonal large-scale filters positioned on either side of the center encode occurrences of concentric line elements. (Middle) Filters used in the computation of local curvature. Maximum response occurs when the contour passes through three oriented filters whose outputs are combined (multiplication shown here). (Right) Curvature information is relative to object center: curvature mechanisms scale with distance from the object center, and they are oriented to prefer accentuated convexities relative to the curvature expected for a circle. Refer to the text and subsequent figures for the dimensions of the filters used.
Figure 3
 
Sample filters and responses of the model at various stages of processing. (A) Sample radial frequency contours are convolved with (B) oriented filters at each of 8 evenly spaced orientations, (C) the output of which is thresholded. For each orientation, the output is convolved with (D) a pair of filters oriented orthogonally to the first-stage filters and offset along their axis of orientation, the half-wave rectified output of which is (E) summed over the 8 orientations and (F) thresholded to recover the contour's center, estimated as the maximum of that 2D distribution. (G) Curvature mechanisms sample the responses of oriented filters at 5 locations (5 curvature mechanisms shown, each with 3 white and 2 black dots; 3 curvature mechanisms are drawn with curved lines to emphasize samples belonging to the same mechanism); the overall scale of each curvature mechanism increases with distance from the object center, and each is oriented to respond preferentially to peak curvatures pointing away from the center. After (H) multiplication and inhibition of the samples, (I) the curvature mechanisms respond preferentially at the locations of convex peaks. The response of the curvature mechanisms (I) is sampled using 2D Gaussian profiles (K; 30 samples shown overlapped). Curvature response is thus recovered as a function of orientation around the center (J, L).
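The center-recovery stages (B–F) can be caricatured with ordinary image operations. In the sketch below, image gradients stand in for the oriented filter bank, and a single inward displacement `offset` stands in for the pair of orthogonal large-scale filters; the filter sizes, thresholds, and multi-scale structure of the actual model are not reproduced, so all parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recover_center(contour_img, n_orient=8, offset=30, blur=4.0):
    """Toy sketch of center recovery: oriented contour energy is
    displaced toward the interior from both sides of the contour and
    summed over orientations; the peak of the summed map estimates
    the object center (cf. panels B-F above)."""
    gy, gx = np.gradient(gaussian_filter(contour_img.astype(float), 1.0))
    summed = np.zeros(contour_img.shape)
    for k in range(n_orient):
        ang = np.pi * k / n_orient
        nx, ny = -np.sin(ang), np.cos(ang)    # normal to orientation `ang`
        energy = np.abs(gx * nx + gy * ny)    # contour segments at `ang`
        dr, dc = int(round(offset * ny)), int(round(offset * nx))
        # displace energy inward from either side of the contour
        summed += np.roll(energy, (dr, dc), axis=(0, 1))
        summed += np.roll(energy, (-dr, -dc), axis=(0, 1))
    peak = np.argmax(gaussian_filter(summed, blur))
    return np.unravel_index(peak, contour_img.shape)  # (row, col)
```

For a closed contour, the two oppositely displaced copies of each oriented energy map overlap near the object center, so the summed map peaks there; the single `offset` works best when it roughly matches the contour radius, which is where the model's multiple filter scales would take over.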
Figure 4
 
Diagram of the symmetry mechanism. Curvature mechanism responses are given as a function of orientation around the object's center (see Figures 2 and 3) and are first normalized to sum to unity. Then, local symmetry is computed on each pair of normalized responses, such that the sum of the pair is inhibited by an amount proportional to its absolute difference. That is, the sum is unchanged if both responses are equal, and reduced otherwise. Finally, global symmetry is the sum of local symmetries consistent with a given symmetry-axis orientation; it equals unity for perfectly symmetrical patterns and is reduced otherwise.
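The computation described in the caption above can be written out directly. This is a minimal sketch under two simplifying assumptions: the inhibition constant is taken to be 1 (the pair sum is inhibited by the full absolute difference), and the candidate axis is assumed to pass through one of the sampled orientations.

```python
import numpy as np

def global_symmetry(responses, axis_index=0):
    """Global symmetry for one candidate axis, from curvature responses
    sampled at N evenly spaced orientations around the object center.
    Returns 1 for a perfectly symmetrical response profile, less otherwise.
    """
    r = np.asarray(responses, dtype=float)
    r = r / r.sum()                                  # normalize to unit sum
    mirrored = np.roll(r[::-1], 2 * axis_index + 1)  # reflect about the axis
    # local symmetry: pair sum inhibited by the pair's absolute difference
    local = (r + mirrored) - np.abs(r - mirrored)
    return 0.5 * local.sum()
```

Since `(a + b) - |a - b| = 2 min(a, b)`, each pair contributes twice its smaller member, so mismatched pairs are penalized and, as the caption notes for Figure 5, larger signals lose more to inhibition when asymmetries occur.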
Figure 5
 
Sample computations of the symmetry mechanism for three patterns: “2 + 3”, “2 + 7”, and “face”. Columns are divided into two sections, for stimuli that are symmetrical and for stimuli that are not (phase difference of 30°). For stimuli that are not symmetric, curvature responses are shown for the object (solid line) and its mirror image (dotted line), along with some local computations of the symmetry model (solid line shows the average response, dotted line the difference, and dashed line the curvature response after inhibition). Note that larger signals are more susceptible to inhibition when asymmetries occur. All axes are in arbitrary but common units. Global symmetry measures are 73.75%, 80.95%, and 75.75% for the “2 + 3”, “2 + 7”, and “face” stimuli, respectively.
Figure 6
 
Model performance on a simple shape, on synthetic faces, on natural faces, and on two random-dot symmetrical patterns. Stimuli (row 1) are filtered to extract contours and object center (row 2; edges in white, center in black). Then, a population code of shape is built based on object-centric curvature signals (row 3), from which the model extracts axes of symmetry (row 4; lighter means more symmetrical). The model recovers the correct axes of symmetry for simple shapes and front-view faces but is biased toward the axis of elongation for other views of faces. This reflects a true shift of the symmetry axis rather than a preference for elongation (see Wilson & Wilkinson, 2002, for more details). In addition, the model extracts the correct symmetry axis in random-dot stimuli. See text for details.
Figure 7
 
(Left) Model symmetry signal as a function of phase difference, for the different patterns used in Wilson and Wilkinson's (2002) experiment. The thick dashed line at 0.8 shows the threshold value. (Right) Threshold phase differences for model (y-axis) and humans (x-axis).
Figure 8
 
Comparison of compressive non-linearity parameters (slope and midpoint) from the previous publication (solid thick line) with those found to fit the symmetry data (dotted thick line). Also plotted are curvature mechanisms' response statistics for three data sets (i.e., circles, detection patterns for radial frequencies 3–6, and symmetry patterns; circles from left to right show the minimum, log-average, average, and maximum, respectively for each data set; note that the range shown for the circle represents simulation noise; data ranges are placed such that the average falls on the compressive non-linearity used in Poirier & Wilson, 2006). Clearly, the compressive non-linearity used previously appropriately increases sensitivity around threshold but compresses most of the relevant variance in the symmetry responses such that sensitivity is effectively decreased. See text for details.
Figure 9
 
Interactions of facial features and head shape in determining symmetry. We modeled shape symmetry above, but a similar feature-based symmetry mechanism could be used to encode the symmetry of facial features. The mechanism used to find head location in the image could constrain the search over symmetry-axis positions, and the symmetry axis found from shape information could further constrain the orientation of possible feature-based symmetry axes. Estimates of symmetry axes based on head shape and on internal features could then be combined.