**We have shown in previous work that the perception of order in point patterns is consistent with an interval scale structure (Protonotarios, Baum, Johnston, Hunter, & Griffin, 2014). The psychophysical scaling method used relies on the confusion between stimuli with similar levels of order, and the resulting discrimination scale is expressed in just-noticeable differences (jnds). As with other perceptual dimensions, an interesting question is whether suprathreshold (perceptual) differences are consistent with distances between stimuli on the discrimination scale. To test this, we collected discrimination data, and data based on the comparison of perceptual differences. The stimuli were jittered square lattices of dots, covering the range from total disorder (Poisson) to perfect order (square lattice), roughly equally spaced on the discrimination scale. Observers picked the most ordered pattern from a pair, and the pair of patterns with the greatest difference in order from two pairs. Although the judgments of perceptual difference were found to be consistent with an interval scale, like the discrimination judgments, no common interval scale could predict both sets of data. In particular, the midpattern of the perceptual scale is 11 jnds away from the ordered end, and 5 jnds from the disordered end, of the discrimination scale.**

Transforming a scale *X* with a linear transformation *aX + b*, where *a*, *b* are real constants with *a* > 0, does not distort the interval character, since the sign and the relative size of differences are preserved. We therefore transformed the output of the geometric measure so that certain significant patterns are anchored to memorable values. We called the resulting scale an absolute interval scale (*a*-scale) for the measurement of order. On this scale, the anchored values 0 and 10 correspond to total randomness (Poisson point patterns) and a perfect Bravais lattice, respectively, and each unit corresponds roughly to a jnd. We demonstrated its applicability by identifying two distinct processes in the pattern formation of the *Drosophila* bristle cells during development.
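The anchoring step is a plain linear rescaling. As a minimal sketch (in Python rather than the MATLAB used in the study), assuming hypothetical raw outputs of the geometric measure for the two anchor patterns, the transform *aX + b* could be computed as:

```python
# Sketch of the anchoring transform behind the a-scale.
# The raw measure values below are hypothetical; only the anchoring logic follows the text.

def anchor(raw, raw_poisson, raw_lattice, lo=0.0, hi=10.0):
    """Linearly map raw measure values (aX + b with a > 0) so that the Poisson
    anchor goes to `lo` and the perfect-lattice anchor goes to `hi`."""
    a = (hi - lo) / (raw_lattice - raw_poisson)
    b = lo - a * raw_poisson
    return a * raw + b

# With made-up raw measure outputs 0.1 (Poisson) and 0.9 (perfect lattice):
print(anchor(0.55, raw_poisson=0.1, raw_lattice=0.9))  # roughly midway on the 0-10 scale
```

Because the map is linear with *a* > 0, the sign and relative size of differences between patterns are unchanged, which is what preserves the interval character.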

The *a*-scale algorithm predicts the perceptual order of patterns on a discrimination-based interval scale.

Using the *a*-scale values, we have determined 21 jitter levels that on average are uniformly spaced on the *a*-scale. We then selected 21 patterns that have close to the mean *a*-scale value for their level of jitter. Figure 2a shows the boxplot of the *a*-scale order values for the generated point patterns at varying jitter levels. A set of 100 patterns was generated at each jitter level. The spread in *a*-scale values is larger for higher jitter levels. Figure 2b shows the *a*-scale values and jitter levels of the selected patterns. In all figures with jitter level on the abscissa, the axis has been inverted so that scale values corresponding to high order are on the right-hand side. The patterns at the highest jitter level (disordered end) were generated from a Poisson process, which is equivalent to applying an infinite amount of jitter to a square lattice. We additionally excluded patterns that contained points so close that they would overlap when displayed as dots; this happened with increasing frequency for larger amounts of jitter.
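The stimulus construction described above can be sketched as follows; the Gaussian form of the jitter and the grid size are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for reproducibility

def jittered_lattice(n=10, jitter=0.2):
    """n x n unit-spaced square lattice with i.i.d. Gaussian jitter of the given
    standard deviation added to each point; jitter=0 gives the perfect lattice."""
    grid = np.stack(np.meshgrid(np.arange(n), np.arange(n)), axis=-1)
    pts = grid.reshape(-1, 2).astype(float)
    return pts + rng.normal(0.0, jitter, pts.shape)

def min_spacing(pts):
    """Smallest pairwise distance; a check like this can implement the exclusion
    of patterns whose dots would overlap when displayed."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

pts = jittered_lattice(10, 0.15)
```

Rejecting a pattern whenever `min_spacing(pts)` falls below the displayed dot diameter reproduces the exclusion step, and such rejections become more frequent as the jitter grows.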

(*M* = 25.4, *SD* = 6.0 years). Our research adhered to the tenets of the Declaration of Helsinki for the protection of human subjects.

*r* = 4.0 cm, on a gray background. Participants viewed the patterns under comfortable room illumination. In figures in this article, the size of the dots in the patterns has been increased for visibility in reproduction. Presentation of stimuli and recording of responses were controlled using the MATLAB Psychtoolbox (Brainard, 1997). In both tasks participants were given unlimited time to respond for each judgment.

Each stimulus *S*_{i} is assumed to have a true value *M*_{i} on a scale, and each separate perception *ψ*_{i} of it is a noisy realization of the true value. When a magnitude comparison between two stimuli *S*_{i}, *S*_{j} takes place, the two noisy realizations are compared and the observer reports which one is larger (or smaller). Assuming the noise for each perception is identically distributed and independent, then for magnitude comparison there exists a monotonic preference function *P*: ℜ → [0, 1], which maps the signed difference between the true values, Δ*M* = *M*_{i} − *M*_{j}, to the probability that the one will be preferred to the other. This preference function is the cumulative distribution function of the realization noise distribution convolved with itself. When the noise distributions of the set have sufficient overlap, the preference rates will not all be 0 or 1, and fitting a model to the preference rate data allows the interval, not just ordinal, structure of the true values of the set to be estimated. If the noise is assumed normally distributed (Thurstone Model Case V), the preference function has the form of a cumulative Gaussian distribution function (Thurstone, 1927), whereas if, for example, the noise is assumed Gumbel-distributed (Bradley–Terry Model), then the preference function has a logistic form (Bradley & Terry, 1952; David, 1988). In this article we will only consider Gaussian noise. However, this method of scaling is relatively robust to noise distribution assumptions (Stern, 1992).
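A minimal sketch of this scaling approach, under the equal-variance Gaussian (Thurstone Case V) assumption and with made-up preference counts rather than the study's data, might look like:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: wins[i, j] = trials on which stimulus i was judged more
# ordered than stimulus j (10 trials per pair; illustrative counts only).
wins = np.array([[0, 2, 1],
                 [8, 0, 3],
                 [9, 7, 0]])

def negloglik(free):
    M = np.concatenate([[0.0], free])      # pin one value: the interval origin is arbitrary
    p = norm.cdf(M[:, None] - M[None, :])  # preference function: cumulative Gaussian of dM
    off = ~np.eye(len(M), dtype=bool)
    return -(wins * np.log(p))[off].sum()  # binomial log-likelihood (up to a constant)

fit = minimize(negloglik, x0=np.zeros(len(wins) - 1))
scale = np.concatenate([[0.0], fit.x])     # estimated true values, up to unit and origin
```

The recovered values are defined only up to a linear transformation, which is exactly the interval-scale property discussed above.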

To account for lapses, the preference probability is modeled as *λ* + (1 − 2*λ*)*P*(Δ*M*), where *λ*, positive but small, is the lapse rate parameter (Wichmann & Hill, 2001a). The model is parameterized by (a) the unknown true values of each stimulus, (b) the lapse rate parameter, and (c) any additional parameters used to vary the form of *P*. We used gradient descent for model fitting, with multiple random starts to check for stability. The equal-variance Gaussian noise model we used required no additional parameters.
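The lapse-rate correction can be illustrated directly; the value `lam = 0.02` is an arbitrary example, not a fitted estimate:

```python
from scipy.stats import norm

def pref_prob(dM, lam=0.02):
    """Preference probability with lapses: a small fraction of responses are
    stimulus-independent errors, squeezing rates into [lam, 1 - lam]."""
    return lam + (1.0 - 2.0 * lam) * norm.cdf(dM)

# Even a huge true difference saturates at 1 - lam rather than at 1:
print(pref_prob(10.0))  # saturates near 1 - lam
```

Without this correction, a single lapse on an easy trial would pull the fitted scale values toward implausibly large spacings.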

The patterns are numbered in order of their *a*-scale values, with 1 corresponding to the least ordered and 21 to the most ordered.

This provides a test of the *a*-scale computation of order for jittered square lattice patterns, which is a different population than that used to establish the *a*-scale. Patterns 5 and 7 in both graphs appear to violate the ordinal scale. This is not surprising for two reasons. First, our *a*-scale has limited accuracy in predicting order, which is comparable to the patterns' dense spacing. Second, we have collected a finite number of responses, and thus the order estimates are uncertain. The bar size in Figure 6 does not allow us to conclude whether the observed violations are real (i.e., whether they correspond to significant perceptual differences). These violations do not affect the analysis, since we do not rely at any point on the initial ranking of the patterns of Set A. The *a*-scale has been employed only to achieve approximate equal spacing of the patterns on the discrimination scale.

Consider a set of stimuli *S*_{1}, *S*_{2}, …, *S*_{N}, numbered in such a way that the physical parameter, *φ*_{i}, related to each stimulus is ranked as *φ*_{1} < *φ*_{2} < … < *φ*_{N} (here *N* = 11). We assume that each stimulus, *S*_{i}, in the set evokes a perceptual response for the degree of order, which can be numerically represented as *ψ*_{i} = *M*_{i} + *N*(0, *σ*), with *M*_{i} being the true value of the attribute. We assume that when an observer compares the perceptual difference between pairs (*S*_{i}, *S*_{j}) and (*S*_{k}, *S*_{l}), they respond on the basis of the sign of |*ψ*_{j} − *ψ*_{i}| − |*ψ*_{l} − *ψ*_{k}|. If the pair differences |*M*_{j} − *M*_{i}| and |*M*_{l} − *M*_{k}| are always sufficiently larger than the noise level, then observers will never make an error about which pattern of a pair is the more ordered, and the response probability arises from a link function depending on (*M*_{j} − *M*_{i}) − (*M*_{l} − *M*_{k}). The link function is the cumulative distribution function of a Gaussian with variance 4 times the variance of the realization noise. We are justified in using this link function for our data because of the spacing of patterns in Set B used for the difference task. Within the discrimination data, pairs of patterns from Set B are correctly ordered in 96% of trials. In this article we only consider Gaussian noise when examining the magnitude difference comparison scaling. However, Maloney and Yang (2003) showed, with the use of simulations, that the resulting scale is robust with respect to the distributional assumptions for the noise.
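The decision rule and its link function can be checked against each other in simulation; the scale values and the noise level below are arbitrary choices, not the study's estimates:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
SIGMA = 1.0  # realization-noise standard deviation (arbitrary unit)

def simulate_choice(Mi, Mj, Mk, Ml):
    """One difference-comparison trial: draw noisy realizations psi ~ N(M, SIGMA)
    and report whether pair (i, j) appears to differ more than pair (k, l)."""
    psi = rng.normal([Mi, Mj, Mk, Ml], SIGMA)
    return abs(psi[1] - psi[0]) - abs(psi[3] - psi[2]) > 0

def link(Mi, Mj, Mk, Ml):
    """Choice probability when within-pair order is never confused: the CDF of a
    Gaussian with variance 4*SIGMA**2 applied to (Mj - Mi) - (Ml - Mk)."""
    return norm.cdf(((Mj - Mi) - (Ml - Mk)) / (2.0 * SIGMA))
```

When the within-pair differences are several times the noise level, as for the Set B spacing, the empirical choice rate from `simulate_choice` converges to `link`, which is why dropping the absolute values is justified.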

The model parameters to be estimated are *M*_{1}, *M*_{2}, …, *M*_{N} and *σ*.

We fitted a model with a separate sensitivity (*σ*) for each of the two sessions completed by each observer (a noticeable variation in sensitivity per observer has also been found in other studies where the same difference scaling method was applied; e.g., Devinck & Knoblauch, 2012). For this model the empirical deviance was 7,051, inside the 95% interval of acceptable deviances [6,898, 7,213]. Therefore, also for the case of the magnitude difference comparison task, a common scale for the stimuli can fit the collected data for all participants, when independent sensitivities are allowed per session.

The statistic *W*, which is twice the difference of the log-likelihoods of the two models in comparison, asymptotically follows a χ² distribution, where the degrees of freedom *df* is the difference in the number of parameters between the two models (Wilks, 1938; in our case the two models differ by 1 degree of freedom). Although this is asymptotically true, it is not easy to estimate the number of observations necessary to provide a good such approximation (Wichmann & Hill, 2001a). We can, however, simulate the distribution of the statistic *W*. By accepting the linear model that has been estimated with the ML method, we generated a large number of simulated data sets and refitted both the linear and the nonlinear models. For 500 repetitions we computed the *W* statistic and examined how well the simulated distribution is approximated by the χ² distribution.

The first model comparison gave *W*_{1} = 386.68; getting a value of *W* much greater than the 95% cutoff of the χ² distribution means the linear model must be rejected. The second comparison gave *W*_{2} = 152.40, a number also much greater than the 95% cutoff of the χ² distribution. Several factors could explain the large values of *W*_{1} and *W*_{2}. A major reason is that the proposed functional forms are not necessarily the best among all possible. The two models offer two different interpretations: According to (a), we can assume that observers use different visual mechanisms or employ different criteria when judging sub- and suprathreshold differences. Both scales are consistent with an interval scale structure but are different from each other. Alternatively, assuming a common mechanism and a common transducer function, the internal noise has to vary across the perceptual dimension. For the constantly increasing noise model we examined, (b), there is approximately a twofold increase (1.85) in the standard deviation of the noise distribution in the direction towards disorder. Both interpretations agree qualitatively in their consequences for how jnd width varies with respect to the difference scale.
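The parametric-bootstrap procedure for *W* generalizes beyond our particular models. A toy sketch with a one-parameter Bernoulli model (not the paper's scaling models) shows the recipe: simulate from the fitted reduced model, refit both models, collect *W*, and compare with the χ² cutoff:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)  # seed chosen arbitrarily for reproducibility

# Toy nested comparison: the reduced model fixes a Bernoulli rate at 0.5, the
# full model fits it freely (one extra parameter, so df = 1 for Wilks' theorem).
def W_stat(y):
    n, k = len(y), int(y.sum())
    ll_full = k * np.log(k / n) + (n - k) * np.log(1 - k / n) if 0 < k < n else 0.0
    ll_reduced = n * np.log(0.5)
    return 2.0 * (ll_full - ll_reduced)

# Parametric bootstrap, as in the text: simulate data from the accepted reduced
# model, refit, and collect the W statistic over many repetitions.
Ws = np.array([W_stat(rng.random(200) < 0.5) for _ in range(500)])
frac = (Ws < chi2.ppf(0.95, df=1)).mean()
print(frac)  # close to 0.95 when the chi-square approximation is good
```

If the simulated `Ws` deviate visibly from the χ²(1) distribution, the asymptotic cutoff is untrustworthy and the simulated quantiles should be used instead.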

We conclude that the *a*-scale of order based on discrimination judgments is not compatible with perceptual judgments of large differences. The two views cannot be reconciled in one interval scale for the measurement of order in point patterns. Although there is no ordinal disagreement, for practical purposes in the analysis of evolving systems, differences, and therefore rates of change, are not consistent across the whole range for both types of judgment. In our experiment there is a smooth relationship between the two scales. However, the actual form may depend on the particular class of patterns.

*Perception & Psychophysics*, 49(4), 303–314.

*Psychological Review*, 61(3), 183–193.

*Spatial Vision*, 8(4), 515–530.

*Biometrika*, 39(3/4), 324–345, doi:10.2307/2334029.

*Spatial Vision*, 10(4), 433–436, doi:10.1163/156856897X00357.

Calibrating MS-SSIM for compression distortions using MLDS. In *2011 18th IEEE International Conference on Image Processing (ICIP)*. New York: IEEE.

*IEEE Transactions on Image Processing*, 21(12), 4682–4694, doi:10.1109/TIP.2012.2210723.

*Journal of the Optical Society of America, A: Optics, Image Science, and Vision*, 24(11), 3418–3426, doi:10.1364/JOSAA.24.003418.

*Physica Status Solidi (B)*, 250(5), 949–956, doi:10.1002/pssb.201248553.

*Journal of the Royal Society Interface*, 8(59), 787–798, doi:10.1098/rsif.2010.0488.

*The visual neurosciences* (pp. 463–477). Cambridge, MA: MIT Press.

*Proceedings of the National Academy of Sciences*, 108(49), 19552–19557, doi:10.1073/pnas.1113195108.

*The method of paired comparisons*. London: Oxford University Press.

*Izvestia Akademii Nauk SSSR, Otdelenie Matematicheskikh I Estestvennykh Nauk*, 7, 793–800.

*Behavior Research Methods*, 44(2), 439–446, doi:10.3758/s13428-011-0167-8.

*Journal of the Optical Society of America, A: Optics, Image Science, and Vision*, 31(4), A1–A6, doi:10.1364/JOSAA.31.0000A1.

*Physical Review E*, 86(4-1), 041505.

*The Annals of Statistics*, 7(1), 1–26, doi:10.1214/aos/1176344552.

*Journal of the Optical Society of America, A*, 27(5), 1232–1244, doi:10.1364/JOSAA.27.001232.

*Psychological Science*, 22(6), 812–820, doi:10.1177/0956797611408734.

*Spatial Vision*, 22(4), 273–300, doi:10.1163/156856809788746309.

*The American Journal of Psychology*, 100(2), 193–203.

*Vision Research*, 51(13), 1397–1430, doi:10.1016/j.visres.2011.02.007.

*Cytometry, Part A: The Journal of the International Society for Analytical Cytology*, 60(1), 81–89, doi:10.1002/cyto.a.20034.

*Measurement theory and practice: The world through quantification*. London: Oxford University Press.

*Behavior Research Methods, Instruments, & Computers*, 18(6), 623–632, doi:10.3758/BF03201438.

*Journal of Biomedical Optics*, 17(2), 26007, doi:10.1117/1.JBO.17.2.026007.

*Cognition & Emotion*, 27(7), 1247–1275, doi:10.1080/02699931.2013.782267.

*Journal of Vision*, 9(8):362, doi:10.1167/9.8.362. [Abstract]

*Psychophysics: A practical introduction*. New York: Academic Press.

*Journal of Statistical Software*, 25(2), 1–26.

*Modeling psychophysical data in R*. New York: Springer. Retrieved from http://www.springer.com/gp/book/9781461444749

*Principles of Gestalt psychology*. London: Lund Humphries.

*Foundations of measurement, Vol. I: Additive and polynomial representations*. New York: Academic Press.

*Psychological Science*, 21(9), 1208–1214, doi:10.1177/0956797610379861.

*Psychological Review*, 101(2), 271–277, doi:10.1037/0033-295X.101.2.271.

*Stevens' Handbook of Experimental Psychology*, 1, 3–74.

*Vision Research*, in press, doi:10.1016/j.visres.2015.06.004.

*Nature*, 484(7395), 542–545, doi:10.1038/nature10984.

*Signal Processing: Image Communication*, 27(8), 788–799, doi:10.1016/j.image.2012.01.004.

*Proceedings of the Royal Society, B: Biological Sciences*, 279(1739), 2754–2760, doi:10.1098/rspb.2011.2645.

A role for Gestalt principles of organisation in shaping preferences for non-natural spatial and dynamic patterns. In *ECVP 2013* (Vol. 42). Bremen, Germany: Perception.

*Vision Research*, 41(20), 2669–2676, doi:10.1016/S0042-6989(01)00105-5.

*Journal of the Royal Society Interface*, 11(99), 20140342, doi:10.1098/rsif.2014.0342.

*Vision Research*, 47(7), 974–989, doi:10.1016/j.visres.2006.12.010.

*Physical Review Letters*, 107(4), 045501.

*Physical Review B*, 28(2), 784–805, doi:10.1103/PhysRevB.28.784.

*Mathematical Social Sciences*, 23(1), 103–117, doi:10.1016/0165-4896(92)90040-C.

*Science*, 103(2684), 677–680, doi:10.1126/science.103.2684.677.

*Analytical Cellular Pathology: The Journal of the European Society for Analytical Cellular Pathology*, 21(2), 71–86.

*Psychological Review*, 34(4), 273–286, doi:10.1037/h0070288.

*Theory and methods of scaling*. New York: Wiley.

*Physical Review E*, 62(1), 993–1001, doi:10.1103/PhysRevE.62.993.

*I-Perception*, 4(1), 36–52, doi:10.1068/i0515.

*Handbook of cognition* (pp. 3–47). London: Sage.

*Physiological Reviews*, 71(2), 447–480.

*Vision Research*, 26(10), 1677–1691.

*Vision Research*, 32(8), 1493–1507.

*Perception & Psychophysics*, 63(8), 1293–1313, doi:10.3758/BF03194544.

*Perception & Psychophysics*, 63(8), 1314–1329, doi:10.3758/BF03194545.

*The Annals of Mathematical Statistics*, 9(1), 60–62, doi:10.1214/aoms/1177732360.

*Cerebral Cortex*, 18(1), 38–45, doi:10.1093/cercor/bhm029.