**We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.**

^{49}C_{2}). The number of part–part relations, however, is only 21 (^{7}C_{2}). This enabled us to quantitatively address an enduring question in object vision: Can distances between objects be understood in terms of their parts?

^{49}C_{2} = 1,176, where ^{49}C_{2} denotes the number of possible distinct pairs of 49 objects) can be explained using a relatively small number of part relations (^{7}C_{2} = 21).

^{49}C_{2} = 1,176 perceived distances measured using visual search. The part summation model consisted of ^{7}C_{2}, i.e., 21 part relations at corresponding and opposite locations and an equal number of within-object part relations. Together with a constant term, the model had 64 free parameters. We used similar models for smaller numbers of parts. In addition to the linear fit, we also transformed model predictions using a sigmoid function (the integral of a Gaussian, with three parameters: mean, variance, and peak level) to account for the fact that the measured dissimilarity (1/RT) cannot increase indefinitely due to motor constraints on RT. In general, the sigmoid nonlinearity yielded only subtle improvements in the quality of fit (*r* = 0.86 for the linear model; *r* = 0.88 for the sigmoid model in Experiment 1).
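
The sigmoid described here is a cumulative Gaussian scaled to a peak level. A minimal sketch (parameter values are illustrative, not fitted):

```python
import numpy as np
from math import erf, sqrt

def sigmoid_transform(pred, mu, sigma, peak):
    # Integral of a Gaussian (cumulative normal), scaled by `peak`:
    # large linear predictions saturate near `peak`, mimicking the floor
    # that motor constraints place on RT (and hence a ceiling on 1/RT).
    z = (np.asarray(pred, dtype=float) - mu) / sigma
    phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    return peak * phi

# Monotonic and saturating: predictions map into the interval (0, peak).
out = sigmoid_transform([0.0, 1.0, 10.0], mu=1.0, sigma=0.5, peak=2.0)
```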

*r* = 0.85 ± 0.02, *p* < 0.00005). Thus, the striking agreement between the model and the data did not arise from overfitting.

*t* test between search times with one object as target vs. the other as target, criterion of *p* = 0.05). This yielded only five object pairs (of 1,176) with a significant asymmetry in both groups, which rules out any further systematic analysis. We nonetheless assessed whether model performance was affected by search asymmetries. To this end, for each object pair, we identified the target with the smaller average search time (i.e., the easy target) and took the dissimilarity for that pair to be the reciprocal of the search time. We then asked whether model fits using the easier searches across object pairs differed from model fits based on the harder searches. We obtained equally striking correlations in both cases (model-data correlations: *r* = 0.83 for the easy target; *r* = 0.85 for the hard target, *p* < 0.00005 in both cases). Thus, at least in our data, search asymmetries do not affect model fits.

*z* statistic = 12.34, *p* < 0.00005, ranksum test).

^{49}C_{2}) pairwise distances between the 49 objects. To assess whether search performance was consistent across subjects, we repeatedly split the subjects randomly into two groups and calculated the average “split-half” correlation. A large split-half correlation implies that subjects were highly consistent in their search performance. Its statistical significance (*p* value) represents the probability of observing a correlation at least as large as that observed given the null hypothesis that the two subject groups are uncorrelated. This analysis yielded a highly consistent correlation (mean ± *SD*: *r* = 0.80 ± 0.005, *p* < 0.00005; Figure 1D). The high consistency in search RTs indicates a systematic and consistent perceptual space across subjects.

*r* = −0.31, *p* < 0.00005). In other words, when subjects were faster, they were also more accurate.

*p* < 0.0005 for the main effect of pair type in an ANOVA with subject and pair type as factors).

_{AC}, d_{BD}, d_{AD}, d_{BC}, d_{AB}, and d_{CD}. Further, perceived distance might be driven differently by part relations at corresponding locations (AC & BD), by part relations at opposite locations (AD & BC), and by part relations within the object (AB & CD) (Figure 2A). Thus, the net dissimilarity between objects AB and CD can be written as *d*(AB, CD) = *d*_{AC} + *d*_{BD} + *x*_{AD} + *x*_{BC} + *w*_{AB} + *w*_{CD}, where *d*_{AC} and *d*_{BD} represent the dissimilarities between parts A & C and B & D when they are at corresponding locations in the two objects, *x*_{AD} and *x*_{BC} are the dissimilarities between parts A & D and B & C when they are at opposite locations, and *w*_{AB} and *w*_{CD} likewise are the dissimilarities between these parts when they occur within the same object. The working of the model becomes clearer on writing down the dissimilarity between another pair of objects, AB and CE, where only one part has changed.

*d*_{AC}, *x*_{BC}, *w*_{AB}), whereas other terms are present in one equation but not the other. For example, the coefficient of the term *d*_{BE} is zero in the first equation but one in the second. One can then extrapolate this observation to the equations corresponding to the 1,176 dissimilarity measurements: Each term occurs frequently enough by itself for its contribution to be estimated independently of the others. Note that the part relations at corresponding locations (i.e., terms of the type *d*_{AB}, *d*_{AC}, etc.) are ^{7}C_{2} = 21 in number. In all, there are 21 part relations each for corresponding, opposite, and within-object locations, which together with the constant term amount to 64 unknown terms across 1,176 equations. The resulting set of linear equations can be written as the matrix equation y = Xb, where y is a vector of 1,176 dissimilarities, b is a vector containing the 64 unknown part relations, and X is a 1,176 × 64 matrix whose rows contain 0s and 1s representing whether or not a particular part pair contributes to the object pair corresponding to that row. This equation is then solved using standard linear regression to estimate the unknown vector b given y and X. Note that the model contains separate, independent terms for each type of part relation (corresponding, opposite, within) and therefore makes no assumption about how these terms may be related.
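
As a concrete sketch, the estimation step can be reproduced at toy scale (hypothetical part-relation values and a small all-combinations design matrix stand in for the experiment's 1,176 × 64 system):

```python
import itertools
import numpy as np

# Toy version of solving y = Xb: rows of X hold 0/1 indicators marking
# which part-relation terms contribute to each object pair, plus a
# constant column. The b_true values below are hypothetical, chosen
# only to illustrate recovery of the weights by linear regression.
patterns = np.array(list(itertools.product([0.0, 1.0], repeat=6)))
X = np.hstack([patterns, np.ones((len(patterns), 1))])   # 64 x 7 design
b_true = np.array([1.5, 0.8, -0.4, 0.6, 1.1, -0.2, 0.3])
y = X @ b_true                                  # noise-free dissimilarities
b_est, *_ = np.linalg.lstsq(X, y, rcond=None)   # standard linear regression
```

With noise-free data and a full-rank design, the estimated weights match the generating weights exactly; in the experiment, each term occurs often enough across the 1,176 equations for the same recovery to be stable under noise.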

*r* = 0.88, *F*(63, 1113) = 49.23, *p* < 0.001, *r*^{2} = 0.77; Figure 2B) and outperformed both simpler models (e.g., with part relations of only one kind) and models based on RT alone (see below). The performance of this model is even better than the split-half correlation (*r* = 0.80) described above; this is because the split-half correlation estimates the consistency of half the data, whereas the model is fit to the full data set, which is more consistent. To estimate the true consistency of the full data set, we applied a standard correction, the Spearman-Brown formula, which estimates the correlation between two full data sets based on the correlation obtained between *n*-way splits of the data. For a two-way split, i.e., the split-half correlation, the Spearman-Brown corrected correlation is *r*_{c} = 2*r*/(*r* + 1), where *r* is the split-half correlation. Applying this correction to the split-half correlation yields *r*_{c} = 0.88. Here and in all subsequent experiments, we have reported this corrected split-half correlation as a measure of data consistency. It can be seen here that the model-data correlation (*r* = 0.88) is equal to the corrected split-half correlation (*r*_{c} = 0.88), implying that the part summation model explains search dissimilarities as well as can be expected given the consistency of the data. We conclude that perceived distances between whole objects can be explained as a *linear sum* of part relations.
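
The correction can be stated in a couple of lines of code (here applied to the split-half value of *r* = 0.80 reported above; exact decimals will differ slightly from the rounded values in the text):

```python
def spearman_brown(r_half):
    # Two-way Spearman-Brown correction: predicted correlation between
    # two full data sets, given the correlation between two half data sets.
    return 2.0 * r_half / (r_half + 1.0)

r_c = spearman_brown(0.80)   # 1.6 / 1.8 = 0.888...
```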

*r* = 0.9, *p* < 0.001) and within objects (*r* = −0.63, *p* = 0.0023), suggesting that there is a common set of underlying part relations that are modulated by object-relative location (Figure 2C). Second, parts at corresponding locations exert a stronger influence than parts at opposite locations (Figure 2C). Third, part relations within an object make a negative contribution, which means that objects with similar parts tend to become distinctive (Figure 2C). This negative weight is analogous to the finding that search becomes easy when distracters are similar (Duncan & Humphreys, 1989; Vighneshvel & Arun, 2013). To visualize the part relationships that drive the overall object dissimilarities, we performed multidimensional scaling on the estimated corresponding-part dissimilarities. The resulting 2-D embedding of the part relationships is shown in Figure 2D. It can be seen that parts estimated as dissimilar in Figure 2D yield objects containing those parts that are also dissimilar (Figure 1E).
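
Classical (Torgerson) multidimensional scaling of the kind used for these visualizations can be sketched directly from a dissimilarity matrix (toy one-dimensional example; the experiments used the fitted part dissimilarities instead):

```python
import numpy as np

def classical_mds(D, dims=2):
    # Classical MDS: double-center the squared dissimilarity matrix and
    # embed via the top eigenvectors, so that Euclidean distances in the
    # embedding approximate the entries of D.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Points on a line are recovered exactly (up to sign and shift).
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
emb = classical_mds(D, dims=1)
```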

*r* = 0.60, *p* = 0.005; Figure 2B) with no systematic deviation. The lower correlation of the model could be due either to the relatively few data points or to subjects being more variable in their responses for mirror pairs. We found the latter to be true (average split-half correlation of dissimilarities between mirror object pairs across two groups of subjects: *r* = 0.71, *p* = 0.001). What makes the model explain mirror confusion? Consider what happens for a mirror pair AB versus BA. The net dissimilarity can be written as *d*(AB, BA) = *d*_{AB} + *d*_{AB} + *x*_{AA} + *x*_{BB} + *w*_{AB} + *w*_{AB}. But the terms *x*_{AA} and *x*_{BB} are taken to be zero in the model. This reduces the net distance between the objects, resulting in mirror confusion.
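
The arithmetic can be made explicit with hypothetical part-relation values (these numbers are illustrative, not the fitted estimates; within-object terms are negative, as found above):

```python
# Hypothetical part relations for parts A, B, C (illustrative values only).
d = {('A', 'B'): 1.0, ('A', 'C'): 1.2, ('B', 'C'): 1.1}      # corresponding
x = {('A', 'B'): 0.5, ('A', 'C'): 0.6, ('B', 'C'): 0.55}     # opposite
w = {('A', 'B'): -0.3, ('A', 'C'): -0.2, ('B', 'C'): -0.25}  # within (negative)

def term(table, p, q):
    # Relations between identical parts (x_AA, x_BB, ...) are zero in the model.
    return 0.0 if p == q else table[tuple(sorted((p, q)))]

def dissimilarity(obj1, obj2):
    # d(PQ, RS) = d_PR + d_QS + x_PS + x_QR + w_PQ + w_RS
    (p, q), (r, s) = obj1, obj2
    return (term(d, p, r) + term(d, q, s) +
            term(x, p, s) + term(x, q, r) +
            term(w, p, q) + term(w, r, s))

mirror = dissimilarity('AB', 'BA')      # x_AA and x_BB drop out
one_change = dissimilarity('AB', 'CB')  # one part replaced
```

Even though both parts move in the mirror pair, its predicted distance (1.4 here) falls below that of a single part change (1.7 here) because the opposite-location terms vanish; this is the model's account of mirror confusion.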

^{7}C_{2}) observed distances between symmetric objects. Model predictions were strongly correlated with the observed distances (*r* = 0.78, *p* < 0.001; Figure 2B). This correlation was close to the consistency between subjects for these symmetric object pairs (average split-half correlation: *r* = 0.76, *p* = 0.0063). Despite these strong correlations, the model systematically underestimated the observed distances by a constant offset (mean slope: 1.07 with 95% confidence interval [0.64, 1.5]; intercept: 0.29 with 95% confidence interval [0.03, 0.55]). The constant offset obtained for symmetric object pairs was present equally strongly in both horizontally and vertically oriented objects (Experiment 3). This constant offset suggests that symmetry exerts an additive influence on perceived distances, independent of the part relations in the model.

*N* is the number of observations, SS is the sum of squared error between the model and data, and *K* is the number of free parameters in the model. In general, the more negative the AICc, the better the model; for ease of comparison, we considered the absolute value of the AICc, so that a larger value indicates a better quality of fit. Our results are summarized in Table 1. Standard deviations for the AICc were calculated by generating bootstrap-resampled data, fitting the model each time, and calculating the AICc. The AICc values for the 1/RT model were significantly larger than the AICc values for the RT model, both when explaining RT and when explaining 1/RT values (*p* < 0.00005, paired *t* test across 1,176 bootstrap-derived estimates of AICc). We conclude that 1/RT models outperform RT models in explaining search data.
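
For reference, the corrected AIC for a least-squares fit takes the standard form below (the sums of squared error here are made up for illustration; the experiment's values appear in Table 1):

```python
import numpy as np

def aicc(ss, n, k):
    # Corrected Akaike information criterion for least-squares fits:
    # AICc = N*ln(SS/N) + 2K + 2K(K+1)/(N-K-1). More negative = better.
    aic = n * np.log(ss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Same data size and parameter count, different residual error (toy values):
better = aicc(ss=10.0, n=1176, k=64)
worse = aicc(ss=40.0, n=1176, k=64)
```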

*F* test. The null hypothesis in the partial *F* test is that the full and reduced models are equivalent. Here too, we found that the full model was significantly better in terms of the quality of fit as assessed using the partial *F* test: *F*(42, 1113) = 40.08 for the full model versus corresponding part terms only; *F*(42, 1113) = 52.14 for full versus opposite terms only; *F*(42, 1113) = 72.2 for full versus within terms only; all *p*s < 0.00005.
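
The partial *F* statistic underlying these comparisons is computed from the error sums of squares of the nested models (the numbers below are illustrative, not the experiment's):

```python
def partial_f(ss_reduced, ss_full, df_extra, df_full):
    # Partial F test for nested least-squares models:
    # F = ((SS_reduced - SS_full) / df_extra) / (SS_full / df_full),
    # where df_extra counts the parameters dropped from the full model
    # and df_full is the residual degrees of freedom of the full model.
    return ((ss_reduced - ss_full) / df_extra) / (ss_full / df_full)

# Toy numbers with the degrees of freedom used above, i.e., F(42, 1113):
F = partial_f(ss_reduced=50.0, ss_full=30.0, df_extra=42, df_full=1113)
```

A large F (compared against the F distribution with (42, 1113) degrees of freedom) rejects the null hypothesis that the reduced model is equivalent to the full model.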

*N*_{AC,BD} represents a term that is set to one when parts AC and BD are present, and so on. The total number of such nonlinear terms is therefore ^{21}C_{2}, which is 210. The nonlinear model predicted the observed data only slightly better than the linear model (*r* = 0.86 compared to *r* = 0.85 for the linear model, *p* < 0.00005) but at the cost of many additional parameters. To assess whether this improvement was greater than expected given the additional degrees of freedom in the nonlinear model, we performed a partial *F* test taking the nonlinear model as the full model and the linear model as the reduced model. The null hypothesis in the partial *F* test is that the reduced model is equivalent to the full model. This test revealed no statistically significant difference between the two models (*F*(210, 900) = 0.0151, *p* = 1, partial *F* test), implying that the nonlinear model did not perform significantly better than the linear model. We conclude that the part summation model is a linear sum of part relations.

*SD*]: *r* = 0.86 ± 0.03, *p* < 0.00005). Importantly, these ratings were strongly correlated with the search dissimilarities measured for the same pairs in Experiment 1 (Figure 3A; *r* = 0.81, *p* < 0.00005), even though the two data sets were collected from different groups of subjects. However, dissimilarity ratings tended to saturate at the extreme ends of the available range (i.e., for ratings below three and above eight). In other words, when objects became extremely similar or dissimilar, subjects tended to use a single rating corresponding to the extreme end of the range (Figure 3A). In contrast, no such clustering was observed for search dissimilarities, although these can and do saturate when the target–distracter dissimilarity is very large (Arun, 2012). Whereas the search dissimilarities were normally distributed (*p* = 0.11, Lilliefors test for normality), the dissimilarity ratings differed significantly from a unimodal distribution (*p* < 0.00005, dip = 0.1, Hartigan's dip test).

*r* = 0.50, *p* < 0.00005). Interestingly, for object pairs in this middle range, subjects were more consistent in the visual search task (correlation between search distances of two subject groups: *r* = 0.63, *p* < 0.00005) than in the subjective rating task (correlation between ratings of two subject groups: *r* = 0.45, *p* < 0.005).

^{4}C_{2} = 6 parameters each) and asked whether the observed dissimilarity ratings could be explained using a weighted sum of the part relations. Model predictions were again strongly correlated with the data overall (*r* = 0.95, *F*(18, 102) = 29.84, *p* < 0.0005, *r*^{2} = 0.9; Figure 3B). Model predictions were also strongly correlated in the middle range of dissimilarities (*r* = 0.73, *p* < 0.00005 for the 75 object pairs with dissimilarity rated between three and eight). However, we saw only a weak effect of symmetry in the dissimilarity data; this could be because of the small number of symmetric objects in this experiment.

^{36}C_{2} = 630 pairs of objects in each set, with two repeats per condition. Trials containing horizontal objects were randomly interleaved with trials containing vertical objects. Subjects had to perform a total of 2,520 (2 sets × 630 conditions × 2 repeats) correct trials. All other details are the same as in Experiment 1.

*r* = 0.87 for the linear model, *r* = 0.88 for the nonlinear model, *p* = 0.12, *F*(105, 477) = 1.19 for a partial *F* test comparing the two models. This was true for the vertical object searches as well: *r* = 0.86 for the linear model, *r* = 0.87 for the nonlinear model, *p* = 0.063, *F*(105, 477) = 1.25 for a partial *F* test comparing the two models. We also confirmed that the linear part summation model was not overfitting the data by performing cross-validation as detailed in Experiment 1 (cross-validated model correlation: *r* = 0.84 ± 0.02 and *r* = 0.84 ± 0.03 for horizontal and vertical objects, respectively). We did not analyze search asymmetries, as only six of 1,260 object pairs showed a significant effect of asymmetry (4/630 and 2/630 pairs for horizontal and vertical objects, respectively).

*SD*]: *r* = 0.84 ± 0.01 and *r* = 0.83 ± 0.01). Search times across object pairs were strongly correlated between the horizontal and vertical orientations, suggesting that object distances are fundamentally unaltered by overall orientation (*r* = 0.80, *p* < 0.00005). However, horizontally oriented pairs were slightly harder in visual search than vertically oriented pairs (median RTs: 1,086 ms for horizontal, 992 ms for vertical; *p* < 0.00005, Wilcoxon signed rank test). Importantly, however, the part summation model produced excellent predictions for both horizontal and vertical objects (*r* = 0.87, *F*(45, 585) = 37.07, *r*^{2} = 0.76 for horizontal objects; *r* = 0.86, *F*(45, 585) = 33.72, *r*^{2} = 0.74 for vertical objects; *p* < 0.0005; Figure 4A, B). Distances between symmetric objects differed systematically from model predictions by a constant offset for both object orientations, with no obvious difference in the amount of offset (offsets: 0.34 for horizontal, 0.43 for vertical; *p* = 0.32, Wilcoxon ranksum test on 15 bootstrap-derived offset estimates; Figure 4A, B). Mirror pairs were harder in the horizontal orientation than in the vertical orientation (mean RT: 2.73 s for horizontal, 2.03 s for vertical; *t*(28) = 3.8, *p* < 0.00005, unpaired *t* test). This is in agreement with previous reports that mirror confusion is stronger about the vertical axis (Rollenhagen & Olson, 2000). Finally, we compared part relations for horizontal and vertical orientations to elucidate why distances were larger in the vertical orientation. Part relations at corresponding locations did not differ in magnitude between horizontally and vertically oriented objects. However, part relations at opposite locations were slightly weaker for vertical objects, and part relations at within-object locations were substantially weaker for vertical objects (Figure 4C). According to the model, then, vertical objects are more distinct because within-part relations are weaker. We conclude that part matching is not isotropic and occurs preferentially along the horizontal direction rather than the vertical direction.

^{6}C_{2} = 15 distances between six symmetric objects). To further confirm this negative result, we performed an additional experiment in which we took a larger number of symmetric objects (*n* = 12) and measured all 66 pairwise distances between these objects in visual search for seven subjects (four females, aged 20–30 years). We reasoned that any difference in the strength of symmetry between horizontal and vertical objects should manifest as a systematic difference in the observed dissimilarity. We observed no such difference: Distances in the two conditions were strongly correlated (*r* = 0.83, *p* < 0.00005) and did not differ in magnitude (median distances: 1.17 s^{−1} and 1.19 s^{−1} for horizontal and vertical; *p* = 0.79, Wilcoxon signed-rank test; Figure 4D). We conclude that the effect of symmetry does not depend on object orientation.

*r* = 0.86 ± 0.01). In addition to the linear model, we also tested a model with extra nonlinear terms. The linear model was not significantly different from the nonlinear model: *r* = 0.88 for the linear model and *r* = 0.89 for the nonlinear model, *p* = 0.092, *F*(210, 900) = 1.15 for a partial *F* test comparing the two models. For both models, we transformed the predicted dissimilarities using a sigmoid function to account for saturation in the search dissimilarities. We did not analyze the effect of search asymmetry on model performance, as only 12 of 1,176 pairs showed significant search asymmetry.

^{49}C_{2}) object pairs and were highly consistent in their performance (average corrected split-half correlation between dissimilarities across two random groups of subjects: *r* = 0.85 ± 0.01, *p* < 0.00005). We then fit the model to the observed dissimilarities as before and obtained striking fits (*r* = 0.88, *F*(63, 1113) = 53.52, *r*^{2} = 0.77, *p* < 0.00005; Figure 5A). Estimated part relations at corresponding locations were significantly correlated with relations at opposite locations (*r* = 0.97, *p* < 0.00005) and within objects (*r* = −0.62, *p* < 0.005), suggesting that there is a common set of underlying part relations modulated by location (Figure 5B). Part relations were visualized using multidimensional scaling (Figure 5C). These part relations were similar to those in Experiment 1 (*r* = 0.79 between corresponding-location terms, *r* = 0.85 for opposite locations, and *r* = 0.83 for within-object terms; *p* < 0.00005 in all cases).

*r* = 0.88, *p* < 0.00005) but were offset by a fixed amount. The slope of the best-fitting line between observed and predicted dissimilarities did not differ significantly from one (slope = 1.06 with [0.64, 1.49] as the 95% confidence interval), and the offset was significantly different from zero (offset = 0.28 with [0.013, 0.54] as the 95% confidence interval). This offset was slightly smaller than the offset observed for symmetric objects in Experiment 1 (0.28 vs. 0.3) but cannot be directly compared because the data come from two different groups of subjects. Nonetheless, to compare the offsets with this caveat, we calculated the difference between the observed dissimilarity and the model prediction for each of the 21 repeated/symmetric object pairs and compared them using an unpaired *t* test. This revealed a difference that approached significance (*t*(40) = 1.75, *p* = 0.087, unpaired *t* test).

^{216}C_{2} = 23,220) of possible object pairs.

^{6}C_{2} = 15 terms, and the complete model contained a total of 106 parameters (seven groups of 15 terms each and a constant term). The predicted dissimilarities from the linear part summation model were transformed using a sigmoid function to account for saturation in the search dissimilarities (1/RT). We also confirmed that the model was not overfitting using a cross-validated measure of performance (mean ± *SD* of cross-validated correlation: *r* = 0.78 ± 0.04). Further, only six of 700 pairs exhibited significant search asymmetry; hence, we did not explore the effect of search asymmetry on model performance in detail.

*SD*]: *r* = 0.83 ± 0.01, *p* < 0.00005). Upon fitting the observed dissimilarities using the part summation model, we obtained striking fits (*r* = 0.84, *F*(105, 595) = 12.69, *r*^{2} = 0.71, *p* < 0.00005; Figure 6A). The estimated part relations yielded several interesting insights. First, all groups of part relations were significantly correlated (median correlation between groups: *r* = 0.81, median *p* value: *p* = 0.000051), suggesting that there is a common set of part relations that is modulated by location. Second, the magnitude of the part relations estimated at the different locations varied systematically (Figure 6B): Part relations at corresponding locations were, as before, stronger than all other terms, and this difference approached significance for some comparisons (Figure 6B). Importantly, the magnitude of part relations for the far part was systematically smaller than for the near and medium parts, for both opposite and within-object location terms. We conclude that part matching is spatially tuned and decays with distance.

^{36}C_{2}) pairs of connected objects and 630 pairs of disconnected objects. The trials involving connected objects were randomly interleaved with trials involving disconnected objects. For searches involving disconnected objects, the spacing between items in the array (3°) was larger than the separation between the two parts (1°). This ensured that the two isolated parts still grouped together by spatial proximity cues.

*r* = 0.87 ± 0.03 and *r* = 0.85 ± 0.02 for connected and disconnected objects, respectively). For connected objects, the linear part summation model was not significantly different from a model with extra nonlinear terms: *r* = 0.88 for the linear model and *r* = 0.91 for the nonlinear model, *p* = 0.07, *F*(105, 477) = 1.24 for a partial *F* test comparing the two models. This was true for disconnected objects as well: *r* = 0.86 for the linear model and *r* = 0.89 for the nonlinear model, *p* = 0.31, *F*(105, 477) = 1.07 for a partial *F* test comparing the two models. Finally, very few searches (*n* = 5) exhibited a statistically significant asymmetry across both groups, so we did not analyze them separately.

*SD*]: *r* = 0.86 ± 0.01 for connected objects and *r* = 0.83 ± 0.01 for disconnected objects, *p* < 0.00005). To assess how dissimilarities change with stem deletion, we compared search dissimilarities for every pair of objects when the parts were connected versus when they were separated. This revealed a strong positive correlation (*r* = 0.7, *p* < 0.00005). However, searches involving connected objects were slightly easier than searches involving disconnected objects (average search times: 1,118 ms for connected, 1,382 ms for disconnected objects; *z* = 15.64, *p* < 0.00005, ranksum test). We then fit the model containing corresponding, within, and across terms to the dissimilarities for normal and stem-deleted object pairs. Model performance was equally good for connected objects (*r* = 0.88, *F*(45, 585) = 36.89, *r*^{2} = 0.77, *p* < 0.00005; Figure 7A) and for disconnected objects (*r* = 0.86, *F*(45, 585) = 33.43, *r*^{2} = 0.74, *p* < 0.00005; Figure 7B). These correlations were not significantly different (*p* = 0.14, Fisher *z* test). Here too, symmetric objects were systematically more distinct by a constant offset (best-fitting slope: 1.05 with a 95% confidence interval [0.52, 1.59]; intercept: 0.34 with a 95% confidence interval [0.12, 0.56]).

*r* = 0.87 and *r* = −0.53, *p* < 0.05 for connected objects; Figure 7C) and for disconnected objects (*r* = 0.91 and *r* = −0.51, *p* < 0.05; Figure 7C). Thus, both connected and disconnected objects have consistent part relations across locations. However, part relations for connected and disconnected objects were only weakly correlated and often not statistically significant (*r* = 0.45, 0.57, and 0.19 for connected vs. disconnected part relations at corresponding, opposite, and within-object locations; *p* = 0.096, 0.025, and 0.5, respectively; Figure 7C). Thus, part relations are fundamentally different for isolated and embedded objects but sum linearly in both cases.

^{49}C_{2} = 1,176).

*r* = 0.77 ± 0.03 and *r* = 0.81 ± 0.02 for unnatural and natural part sets, respectively). The incidence of search asymmetries was very low (20/492 and 14/492 searches for unnatural and natural sets, respectively); hence, we did not explore this further.

*SD*]: *r* = 0.89 ± 0.02 for the unnatural part set and *r* = 0.89 ± 0.02 for the natural part set, *p* < 0.00005). Here also, we fit the part summation model to the observed data. Because the parts on either end were different in identity, we were able to use only part relations at corresponding locations in the model. Hence the model had only 43 free parameters (21 part relations each on the left and right sides of the object, plus a constant term).

*r* = 0.80, *F*(42, 450) = 19.34, *r*^{2} = 0.64, *p* < 0.0005; Figure 8C), compared to the natural part set (*r* = 0.84, *F*(42, 450) = 25.18, *r*^{2} = 0.71, *p* < 0.0005; Figure 8D). This difference was statistically significant as assessed using bootstrap resampling of the correlation coefficients (*p* < 0.0005, Wilcoxon signed-rank test on 492 bootstrap-derived estimates of correlations). Across all bootstrap-derived samples, the natural part model correlations were higher than the unnatural part model correlations about 99% of the time (Figure 8E). However, this comparison is based on different sets of objects, and the difference might be due to the objects being different rather than to the natural or unnatural fragments. We therefore compared the two models on the 21 object pairs common to both sets. The natural fragment model performed slightly but significantly better than the unnatural fragment model (*r* = 0.73 for natural, *r* = 0.57 for unnatural; *z* = 3.79, *p* < 0.005, Wilcoxon ranksum test based on 21 bootstrap-derived estimates of correlations; Figure 8E). Across all bootstrap-derived samples, the natural part model correlations were higher than the unnatural part model correlations about 90% of the time. Because the unnatural part model was still reasonably successful in explaining perceived distances, we surmise that the underlying process involves contour matching rather than part matching. However, using natural parts confers a slight advantage in explaining object distances. We conclude that the contour matching process is modulated by part decomposition but not determined by it.

^{49}C_{2}) pairs of holistic objects.

*r* = 0.88 ± 0.01). We also found that the linear model was not significantly different from a nonlinear model: *r* = 0.89 for the linear model and *r* = 0.9 for the nonlinear model, *p* = 1, *F*(210, 900) = 0.22 for a partial *F* test comparing the two models. In addition, the incidence of search asymmetry was very low (four of 1,176 pairs); hence, we did not explore this further.

*SD*]: *r* = 0.87 ± 0.01, *p* < 0.00005). As before, we fit the part summation model to the observed data and obtained excellent fits (*r* = 0.88, *F*(63, 1113) = 50.56, *p* < 0.00005, *r*^{2} = 0.77; Figure 9B). Here too, part relations were consistent across locations (corresponding vs. opposite: *r* = 0.91, *p* < 0.00005; corresponding vs. within-object: *r* = 0.79, *p* < 0.00005; Figure 9C). Thus, distances between holistic objects can also be understood in terms of their parts. Likewise, symmetric objects were systematically more distinct by a constant offset (best-fitting slope: 0.9 with a 95% confidence interval [0.63, 1.18]; intercept: 0.47 with a 95% confidence interval [0.11, 0.82]).

*r* = 0.92 ± 0.03, *p* < 0.00005; asymmetric pairs: *r* = 0.94 ± 0.01, *p* < 0.00005; two-part object pairs: *r* = 0.78 ± 0.06, *p* < 0.00005). We then asked whether predicted dissimilarities from the part summation model (based on models estimated in the previous experiments from an independent set of subjects) would predict the dissimilarities observed in this experiment. Model predictions were striking across all three groups, with no qualitative difference (model-data correlations: symmetric pairs: *r* = 0.88, *r*^{2} = 0.77; asymmetric pairs: *r* = 0.92, *r*^{2} = 0.85; two-part pairs: *r* = 0.88, *r*^{2} = 0.77; all correlations *p* < 0.00005). Thus, object dissimilarities were unaffected by the experimental context in which the objects were observed. We conclude that holistic objects are explained by the part summation model because the dissimilarities are fundamentally driven by a contour-matching process.

^{36}C_{2} search conditions involving every pair of stimuli. We also measured dissimilarities between all possible pairs (^{6}C_{2} = 15) of shapes and between all possible pairs of textures (^{6}C_{2} = 15). For the shape-only conditions, shapes were shown as silhouettes with a uniform white fill. For the texture-only pairs, the textures were shown as squares filled with the corresponding textures. Examples of these searches are shown in Figure 8.

*r* = 0.9 ± 0.01). In addition, only one search exhibited a significant asymmetry; hence, we did not explore the effect of search asymmetry on model performance.

^{36}C_{2}) pair-wise dissimilarities between 36 objects differing in both shape and texture using visual search (Figure 10). In addition, we measured 15 (^{6}C_{2}) shape–shape dissimilarities and 15 texture–texture dissimilarities to confirm model predictions. Subjects were extremely consistent in their responses (average split-half correlation between dissimilarities across two random groups of subjects: *r* = 0.88 ± 0.01, *p* < 0.001). An example search in which the target differed in both shape and texture is shown in Figure 10A. This search is slightly easier than searches in which the target differs only in shape (Figure 10B) or only in texture (Figure 10C). Thus, shape and texture differences combine in visual search, and we investigated the precise functional manner in which they combine using a model similar to the one used before. In this model, the net dissimilarity between two objects differing in both shape and texture is the sum of the dissimilarity between their shapes and the dissimilarity between their textures. The model parameters (31 in all: 15 each for shape and texture, plus a constant term) were estimated using linear regression as before. Observed dissimilarities were explained extremely well by the model (*r* = 0.91, *r*^{2} = 0.83, *F*(30, 630) = 66.11, *p* < 0.001; Figure 10F). To visualize the underlying shape and texture relations, we performed multidimensional scaling as before. This revealed systematic patterns of shape and texture distances underlying the observed dissimilarities (Figure 10D, E). We then compared the shape and texture relations estimated by the model with those observed in the shape-only and texture-only visual search conditions. The model parameters were strongly correlated with their observed counterparts (*r* = 0.87 for shape–shape dissimilarities; *r* = 0.86 for texture–texture dissimilarities; both *p* < 0.001). We conclude that shape and texture sum linearly in object vision.
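The additive shape–texture model described above can be sketched as an ordinary least-squares problem over indicator variables. The following is a minimal illustration with simulated data; the "true" shape and texture terms, the noise level, and the constant are our own assumptions for demonstration, not the study's measurements:

```python
import numpy as np

# Minimal sketch (not the authors' code) of the additive shape + texture
# model: each pair's dissimilarity = shape term + texture term + constant,
# estimated by linear regression. All values here are simulated.
rng = np.random.default_rng(0)
n_shapes, n_textures = 6, 6
shape_pairs = [(i, j) for i in range(n_shapes) for j in range(i + 1, n_shapes)]        # 15
texture_pairs = [(i, j) for i in range(n_textures) for j in range(i + 1, n_textures)]  # 15

# Hypothetical "true" dissimilarities, used only to simulate observations.
true_s = rng.uniform(0.5, 2.0, len(shape_pairs))
true_t = rng.uniform(0.2, 1.0, len(texture_pairs))

objects = [(s, t) for s in range(n_shapes) for t in range(n_textures)]  # 36 objects
X, y = [], []
for a in range(len(objects)):
    for b in range(a + 1, len(objects)):            # 36C2 = 630 pairs
        (s1, t1), (s2, t2) = objects[a], objects[b]
        row = np.zeros(31)                          # 15 shape + 15 texture + constant
        if s1 != s2:
            row[shape_pairs.index((min(s1, s2), max(s1, s2)))] = 1
        if t1 != t2:
            row[15 + texture_pairs.index((min(t1, t2), max(t1, t2)))] = 1
        row[30] = 1                                 # constant term
        X.append(row)
        y.append(row[:15] @ true_s + row[15:30] @ true_t
                 + 0.1 + rng.normal(0, 0.05))       # additive model + noise
X, y = np.array(X), np.array(y)
w, *_ = np.linalg.lstsq(X, y, rcond=None)           # 31 estimated parameters
```

With 630 equations for 31 unknowns, the recovered shape and texture terms track the generating values closely, which is the logic behind comparing fitted parameters against the shape-only and texture-only searches.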

*r* = 0.94 ± 0.01, *p* < 0.001). We selected for further analysis the object pairs that showed a significant difference in RT between the normal and variant conditions (*p* < 0.05 for a main effect of variant type in an ANOVA with subject and variant type as factors). This yielded 32 and 50 pairs for the 6° and 9° orientation changes, and 19 and 39 pairs for the 50% and 75% changes in length. We speculate that the remaining pairs showed no effect because of signal-to-noise issues; in other words, they too would show a significant effect with a larger number of trials. Testing this systematically is beyond the scope of this study given the limited number of trials (*n* = 4) per search.

*p* < 0.001, Wilcoxon signed-rank test on bootstrap-derived offset values equal in number to the data points).

*r* = 0.93 ± 0.01, *p* < 0.00005). As in the previous experiment, we selected for further analysis only those object pairs that showed a significant difference in search times between the normal and variant conditions (*p* < 0.05 for a main effect of pair type in an ANOVA with subject and pair type as factors). This procedure yielded 51 pairs of objects. Searches involving variant pairs were always easier than searches involving normal pairs, as expected, because the variant pairs differed in both local and global attributes (Figure 12A). As in the previous experiment, search dissimilarities for variant pairs exceeded those for normal pairs by a constant offset (Figure 12B): the slope of the best-fitting line did not differ from unity (slope = 1.08, 95% confidence interval [0.99, 1.17]), whereas the intercept differed significantly from zero (intercept = 0.11, 95% confidence interval [0.08, 0.15]). Thus, a change in the spatial separation of parts added a fixed offset to the dissimilarity already present due to local part differences, showing that this global attribute combines additively with local features.
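The fixed-offset analysis can be illustrated with a short simulation. The data below are hypothetical (in the study they are the measured dissimilarities for the 51 pairs), as are the offset and noise level; the procedure is a straight-line fit with bootstrapped confidence intervals:

```python
import numpy as np

# Hedged illustration (hypothetical data, not the study's): test whether
# variant-pair dissimilarities equal normal-pair dissimilarities plus a
# fixed offset, by fitting a line and bootstrapping its slope/intercept.
rng = np.random.default_rng(1)
normal = rng.uniform(0.5, 2.5, 51)                  # 51 object pairs
variant = normal + 0.11 + rng.normal(0, 0.05, 51)   # assumed fixed offset + noise

slope, intercept = np.polyfit(normal, variant, 1)   # highest degree first

# Bootstrap 95% confidence intervals by resampling pairs with replacement.
boots = np.array([
    np.polyfit(normal[idx], variant[idx], 1)
    for idx in (rng.integers(0, 51, 51) for _ in range(1000))
])
slope_ci = np.percentile(boots[:, 0], [2.5, 97.5])
icept_ci = np.percentile(boots[:, 1], [2.5, 97.5])
# A slope CI containing 1 together with an intercept CI excluding 0
# indicates a constant additive offset, as reported for the
# spatial-separation change.
```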

*d*(AB,AC) = *d*_{AA} + *d*_{BC} + *x*_{AC} + *x*_{AB} − *w*_{AB} − *w*_{AC}. Any emergent feature present in objects AB and AC would produce a failure of the model to predict the observed dissimilarity, because the model considers each part only in isolation. This approach therefore offers a quantitative framework to identify emergent features and to study how they combine with other features.
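The prediction for two objects AB and AC sharing part A can be written out numerically. The part-relation values below are purely hypothetical illustrations (not the estimated model terms), and the within-object terms are subtracted, as in the general additive rule:

```python
# Minimal numeric sketch of the part summation prediction for two-part
# objects AB and AC that share part A. All values are hypothetical.
d = {("A", "A"): 0.0, ("B", "C"): 1.2}   # corresponding-location terms
x = {("A", "C"): 0.3, ("A", "B"): 0.4}   # opposite-location terms
w = {("A", "B"): 0.2, ("A", "C"): 0.25}  # within-object terms

pred = (d[("A", "A")] + d[("B", "C")]
        + x[("A", "C")] + x[("A", "B")]
        - w[("A", "B")] - w[("A", "C")])
# pred = 0.0 + 1.2 + 0.3 + 0.4 - 0.2 - 0.25 = 1.45
```

A reliable gap between such a prediction and the observed dissimilarity would flag an emergent feature in AB or AC, since the model treats each part in isolation.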

*d*_{AC}, *d*_{BD}, *d*_{AD}, *d*_{BC}, *d*_{AB}, and *d*_{CD}. In Experiments 11 and 12, we have shown that if object AB is rotated or its part configuration is altered, the distance between AB and CD increases by a fixed amount. These findings place constraints on how objects may be represented in the brain. For instance, if objects AB and CD are represented as vectors in an underlying multidimensional space, then the distance between these two vectors, according to our findings, is not a simple vector distance. Rather, it is a complex function influenced systematically by (a) spatially tuned part-matching processes and (b) changes in global attributes. Thus, our results challenge the commonly held view that objects can be thought of as vectors in some multidimensional space.

*f*(x + y) = *f*(x) + *f*(y), and (b) scaling, i.e., *f*(*ax*) = *af*(*x*). In our context, if object AB is represented by vectors a and b, and object CD by vectors c and d, then we have shown that *d*({a,b},{c,d}) = *w*_{1}*d*(a,c) + *w*_{1}*d*(b,d) + *w*_{2}*d*(a,d) + *w*_{2}*d*(b,c) − *w*_{3}*d*(a,b) − *w*_{3}*d*(c,d), where *d* is a distance metric on features and *w*_{1}, *w*_{2}, *w*_{3} are the relative weights associated with each type of comparison. Thus, our results show that the perceived distance between two collections of features is a linear sum of the pair-wise distances between all features. We have therefore demonstrated that perceived distance has the additivity property, but not scaling. Demonstrating scaling is impossible in our study because we do not assume any explicit parts-based representation, which renders any notion of scaling meaningless.
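The additive rule above can be made concrete with a small numeric sketch. The feature vectors, weights, and the city-block metric below are illustrative assumptions, not fitted quantities from the study:

```python
import numpy as np

# Numeric sketch of the additive rule: the distance between feature
# collections {a, b} and {c, d} as a weighted linear sum of pair-wise
# feature distances. Weights, vectors, and metric are illustrative only.
def pair_distance(a, b, c, d, w1, w2, w3, metric=None):
    metric = metric or (lambda u, v: float(np.abs(u - v).sum()))
    return (w1 * (metric(a, c) + metric(b, d))      # corresponding comparisons
            + w2 * (metric(a, d) + metric(b, c))    # opposite comparisons
            - w3 * (metric(a, b) + metric(c, d)))   # within-object comparisons

a, b = np.array([0.0]), np.array([1.0])
c, d = np.array([2.0]), np.array([3.0])
dist = pair_distance(a, b, c, d, w1=1.0, w2=0.5, w3=0.25)
# dist = 1.0*(2 + 2) + 0.5*(3 + 1) - 0.25*(1 + 1) = 5.5
```

Because the output is a fixed linear combination of the pair-wise feature distances, this form captures additivity without committing to any particular feature space.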

*θ* is given by *d*(Δ*θ*) = *k*Δ*θ*, where *k* is a constant (Arun, 2012). This, in turn, implies that *d*(*a*Δ*θ*) = *ka*Δ*θ* = *ad*(Δ*θ*), which confirms scaling. More recently, we have shown this to be true for several other features, such as length, intensity, and aspect ratio (Pramod & Arun, 2014). Thus, these results together with our present findings confirm both additivity and scaling of distances in perceptual space, indicative of full linearity.

*Vision Research*, 74, 86–92.

*American Journal of Psychology*, 63 (4), 516–556.

*Symmetry*, 6 (4), 975–996.

*Psychological Review*, 94 (2), 115–147.

*Cortex*, 51, 46–55.

*Spatial Vision*, 10, 433–436.

*Perception & Psychophysics*, 16 (1), 136–142.

*Psychological Review*, 96 (3), 433–458.

*Journal of Experimental Psychology: Human Perception and Performance*, 9 (2), 242–257.

*Cognitive Psychology*, 1, 225–241.

*Perception & Psychophysics*, 11 (2), 179–182.

*Cognition*, 63 (1), 29–78.

*Perception & Psychophysics*, 2 (6), 233–248.

*Psychological Bulletin*, 112 (1), 24–38.

*Psychonomic Bulletin & Review*, 5, 135–139.

*Journal of Mathematical Psychology*, 12, 4–34.

*Proceedings of the Royal Society of London B: Biological Sciences*, 200 (1140), 269–294.

*Journal of Neurophysiology*, 97 (5), 3532–3543.

*Journal of Neurophysiology*, 101 (4), 1867–1875.

*Cognitive Psychology*, 9, 353–383.

*Perception & Psychophysics*, 60 (7), 1101–1116.

*Vision science: Photons to phenomenology*. Cambridge, MA: MIT Press.

*Journal of Experimental Psychology: Human Perception and Performance*, 37 (5), 1331–1349.

*Journal of Experimental Psychology: Human Perception and Performance*, 15 (4), 635–649.

*Journal of Experimental Psychology: Human Perception and Performance*, 3 (3), 422–435.

*Science*, 287 (5457), 1506–1508.

*Journal of Mathematical Psychology*, 1, 54–87.

*Science*, 171 (3972), 701–703.

*Journal of Neuroscience*, 29 (24), 7788–7796.

*Journal of Neuroscience*, 30, 7948–7960.

*Psychometrika*, 30 (4), 379–393.

*Psychological Review*, 84, 327–352.

*Trends in Cognitive Sciences*, 1 (9), 346–352.

*Psychological Bulletin*, 138 (6), 1172–1217.

*Psychological Bulletin*, 138 (6), 1218–1252.

*Perception & Psychophysics*, 10 (1), 30–32.

*Perception & Psychophysics*, 24 (5), 399–414.

*Perception & Psychophysics*, 63 (3), 381–389.

*Nature Reviews Neuroscience*, 5 (6), 495–501.

*Perception & Psychophysics*, 64 (7), 1039–1054.