Several variables had to be controlled for in the analysis, notably the
eccentricity and
size of objects. These variables are known to influence eye-movement measures: observers tend to fixate near the center of the screen when viewing scenes on computer monitors (Tatler,
2007) and larger objects tend to be fixated more frequently. The eccentricity of an object (the distance from its center to the center of the screen) and its size (the number of pixels it covers) were calculated from the coordinates provided by LabelMe. To control for low-level visual features in our analyses, we computed saliency, luminance contrast, and edge-content information for the LabelMe objects. Saliency was calculated with the freely available "Saliency Map Algorithm" software (
http://www.klab.caltech.edu/~harel/share/gbvs.php, retrieved on December 25, 2011) by Harel, Koch, and Perona (
2006) using the standard Itti, Koch, and Niebur (
1998) saliency map based on color, intensity, orientation, and contrast as shown in
Figure 1b. The average saliency value of the pixels inside an object's boundary was used to represent that object's saliency. Luminance contrast was defined as the gray-level standard deviation of the pixels enclosed in an object. To compute edge-content information, images were convolved with four Gabor filters oriented at 0, 45, 90, and 135 degrees. Tatler et al. (
2005) suggested setting the spatial frequency of the Gabor carrier to values between 0.42 and 10.8 cycles per degree; we chose 6.75 cycles per degree. All computations followed Tatler et al. (
2005) and Baddeley and Tatler (
2006), except that image boundaries were padded with Matlab's widely used built-in "symmetric" padding option and the results were smoothed with a Gaussian filter (
σ = 0.5 degrees). The average value of the edge-content information map (shown in
Figure 1c) across the pixels inside an object's boundary was used to represent that object's edge-content information.
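As a concrete illustration (not the authors' code), the object-level measures described above can be sketched in Python with NumPy. The helper name, the boolean-mask representation of a LabelMe polygon, and the pixel coordinate layout are assumptions for the sketch.

```python
import numpy as np

def object_features(gray_image, object_mask, screen_center):
    """Compute eccentricity, size, and luminance contrast for one
    labeled object. Hypothetical helper: gray_image is a 2-D array
    of gray levels, object_mask is a boolean array that is True for
    pixels inside the object's boundary, and screen_center is the
    (row, col) of the screen center in pixels."""
    rows, cols = np.nonzero(object_mask)
    # Object center: centroid of the pixels inside the boundary.
    center_row, center_col = rows.mean(), cols.mean()
    # Eccentricity: distance from the object's center to the screen center.
    eccentricity = np.hypot(center_row - screen_center[0],
                            center_col - screen_center[1])
    # Size: number of pixels enclosed by the object boundary.
    size = rows.size
    # Luminance contrast: gray-level standard deviation inside the object.
    contrast = gray_image[object_mask].std()
    return eccentricity, size, contrast
```

Object saliency would be obtained analogously, by averaging a precomputed saliency map over the same mask.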
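The Gabor-based edge-content computation can likewise be sketched. The kernel size, wavelength, envelope width, and smoothing width below are hypothetical pixel-unit stand-ins (the study specified a carrier of 6.75 cycles per degree and Gaussian smoothing with σ = 0.5 degrees, which depend on viewing geometry); SciPy's 'reflect' boundary mode corresponds to Matlab's "symmetric" padding.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gabor_kernel(theta, wavelength, sigma, size):
    """Real-valued (cosine-phase) Gabor kernel at orientation theta
    (radians), with a circular Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def edge_content_map(gray_image, wavelength=8, sigma=4, size=21,
                     smooth_sigma=5):
    """Edge-content sketch: rectified responses of four Gabor filters
    (0, 45, 90, 135 degrees) are summed and Gaussian-smoothed.
    Parameter values are illustrative pixel units, not the study's
    degree-based settings."""
    total = np.zeros(gray_image.shape, dtype=float)
    for theta in np.deg2rad([0, 45, 90, 135]):
        kernel = gabor_kernel(theta, wavelength, sigma, size)
        # 'reflect' padding mirrors the edge pixels, matching the
        # behavior of Matlab's "symmetric" padding option.
        total += np.abs(convolve(gray_image.astype(float), kernel,
                                 mode='reflect'))
    return gaussian_filter(total, smooth_sigma)
```

An object's edge-content value would then be the mean of this map over the pixels inside the object's boundary, as for saliency.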