Our previous research examined the effects of target eccentricity and global stimulus density on target detection during active visual search in monkeys. Here, eye movement data collected from three human subjects on a standard single-color Ts and Ls task with varying set sizes were used to analyze the probability of target detection as a function of local stimulus density. Search performance was found to exhibit a systematic dependence on local stimulus density around the target and on target eccentricity when density is calculated with respect to cortical space, in accordance with a model of the retinocortical geometrical transformation of image data onto the surface of V1. Density as measured by nearest neighbor separation and target image size as calculated from target eccentricity were found to contribute independently to search performance when measured with respect to cortical space but not with respect to standard visual space. Density relationships to performance did not differ when target and nearest neighbor were on opposite sides of the vertical meridian, supporting the hypothesis that such interactions occur within higher visual areas. The cortical separation of items appears to be the major determinant of array set size effects in active visual search.

The *eccentricity effect*, the decline in search performance at more peripheral target locations, is more pronounced for larger set sizes (Carrasco, Evert, Chang, & Katz, 1995). The reduction in visual acuity with decreasing spatial resolution, coupled with lateral inhibition arising from the increase in receptive field size with eccentricity, has been cited to account for these results (Carrasco & Frieder, 1997; Levi, Klein, & Aitsebaomo, 1985; Toet & Levi, 1992). In this study, we attempt to separate these factors by representing the visual stimuli on the surface of a model of V1 cortex and measuring both stimulus separations and stimulus sizes.

Stimuli were presented against a uniform background at random locations on a 24.5 × 36.5 deg video display field, using a minimum center-to-center separation of 2 deg. This procedure produced displays with variable local stimulus density while preventing confounds due to stimulus overlap. Custom software running on a Windows-based PC drove stimulus presentation on a 21-in. Sony display monitor (GDM-F520) at a resolution of 800 × 600 pixels (70 Hz update rate) and a viewing distance of 57 cm, which yielded a resolution of 22 pixels/deg.

Eye position samples were averaged, and the centroid thus obtained was regarded as the point of fixation. A calibration/validation sequence was executed at the beginning of every session. The calibration step consisted of fixating a small dot presented sequentially for 1 s at each of nine known locations that together spanned the entire display area. A similarly constructed validation step followed. The target presentation sequences for both the calibration and validation steps were generated anew at random each time. The average total setup time, including head-mount positioning and at least one calibration/validation sequence, was 7.5 min.

(*n* = 3) were truncated when this pattern occurred.

Local stimulus density can be expressed as the number of stimuli within a radius *r* divided by the area contained within that radius. Previously, we determined a weighted density measure that accounted for the proximity of each of three stimuli flanking the central one. Here, to simplify this metric, we used the distance from the reference object to its nearest neighboring item as an estimate of the local density around the reference stimulus. This distance was measured in degrees of visual angle when local stimulus density was calculated with respect to visual space, and in millimeters of cortex when calculated with respect to cortical space.
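As an illustration, the nearest-neighbor separation used here as a density estimate can be computed directly from item positions. This is a minimal sketch; the function name and the four-item display below are hypothetical, not taken from the study's software.

```python
import numpy as np

def nearest_neighbor_separation(items, ref_index):
    """Distance from the reference item to its nearest neighbor, in the
    same units as the input (deg of visual angle, or mm of cortex if the
    positions have already been mapped onto the model V1 surface)."""
    pts = np.asarray(items, dtype=float)
    dists = np.linalg.norm(pts - pts[ref_index], axis=1)
    dists[ref_index] = np.inf  # exclude the reference item itself
    return float(dists.min())

# Hypothetical 4-item display: (x, y) positions in deg from fixation.
display = [(0.0, 5.0), (2.0, 5.0), (-6.0, 1.0), (4.0, -3.0)]
print(nearest_neighbor_separation(display, 0))  # nearest item is 2 deg away
```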

Using the cortical magnification factor *M*(*w*), we measured the separations between stimulus representations in primary visual cortex with a 3D model of the surface of primary visual cortex, area V1. We associate each point in visual space **v**(*θ*, *w*), described by azimuth *θ* and eccentricity *w*, with a corresponding point in cortical space **c**(*r*, *z*, *φ*), described by the radius *r* of the 3D model surface at that eccentricity, the distance *z* along the axis of rotation (both given in millimeters), and a rotational angle *φ* (identical to the azimuthal angle *θ*), according to a set of equations (Rovamo & Virsu, 1984) based on the magnification factor *M*(*w*) = 0.065*w* + 0.054 (deg/mm). Using these equations, we constructed 3D model views of the surface of primary visual cortex (see Figure 1). These views portray the surface of primary visual cortex accurately, assuming only local isotropy and the magnification factor. The same surface is found by implementing the Daniel and Whitteridge (1961) equations, and a similar surface is produced by using their magnification factors, differing in size because those factors were derived for an average nonhuman primate. There are several notable departures from the orthographic projections depicted in the study of Daniel and Whitteridge, particularly in that the peripheral extents of the meridians do not meet at a point; we address those issues elsewhere. For the purposes of display, we have seamlessly joined the two hemifield representations into a single 3D view. Onto this surface, we can project the images of stimuli to illustrate the differences in size and spacing that result from cortical magnification. This illustration simply sets the basic constraints on the areal representations of the stimuli. One should bear in mind that there is some debate as to whether different meridians have somewhat different magnification factors (Blasdel & Campbell, 2001; Schiessl & McLoughlin, 2003). Such meridional differences, as well as individual variation in the magnification factor, produce only locally minor differences in the overall surface.
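The construction of the surface can be sketched numerically. The sketch below assumes that the published deg/mm expression is the inverse magnification M⁻¹(*w*), uses a flat-retina approximation for azimuthal arc length, and imposes local isotropy; function names and the step size are illustrative, not the study's implementation.

```python
import math

def inv_magnification(w):
    """Inverse cortical magnification in deg/mm at eccentricity w (deg),
    reading the published 0.065*w + 0.054 (deg/mm) expression as M^-1."""
    return 0.065 * w + 0.054

def ring_radius(w):
    """Radius r (mm) of the model surface at eccentricity w.  Local
    isotropy makes the cortical ring circumference M(w) times the
    visual-field ring circumference (flat-retina approximation),
    so r(w) = M(w) * w."""
    return w / inv_magnification(w)

def visual_to_cortical(theta_deg, w_deg, dw=0.001):
    """Map a visual-field point (azimuth theta, eccentricity w) to
    cortical surface-of-revolution coordinates (r, z, phi).  z is built
    up by integrating the meridional arc element ds = M(w) dw, with
    ds^2 = dr^2 + dz^2 on the surface of revolution."""
    z = 0.0
    n = int(round(w_deg / dw))
    for i in range(n):
        w0, w1 = i * dw, (i + 1) * dw
        ds = dw / inv_magnification(0.5 * (w0 + w1))  # mm along meridian
        dr = ring_radius(w1) - ring_radius(w0)
        z += math.sqrt(max(ds * ds - dr * dr, 0.0))
    return ring_radius(n * dw), z, theta_deg  # phi taken identical to theta
```

Under these assumptions, a 10-deg target lands roughly 14 mm from the axis of rotation, about 35 mm along the meridian from the foveal pole.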

*conspicuity zone,* within which stimulus information can be extracted effectively (Motter & Belky, 1998b). This zone can be described mathematically as a sensitivity function. With a quantitative characterization of focal attention thus constructed, one can further examine its properties in response to systematic manipulations of stimulus conditions, particularly to changes in stimulus density.
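Purely as an illustration, a sensitivity function of this kind can be sketched as a falloff of detection probability with target eccentricity whose spatial scale is set by the average nearest-neighbor separation. The exponential form and the scale parameter below are assumptions for illustration, not a fitted model from this study.

```python
import math

def conspicuity(ecc_deg, mean_sep_deg, scale=1.0):
    """Illustrative sensitivity function: detection probability falls
    off with target eccentricity, with a spatial constant proportional
    to the average nearest-neighbor separation, so that denser arrays
    produce a smaller effective zone."""
    return math.exp(-ecc_deg / (scale * mean_sep_deg))
```

Under this toy form, halving the average separation halves the effective radius of the conspicuity zone.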

*P* = 0.63 − 0.073 × ecc_mm + 0.092 × sep_mm. Standard errors for the fit constants are 0.056, 0.001, and 0.002, respectively. The factor ecc_mm is the target eccentricity in millimeters along the surface of the model, and the factor sep_mm is the separation in millimeters between the target and its nearest neighbor along the surface of the model. Both factors were found to contribute (Wald statistic,

*p* < .001) to the logistic equation independently, with no evidence of multicollinearity (variance inflation factor < 2). We assume that the eccentricity factor is related to acuity and that target size, not eccentricity per se, governs target detection. Therefore, in Figure 6, the data are replotted in terms of target size in square millimeters rather than in terms of eccentricity. The estimate is based on the area of a circle enclosing the target and having a diameter of 1.42 deg. The calculation uses the radius of the circle projected along an equal-eccentricity line at the eccentricity of the target and does not correct for the distortion along a meridian; those differences are relatively minor beyond a few degrees from the fovea, as can be seen in the letter “T” depictions in Figure 1. As target size increases above 2–3 mm^{2}, target detection rises rapidly as long as the separation from the nearest interfering stimulus is also relatively large. On the other hand, at the largest target sizes, for targets located close to the fovea, the separation distance becomes less relevant to target detection. As depicted in Figure 6, eccentricity, expressed as target size, and stimulus density, expressed as nearest neighbor distance, are equally important in determining target detection.
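These relationships can be sketched as follows. Because the text calls the fit a logistic equation, the fitted expression is treated here as the log-odds of detection, and the target-area calculation reuses the deg/mm magnification expression as the inverse magnification; both readings are assumptions for illustration.

```python
import math

def detection_probability(ecc_mm, sep_mm):
    """Hedged reading of the fit: treat the fitted linear expression as
    the log-odds (logit) of target detection and squash to [0, 1]."""
    logit = 0.63 - 0.073 * ecc_mm + 0.092 * sep_mm
    return 1.0 / (1.0 + math.exp(-logit))

def target_area_mm2(ecc_deg, diameter_deg=1.42):
    """Cortical area of a circle enclosing the target: the 1.42-deg
    diameter is converted to mm using the inverse magnification
    0.065*w + 0.054 deg/mm (assumed reading of the published
    expression), without correcting for meridional distortion."""
    mm_per_deg = 1.0 / (0.065 * ecc_deg + 0.054)
    radius_mm = 0.5 * diameter_deg * mm_per_deg
    return math.pi * radius_mm ** 2
```

Consistent with the fit, the sketched probability falls with cortical eccentricity and rises with cortical separation, while the enclosing-circle area shrinks with eccentricity.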

(*p* > .10) for hemifield differences. Overall, Figure 10 is consistent with the idea that behavioral performance does not depend solely on processing at the level of V1.

Stimuli separated by a fixed *x* degrees are represented in cortex by ever decreasing distances as eccentricity increases. Because detection probability decreases with decreasing separation, the probability of detection therefore decreases with increasing eccentricity. This relationship generates the curves in Figure 3, which depict the decrease in detection probability with increasing eccentricity for the average target-to-distractor distance set by the number of items in the array. The density of the items around the target determines how close to the point of fixation a target must be to be identified correctly. This simple principle accounts for both the array set size and the eccentricity effects that constrain target detection during active visual search. Previously, we interpreted the decreasing detection probability curves in figures like Figures 3 and 4 as depicting the limits of the zone of focal attention, or conspicuity, within which targets could be detected (Motter & Belky, 1998b). Indeed, this may still be the case. However, the changing size of this zone with different array set sizes does not reflect an active dynamic zoom; rather, it represents the passive spatial range associated with the average stimulus density of each array set size.
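This principle can be made concrete by integrating the magnification across a fixed angular separation, which shows the cortical separation shrinking with eccentricity. As in the earlier sketches, the deg/mm expression is treated as the inverse magnification M⁻¹; the function name is illustrative.

```python
import math

def cortical_separation(ecc_deg, x_deg, a=0.065, b=0.054):
    """Cortical distance (mm) between two items x deg apart along a
    meridian, the nearer one at eccentricity ecc_deg, from the
    closed-form integral of M(w) = 1/(a*w + b) mm/deg."""
    return math.log((a * (ecc_deg + x_deg) + b) / (a * ecc_deg + b)) / a

# A fixed 2-deg separation spans less and less cortex with eccentricity:
for e in (2.0, 8.0, 16.0):
    print(f"{e:4.1f} deg -> {cortical_separation(e, 2.0):.2f} mm")
```

Under these assumptions, the same 2-deg gap spans several times more cortex near the fovea than at 16 deg, which is the sense in which a fixed visual-space density corresponds to a steeply falling cortical-space density.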