The mean gray level was subtracted from the images before the computational simulations. Simulations used the same target scale and orientation levels as the psychophysical experiment. The three model observers were fitted to the data with the likelihood method introduced in the previous study (Oluk & Geisler, 2023). The method implicitly fits the subjects' hit and correct-rejection rates by varying two parameters to maximize the likelihood of the data: the first parameter is a scale factor on the target amplitude, and the second is a decision criterion. Here, we made one slight adjustment to the likelihood method. Because the number of simulated trials is finite, the model sometimes achieves 100% or 0% correct, which makes the likelihood of the data zero. To avoid this issue, we assumed that a single error would occur if the number of simulated trials were doubled, and we computed the rates under that assumption. If target orientation and scale are known, all three model observers perform identically and are optimal. For the 64 blocked conditions of the low-uncertainty condition, only a single overall scale factor on amplitude was varied in the fitting procedure; the criteria were computed from the data and held fixed. The rest of the fitting procedure was the same as for the high-uncertainty condition. The data from both conditions were fitted simultaneously to test principles about how the low-uncertainty condition relates to the high-uncertainty condition.
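The rate correction for perfect simulated performance can be sketched as follows (a minimal illustration; the function name and interface are our assumptions, not the authors' code):

```python
def corrected_rate(n_correct, n_trials):
    """Turn a simulated trial count into a rate that is never exactly 0 or 1.

    If the model is perfect (or always wrong) over the finite set of
    simulated trials, assume a single error would occur if the number of
    trials were doubled, so the rate moves in by 1 / (2 * n_trials).
    """
    if n_correct == n_trials:   # 100% correct over the simulated trials
        return (2 * n_trials - 1) / (2 * n_trials)
    if n_correct == 0:          # 0% correct over the simulated trials
        return 1 / (2 * n_trials)
    return n_correct / n_trials
```

For example, with 1,000 simulated trials a perfect hit rate becomes 1999/2000 = 0.9995 instead of 1.0, which keeps the likelihood of the observed data nonzero.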
Intrinsic position uncertainty was simulated by including additional potential targets whose centers vary around the true center of the presented target. The maximum position uncertainty was 12 pixels in radius, so all possible center locations fall within a 25 × 25 pixel square centered on the true center. However, introducing 624 new locations is computationally expensive (each new template location carries 64 templates due to orientation and scale uncertainty, for a total of 39,936 templates) and unnecessary, because neighboring templates are highly correlated. New templates were therefore generated in two-pixel steps, giving 13 × 13 possible center locations, so the simulated amounts of position uncertainty range from 2 to 12 pixels in steps of 2 pixels.

We found, however, that this type of position uncertainty has little effect on large templates, because only a tiny percentage of the templates sample new spatial locations. We therefore simulated an alternative hypothesis: intrinsic position uncertainty may be fixed in terms of the number of neurons. Neurons with larger receptive fields also tile the visual space, but their centers are farther apart, so for these neurons the same amount of position uncertainty corresponds to larger shifts of the target center. Under this hypothesis, the implementation of position uncertainty is unchanged for the smallest target, but for every other target the steps (and the radius) are scaled up by the ratio of that target's scale to the smallest scale (for example, the largest target has a scale of 1, so the 12-pixel radius is multiplied by 4.5 and rounded to a 54-pixel radius, with 9-pixel steps).
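The two schemes for generating candidate center locations can be sketched with a single helper (an illustrative sketch; the function name, interface, and the assumption that the smallest scale is 1/4.5 of the largest are ours, not the authors'):

```python
import numpy as np

def center_offsets(radius=12, step=2, scale=None, smallest_scale=None):
    """Grid of candidate target-center offsets (in pixels).

    The defaults give the 13 x 13 grid described above: the full 25 x 25
    one-pixel grid is subsampled in 2-pixel steps because neighboring
    templates are highly correlated.

    If `scale` and `smallest_scale` are supplied, the radius and step are
    scaled up by scale / smallest_scale and rounded -- the hypothesis that
    position uncertainty is fixed in the number of neurons, so larger
    receptive fields imply larger shifts of the target center.
    """
    if scale is not None and smallest_scale is not None:
        factor = scale / smallest_scale
        radius = round(radius * factor)   # e.g. round(12 * 4.5) = 54
        step = round(step * factor)       # e.g. round(2 * 4.5) = 9
    offs = np.arange(-radius, radius + 1, step)
    return [(int(dx), int(dy)) for dy in offs for dx in offs]
```

With the defaults this yields 169 offsets spanning a 12-pixel radius; for the largest target (scale 1, assuming a smallest scale of 1/4.5) the grid still has 13 × 13 locations but spans a 54-pixel radius in 9-pixel steps, so the number of templates per target is unchanged.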