Abstract
Maximum-likelihood estimation of the parameters of a psychometric function typically proceeds through an iterative search for the maximum of the likelihood function defined across the parameter space. This procedure is subject to failure in two ways. First, iterative search procedures may find a local, rather than global, maximum in the likelihood function. This issue can be adequately avoided by performing a brute-force search through a sufficiently fine-grained grid across the parameter space and using the highest likelihood in the grid as a seed for a subsequent iterative search. Second, the procedure fails when the likelihood function contains no maximum at finite parameter values. This occurs when either a step function or a constant function is associated with a higher likelihood than the model function can attain with any finite parameter values. In such cases, iterative search procedures may erroneously report having successfully converged on a maximum in the likelihood function. The parameter estimates that result from such false convergences are largely arbitrary. Consequently, estimates of parameters, their standard errors, and confidence intervals whose derivation included such false convergences will be systematically inaccurate. Here I describe a method by which false convergences can be reliably detected. Using simulations, I systematically investigate how stimulus placement, number of trials, parameters estimated, and task (2AFC, 4AFC, etc.) affect the probability that the likelihood function will not contain a maximum at finite parameter values. Importantly, the simulations indicate that as long as a real maximum exists in the likelihood functions of both the data and the bootstrap simulations, standard errors derived by a standard bootstrap procedure are essentially unbiased. This result holds across a wide variety of stimulus placement strategies, including adaptive placement strategies, patterns of free parameters, and tasks.
Meeting abstract presented at VSS 2017
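The abstract gives no implementation, but the procedure it describes can be sketched briefly. Below is a minimal, illustrative Python sketch, not the author's code: it assumes a 2AFC task (guess rate 0.5), a logistic psychometric function with no lapse parameter, and binomial data; all function names are hypothetical. It shows the two steps discussed above: a grid-seeded maximum-likelihood fit, and a false-convergence check that compares the fitted likelihood against the best likelihood attainable by the model's limiting step and constant functions.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

GAMMA = 0.5  # 2AFC guess rate (chance performance)

def nll(params, x, n, k):
    # Negative log-likelihood of binomial data (k correct of n trials at
    # levels x) under psi(x) = GAMMA + (1 - GAMMA) * expit(beta * (x - alpha)).
    alpha, beta = params
    p = GAMMA + (1.0 - GAMMA) * expit(beta * (x - alpha))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

def nll_of_probs(p, n, k):
    # Negative log-likelihood given predicted proportions directly.
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

def fit_with_grid_seed(x, n, k):
    # Brute-force search through a parameter grid; the best grid point
    # seeds a subsequent iterative (Nelder-Mead) search.
    alphas = np.linspace(x.min(), x.max(), 41)
    betas = np.logspace(-2, 2, 41)  # assumes performance increases with x
    seed = min(((a, b) for a in alphas for b in betas),
               key=lambda p: nll(p, x, n, k))
    res = minimize(nll, seed, args=(x, n, k), method="Nelder-Mead")
    return res.x, res.fun

def boundary_nll(x, n, k):
    # Best NLL attainable by the model's limiting functions: step functions
    # (|beta| -> inf) and constant functions (any constant in [GAMMA, 1]).
    order = np.argsort(x)
    ns, ks = n[order], k[order]
    m = len(ns)
    best = np.inf
    for cut in range(m + 1):  # step located between successive levels
        for lo, hi in ((GAMMA, 1.0), (1.0, GAMMA)):
            p = np.where(np.arange(m) < cut, lo, hi)
            best = min(best, nll_of_probs(p, ns, ks))
    p_const = np.clip(ks.sum() / ns.sum(), GAMMA, 1.0)
    return min(best, nll_of_probs(np.full(m, p_const), ns, ks))

def check_convergence(x, n, k, tol=1e-6):
    # A limiting function matching or beating the fitted likelihood means
    # the likelihood has no maximum at finite parameter values, so any
    # reported convergence is a false convergence.
    params, fitted = fit_with_grid_seed(x, n, k)
    return params, boundary_nll(x, n, k) > fitted + tol

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # stimulus levels
n = np.full(5, 40)                         # trials per level
k = np.array([22, 26, 31, 35, 38])         # correct responses
params, genuine = check_convergence(x, n, k)
print(params, "genuine maximum" if genuine else "false convergence")

In a bootstrap of the kind the abstract evaluates, the same check would presumably be applied to every simulated data set as well as to the observed data, with flagged false convergences handled before standard errors are computed.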