In the last 40 years, however, various statistical optimization procedures have been developed which, in theory, have a number of advantages over traditional Staircase methods. These include algorithms such as Best-PEST (Pentland,
1980), QUEST (Watson & Pelli,
1983), QUEST+ (Watson,
2017), ZEST (King-Smith, Grigsby, Vingrys, Benes, & Supowit,
1994), FAST (Vul, Bergsma, & MacLeod,
2010), Psi (Kontsevich & Tyler,
1999), Psi-marginal (Prins,
2013), qCSF (Lesmes, Lu, Baek, & Albright,
2010), MUEST (Snoeren & Puts,
1997), UML (Shen & Richards,
2012), and various unnamed methods (Green,
1993; King-Smith & Rose,
1997; Kujala & Lukka,
2006); for reviews, see Emerson (
1986); Kingdom and Prins (
2010); and Madigan and Williams (
1997). These include both maximum likelihood and maximum a posteriori methods; however, following convention, we shall hereafter refer to both collectively as Maximum Likelihood (ML) estimators. In all cases, the variable(s) of interest are treated as unknown values in a parametric model, and after every trial the probability of each possible parameter value being true is computed explicitly (for mathematical details, see Kontsevich & Tyler,
1999; Watson,
2017). Framing the problem in this way confers several advantages. First, it becomes possible to compute the expected most informative stimulus to present on the next trial, thereby making the test more efficient—preventing, for example, the “slow downward crawl” that is often observed at the start of Staircases. Second, information can be integrated across multiple sources, including prior information (e.g., from normative data, or the individual's previous test results). Third, multiple parameters can be estimated simultaneously. For instance, the whole psychometric function can be measured instead of only its threshold, or we can quantify how a given threshold covaries with some second parameter—such as how detection thresholds vary with frequency, in the case of contrast sensitivity and audiometry. Finally, ML estimators also have a number of other attractive features, including the ability to specify dynamic stopping criteria based on statistical confidence (Alcala-Quintana & García-Pérez,
2005; Anderson,
2003; McKendrick & Turpin,
2005), and the ability to explicitly model and account for lapse rates (Prins,
2012,
2013; Wichmann & Hill,
2001).
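To make the core computation concrete, the following is a minimal sketch (in Python, with NumPy) of how such an estimator might be implemented for a single threshold parameter: a discretized prior is updated by Bayes' rule after every trial, the next stimulus is chosen to minimize the expected entropy of the posterior (as in the Psi method), and testing stops once the posterior is sufficiently narrow. The grid ranges, logistic psychometric function, and stopping criterion below are illustrative assumptions rather than the specific choices of any of the procedures cited above.

```python
# Minimal sketch of Bayesian (ML/MAP) adaptive threshold estimation, in the
# spirit of QUEST/ZEST/Psi. All grids, parameter values, and the logistic
# psychometric function are illustrative assumptions, not taken from any of
# the cited implementations.
import numpy as np

# Candidate threshold values (the unknown parameter) and candidate stimulus levels.
thresholds = np.linspace(-10, 10, 201)   # hypothetical dB scale
stimuli = np.linspace(-10, 10, 41)

# Prior over thresholds (e.g., from normative data); here a broad Gaussian.
prior = np.exp(-0.5 * (thresholds / 5.0) ** 2)
posterior = prior / prior.sum()

def p_correct(stimulus, threshold, slope=1.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function with explicit guess and lapse rates."""
    p = 1.0 / (1.0 + np.exp(-slope * (stimulus - threshold)))
    return guess + (1.0 - guess - lapse) * p

def expected_entropy(stimulus, posterior):
    """Expected posterior entropy after presenting `stimulus` (Psi-style)."""
    p_corr = p_correct(stimulus, thresholds)
    ent = 0.0
    for p_resp_given_t in (p_corr, 1.0 - p_corr):   # correct / incorrect response
        joint = posterior * p_resp_given_t
        p_resp = joint.sum()
        if p_resp > 0:
            post = joint / p_resp
            ent += p_resp * -np.sum(post * np.log(post + 1e-12))
    return ent

def next_stimulus(posterior):
    """Pick the stimulus expected to be most informative (minimum expected entropy)."""
    return min(stimuli, key=lambda s: expected_entropy(s, posterior))

def update(posterior, stimulus, correct):
    """Bayes' rule: multiply the posterior by the likelihood of the observed response."""
    likelihood = p_correct(stimulus, thresholds)
    if not correct:
        likelihood = 1.0 - likelihood
    posterior = posterior * likelihood
    return posterior / posterior.sum()

# Example trial loop with a dynamic stopping rule based on posterior spread.
rng = np.random.default_rng(0)
true_threshold = 2.0
for trial in range(200):
    s = next_stimulus(posterior)
    resp = rng.random() < p_correct(s, true_threshold)   # simulated observer
    posterior = update(posterior, s, resp)
    mean = np.sum(posterior * thresholds)
    sd = np.sqrt(np.sum(posterior * (thresholds - mean) ** 2))
    if sd < 0.5:   # stop once the estimate is sufficiently precise
        break
print(f"Estimated threshold: {mean:.2f} after {trial + 1} trials")
```

Estimating multiple parameters simultaneously (e.g., threshold and slope, as in the Psi method) amounts to replacing the one-dimensional grid with a multidimensional one and marginalizing the posterior over the parameters not of interest.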