Throughout the article we have assumed a log-additive form for our models, writing the intensity function as

$$\lambda(x) = \exp\!\left(\sum_{i=1}^{n} \beta_i\, \nu_i(x)\right) \qquad (7)$$

for a set of covariates ν_1, …, ν_n. This choice may seem arbitrary; for example, one could use
a type of mixture model similar to those used in Vincent et al. (2009):

$$\lambda(x) = \sum_{i=1}^{n} \beta_i\, \nu_i(x) \qquad (8)$$

Since λ needs to be always positive, we would have to assume restrictions on the coefficients, but in principle this decomposition is just as valid. Both Equations 7 and 8 are actually special cases of the following:

$$\lambda(x) = \Phi\!\left(\sum_{i=1}^{n} \beta_i\, \nu_i(x)\right)$$
for some function Φ (analogous to the inverse link function in generalized linear models; see McCullagh & Nelder, 1989). In the case of Equation 7 we have Φ(x) = exp(x), and in the case of Equation 8 we have Φ(x) = x. Other options are available; for instance, Park, Horwitz, and Pillow (2011) use the following function in the context of spike train analysis:

$$\Phi(x) = \log\!\left(1 + \exp(x)\right)$$
which approximates the exponential for small values of x and the identity for large ones. Single-index models treat Φ as an unknown and attempt to estimate it from the data (McCullagh & Nelder, 1989). From a practical point of view the log-additive form we use is the most convenient, since it yields a log-likelihood function that is easy to compute and optimize and requires no restrictions on the parameter space. From a theoretical perspective, the log-additive model is compatible with a view that sees the brain as combining multiple interest maps ν_1, ν_2, … into a master map that forms the basis of eye-movement guidance. The mixture model implies, on the contrary, that each saccade comes from a roll of the dice in which the next fixation is chosen according to one of the ν_i's. Concretely, if the different interest maps represent, e.g., contrast and edges, then each saccade is driven either by contrast, with a certain probability, or else by edges.