As noted above, the weights \(w_i\) of the optimal weighted average can be very disparate. Since we seek to select a subset of the simulated neurons whose regular average (all weights equal) closely follows the physiological experiments, we relax the selection problem by solving a regularized version, (5), of the optimization problem (4):
\begin{equation}
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{\mathbf{w}} & \Vert \mathbf{M}\mathbf{w} - \mathbf{t} \Vert_{2}^{2} + \lambda \, \Vert \mathbf{w} \Vert_{2}^{2} \\[6pt]
\mathrm{subject\ to} & w_{i} \ge 0, \; i = 1, \dots, n, \quad \mathrm{and} \\[6pt]
 & \displaystyle \sum_{i=1}^{n} w_{i} = 1,
\end{array}
\tag{5}
\end{equation}
where \(\lambda > 0\) is the trade-off parameter that promotes weight equalization. For \(\lambda = 0\), which is equivalent to solving (4), we found that only \(14\%\) of the simulated model neurons have weights \(w_i \ge 2 \times 10^{-3}\), with only a handful of them carrying the large values that account for \(\sum_{i=1}^{n} w_i = 1\). As \(\lambda\) increases, the regularization term pushes the weights toward the center of the simplex: for \(\lambda = 0.8\), for instance, approximately \(40\%\) of the model neurons have weights \(w_i \ge 2 \times 10^{-3}\).
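To make the procedure concrete, the sketch below solves (5) with CVXPY, one convenient quadratic-programming tool; it is not the code used in this work, and the matrix \(\mathbf{M}\), the target \(\mathbf{t}\), and all dimensions are hypothetical placeholders.

\begin{verbatim}
# Minimal sketch of problem (5): ridge-regularized least squares with the
# weights constrained to the probability simplex. M, t, and the sizes are
# placeholder stand-ins, not the data from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_stim, n_neurons = 50, 500                   # hypothetical dimensions
M = rng.standard_normal((n_stim, n_neurons))  # columns = simulated neurons
t = rng.standard_normal(n_stim)               # physiological target
lam = 0.8                                     # trade-off parameter from the text

w = cp.Variable(n_neurons)
objective = cp.Minimize(cp.sum_squares(M @ w - t) + lam * cp.sum_squares(w))
constraints = [w >= 0, cp.sum(w) == 1]        # simplex constraints of (5)
cp.Problem(objective, constraints).solve()
weights = w.value
\end{verbatim}

Re-running the sketch with larger values of lam spreads the mass of the solution over more coordinates, which is the equalization effect described above.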
The subset of model neurons is selected by applying a threshold to the estimated weights, as proposed in Li, Sundar Rangapuram, and Slawski (2016), and then choosing the 103 neurons with the highest weights. One key difference from Li et al. (2016), however, is that our two-stage procedure is applied to the solution of (5) rather than that of (4). This approach also yields an excellent fit to the V2 data for the L2 model neurons, as shown in Figure 7 (third column). We used \(\lambda = 0.8\) for all fits; lower values of \(\lambda\) increased the fitting error but did not alter the trends, and the same held for higher values.
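The two-stage selection described above can be sketched in the same placeholder setting; the threshold and the subset size are the values quoted in the text, and weights, M, and t continue from the previous sketch.

\begin{verbatim}
# Two-stage selection (sketch): threshold the estimated weights, keep the
# 103 neurons with the largest weights, and form their equal-weight average.
# Continues from the placeholder variables (M, t, weights) defined above.
threshold = 2e-3
candidates = np.flatnonzero(weights >= threshold)
order = np.argsort(weights[candidates])[::-1]   # descending by weight
subset = candidates[order[:103]]                # the 103 highest-weight neurons
equal_avg = M[:, subset].mean(axis=1)           # regular (equal-weight) average
fit_error = np.linalg.norm(equal_avg - t)       # discrepancy from the target
\end{verbatim}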