In the fourth experiment of our study, we examine an extension of our approach to validate the capacity of our model to optimize for such customized, more realistic phosphene mappings. Again, we train our end-to-end model on the ADE20k dataset for the semantic boundary reconstruction task, as described in the previous experiment. However, rather than same-sized phosphenes placed on a distorted rectangular grid, this time we use a phosphene map inspired by the aforementioned studies of Srivastava et al. (2007, 2009), who simulate phosphenes in the lower left quadrant of the visual field, with phosphene densities and phosphene sizes adjusted in relation to the eccentricity in the visual field to simulate the effect of cortical magnification. For the phosphene simulation, we formalize a custom phosphene map as a set of \(n\) pre-defined 256 × 256 greyscale images, \(\{P_1, P_2, \ldots, P_n\}\), that each display a single Gaussian-shaped phosphene at a specific location. In our experiment, the number of phosphenes \(n\) is set to 650, 488, or 325. For each image \(P_i\), we generated a phosphene at polar angle \(\phi_i \sim U(\pi, \frac{3}{2}\pi)\), eccentricity \(r_i = x_i + 2x_i^2\) with \(x_i \sim U(0, 1)\), and size \(\sigma_i = 2r_i + 1\). After conversion to Cartesian coordinates, \(P_i\) covers a square area in the lower left quadrant, bounded by the corners (0, −1) and (−1, 0). Note that the described procedure reflects an arbitrary example mapping, which may be replaced to yield any prespecified set of phosphenes. The final SPV image (the output of the simulator) is calculated by taking a weighted sum over all images in the phosphene map:
\begin{eqnarray}
SPV = \sum_{i = 1}^{n} w_i\, P_i, \qquad w_i \in \{0, 1\}
\end{eqnarray}
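To make the example mapping and the simulator output concrete, the sketch below generates such a phosphene map and computes the SPV image as the binary-weighted sum above. It is a minimal NumPy illustration, not the implementation used in the study: the function names, the random choice of active phosphenes in the usage example, and the conversion of eccentricity and phosphene size into pixel units are assumptions made here for clarity.

```python
import numpy as np

def build_phosphene_map(n=650, resolution=256, seed=0):
    """Pre-render n greyscale images, each showing one Gaussian phosphene
    in the lower left quadrant (the square bounded by (0, -1) and (-1, 0))."""
    rng = np.random.default_rng(seed)

    # Sampling as described in the text: phi_i ~ U(pi, 3/2 pi),
    # r_i = x_i + 2 x_i^2 with x_i ~ U(0, 1), sigma_i = 2 r_i + 1.
    phi = rng.uniform(np.pi, 1.5 * np.pi, size=n)   # polar angle
    x = rng.uniform(0.0, 1.0, size=n)
    r = x + 2.0 * x ** 2                            # eccentricity (denser near the fovea)
    sigma = 2.0 * r + 1.0                           # phosphene size grows with eccentricity

    # Cartesian centres; both coordinates are negative because phi lies in (pi, 3/2 pi).
    cx, cy = r * np.cos(phi), r * np.sin(phi)

    # Map the quadrant onto the pixel grid. Dividing by the maximum eccentricity (3)
    # and reading sigma directly in pixels are assumptions of this sketch.
    col = (cx / 3.0 + 1.0) * (resolution - 1)       # x = -1 -> left edge,  x = 0 -> right edge
    row = (-cy / 3.0) * (resolution - 1)            # y = 0  -> top edge,   y = -1 -> bottom edge

    rows, cols = np.meshgrid(np.arange(resolution), np.arange(resolution), indexing="ij")
    maps = np.empty((n, resolution, resolution), dtype=np.float32)
    for i in range(n):
        d2 = (rows - row[i]) ** 2 + (cols - col[i]) ** 2
        maps[i] = np.exp(-d2 / (2.0 * sigma[i] ** 2))   # Gaussian-shaped phosphene
    return maps

def simulate_spv(phosphene_map, weights):
    """SPV = sum_i w_i * P_i with binary weights w_i (the equation above)."""
    weights = np.asarray(weights, dtype=np.float32)
    return np.tensordot(weights, phosphene_map, axes=1)

# Usage example: activate a random subset of the 650 phosphenes.
P = build_phosphene_map(n=650)
w = (np.random.default_rng(1).random(650) > 0.5).astype(np.float32)
spv = simulate_spv(P, w)    # 256 x 256 simulated prosthetic vision image
```

In the end-to-end setting described above, the binary weight vector would of course be produced by the trained encoder's stimulation output rather than sampled at random as in this usage example.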