Open Access
Article | February 2022
Visual–tactile shape perception in the visually restored with artificial vision
Author Affiliations
  • Noelle R. B. Stiles
    Department of Ophthalmology, University of Southern California, Los Angeles, CA, USA
    [email protected]
  • James D. Weiland
    Departments of Biomedical Engineering and Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
    [email protected]
  • Vivek R. Patel
    Department of Ophthalmology, University of California, Irvine, Irvine, CA, USA
    [email protected]
Journal of Vision, February 2022, Vol. 22, 14. https://doi.org/10.1167/jov.22.2.14
Abstract

Retinal prostheses partially restore vision to late blind patients with retinitis pigmentosa through electrical stimulation of still-viable retinal ganglion cells. We investigated whether the late blind can perform visual–tactile shape matching following the partial restoration of vision via retinal prostheses after decades of blindness.

We tested for visual–visual, tactile–tactile, and visual–tactile two-dimensional shape matching with six Argus II retinal prosthesis patients, ten sighted controls, and eight sighted controls with simulated ultra-low vision. In the Argus II patients, the visual–visual shape matching performance was significantly greater than chance. Although the visual–tactile shape matching performance of the Argus II patients was not significantly greater than chance, it was significantly higher with longer duration of prosthesis use. The sighted controls using natural vision and the sighted controls with simulated ultra-low vision both performed the visual–visual and visual–tactile shape matching tasks significantly more accurately than the Argus II patients. The tactile–tactile matching was not significantly different between the Argus II patients and sighted controls with or without simulated ultra-low vision.

These results show that experienced retinal prosthesis patients can match shapes across the senses and integrate artificial vision with somatosensation. The correlation of retinal prosthesis patients’ crossmodal shape matching performance with the duration of device use supports the value of experience to crossmodal shape learning. These crossmodal shape matching results in Argus II patients are the first step toward understanding crossmodal perception after artificial visual restoration.

Introduction
Retinal prostheses partially restore sight to retinitis pigmentosa patients by stimulating surviving retinal ganglion cells with a microelectrode array (Humayun et al., 2012; Luo & da Cruz, 2014, 2016; Weiland, Cho, & Humayun, 2011; Zhou, Dorn, & Greenberg, 2013; Zrenner, 2013). Argus II retinal prostheses use a glasses-mounted camera to capture visual information, which is transmitted to a belt-worn visual processing unit (VPU) (Luo & da Cruz, 2016; Zhou et al., 2013) (Figure 1). The VPU translates the video stream into stimulation parameters and then sends the signal via wire back to the glasses, where a radiofrequency (RF) coil transmits the signal to a second RF coil around the eye. The signal is decoded by an implanted microcircuit. Based on the information in the signal, the microcircuit outputs electrical stimulation pulses that are applied to the microstimulator array, which is proximity coupled to the retina. The prosthesis provides a resolution of 60 electrodes spanning a rectangle of 11 × 18 degrees of visual angle (He, Huang, Caspi, Roy, & Montezuma, 2019). 
Figure 1. Image of the external components of the Argus II retinal prosthesis system (details in the Methods section).
Prosthesis patients are able to perform basic visual tasks such as identifying the direction of motion of a line, reaching for and grasping objects, basic navigation, and object recognition (Humayun et al., 2012; Kotecha, Zhong, Stewart, & da Cruz, 2014; Luo, Zhong, Merlini, Anaflous, Arsiero, Stanga, & da Cruz, 2014; Stronks & Dagnelie, 2014). Argus II retinal prostheses are implanted in late blind patients with light perception or less, and over 350 Argus II prostheses have been implanted to date (Second Sight, 2019). 
In this paper, we investigate whether the visual perception of shape can be learned with artificial vision generated by retinal prostheses and whether this artificial visual shape perception can be matched crossmodally with the tactile perception of shape. In addition, we study the duration of prosthesis use required for Argus II retinal prosthesis patients to visually and crossmodally match shapes. This training and relearning period allows for the recalibration of their visual cortical network to the unusual properties of artificial vision, and the recalibration of their spatial perception across the senses. In particular, we postulate that spatial learning by retinal prosthesis patients can be divided into two parts: (1) visual spatial recalibration between the new artificial vision and the memory of natural vision, and (2) multisensory spatial recalibration between the new artificial vision and the other senses (tactile, in this case) (Figure 2). 
Figure 2. Three diagrams of the hypothesized learning phases for shape perception in the late blind with artificial vision. Visual learning was evaluated in this study with a visual–visual shape matching task, and crossmodal learning was evaluated with a visual–tactile shape matching task. The gradual increase in crossmodal learning model (bottom) is shown as two stages. The first stage has a low level of crossmodal learning (not shown), allowing visual learning to predominate (blue, left). The second stage of learning (after bifurcation) has stronger crossmodal learning (green, middle) than the first stage, with substantial learning in both the visual (blue) and crossmodal (green) domains.
The visual phase of learning requires patients to relate artificial vision to pre-existing natural visual processing (Figure 2, in blue). The existing visual processing hierarchy is innately adapted to natural visual input and therefore requires reorganization to process the new artificial visual input. In particular, artificial vision (Argus II prosthesis vision) is quite different from natural vision, making adaptation and recalibration critical to restoring functionality. Argus II artificial vision relies on a head-mounted camera, so the view is directed by head movements rather than by the eye movements of natural vision. In addition, Argus II vision has low spatial and temporal resolution relative to natural vision, and a different visual magnification relative to natural vision (phosphenes generated from the same electrode size can have a range of shapes and sizes) (Luo & da Cruz, 2016; Luo, Zhong, Clemo, & da Cruz, 2016; Zhou et al., 2013). We evaluated the recalibration step between artificial vision and natural vision through a visual–visual shape matching task (with artificial vision), in which the Argus II patients determine whether two sequentially presented visual two-dimensional (2D) shapes are the same or different (a same–different two-alternative forced-choice judgment, abbreviated herein as 2AFC-SD). 
Crossmodal learning is also required by Argus II patients in order to associate the new visual processing of artificial vision with the other senses (i.e., multisensory recalibration) (Figure 2, in green). Crossmodal learning requires that the existing spatial mapping of tactile sensation, for example, be related to the spatial mapping of the new artificial vision. Because the vision of Argus II patients is substantially different from natural vision, it is likely that the crossmodal matching of shape must be learned and may have different properties than crossmodal perception in the naturally sighted. Furthermore, years of blindness reorganize sensory processing in the brain and can cause neural network changes that bridge early sensory regions (Amedi et al., 2007; Pascual-Leone & Hamilton, 2001; Poirier, De Volder, & Scheiber, 2007; Sadato, Pascual-Leone, Grafman, Ibanez, Deiber, Dold, & Hallett, 1996). This type of cortical reorganization may also impact the ability of the visually restored to integrate information across the senses. We evaluated crossmodal learning in Argus II patients with a visual–tactile shape matching task, in which 2D visual shapes are matched to 2D tactile shapes. 
The timeline of multisensory spatial learning (visual–tactile matching) was also studied relative to the visual spatial learning (visual–visual matching) to determine whether the two phases of learning are sequential (Figure 2, top), simultaneous (Figure 2, middle), or a mixture of simultaneous and sequential (i.e., gradually increasing crossmodal learning with continuous visual learning) (Figure 2, bottom). 
Overall, in this paper we investigate whether patients with artificial vision can learn to match 2D geometric shapes visually, tactilely, and crossmodally (visual–tactile matching). In addition, we determine the strength of unimodal and bimodal matching in Argus II patients relative to duration of prosthesis use in order to investigate the timeline of visuospatial learning. Individual performance in visual, tactile, and crossmodal matching was also compared among Argus II visual restoration patients (n = 6), sighted controls (n = 10), and sighted controls with simulated ultra-low vision (n = 8) to determine the relative strength of each shape matching task. 
Methods
Participants
Six patients blinded by retinitis pigmentosa with implanted Argus II retinal prostheses participated in this study (two females, four males) (mean Argus II patient age, 65.33 years; SD = 11.08 years; range, 46–76 years) (Table 1). All Argus II patients had light perception or less in both eyes (Table 1) and wore an eyepatch during the study if they reported natural light perception. The Argus II patients had an average of 29.33 years of blindness and an average of 23.92 months since Argus II device implantation (duration of prosthesis use) (Table 1). The Argus II device function, patient training, and frequency of use are detailed in the Methods section: The Argus II retinal prosthesis device. Two Argus II participants previously had an Argus I implanted in their other eye before receiving the Argus II implant. For these two patients, the duration of the Argus I use was counted within the period of blindness due to the significantly lower resolution of the Argus I device (16 electrodes total). 
Table 1. Argus II patient information. Argus II patients self-reported their demographic information, including age, gender, duration blind, duration with Argus II, and visual perception (light perception or no light perception). If the patient reported light perception, an eye patch was used to block any natural visual perception during the Argus II tasks. F, female; M, male; LP, light perception.
Ten age-matched sighted participants performed the experiments (seven females, three males) (mean sighted participant age, 63.5 years; SD = 4.70 years; range, 55–69 years) (Supplementary Table S1). The sighted controls used their natural visual and tactile perception to perform the tasks. The experiment with the sighted controls was performed with the same protocol and methods as the experiment with Argus II patients. 
Eight sighted participants with simulated ultra-low vision also performed the experiments (five females, three males) (mean sighted participant age, 41.88 years; SD = 15.48 years; range, 25–68 years) (Supplementary Table S2). The visual acuity of the right eye (eye patch on left eye) and the visual acuity of the right eye with simulated ultra-low vision are reported in Supplementary Table S2. The right eye visual acuity in these sighted controls ranged from 20/20 to 20/80+4 (Supplementary Table S2). The right eye visual acuity with simulated ultra-low vision ranged between 20/600 and 20/1000–2 with a mean of 20/775+2 (Supplementary Table S2). The procedure for the measurement of visual acuity and the simulation of ultra-low vision are detailed in the Methods section: Simulation of ultra-low vision with sighted controls. 
All participants gave written informed consent, and all experiments were approved by the University of Southern California institutional review board. This research adhered to the tenets of the Declaration of Helsinki. 
Argus II retinal prosthesis device
The Argus II retinal prosthesis is manufactured by Second Sight Medical Products (Humayun et al., 2012; Luo & da Cruz, 2014, 2016; Weiland et al., 2011; Zhou et al., 2013; Zrenner, 2013). The device provides visual perception to those blinded by retinitis pigmentosa by stimulating still-viable retinal ganglion cells with an epiretinal microelectrode array. The array connects via wire to a scleral buckle, which includes a programmable stimulator and coil. The implanted coil receives wireless data and power from an RF coil on a pair of glasses (Figure 1). The visual environment is captured by a small camera mounted on the bridge of the pair of glasses. The visual stream from the camera is transmitted via wire to a VPU (worn on a belt) that processes the visual information and sends stimulation parameters back up through the wire to the glasses-mounted RF coil. The Argus II device has a resolution of 6 × 10 pixels, presented over an 11 × 18 degree field of view (He et al., 2019). 
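As a rough illustrative estimate (our calculation, not a figure stated by the authors), spreading the 6 × 10 electrode grid over the roughly 11° × 18° field implies a sampling pitch on the order of

$$\frac{11^\circ}{6} \approx 1.8^\circ \quad \text{and} \quad \frac{18^\circ}{10} = 1.8^\circ$$

per electrode in each direction, which illustrates why only coarse, high-contrast shapes are resolvable with the device.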
About 1 month after surgical implantation of the device, the Argus II patients return to the medical center for device calibration and the initial device experience. The patient is then allowed to take the glasses and VPU home for personal use. The patients can participate in rehabilitation training provided by Second Sight, which often lasts 3 to 4 hours per day for several days. Of the six Argus II patients who participated in this study, five performed the rehabilitation training (Supplementary Table S3). Argus II patients reported using the Argus II device at a frequency ranging from once per day to three or four times per week (Supplementary Table S3). Additional information on the rehabilitation training and the frequency of device use for each participant is detailed in Supplementary Table S3. 
Experimental setup
The experiment was performed at the University of Southern California Keck School of Medicine in a dedicated experimental room. A table covered with black felt was placed against a large, closed window covered in black felt fabric. The experimental room was lit by the internal artificial lights and by natural light entering through an open window. The experiment was videotaped in order to enable additional data analysis and to provide video examples of the tasks (with participant permission). The videos were used to check the stimuli presented and the participants' answers for select trials. 
The Argus II participants used their Argus II retinal prosthesis device to view the white shapes placed on top of the black felt fabric, and their natural tactile perception (one or both hands) to touch the shapes placed underneath the black felt fabric. Occasionally, an Argus II participant was too short in stature to view the shapes on the table from a seated position without an overly oblique viewing angle; in that case, the participant was allowed to stand at the edge of the table and look down at the shapes from a less oblique angle. 
Experimental stimuli
Four shapes were used for the tactile and visual portions of the experiment (Figure 3A and Supplementary Figure S1). The shapes were cut out of white poster board, which gave them each a thickness of about one quarter of an inch. Shape 1 was a vertical rectangle with the dimensions of 8 inches high by 1.5 inches wide. Shape 2 was a horizontal rectangle with the dimensions of 1.5 inches high by 8 inches wide. Shape 3 was a large circle with a diameter of 5.5 inches. Shape 4 was a small circle with a diameter of 1.5 inches. Each shape was labeled with its shape number (shape numbers 1–4) on the back side; these shape numbers corresponded to the stimulus numbers prerecorded in the experimental notebook (numbers were recorded in the order they would be presented during the task). Shape 1 and shape 2 comprised one physical item, which was oriented vertically (shape 1) or horizontally (shape 2) during stimulus presentation. During the tactile trials, the poster board shapes were placed on the table surface between two layers of black felt fabric (one above and one below), so that the shape edges could be felt by touch but not seen. The participant would then place their hands on top of the table and feel the tactile shape edge through the top layer of black felt fabric. During the visual trials, the shapes were placed on top of both layers of the black felt fabric on the table and viewed (without touch) as high-contrast white shapes. 
Figure 3. Shape matching task schematics. (A) The shapes tested in the shape matching tasks are demonstrated in the diagram at the top of the figure. (B) A schematic is shown at the bottom of the figure depicting the three types of object shape comparisons performed in the shape matching tasks.
Experimental preparation and stimuli randomization
An experimental notebook was used to record the participants' responses to each part of the experimental tasks. This notebook was set up before the experiment, which enabled the randomization of the stimulus order (using the randperm function in MATLAB; MathWorks, Natick, MA) for each of the blocks of the experimental trials (tactile–tactile matching block, visual–tactile matching block, and visual–visual matching block) (Figure 3B). Within each block, the stimuli were tested in pairs; all possible shape pairings were included once for each presentation order, and each shape was additionally paired with itself one extra time. The only shape pairing excluded was shape 2 (horizontal rectangle) paired with shape 3 (large circle), due to redundancy. Therefore, the total number of trials was 18 for each block: four shapes × four shapes produces 16 ordered combinations, minus two pairings (the two orders of the excluded shape 2–shape 3 pairing), plus four pairings (the four extra same-shape pairs, e.g., shape 2 vs. shape 2). With three experimental blocks (tactile–tactile matching block, visual–tactile matching block, and visual–visual matching block) and 18 trials per block, there were 54 trials in total. 
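To make the trial construction concrete, the following is a minimal MATLAB sketch of one way to build and shuffle the 18 stimulus pairs for a single block (illustrative only, not the authors' actual script; the shape indices and variable names are assumptions):

```matlab
% Shape indices (assumed for illustration): 1 = vertical rectangle,
% 2 = horizontal rectangle, 3 = large circle, 4 = small circle.
pairs = [];
for first = 1:4
    for second = 1:4
        % Exclude the shape 2 vs. shape 3 pairing (both presentation orders).
        if isequal(sort([first second]), [2 3])
            continue
        end
        pairs = [pairs; first second];
    end
end
pairs = [pairs; (1:4)' (1:4)'];   % each same-shape pair repeated one extra time

% 4 x 4 = 16 ordered pairs, minus 2 excluded, plus 4 repeats = 18 trials.
assert(size(pairs, 1) == 18);

% Randomize the trial order within the block (as with MATLAB's randperm).
trialOrder = pairs(randperm(size(pairs, 1)), :);
```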
Overall experimental task
As mentioned above, the experiment was performed in three blocks of 18 trials each. The first block was the tactile–tactile matching block, the second block was the visual–tactile matching block, and the third block was the visual–visual matching block (Figure 3B). The experimental blocks were performed in this order to minimize the amount of visual shape learning transferred between the blocks. The visual–tactile matching block employs a touch-to-vision matching task, in which the tactile stimulus is always presented first. Participants were allowed to take short breaks between the experimental blocks as needed. Participants were provided no feedback on performance during the experiment. 
The experiment was designed such that each shape was viewed one at a time and the participant indicated whether the second shape was the same as or different from the first. This procedure was used (instead of a match to sample procedure in which two different shapes would be simultaneously presented) in order to simplify the visual portion of the task by not requiring the interpretation of two objects simultaneously. 
Tactile–tactile shape matching block: Experimental task details
The Argus II participants turned off the Argus II device during the tactile–tactile matching task. The Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision used their natural tactile perception for this task. The participant was seated at a table covered with two layers of black felt—one to cover the table and one to place the shapes underneath for tactile exploration. The experimenter sat at the table on the left side of the participant. An experimental trial began with the experimenter placing all of the shapes under one of the black felt layers on the table. The experimenter then took one shape, moved it under the black felt fabric, and placed it in front of the participant. The experimenter told the participant to explore the shape under the black felt by placing their hands on top of the fabric and feeling for the shape. Following the exploration of the first shape, the experimenter moved the shape under the black felt fabric away from the participant and replaced it with another shape (still under the black felt fabric). The participant was then asked to explore the second shape with touch and determine whether it was the same as or different from the first shape. The participant was told (and was periodically reminded) that the shape was considered different if it had a different size, shape, or orientation. The participant felt the shape under the black felt and reported whether it was the same as or different from the previous shape. Participant responses were recorded in a notebook. 
Visual–tactile shape matching block: Experimental task details
The Argus II participants used their Argus II device (artificial vision) and their natural tactile perception during the visual–tactile shape matching task. The sighted participants used their natural tactile perception and natural visual perception for this task. The sighted participants with simulated ultra-low vision used their natural tactile perception and the simulated ultra-low vision for this task. 
The participant and experimenter were seated at a table covered with black felt (same configuration as for the tactile–tactile matching block). The experimenter began the task by moving a shape under the black felt fabric in front of the participant, and then asking them to explore the shape. After the participant completed their tactile exploration on top of the black felt fabric, the first shape was moved under the black felt fabric away from the participant. The experimenter then placed the second shape on top of the black felt fabric in front of the participant (white object on black felt fabric), and then asked the participant to view the shape, but not to touch it. The participant was asked, following visual exploration, whether the second shape was the same as or different from the first shape. The participant was told prior to the trial (and periodically reminded) that the second shape was considered different if it had a different size, shape, or orientation than the first shape. The participant reported whether the shape was the same or different. Participant responses were recorded in a notebook. The shape was removed and placed back under the black felt fabric. 
Visual–visual shape matching block: Experimental task details
The Argus II patients used their Argus II device (artificial vision) for the visual–visual matching task. The sighted participants used their natural visual perception for this task. The sighted participants with simulated ultra-low vision used the simulated ultra-low vision for this task. At the beginning of the task and between trials, the shapes were all kept under a layer of black felt fabric in front of the experimenter. The task began when the experimenter removed the first shape from under the black felt fabric and placed it in front of the participant on top of the black felt fabric (white object on black felt fabric). The participant viewed the shape without touching it, and then the experimenter removed the shape and placed it under the black felt fabric. A second shape was next removed from under the black felt fabric and placed in front of the participant for comparison to the first shape. The participant reported whether the second shape was the same as or different from the first, and the experimenter recorded this response. The participant was told (and was periodically reminded) that the second shape was considered different if it had a different size, shape, or orientation than the first shape. 
Simulation of ultra-low vision with sighted controls
The simulation of ultra-low vision was generated with a monocular eye patch and an opaque face mask with a small viewing window (Supplementary Figure S2). The opaque face mask was produced by covering a clear face shield with black duct tape such that the interior of the shield was entirely black, except for a 0.28 × 0.435-inch window. The window size was calculated to correspond to approximately 11 × 18 degrees of visual angle (with the viewing window approximately 1.40 inches from the right eye), and this approximate angular extent was further verified by the experimenter viewing test stimuli at fixed distances. (Note that these numbers are approximate, given that each subject had slightly different eye and head geometry relative to the mask.) Before the black duct tape was added to the mask surface, four Bangerter occlusion foils (blurring filters designed to simulate low vision: 20/70, 0.3; 20/200, 0.1; 20/200, 0.1; 20/100, 0.2) were placed over the region of the right eye window. The tape was then used to secure the Bangerter occlusion foils in place. During the experiment, the participant first put on a monocular eye patch over the left eye and then put on the opaque face mask, aligning the small window with their right eye. To prevent substantial variation in the window field of view, subjects were not allowed to pull and then hold the opaque face mask closer to their right eye after it had been positioned on their head. 
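As a check on the stated geometry (an approximate calculation using the reported aperture size and the roughly 1.40-inch eye-to-window distance; exact values varied with each subject's head geometry), the two angular extents of the 0.28 × 0.435-inch window are

$$2\arctan\!\left(\frac{0.28/2}{1.40}\right) \approx 11.4^\circ \quad \text{and} \quad 2\arctan\!\left(\frac{0.435/2}{1.40}\right) \approx 17.7^\circ,$$

consistent with the intended 11° × 18° field of view.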
The acuity of the participants’ right eye vision with and without the opaque face mask was evaluated with a low vision letter chart with Sloan letters in logMAR increments from Precision Vision (Woodstock, IL). The right eye visual acuity was measured with a monocular eye patch over the left eye (Supplementary Table S2). The right eye simulated ultra-low visual acuity was measured with the participant wearing a left eye patch and the opaque face mask (Supplementary Table S2). 
Sighted controls with simulated ultra-low vision performed the shape matching tasks using the same protocol as the Argus II patients and the other normally sighted controls. During the tactile–tactile matching task, the participants were allowed to remove the mask and eye patch (due to the purely tactile nature of the task), whereas for the visual–tactile and visual–visual matching tasks the participants were required to wear the opaque face mask and the monocular eye patch. 
Statistical analyses
Because proportional data violate the normality and homogeneity-of-variance assumptions of parametric statistical tests, each participant's performance (percent correct) was transformed using an arcsine square root transformation (asin and sqrt functions in MATLAB) (Ahrens, Cox, & Budhwar, 1990; Studebaker, 1985). Parametric statistical tests were then performed on the transformed data (the chance level was transformed in the same way for the significance calculations on the transformed data) (reported in the Results section). The mean and standard deviation were also calculated on the transformed data; however, they were then reverse transformed to enable intuitive interpretation (using the sin function in MATLAB) (reported in the Results section). For the correlation analyses, the percent correct data were transformed, but the demographic variables (duration of prosthesis use and duration of blindness) were not. The figures present untransformed data. 
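A minimal MATLAB sketch of this transformation step (illustrative only; the example values and variable names are assumptions, and the full inverse of the arcsine square root transform is written out explicitly):

```matlab
% Example proportion-correct values for one task, one entry per participant
% (placeholder numbers for illustration only).
pc = [0.94 0.89 0.72 0.61 0.56 0.50];

% Arcsine square root transform to stabilize the variance of proportional data.
pcT     = asin(sqrt(pc));
chanceT = asin(sqrt(0.5));        % transform the 2AFC chance level the same way

% Parametric test on the transformed values, e.g., a one-sample t-test
% against the transformed chance level.
[~, p] = ttest(pcT, chanceT);

% Mean and SD computed on transformed data; the mean is reverse transformed
% for reporting. The full inverse of y = asin(sqrt(x)) is x = sin(y)^2.
mT = mean(pcT);
sT = std(pcT);
mReported = sin(mT)^2;
```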
All statistical tests performed were based on a priori hypotheses. Based on our a priori hypothesis that mean accuracy varied among the participant groups for each of the shape tasks (tactile–tactile task, visual–visual task, and visual–tactile task), one-way ANOVAs were performed. This approach reduced the total number of comparisons by comparing performance only across the participant groups (Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision) for each task, and not across tasks (tactile–tactile, visual–visual, and visual–tactile matching). If the one-way ANOVA indicated a significant difference among the groups, pairwise comparisons among the three participant groups were then performed with the multcompare function and a Bonferroni multiple comparisons correction. The correlation analyses were performed with Pearson's correlation using the corr function in MATLAB. Statistical significance relative to chance was calculated using the ttest function with default settings in MATLAB. 
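The following MATLAB sketch illustrates the sequence of tests described above (placeholder data and assumed variable names; it is not the authors' analysis script):

```matlab
% Placeholder transformed accuracies for one task (values for illustration only).
argusT   = asin(sqrt([0.55 0.61 0.67 0.72 0.89 0.94]));   % Argus II patients, n = 6
sightedT = asin(sqrt([1 1 0.94 1 1 0.94 1 1 1 0.94]));    % sighted controls, n = 10
ulvT     = asin(sqrt([0.94 1 0.94 1 0.94 1 1 0.94]));     % simulated ultra-low vision, n = 8

perfT  = [argusT(:); sightedT(:); ulvT(:)];
groups = [repmat({'ArgusII'}, numel(argusT), 1); ...
          repmat({'Sighted'}, numel(sightedT), 1); ...
          repmat({'ULV'},     numel(ulvT), 1)];

% One-way ANOVA across the three participant groups for this task.
[pAnova, ~, stats] = anova1(perfT, groups, 'off');

% If the ANOVA is significant, pairwise group comparisons with Bonferroni correction.
if pAnova < 0.05
    pairwise = multcompare(stats, 'CType', 'bonferroni', 'Display', 'off');
end

% Pearson correlation between transformed visual-tactile accuracy and months of
% prosthesis use (placeholder values).
monthsOfUse = [6 10 18 24 36 48]';
[rho, pCorr] = corr(monthsOfUse, argusT(:), 'Type', 'Pearson');
```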
Results
Shape matching in Argus II patients
Argus II patients had significantly greater than chance performance (2AFC-SD, 0.5 fraction correct) at the tactile–tactile matching and the visual–visual matching tasks (tactile–tactile matching: M = 1, SD = 0.02, t(5) = 12.94, p = 4.91 × 10–5; visual–visual matching: M = 0.73, SD = 0.05, t(5) = 2.66, p = 0.04) (Figure 4). The cross-sensory matching performance (visual–tactile) was not significantly greater than chance due to variability across the patient group, although two patients performed the visual–tactile matching task with high accuracy (visual–tactile matching: M = 0.75, SD = 0.16, t(5) = 1.57, p = 0.18) (Figure 4). (Note that the accuracy of matching particular object pairs was also evaluated across the Argus II patient group; details are provided in Supplementary Figure S3.) 
Figure 4. Shape matching task results with Argus II patients (n = 6), sighted controls (n = 10), and sighted controls with simulated ultra-low vision (n = 8). (A) Fraction correct for the three experimental blocks (tactile–tactile, visual–visual, and visual–tactile) in Argus II patients. (B) Average fraction correct for the shape matching task for the Argus II patient group in comparison to the sighted participant group and the sighted participant group with simulated ultra-low vision. The dashed line represents chance (2AFC, or 0.50 fraction correct). The error bars represent 1 SD over their full length. The data in (A) do not have error bars for the individual participant results, as these data represent the fraction correct for each participant; the average fraction correct does have error bars across the patient group.
Shape matching in sighted controls
A control participant group of age-matched sighted individuals was tested as a comparison to the Argus II patient group (sighted participant age: M = 63.5 years, SD = 4.70 years; Argus II patient age: M = 65.33 years, SD = 11.08 years). The sighted participants (n = 10) performed the same task as the Argus II patients, but used their natural visual and tactile perception to compare the shapes. The sighted participants performed the tactile–tactile, visual–visual, and visual–tactile shape matching significantly above chance (2AFC-SD, 0.5 fraction correct) (tactile–tactile matching: M = 0.98, SD = 0.02, t(9) = 14.48, p = 1.54 × 10–7; visual–visual matching: M = 0.99, SD = 0.02, t(9) = 15.09, p = 1.07 × 10–7; visual–tactile matching: M = 0.99, SD = 0.02, t(9) = 16.28, p = 5.53 × 10–8) (Figure 4B and Supplementary Figure S4). 
Shape matching in sighted controls with simulated ultra-low vision
A control group of sighted participants with simulated ultra-low vision also performed the shape matching tasks as a comparison to the Argus II patient group. The sighted ultra-low vision simulation participants performed the tactile–tactile, visual–visual, and visual–tactile shape matching significantly above chance (2AFC-SD, 0.5 fraction correct) (tactile–tactile matching: M = 0.96, SD = 0.03, t(7) = 9.49, p = 3.01 × 10–5; visual–visual matching: M = 0.99, SD = 0.02, t(7) = 12.67, p = 4.42 × 10–6; visual–tactile matching: M = 0.97, SD = 0.02, t(7) = 11.35, p = 9.24 × 10–6) (Figure 4B and Supplementary Figure S5). 
Shape matching comparison among Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision
The tactile–tactile task one-way ANOVA did not show a significant variation across the participant groups (Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision) (F(2, 21) = 1.51, p = 0.25) (Figure 4B). The visual–visual task one-way ANOVA showed a significant variation across the participant groups (Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision) (F(2, 21) = 15.98, p = 6.05 × 10–5). When comparisons were made between the groups, the Argus II patients had significantly different performance relative to the sighted controls and the sighted controls with simulated ultra-low vision (sighted controls vs. Argus II patients: p = 1.66 × 10–4; sighted controls with simulated ultra-low vision vs. Argus II patients: p = 1.52 × 10–4) (Figure 4B). The two control groups were not significantly different (sighted controls vs. sighted controls with simulated ultra-low vision: p = 1) (Figure 4B). 
The visual–tactile task one-way ANOVA showed a significant variation across the participant groups (Argus II patients, sighted controls, and sighted controls with simulated ultra-low vision) (F(2, 21) = 6.79, p = 5.30 × 10–3). When comparisons were made between the groups, the Argus II patients had significantly less accurate performance relative to the sighted controls and the sighted controls with simulated ultra-low vision (sighted controls vs. Argus II patients: p = 4.82 × 10–3; sighted controls with simulated ultra-low vision vs. Argus II patients: p = 0.04) (Figure 4B). The two control groups were not significantly different (sighted controls vs. sighted controls with simulated ultra-low vision: p = 1) (Figure 4B). 
Argus II patient shape matching correlations
Correlation analyses were performed between the Argus II patient shape matching performance (visual–visual and visual–tactile matching) and both the duration of Argus II retinal prosthesis use and the duration of blindness. The visual–tactile matching was found to significantly correlate with the duration of prosthesis use (rho = 0.88, p = 0.02) (Figure 5). Visual–visual shape matching performance did not significantly correlate with the duration of prosthesis use (rho = 0.76, p = 0.08). In addition, the visual–tactile and visual–visual shape matching performance did not significantly correlate with the duration of blindness (visual–tactile matching: rho = –0.63, p = 0.18; visual–visual matching: rho = –0.59, p = 0.22). The Argus II patient visual–tactile matching performance significantly correlated with their visual–visual matching performance (rho = 0.91, p = 0.01). 
Figure 5. Shape matching performance relative to duration of Argus II use, showing the fraction correct for visual–visual matching and visual–tactile matching in Argus II patients relative to the duration of prosthesis use in months. Each participant is represented by two data points, one in gray and one in purple (n = 6). Linear fits of the visual–tactile matching and the visual–visual matching are shown as dotted lines in the color matching the relevant data points. Chance is 0.50 fraction correct (2AFC) for all of the shape matching tasks. The duration of prosthesis use is presented in months.
Discussion
Shape matching was shown to be learned visually by the Argus II retinal prosthesis patients and crossmodally by the two most experienced patients. Despite the simplicity of the shape task, visual and crossmodal shape matching were still significantly more accurate in the sighted controls and in the sighted controls with simulated ultra-low vision than in the Argus II patients, in part due to the substantial variability across the Argus II users. Overall, these results support the hypothesis that the late blind with artificial vision can learn both visual shape perception and crossmodal (visual–tactile) shape perception with sufficient training. 
Crossmodality and visual restoration
Very few studies have investigated crossmodal learning and interactions in the late blind visually restored. Strategies for visual restoration in the late blind include retinal prostheses, cortical prostheses (Beauchamp et al., 2020; Niketeghad & Pouratian, 2019), gene therapy (Apte, 2018; Lam et al., 2019; Mowad et al., 2020), optogenetic therapy (Sahel et al., 2021), and stem cell approaches (Kashani et al., 2018; Roska & Sahel, 2018). Retinal prosthesis research has largely focused on visual tasks with an emphasis on basic visual recognition and localization. However, Garcia, Petrini, Rubin, da Cruz, and Nardini (2015) investigated the interaction of non-visual self-motion cues and artificial vision during navigation and found that half of the patients showed multisensory integration in one task (i.e., two out of four Argus II patients). In addition, Stiles, Patel, and Weiland (2021) recently demonstrated that Argus II patients have auditory–visual crossmodal correspondences and that auditory cueing can accelerate Argus II patients' visual search. For other types of visual restoration, Saenz, Lewis, Huth, Fine, and Koch (2008) showed that two blind individuals had crossmodal and visual responses in MT+/V5 co-existing after vision restoration. In addition, Mowad et al. (2020) performed neuroimaging on gene therapy patients before and after treatment and showed an increase in visual region activation during a tactile task following partial visual restoration. 
The high variability of crossmodal perception in Argus II patients may be due to the variability of their restored visual capabilities, as well as the cortical crossmodal reorganization that can occur during blindness. In particular, changes in the tactile–visual cortical neural network occur during blindness, such that tactile stimuli can activate visual cortex (Cunningham et al., 2015; Sadato et al., 1996). This crossmodal reorganization could impact the restoration of crossmodal perception. In addition, the differences between Argus II perception and natural visual perception could also make artificial vision more difficult to relate to the other senses. 
This paper adds to the late blind visual restoration literature by showing that patients with partial visual restoration with retinal prostheses and extended device training can match visual and tactile shapes. The rehabilitation of crossmodal interactions is critical for the optimal integration of sensory information in the visually restored and may indicate potential multisensory training techniques that could improve patient outcomes (discussed further below). 
Several papers have studied the potential for crossmodal interactions following recovery from cataracts in the congenitally blind during the critical period (Chen et al., 2016; Guerreiro, Putzar, & Röder, 2015; Putzar, Goerendt, Lange, Rösler, & Röder, 2007; Sourav, Kekunnaya, Shareef, Banerjee, Bottari, & Röder, 2019). Congenital cataract patient recovery has multiple differences relative to late blind patient recovery studied in this paper (Held, Ostrovsky, de Gelder, Gandhi, Ganesh, Mathur, & Sinha, 2011). Congenital cataract patients have natural high-resolution vision restored, in contrast to Argus II patients, who have artificial vision restored, which has a low resolution, fading of stimuli, and abnormal phosphene shapes (Luo & da Cruz, 2016; Luo et al., 2016; Zhou et al., 2013). In addition, congenital cataract patients have visual restoration during the critical period, in which the brain is particularly plastic and adaptable to visual stimuli, whereas Argus II patients have visual restoration in the later decades of life when cortical plasticity is waning (Freitas et al., 2011; Pascual-Leone et al., 2011). Finally, most of the cataract patients studied have been congenitally blind with no visual experience prior to visual restoration, whereas the Argus II patients have been late blind with visual perception up to young adulthood followed by decades of blindness. Therefore, both the properties of the restored vision and the function and plasticity of the retina and brain are different in congenital cataract patients relative to Argus II patients. 
Comparison of Argus II patients and sighted controls with simulated ultra-low vision
Sighted controls with simulated ultra-low vision performed the shape matching task visually, tactilely, and crossmodally. These sighted controls had their visual acuity reduced to an average of 20/775+2, their vision restricted to monocular perception (right eye only), and their visual field limited to 11° × 18°. Despite the reductions in resolution and field of view and the removal of binocularity, the sighted controls with simulated ultra-low vision did not perform significantly differently on the shape tasks than the sighted controls with natural vision. This similarity in performance is likely due to the simplicity of the task (see Image Supplementary Table S1 for low-resolution images of the experimental shapes), as well as the considerable adaptability of the visual and crossmodal shape processing of the human brain. 
The simulated ultra-low vision presented to the sighted controls was designed to be similar to that provided by the Argus II device. The Argus II device has been shown to provide a visual acuity of up to 20/1260 as of 2012 (Humayun et al., 2012), and up to 20/200 with the use of digital zoom (Sahel, Mohand-Said, Stanga, Caspi, & Greenberg, 2013). Another study published in 2013 evaluated the ability of 21 Argus II patients to recognize letters. Of the six highest performing Argus II patients in that study, the best subject recognized letters as small as 1.7°, whereas the smallest letters the lowest performer among those six could recognize were 34.2° in size (da Cruz et al., 2013). Our best-performing sighted control with simulated ultra-low vision recognized letters as small as 2.64°, or 20/600 Snellen acuity, which is a lower acuity than that of the best Argus II patient evaluated by da Cruz. Similar to our simulated ultra-low vision, the Argus II device provides monocular perception and a visual field of 11° × 18° (see Image Supplementary Table S1 for images of the experimental shapes at a resolution equivalent to the Argus II device). (Note that the sighted controls had no training with the simulated ultra-low vision, whereas the Argus II patients had training and frequently used their devices for months; see Supplementary Table S3.) (Note also that, despite these similarities, the sighted controls with simulated ultra-low vision likely had a higher visual resolution than the lowest performing Argus II patients in this study and other studies. Therefore, this sighted control group is most relevant to the four most accurate Argus II patients on the visual–visual matching task, who performed visual–visual matching at least 15% above chance.) 
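For reference, the letter-size-to-Snellen conversion used above presumably follows the standard convention that a 20/20 letter subtends 5 arcmin; as a worked check for the 2.64° figure (our calculation, not one stated by the authors),

$$2.64^\circ = 158.4 \ \text{arcmin}, \qquad 20 \times \frac{158.4}{5} \approx 634,$$

i.e., approximately 20/600 Snellen acuity.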
Although most of the Argus II patients tested were likely similar in resolution and field of view to our sighted controls with simulated ultra-low vision, our results show that these sighted controls were significantly more accurate at visual–visual and visual–tactile matching of the shapes than the Argus II patients (Figure 4). Overall, this indicates that the Argus II patients' difficulties with perceiving shape visually and crossmodally are not likely due to the monocularity, field of view, or ultra-low resolution of the device. (Note that there was no significant difference between Argus II patients and sighted controls with simulated ultra-low vision at the tactile–tactile matching task. This indicates that the group differences on the visual–visual and visual–tactile matching tasks were not due to differences in task understanding.) 
In contrast, the challenges of shape perception in vision and particularly across the senses with the Argus II device are more likely due to crossmodal plasticity during blindness (Cunningham et al., 2015), visual phosphene shape (Beyeler, Nanduri, Weiland, Rokem, Boynton, & Fine, 2019), surgical implantation (Beyeler et al., 2019), duration of blindness, perceptual fading (Avraham, Jung, Yitzhaky, & Peli, 2021; Fornos et al., 2012; Zrenner et al., 2011), and retinal remodeling during blindness (Marc & Jones, 2003; Shintani, Shechtman, & Gurwood, 2009). In particular, the elongated and variable shapes of the individual phosphenes generated by retinal prostheses could generate false shape cues that hamper the perception of the real object shape as defined by the relative brightness of the phosphenes. In other words, like the Lincoln effect, the higher frequency phosphene edges could mask the lower frequency visual shape information (Harmon & Julesz, 1973). Ongoing psychophysical, biomedical engineering, and neuroscience research studies are further investigating each of these factors in order to develop mitigation strategies, which would improve patient outcomes across the retinal prosthesis patient population. 
Argus II patient intersubject variability and visual function assessment
As mentioned in the previous section, the sighted controls with simulated ultra-low vision serve as a comparison to the high-functioning Argus II patients, a comparison supported by the cited studies reporting similar visual acuities in Argus II patients. However, it is certainly possible (and likely) that some of the Argus II patients we tested were not in this category. Although all of the patients used the device regularly, many did not achieve the level of resolution evaluated in the sighted controls with simulated ultra-low vision. To evaluate these low-functioning Argus II patients, we also performed the visual–visual matching control task, which indicates whether the patients can accurately perform the visual portion of the crossmodal task. 
The visual shape matching task is in many ways a more basic version of visual acuity tasks. In particular, the shapes that patients are matching in the visual–visual matching task are the building blocks of letters (circles and rectangles), and in that way the shape task is likely easier than a visual acuity task, which has more complex shapes. Our shape matching task, like a visual acuity task, is a matching to memory task. The key difference between these two tasks is that the visual shape matching task requires immediate memory without semantic labels, whereas visual acuity tasks require longer term memory with more training and semantic labeling. Overall, we argue that our visual shape task is a very basic task, which is why the ultra-low vision patients can on average perform it accurately. 
Two Argus II patients performed the visual–visual matching highly accurately (greater than 35% above chance). Two different Argus II patients were in the middle of the range of visual performance (visual–visual matching greater than 15% above chance but less than 25% above chance). However, although they could perform the visual–visual matching task above chance, they could not perform the visual–tactile task above chance. The performance of the sighted controls with simulated ultra-low vision certainly was comparable to that of the two highest performing patients, and is perhaps also relevant to that of the two mid-level performing patients. However, at least two of the Argus II patients could not perform the visual shape task accurately, as their visual–visual matching was less than 15% above chance. Therefore, we would not expect them to perform the crossmodal matching task well, and in fact they did not perform it accurately. For these lower performing patients, the visual–visual baseline task acts as the relevant control measure for the crossmodal task. Overall, the data presented in this paper are particularly interesting because they show a range of visual shape matching capabilities. Furthermore, we carefully controlled the wide range of perceptual skills of Argus II patients by using a tailored visual task and specifically selected control groups. 
In this study, we also found that the duration of prosthesis use significantly correlated with our patients’ shape task performance, implying that rehabilitation training and device use are critical elements to visual rehabilitation with retinal prostheses. This result also highlights that potential modifications in patient training could generate improvements in patient performance over months or years of device use; we discuss these possibilities in detail below. 
Impact on rehabilitation training for artificial vision
Sensory integration across the modalities has been shown in the sighted to generate broad improvements to task performance. Not only does integration between the senses occur in a statistically optimal fashion (Ernst & Banks, 2002), but it also improves outcomes following training (Seitz, Kim, & Shams, 2006). In particular, Seitz and colleagues showed that co-presented auditory and visual information during training can improve visual outcomes in the sighted (Barakat, Seitz, & Shams, 2015; Kim, Seitz, & Shams, 2008; Seitz et al., 2006; Shams & Seitz, 2008). 
Although crossmodal matching of shape has been postulated to have different mechanisms and levels of integration than perceptual illusions and other forms of multisensory learning (Stein et al., 2010), we argue that shape matching across the senses could be a first step toward other forms of crossmodal communication and integration in the visually restored. With visual restoration patients, the potential for crossmodal matching shown in this paper (in a subset of patients) argues for the use of tactile stimuli in Argus II training. We hypothesize that multisensory training in the visually restored might provide better visual outcomes than visual training alone for a range of tasks, including shape perception, object localization, and navigation. Furthermore, the experience of the Argus II users in their daily lives is inherently multisensory; therefore, the incorporation of multiple modalities in training would mirror the patient's natural experience. Multisensory training has the potential to also encourage the integration of additional sensory information with the Argus II visual perception, enabling more accurate judgments. 
Timeline of visual and crossmodal shape learning
An interesting question related to this paper is whether visual shape learning and crossmodal shape learning following visual restoration in the late blind occur in parallel or in sequential steps. However, the interaction between visual learning and crossmodal learning is difficult to disambiguate. Nevertheless, performance in the visual–tactile and visual–visual matching tasks is correlated across the Argus II patient group, which implies that patients learn to perform these tasks on associated timelines. This is likely due to the overlapping skills required for the visual–visual and visual–tactile matching tasks. In particular, when the performance of the Argus II patients in the visual–visual and visual–tactile tasks is directly compared, there appears to be a potential push–pull effect. In other words, crossmodal matching initially underperforms unimodal visual matching, but with additional training crossmodal matching outperforms visual matching. This result is consistent with bimodal visual–tactile matching lagging unimodal visual matching until a critical time point at which the crossmodal matching surpasses visual matching. We hypothesize that this time point could be when the visual shape perception has begun to permit object recognition (such as recognizing a circle, a sharp edge, or corners), rather than just the matching of visual stimulus properties (e.g., object brightness). Object recognition could allow for the transfer of shape matching skills to the tactile domain. This would enable the use of the higher tactile acuity (relative to the Argus II visual acuity) to make visual–tactile matching more accurate than visual–visual matching. 
We hypothesize that visual–visual matching could be first relearned to a visual threshold, and then a second phase of crossmodal shape learning would increase in parallel to the ongoing visual learning. In other words, there could be an initial learning phase in which visual learning predominates, and then a secondary learning phase in which crossmodal and visual learning improve in parallel (Figure 2, bottom). Therefore, the data collected in this study support the third model of visual and crossmodal learning presented in Figure 2 (bottom), in which crossmodal learning gradually increases as the patient gains more experience with the new artificial visual perception. Nevertheless, the comparison of the visual and crossmodal learning timelines is difficult given the limited number of participants in this study and the correlational nature of the analyses performed; therefore, additional research is required to verify these results. 
Conclusion
This paper shows that Argus II patients with ultra-low-resolution vision can match visual shapes. Shape matching across the senses correlated with the duration of prosthesis use, and select patients crossmodally matched shapes with high accuracy. These results indicate the importance of training and experience to matching shapes crossmodally between somatosensation and an artificial visual sense. This research is a key first step toward studying multimodality in the artificially visually restored, and also suggests the potential benefits of the design and testing of multimodal training algorithms for visual rehabilitation. 
Acknowledgments
The authors thank Mark S. Humayun for providing laboratory space in the Ginsburg Institute for Biomedical Therapeutics to perform these experiments. 
Supported by the Roski Eye Institute at the University of Southern California; the National Institutes of Health, National Eye Institute; the National Institutes of Health BRAIN Initiative; the Philanthropic Educational Organization Scholar Award Program; and the Arnold O. Beckman Postdoctoral Scholars Fellowship Program. 
Commercial relationships: JDW has received in-kind research support from Second Sight Medical Products, Inc. 
Corresponding author: Noelle R. B. Stiles. 
Address: Department of Ophthalmology, University of Southern California, Los Angeles, CA, USA. 
References
Ahrens, W. H., Cox, D. J., & Budhwar, G. (1990). Use of the arcsine and square root transformations for subjectively determined percentage data. Weed Science, 38(4–5), 452–458.
Amedi, A., Stern, W. M., Camprodon, J. A., Bermpohl, F., Merabet, L., Rotman, S., … Pascual-Leone, A. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience, 10(6), 687–689.
Apte, R. S. (2018). Gene therapy for retinal degeneration. Cell, 173(1), 5.
Avraham, D., Jung, J.-H., Yitzhaky, Y., & Peli, E. (2021). Retinal prosthetic vision simulation: Temporal aspects. Journal of Neural Engineering, 18(4), 0460d9.
Barakat, B., Seitz, A. R., & Shams, L. (2015). Visual rhythm perception improves through auditory but not visual training. Current Biology, 25(2), R60–R61.
Beauchamp, M. S., Oswalt, D., Sun, P., Foster, B. L., Magnotti, J. F., Niketeghad, S., … Yoshor, D. (2020). Dynamic stimulation of visual cortex produces form vision in sighted and blind humans. Cell, 181(4), 774–783.e5.
Beyeler, M., Nanduri, D., Weiland, J. D., Rokem, A., Boynton, G. M., & Fine, I. (2019). A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Scientific Reports, 9(1), 9199.
Chen, J., Wu, E.-D., Chen, X., Zhu, L.-H., Li, X., Thorn, F., … Qu, J. (2016). Rapid integration of tactile and visual information by a newly sighted child. Current Biology, 26(8), 1069–1074.
Cunningham, S. I., Shi, Y., Weiland, J. D., Falabella, P., de Koo, L. C. O., Zacks, D. N., … Tjan, B. S. (2015). Feasibility of structural and functional MRI acquisition with unpowered implants in Argus II retinal prosthesis patients: A case study. Translational Vision Science & Technology, 4(6), 6.
da Cruz, L., Coley, B. F., Dorn, J., Merlini, F., Filley, E., Christopher, P., … Dagnelie, G. (2013). The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. British Journal of Ophthalmology, 97(5), 632–636.
Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870), 429–433.
Fornos, A. P., Sommerhalder, J., da Cruz, L., Sahel, J. A., Mohand-Said, S., Hafezi, F., … Pelizzone, M. (2012). Temporal properties of visual perception on electrical stimulation of the retina. Investigative Ophthalmology & Visual Science, 53(6), 2720–2731.
Freitas, C., Perez, J., Knobel, M., Tormos, J. M., Oberman, L. M., Eldaief, M., … Pascual-Leone, A. (2011). Changes in cortical plasticity across the lifespan. Frontiers in Aging Neuroscience, 3, 5.
Garcia, S., Petrini, K., Rubin, G. S., da Cruz, L., & Nardini, M. (2015). Visual and non-visual navigation in blind patients with a retinal prosthesis. PLoS One, 10(7), e0134369.
Guerreiro, M. J., Putzar, L., & Röder, B. (2015). The effect of early visual deprivation on the neural bases of multisensory processing. Brain, 138(6), 1499–1504.
Harmon, L. D., & Julesz, B. (1973). Masking in visual recognition: Effects of two-dimensional filtered noise. Science, 180(4091), 1194–1197.
He, Y., Huang, N. T., Caspi, A., Roy, A., & Montezuma, S. R. (2019). Trade-off between field-of-view and resolution in the thermal-integrated Argus II system. Translational Vision Science & Technology, 8(4), 29.
Held, R., Ostrovsky, Y., de Gelder, B., Gandhi, T., Ganesh, S., Mathur, U., & Sinha, P. (2011). The newly sighted fail to match seen with felt. Nature Neuroscience, 14(5), 551–553.
Humayun, M. S., Dorn, J. D., da Cruz, L., Dagnelie, G., Sahel, J.-A., Stanga, P. E., … Greenberg, R. J. (2012). Interim results from the international trial of Second Sight's visual prosthesis. Ophthalmology, 119(4), 779–788.
Kashani, A. H., Lebkowski, J. S., Rahhal, F. M., Avery, R. L., Salehi-Had, H., Dang, W., … Humayun, M. S. (2018). A bioengineered retinal pigment epithelial monolayer for advanced, dry age-related macular degeneration. Science Translational Medicine, 10(435), eaao4097.
Kim, R. S., Seitz, A. R., & Shams, L. (2008). Benefits of stimulus congruency for multisensory facilitation of visual learning. PLoS One, 3(1), e1532.
Kotecha, A., Zhong, J., Stewart, D., & da Cruz, L. (2014). The Argus II prosthesis facilitates reaching and grasping tasks: A case series. BMC Ophthalmology, 14(1), 71.
Lam, B. L., Davis, J. L., Gregori, N. Z., MacLaren, R. E., Girach, A., Verriotto, J. D., … Feuer, W. J. (2019). Choroideremia gene therapy phase 2 clinical trial: 24-month results. American Journal of Ophthalmology, 197, 65–73.
Luo, Y. H.-L., & da Cruz, L. (2014). A review and update on the current status of retinal prostheses (bionic eye). British Medical Bulletin, 109(1), 31–44.
Luo, Y. H.-L., & da Cruz, L. (2016). The Argus II retinal prosthesis system. Progress in Retinal and Eye Research, 50, 89–107.
Luo, Y. H.-L., Zhong, J. J., Clemo, M., & da Cruz, L. (2016). Long-term repeatability and reproducibility of phosphene characteristics in chronically implanted Argus II retinal prosthesis subjects. American Journal of Ophthalmology, 170, 100–109.
Luo, Y. H.-L., Zhong, J., Merlini, F., Anaflous, F., Arsiero, M., Stanga, P. E., & da Cruz, L. (2014). The use of Argus II retinal prosthesis to identify common objects in blind subjects with outer retinal dystrophies. Investigative Ophthalmology & Visual Science, 55(13), 1834.
Marc, R. E., & Jones, B. W. (2003). Retinal remodeling in inherited photoreceptor degenerations. Molecular Neurobiology, 28(2), 139–147.
Mowad, T. G., Willett, A. E., Mahmoudian, M., Lipin, M., Heinecke, A., Maguire, A. M., … Ashtari, M. (2020). Compensatory cross-modal plasticity persists after sight restoration. Frontiers in Neuroscience, 14, 291.
Niketeghad, S., & Pouratian, N. (2019). Brain machine interfaces for vision restoration: The current state of cortical visual prosthetics. Neurotherapeutics, 16(1), 134–143.
Pascual-Leone, A., Freitas, C., Oberman, L., Horvath, J. C., Halko, M., Eldaief, M., … Rotenberg, A. (2011). Characterizing brain cortical plasticity and network dynamics across the age-span in health and disease with TMS-EEG and TMS-fMRI. Brain Topography, 24(3–4), 302–315.
Pascual-Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.
Poirier, C., De Volder, A. G., & Scheiber, C. (2007). What neuroimaging tells us about sensory substitution. Neuroscience & Biobehavioral Reviews, 31(7), 1064–1070.
Putzar, L., Goerendt, I., Lange, K., Rösler, F., & Röder, B. (2007). Early visual deprivation impairs multisensory interactions in humans. Nature Neuroscience, 10(10), 1243–1245.
Roska, B., & Sahel, J.-A. (2018). Restoring vision. Nature, 557(7705), 359.
Sadato, N., Pascual-Leone, A., Grafman, J., Ibanez, V., Deiber, M. P., Dold, G., & Hallett, M. (1996). Activation of the primary visual cortex by Braille reading in blind subjects. Nature, 380(6574), 526–528.
Saenz, M., Lewis, L. B., Huth, A. G., Fine, I., & Koch, C. (2008). Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. The Journal of Neuroscience, 28(20), 5141–5148.
Sahel, J.-A., Boulanger-Scemama, E., Pagot, C., Arleo, A., Galluppi, F., Martel, J. N., … Roska, B. (2021). Partial recovery of visual function in a blind patient after optogenetic therapy. Nature Medicine, 27(7), 1223–1229.
Sahel, J.-A., Mohand-Said, S., Stanga, P., Caspi, A., & Greenberg, R. J. (2013). Acuboost: Enhancing the maximum acuity of the Argus II Retinal Prosthesis System. Investigative Ophthalmology & Visual Science, 54(15), 1389.
Second Sight. (2019). FAQ. Retrieved from https://www.secondsight.com/faq/#.
Seitz, A. R., Kim, R., & Shams, L. (2006). Sound facilitates visual learning. Current Biology, 16(14), 1422–1427.
Shams, L., & Seitz, A. R. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12(11), 411–417.
Shintani, K., Shechtman, D. L., & Gurwood, A. S. (2009). Review and update: Current treatment trends for patients with retinitis pigmentosa. Optometry, 80(7), 384–401.
Sourav, S., Kekunnaya, R., Shareef, I., Banerjee, S., Bottari, D., & Röder, B. (2019). A protracted sensitive period regulates the development of cross-modal sound–shape associations in humans. Psychological Science, 30(10), 1473–1482.
Stein, B. E., Burr, D., Constantinidis, C., Laurienti, P. J., Alex Meredith, M., Perrault, T. J., Jr., … Lewkowicz, D. J. (2010). Semantic confusion regarding the development of multisensory integration: A practical solution. European Journal of Neuroscience, 31(10), 1713–1720.
Stiles, N. R., Patel, V. R., & Weiland, J. D. (2021). Multisensory perception in Argus II retinal prosthesis patients: Leveraging auditory-visual mappings to enhance prosthesis outcomes. Vision Research, 182, 58–68.
Stronks, H. C., & Dagnelie, G. (2014). The functional performance of the Argus II retinal prosthesis. Expert Review of Medical Devices, 11(1), 23–30.
Studebaker, G. A. (1985). A "rationalized" arcsine transform. Journal of Speech, Language, and Hearing Research, 28(3), 455–462.
Weiland, J. D., Cho, A. K., & Humayun, M. S. (2011). Retinal prostheses: Current clinical results and future needs. Ophthalmology, 118(11), 2227–2237.
Zhou, D. D., Dorn, J. D., & Greenberg, R. J. (2013). The Argus II retinal prosthesis system: An overview. Paper presented at the 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Zrenner, E. (2013). Fighting blindness with microelectronics. Science Translational Medicine, 5(210), 210ps16.
Zrenner, E., Bartz-Schmidt, K. U., Benav, H., Besch, D., Bruckmann, A., Gabel, V.-P., … Wilke, R. (2011). Subretinal electronic chips allow blind patients to read letters and combine them to words. Proceedings of the Royal Society B: Biological Sciences, 278(1711), 1489–1497.
Figure 1. Image of the external components of the Argus II retinal prosthesis system (details in the Methods section).
Figure 2. Three diagrams of the hypothesized learning phases for shape perception in the late blind with artificial vision. Visual learning was evaluated in this study with a visual–visual shape matching task, and crossmodal learning was evaluated with a visual–tactile shape matching task. The gradual-increase-in-crossmodal-learning model (bottom) is shown as two stages. The first stage has a low level of crossmodal learning (not shown), allowing visual learning to predominate (blue, left). The second stage (after the bifurcation) has stronger crossmodal learning (green, middle) than the first stage, with substantial learning in both the visual (blue) and crossmodal (green) domains.
Figure 3. Shape matching task schematics. (A) The shapes tested in the shape matching tasks are demonstrated in the diagram at the top of the figure. (B) A schematic is shown at the bottom of the figure depicting the three types of object shape comparisons performed in the shape matching tasks.
Figure 4. Shape matching task results with Argus II patients (n = 6), sighted controls (n = 10), and sighted controls with simulated ultra-low vision (n = 8). (A) Fraction correct for the three experimental blocks (tactile–tactile, visual–visual, and visual–tactile) in Argus II patients. (B) Average fraction correct for the shape matching task for the Argus II patient group in comparison to the sighted participant group and the sighted participant group with simulated ultra-low vision. The dashed line represents chance (2AFC, or 0.50 fraction correct). Error bars span 1 SD (full length). The individual participant results in (A) do not have error bars, as these data represent the fraction correct for each participant; the average fraction correct does have error bars across the patient group.
Figure 5. Shape matching performance relative to duration of Argus II use: fraction correct for visual–visual and visual–tactile matching in Argus II patients plotted against the duration of prosthesis use in months. Each participant is represented by two data points, one in gray and one in purple (n = 6). Linear fits of the visual–tactile and visual–visual matching are shown as dotted lines in the color of the corresponding data points. Chance is 0.50 fraction correct (2AFC) for all of the shape matching tasks.
Table 1. Argus II patient information. Argus II patients self-reported their demographic information, including age, gender, duration blind, duration with Argus II, and visual perception (light perception or no light perception). If the patient reported light perception, an eye patch was used to block any natural visual perception during the Argus II tasks. F, female; M, male; LP, light perception.