Data were analyzed using R (
R Core Team, 2020). Participant information regarding refractive error and axial length is described as mean and standard deviation (
SD). Neural contrast sensitivity and neural contrast adaptation (the change in neural contrast sensitivity from pre- to post-adaptation) are reported as mean and standard error (
SE) in logarithmic scale units (log contrast sensitivity [logCS]). Before statistical analysis, datasets underwent outlier removal using a boxplot analysis, with data points < 1.80 logCS and > 2.55 logCS excluded. The datasets were then grouped by measurement number (1–6), time (pre- or post-adaptation), and condition (control, scattering, or defocus) across participants. Pre-adaptation data points more than 1.5 interquartile ranges (IQRs) from the median were removed. To allow for changes in neural contrast sensitivity after adaptation to the lens condition, a wider outlier range was chosen for the post-adaptation results; data points more than 3 IQRs from the median were therefore excluded from further analysis. Pre-adaptation data points were then averaged across the six consecutively performed contrast sensitivity measurements (per condition and participant), whereas post-adaptation data points were grouped into the first three (1–3) and the second three (4–6) measurements. The change in post-adaptation neural contrast sensitivity (for each measurement group) was calculated relative to the averaged pre-adaptation neural contrast sensitivity. The repeatability, the 95% repeatability limit (
McAlinden, Khadka, & Pesudovs, 2011), and the time efficiency of the testing procedure were analyzed based on the neural contrast sensitivity results prior to adaptation. Furthermore, the reproducibility was assessed by Bland–Altman analysis (
Euser, Dekker, & le Cessie, 2008) and by the intraclass correlation coefficient (ICC), based on the pre-adaptation neural contrast sensitivity values across the three lens conditions. The ICC is interpreted as excellent between 0.75 and 1.00, good between 0.60 and 0.74, fair between 0.40 and 0.59, and poor below 0.40 (
Cicchetti, 1994). Normal distribution of the data was verified with the Lilliefors test. Statistical analysis was then conducted using repeated-measures analysis of variance (ANOVA), with sphericity assessed by Mauchly's test, followed by post hoc pairwise
t-tests with Bonferroni correction and pairwise Friedman rank-sum tests. The following factors were analyzed with respect to neural contrast sensitivity and adaptation: three levels of condition (control, scattering, and defocus) and two levels of time (pre- and post-adaptation). Additionally, the dependence of adaptation on testing time was analyzed by comparing post-adaptation neural contrast sensitivity with baseline. Because the average trial duration was about 25 seconds (
Figure 5), the data were clustered into 25 ± 12.5-second sequences as follows: ≤12.5 seconds, >12.5 to ≤37.5 seconds (referred to as “25 seconds”), >37.5 to ≤62.5 seconds (referred to as “50 seconds”), and so on through >150 seconds (referred to as “>150 seconds”). The clustered data points were analyzed using a linear mixed model with post hoc tests of estimated marginal means, with neural contrast adaptation as the dependent variable, participant as a random effect, and two fixed effects: condition (control, scattering, and defocus) and time (25, 50, 75, 100, 125, 150, and >150 seconds). Data for ≤12.5 seconds were excluded because of too few data points. The significance level for all analyses was set to α = 0.05, and effects with
p < 0.05 were considered significant.
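The two-stage outlier screening described above (hard logCS bounds, then a median ± k-IQR criterion) can be sketched as follows. The analysis itself was performed in R; this Python equivalent and its function name are illustrative only.

```python
import numpy as np

def screen_outliers(values, k, lower=1.80, upper=2.55):
    """Illustrative two-stage outlier screen (hypothetical helper).

    Stage 1: keep only values within the hard bounds [lower, upper] logCS.
    Stage 2: drop values more than k IQRs from the median
             (k = 1.5 pre-adaptation, k = 3 post-adaptation in the text).
    """
    values = np.asarray(values, dtype=float)
    # Stage 1: hard logCS bounds.
    values = values[(values >= lower) & (values <= upper)]
    # Stage 2: median +/- k * IQR criterion.
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    median = np.median(values)
    return values[np.abs(values - median) <= k * iqr]
```

With `k = 1.5`, a value of 3.0 logCS would fall to the hard upper bound in stage 1, and a value far from the median of the remaining points would fall to the IQR criterion in stage 2.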
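Reproducibility across the three lens conditions was summarized with an ICC. The source does not state which ICC form was used, so the sketch below assumes the two-way random-effects, single-measure form ICC(2,1) of Shrout and Fleiss, paired with the Cicchetti (1994) interpretation bands quoted in the text; both function names are ours.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random-effects, single-measure agreement.

    x is a (subjects x conditions) array; the choice of this ICC form
    is our assumption, not stated in the source.
    """
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # conditions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def interpret_icc(icc):
    """Cicchetti (1994) bands as given in the text."""
    if icc >= 0.75:
        return "excellent"
    if icc >= 0.60:
        return "good"
    if icc >= 0.40:
        return "fair"
    return "poor"
```

For perfectly consistent measurements across conditions, `icc_2_1` returns 1.0, which `interpret_icc` classifies as excellent.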
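The repeated-measures ANOVA over the within-subject factors can be illustrated with a minimal one-way version (subjects × conditions). The actual analysis was run in R with a sphericity check and both parametric and rank-based post hoc tests, none of which is reproduced in this sketch.

```python
import numpy as np
from scipy import stats

def rm_anova_oneway(x):
    """One-way repeated-measures ANOVA on a (subjects x conditions) array.

    Returns the F statistic and uncorrected p value; the sphericity
    handling described in the text (Mauchly's test) is omitted here.
    """
    n, k = x.shape
    grand = x.mean()
    # Partition total variation into condition, subject, and error terms.
    ss_cond = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_err / df_err)
    return f_stat, stats.f.sf(f_stat, df_cond, df_err)
```

A strong condition effect relative to the residual (subject-by-condition) variability yields a large F and a small p value, as in the hypothetical data below.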
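The 25 ± 12.5-second clustering of trial times can be expressed as a small binning rule. Pooling times above 150 seconds into ">150" and discarding the ≤12.5-second bin follow the text; how the boundary between the "150" and ">150" bins is drawn is our reading, and the function name is ours.

```python
import math

def time_bin(t_seconds):
    """Map a trial time (s) to the 25-s cluster labels used in the analysis.

    >12.5 to <=37.5 -> "25", >37.5 to <=62.5 -> "50", ..., up to "150";
    times above 150 s are pooled as ">150"; times <=12.5 s return None
    because that bin was discarded for having too few data points.
    """
    if t_seconds <= 12.5:
        return None
    if t_seconds > 150:
        return ">150"
    # Each bin spans (center - 12.5, center + 12.5] seconds.
    center = 25 * math.ceil((t_seconds - 12.5) / 25)
    return str(center)
```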