Abstract
The visual appearance of text on a screen can have a large impact on how efficiently we extract information from it. While many studies have examined visual factors in reading with sentences and passages, these stimuli are often challenging to use in standard psychophysical paradigms (e.g., forced-choice tasks). Single-word reading lends itself more easily to these techniques but does not always reflect how we read in real life. To facilitate the study of visual factors in reading, we developed a sentence classification task that uses true or false sentences and varies their presentation duration to determine a duration threshold. For this, we developed a human-validated corpus of sentences drawn from GenericsKB, a repository of internet-derived sentences. The sentences were filtered based on word and character length, truthfulness, and word frequency. We validated the corpus by having participants rate the truthfulness of each sentence; these ratings showed high inter-rater agreement (mean ICC of 0.98, n=79). We have used this corpus and task to examine visual factors in reading with variable fonts, fonts in which parameters such as slant and width can be adjusted along continuous axes. We measured participants' (n=8) duration thresholds for five parameters (thin stroke, thick stroke, width, weight, and slant) at five different settings on each axis by varying the duration of the displayed sentences. Thin stroke and thick stroke showed a U-shaped trend in which the extreme settings yielded longer duration thresholds and the middle setting yielded the shortest. Our sentence corpus and paradigm allow researchers to use forced-choice psychophysical methods to study reading based on naturally occurring sentences. By clarifying how font settings affect reading performance, this work supports the goal of determining which font settings best support each reader.
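As an illustration of the corpus-construction step described above, the following Python sketch filters candidate GenericsKB sentences by word count, character length, and word frequency. The specific cut-offs, the frequency floor, and the use of the wordfreq package are illustrative assumptions, not the criteria used in the study.

    # Illustrative sketch only: the thresholds and the wordfreq package are
    # assumptions, not the filtering rules actually applied to GenericsKB.
    from wordfreq import zipf_frequency

    def keep_sentence(sentence,
                      min_words=4, max_words=10,   # assumed word-count bounds
                      max_chars=60,                # assumed character limit
                      min_zipf=3.0):               # assumed frequency floor (Zipf scale)
        """Return True if a candidate sentence passes length and word-frequency filters."""
        words = sentence.rstrip(".").split()
        if not (min_words <= len(words) <= max_words):
            return False
        if len(sentence) > max_chars:
            return False
        # Require every word to be reasonably common on the Zipf scale (roughly 1-7).
        return all(zipf_frequency(w.lower(), "en") >= min_zipf for w in words)

    candidates = ["Dogs are mammals.", "Quasiperiodic oscillators entrain rarely."]
    filtered = [s for s in candidates if keep_sentence(s)]

In the study itself, the truthfulness of the retained sentences was then validated by human raters rather than programmatically, yielding the inter-rater agreement reported above.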