Journal of Vision  |  August 2023  |  Volume 23, Issue 9  |  Open Access
Vision Sciences Society Annual Meeting Abstract
Both mOTS-words and pOTS-words prefer emoji stimuli over text stimuli during a reading task
Author Affiliations
  • Alexia Dalski
    Department of Psychology, Philipps-Universität Marburg, Germany
    Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Germany
  • Holly Kular
    Department of Psychology, Stanford University, USA
  • Julia G. Jorgensen
    Department of Psychology, Stanford University, USA
  • Kalanit Grill-Spector
    Department of Psychology, Stanford University, USA
    Wu Tsai Neurosciences Institute, Stanford University, USA
  • Mareike Grotheer
    Department of Psychology, Philipps-Universität Marburg, Germany
    Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Germany
Journal of Vision August 2023, Vol. 23, 5263. https://doi.org/10.1167/jov.23.9.5263
© ARVO (1962-2015); The Authors (2016-present)

Abstract

The visual word form area in the occipitotemporal sulcus, here referred to as OTS-words, responds more strongly to text than to other visual stimuli and plays a critical role in reading. Here we hypothesized that this region's preference for text may be driven by a preference for reading tasks, as in most prior fMRI studies only the text stimuli were readable. To test this, we performed three fMRI experiments (N=15), systematically varying the participants' task and the visual stimulus, and investigated the mOTS-words and pOTS-words subregions. In experiment 1, we contrasted text stimuli with non-readable visual stimuli (faces, limbs, houses, and objects). In experiment 2, we used an fMRI adaptation paradigm, presenting the same or different compound words in text or emoji formats. In experiment 3, participants performed either a reading or a color task on compound words presented in text or emoji format. Using the data from experiment 1, we identified left mOTS-words and pOTS-words in all participants by contrasting text stimuli with non-readable stimuli. In experiment 2, pOTS-words, but not mOTS-words, showed fMRI adaptation for compound words in both text and emoji formats. In experiment 3, surprisingly, both mOTS-words and pOTS-words showed higher responses to compound words in emoji than in text format. Moreover, mOTS-words, but not pOTS-words, also showed higher responses during the reading than the color task, and more so for words in the emoji format. Multivariate analyses of the experiment 3 data showed that distributed responses in pOTS-words encode the visual stimulus, whereas distributed responses in mOTS-words encode both the stimulus and the task. Together, our findings suggest that the function of the OTS-words subregions goes beyond the specific visual processing of text and that these regions are flexibly recruited whenever semantic meaning needs to be assigned to visual input.
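
The multivariate analyses mentioned above are, in essence, a pattern-classification problem. As a minimal sketch, assuming trial-wise ROI response patterns and a cross-validated linear classifier (neither of which is specified in the abstract), one might test whether a region's distributed responses encode stimulus format or task as follows. All variable names, data shapes, and the synthetic data below are hypothetical stand-ins, not the authors' actual pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: 64 trials x 200 ROI voxels of trial-wise
    # response estimates (real data would come from a GLM, not random noise).
    n_trials, n_voxels = 64, 200
    patterns = rng.standard_normal((n_trials, n_voxels))
    stimulus_labels = rng.integers(0, 2, n_trials)  # 0 = text, 1 = emoji
    task_labels = rng.integers(0, 2, n_trials)      # 0 = color, 1 = reading

    # Cross-validated linear classification: above-chance accuracy would
    # indicate that the ROI's distributed response encodes that variable.
    clf = LinearSVC()
    for name, labels in [("stimulus format", stimulus_labels),
                         ("task", task_labels)]:
        acc = cross_val_score(clf, patterns, labels, cv=5).mean()
        print(f"{name} decoding accuracy: {acc:.2f} (chance = 0.50)")

On this logic, the reported dissociation would correspond to above-chance stimulus decoding in both subregions but above-chance task decoding only in mOTS-words.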
