August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Use of facial expressions to estimate level of attention while watching video lectures.
Author Affiliations
  • Renjun Miao
    Tohoku University
    POPER
  • Haruka Kato
    Tohoku University
  • Yasuhiro Hatori
    Tohoku University
  • Yoshiyuki Sato
    Tohoku University
  • Satoshi Shioiri
    Tohoku University
Journal of Vision August 2023, Vol.23, 4726. doi:https://doi.org/10.1167/jov.23.9.4726
Abstract

With the COVID-19 pandemic, online lectures have become widespread. Their lack of interactivity makes it difficult for instructors to gauge how much attention students are paying to the lecture. The aim of this study was to develop a method for estimating students' attentional state from facial features while they participate in an online lecture. We conducted an experiment that measured attention level during a video lecture using reaction times (RTs) to stimuli (bursts of white noise) irrelevant to the lecture. We assumed that RT to such a stimulus would be longer when participants were focused on the lecture than when they were not, so the RT measured at each white-noise presentation indicates how focused the learner was at that moment. During the experiment, each learner's face was recorded with a video camera for the purpose of predicting RTs. We applied a machine learning method (LightGBM) to estimate RTs from facial features extracted as action units (AUs) by open-source software (OpenFace). The resulting model showed that RT to the irrelevant stimuli can be estimated to some extent from AUs, suggesting that facial expressions are useful for predicting attentional state, or concentration level, while learning. An alternative interpretation of RT lengthening is a decrease in arousal level: some participants occasionally appeared sleepy, and we identified sleepy faces from blink-related AUs. Re-analyzing the data after excluding RTs from frames classified as sleepy yielded similar results, supporting the conclusion that facial expressions are useful for predicting concentration level while learning.
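The pipeline described above (AU intensities in, predicted RT out) can be sketched as a regression problem. The sketch below is illustrative only: it uses synthetic data in place of real OpenFace output and RT measurements, and scikit-learn's GradientBoostingRegressor stands in for the LightGBM model the authors actually used; none of the variable names or numbers come from the study.

```python
# Illustrative sketch of the abstract's pipeline: predict reaction times (RT)
# to task-irrelevant white-noise probes from facial action-unit (AU)
# intensities. Synthetic data stands in for real OpenFace output;
# GradientBoostingRegressor stands in for LightGBM.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_aus = 500, 17  # OpenFace reports 17 AU intensity values per frame

# Fake AU intensities on OpenFace's 0-5 scale, one row per probe trial.
X = rng.uniform(0, 5, size=(n_trials, n_aus))

# Fake RTs (ms): loosely tied to a couple of AUs plus noise, purely for demo.
y = 300 + 40 * X[:, 4] - 25 * X[:, 12] + rng.normal(0, 20, n_trials)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```

In the real setting, each row of `X` would come from the OpenFace AU columns for the video frames around a white-noise probe, and the abstract's follow-up analysis would drop rows whose blink-related AUs classify the face as sleepy before refitting.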
