
Predicting Students’ Attention Level with Interpretable Facial and Head Dynamic Features in an Online Tutoring System

Shimeng Peng, Lujie Chen, Chufan Gao, and Richard Jiarui Tong
Conference Paper, Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI '20) (Student Abstract Track), pp. 13895-13896, April, 2020

Abstract

Engaged learners are effective learners. Although it is widely recognized that engagement plays a vital role in learning effectiveness, engagement remains an elusive psychological construct that lacks a consensus definition and reliable measurement. In this study, we attempted to discover plausible operational definitions of engagement within an online learning context. We achieved this goal by first deriving a set of interpretable features describing the dynamics of eye, head, and mouth movements from facial landmarks extracted from video recordings of students interacting with an online tutoring system. We then assessed the predictive value of these features for engagement, which was approximated by synchronized measurements from a commercial EEG brainwave headset worn by the students. Our preliminary results show that these features reduce root-mean-squared error (RMSE) by 29% compared with a default predictor, and that a random forest model performs better than a linear regressor.
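To make the modeling comparison concrete, the sketch below shows, in Python with scikit-learn, the kind of evaluation the abstract describes: comparing a random forest and a linear regressor against a default (mean) predictor by RMSE. The data, feature names, and model settings here are illustrative assumptions, not the paper's actual pipeline or results.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the paper's data: rows are time windows, columns
# are hypothetical interpretable facial/head dynamic features (e.g. blink
# rate, head-pose variance, mouth-movement energy); the target mimics an
# EEG headset's attention reading. All values here are simulated.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=500)

models = {
    "default (mean) predictor": DummyRegressor(strategy="mean"),
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # scikit-learn returns negated MSE; negate it back and take the
    # square root to report cross-validated RMSE for each model.
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error")
    print(f"{name}: RMSE = {np.sqrt(mse).mean():.3f}")
```

On real features one would expect the pattern the abstract reports: both learned models beat the mean baseline, with the random forest ahead of the linear regressor when feature-target relationships are nonlinear.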

BibTeX

@conference{Peng-2020-126856,
author = {Shimeng Peng and Lujie Chen and Chufan Gao and Richard Jiarui Tong},
title = {Predicting Students' Attention Level with Interpretable Facial and Head Dynamic Features in an Online Tutoring System},
booktitle = {Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI '20) (Student Abstract Track)},
year = {2020},
month = {April},
pages = {13895--13896},
}