RCTs Can Serve as a Novel Evaluation Standard for Knowledge Tracing Algorithms in Real-World Applications
The individualization of learning content using recommender systems based on knowledge tracing algorithms within online learning environments promises large benefits, both for individual learners and for society. However, the optimal design of knowledge tracing algorithms remains an open question. Moreover, the effectiveness of prediction-based task assignment with regard to actual learning outcomes is rarely investigated, making it impossible to derive recommendations for real-world applications. We address this problem by proposing a comprehensive three-step evaluation standard for knowledge tracing algorithms that includes a randomized controlled trial (RCT), which we deploy on a large digital self-learning platform. We develop a knowledge tracing machine learning algorithm based on two convolutional neural networks (CNNs) and use it to assign tasks to 4,365 learners according to their learning paths. To test our algorithm in the field, learners are randomized into three groups: two treatment groups that receive tasks based on group-based and individual predictions of our algorithm, and one control group that receives randomly assigned tasks. We analyze the differences between the three randomly assigned groups with respect to the effort learners provide and their performance on the platform. Although our trained model predicts learning outcomes well on cross-validation splits and performs similarly to commonly used knowledge tracing algorithms on prediction tasks in both our dataset and publicly available benchmark datasets, we find no differences between the groups. Our results highlight the importance of carefully evaluating algorithms in real-world settings and the multiple challenges associated with the algorithm-based individualization of learning paths.
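To make the prediction component more concrete, the following is a minimal sketch of a CNN-based knowledge tracing model. The abstract only states that two CNNs are used; the architecture, layer sizes, window length, and interaction encoding below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CNN-based knowledge tracing model (architecture
# details are assumptions; the paper only states that two CNNs are used).
import torch
import torch.nn as nn

class ConvKnowledgeTracer(nn.Module):
    """Predicts the probability that a learner answers the next task correctly,
    given a fixed-length window of past interactions (task id + correctness)."""

    def __init__(self, num_tasks: int, embed_dim: int = 32, window: int = 50):
        super().__init__()
        # Each past interaction (task, correct) is encoded as one of 2 * num_tasks ids.
        self.embed = nn.Embedding(2 * num_tasks, embed_dim)
        # 1D convolutions over the interaction sequence (channels = embedding dims).
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.task_embed = nn.Embedding(num_tasks, embed_dim)
        self.head = nn.Linear(64 + embed_dim, 1)

    def forward(self, history: torch.Tensor, next_task: torch.Tensor) -> torch.Tensor:
        # history: (batch, window) encoded interaction ids; next_task: (batch,)
        h = self.embed(history).transpose(1, 2)       # (batch, embed_dim, window)
        summary = self.conv(h).squeeze(-1)            # (batch, 64)
        t = self.task_embed(next_task)                # (batch, embed_dim)
        logits = self.head(torch.cat([summary, t], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)                  # P(correct on next task)

# Usage with toy shapes:
model = ConvKnowledgeTracer(num_tasks=200)
hist = torch.randint(0, 400, (8, 50))     # 8 learners, 50 past interactions each
nxt = torch.randint(0, 200, (8,))
p_correct = model(hist, nxt)              # tensor of shape (8,)
```

A model of this form can then drive task assignment, for example by recommending tasks whose predicted success probability falls in a target range.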
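The field evaluation itself can be summarized as a three-arm comparison. The sketch below illustrates the randomization and the group comparison on an outcome measure; the arm labels, the placeholder outcome data, and the use of a one-way ANOVA are assumptions, since the abstract only reports comparing effort and performance across the randomized groups.

```python
# Hypothetical sketch of the three-arm assignment and outcome comparison
# (arm labels, placeholder outcomes, and the ANOVA test are assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
learners = np.arange(4365)  # number of learners reported in the abstract

# Randomize learners into two treatment arms (group-based vs. individual
# predictions) and one control arm (random task assignment).
arms = rng.choice(["group_pred", "individual_pred", "random_control"],
                  size=learners.size)

# Placeholder outcome; in the study this would be an effort or performance
# measure observed on the learning platform.
outcome = rng.normal(loc=0.0, scale=1.0, size=learners.size)

samples = [outcome[arms == a]
           for a in ("group_pred", "individual_pred", "random_control")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"one-way ANOVA across arms: F={f_stat:.3f}, p={p_value:.3f}")
```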