Modifying Deep Knowledge Tracing for Multi-step Problems
Proceedings of the 15th International Conference on Educational Data Mining
2022
Abstract
Previous studies suggest that Deep Knowledge Tracing (DKT) has fundamental limitations that prevent it from supporting mastery learning on multi-step problems [15, 17]. Although DKT is quite accurate at predicting observed correctness in offline knowledge tracing settings, it often generates inconsistent predictions for knowledge components when used online. We believe this issue arises because DKT’s loss function does not evaluate predictions for skills and steps that do not have an observed ground truth value. To address this problem and enable DKT to better support online knowledge tracing, we propose a novel loss function for training DKT. In addition to evaluating predictions that have ground truth observations, our new loss function also evaluates predictions for skills that do not have observations, using the ground truth label from the next observation of correctness for that skill. This approach ensures the model makes more consistent predictions for steps without observations, which are exactly the predictions needed to support mastery learning. We evaluated a DKT model trained with this updated loss by visualizing its predictions for a sample student learning sequence. Our analysis shows that the modified loss function improved the consistency of the DKT model’s predictions.
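The core idea of the modified loss can be sketched in a few lines. The snippet below is a simplified, illustrative reconstruction, not the paper's actual implementation: it assumes per-step predictions over K skills and scores each unobserved skill's prediction against that skill's next observed correctness label, alongside the standard term for the observed skill.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy for a single prediction/label pair.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def modified_dkt_loss(preds, obs):
    """Illustrative sketch of the proposed loss (names are ours).

    preds: (T, K) array; preds[t, k] is the predicted P(correct)
           for skill k at step t.
    obs:   list of T (skill, label) pairs, the observed sequence.
    """
    T, K = preds.shape
    total, count = 0.0, 0
    for t in range(T):
        for k in range(K):
            if obs[t][0] == k:
                # Standard DKT term: the skill observed at this step.
                total += bce(preds[t, k], obs[t][1])
                count += 1
            else:
                # New term: score the unobserved skill's prediction
                # against that skill's *next* observed label, if any.
                future = [lbl for (s, lbl) in obs[t + 1:] if s == k]
                if future:
                    total += bce(preds[t, k], future[0])
                    count += 1
    return total / max(count, 1)
```

Because every unobserved prediction is pulled toward the skill's next observed outcome, predictions between observations of a skill are penalized for fluctuating, which is what drives the consistency improvement described in the abstract.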
BibTeX
@inproceedings{zhang-edm-2022,
title = {Modifying Deep Knowledge Tracing for Multi-step Problems},
author = {Zhang, Qiao and Chen, Zeyu and Lalwani, Natasha and MacLellan, Christopher J.},
booktitle = {Proceedings of the 15th International Conference on Educational Data Mining},
pages = {684--688},
year = {2022},
doi = {10.5281/zenodo.6853145},
}
