STAND: Data-Efficient and Self-Aware Precondition Induction for Interactive Task Learning

Daniel Weitekamp, Glen Smith, Kenneth R. Koedinger, Christopher J. MacLellan

Proceedings of the Forty-Third International Conference on Machine Learning

2026

Abstract

In interactive task learning (ITL), AI agents learn new capabilities from limited human instruction provided during task execution. STAND is a new method of data-efficient rule precondition induction designed specifically for these human-in-the-loop training scenarios. A key feature of STAND is its self-awareness of its own learning: it can report accurate metrics of training progress back to users. STAND outperforms popular methods such as XGBoost, decision trees, random forests, and version spaces on small-data precondition induction tasks, and it accurately estimates when its performance improves on holdout examples. In our evaluations, STAND shows more monotonic improvement than other models, with low rates of error recurrence. These features support a more consistent training experience, enabling human instructors to estimate when they are finished training and providing active-learning support by identifying trouble spots where more training is required.

Topics: Interactive Task Learning, Interactive Machine Learning, Data-efficient Machine Learning, Self-aware Learning

BibTeX

@inproceedings{weitekamp-icml-2026,
  title     = {STAND: Data-Efficient and Self-Aware Precondition Induction for Interactive Task Learning},
  author    = {Weitekamp, Daniel and Smith, Glen and Koedinger, Kenneth R. and MacLellan, Christopher J.},
  booktitle = {Proceedings of the Forty-Third International Conference on Machine Learning},
  year      = {2026},
}
