L.E.A.R.N: A Hybrid Architecture for Language-Guided Induction of Hierarchical Task Networks
Proceedings of the Twelfth Annual Conference on Advances in Cognitive Systems
2025
Abstract
Hierarchical Task Networks (HTNs) provide an interpretable, structured framework for problem-solving but are often brittle, relying on a fixed set of predefined operators and requiring numerous examples to learn new procedures. In contrast, Large Language Models (LLMs) offer generative flexibility but lack the reliability and transparency required for robust cognitive systems. This paper introduces L.E.A.R.N (Learning by Example Authoring and Reasoning Network), a hybrid cognitive architecture that integrates the strengths of both approaches. L.E.A.R.N utilizes an LLM to generate candidate solution traces and, when necessary, propose new primitive operators. This output is then verified and structured within an HTN, which grounds the knowledge and ensures correctness. This approach shifts the human's role from a demonstrator to a verifier, significantly reducing the authoring burden. Our experimental evaluation shows that L.E.A.R.N learns expert problem-solving skills, such as solving quadratic equations, faster and with fewer demonstrations than an HTN-only baseline, while still providing the explainability and reliability that purely generative models lack. The architecture represents a step toward more adaptive and flexible cognitive systems.
BibTeX
@inproceedings{smith-acs-2025,
title = {L.E.A.R.N: A Hybrid Architecture for Language-Guided Induction of Hierarchical Task Networks},
author = {Smith, Glen and MacLellan, Christopher J.},
booktitle = {Proceedings of the Twelfth Annual Conference on Advances in Cognitive Systems},
pages = {178--195},
year = {2025},
}
