In this blog post, I attempt to lay out a high-level research vision for the Teachable AI Lab. This includes outlining (1) research thrust areas, (2) application focus areas, and (3) general themes for the lab.

Research Thrust Areas

We aim to accomplish this mission through research in three thrust areas:

  • Teachable Systems: How do we build systems that people can teach and interact with as they would another human, while still taking advantage of key non-human features of AI/ML systems?

  • Human-like AI/ML: How do we develop cognitive systems that can learn like humans (incrementally, with few examples, etc.) and that produce human-relatable, explainable, and understandable outputs? The emphasis will be both on developing distinct AI and ML components and on putting these components together to create integrated systems.

  • Computational Models of Human Learning and Decision Making: How can we leverage human data to guide the design of human-like computational models? And how can we leverage these human-like models to better understand human learning and decision making?

Application Focus Areas

Orthogonal to these research thrusts, we are also investigating a number of application areas, including:

  • Education: Our primary focus will be on developing educational technologies, such as intelligent tutoring systems and educational games, as well as systems that act as digital teaching assistants to professors and teachers.

    • K12: We are also interested in developing new technologies to support K12 education.
    • Higher-Ed Training Technologies: We are interested in developing new technologies to teach CS, AI/ML, and Data Science.
    • Medical Training Technologies: We plan to investigate VR-based tutoring systems that provide situated instruction to support medical training.
  • Participatory AI: We will create virtual agent technology that supports people in doing tasks. We will focus on empowering users to change the AI systems they interact with, so they can correct incorrect behaviors and make the systems better support them, including users who are not technical. Typically, someone would need to engineer the AI model by hand; our systems will instead be "teachable," learning the knowledge they need from interactions with their users (see the sketch after this list). The learning model will be based on the same models we are developing from human data in educational tasks, so these systems will learn the way human students do. We will also leverage what we know about human-human learning to build teachable systems that follow the teaching patterns people naturally use with one another.

  • Web-based Personal Assistant: The agent will support people in completing tasks within the web browser and on the desktop. The general idea is a personal assistant, like Siri or Alexa, that learns from interactions with people (voice and text, as well as examples and feedback) and helps them learn and automate basic tasks they do through the web browser or other desktop apps. It will be similar to https://almond.stanford.edu/, but will support more natural interactions (voice and situated examples rather than programming through a GUI) as well as interactive learning through those interactions.

  • Virtual-Physical Personal Agent: We aim to bridge our educational-game models and personal-assistant models to create agents that can also learn from the user in both physical and virtual spaces. For example, the system might teach a user how to cook a particular dish using verbal instructions, but if there are gaps in the recipe, the user might teach the system a modification, which is then carried forward to future interactions and users. We assume the agent has sensors for perceiving the user in these physical spaces. My initial push into this space will center on using VR technology to put people into simulated physical spaces that we know everything about (e.g., a simulated kitchen). This also has nice applications for other kinds of workplace support, e.g., teaching people how to change the oil in their car (and, likewise, learning oil-change procedures through interaction with knowledgeable users). Initial agents will be disembodied, but with the VR approach we can put virtual robots into the space that engage in situated, interactive behavior with their human collaborators.
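
To make the "teachable" interaction pattern behind the Participatory AI and personal-assistant agents concrete, below is a minimal, hypothetical sketch of the loop such an agent might run: apply what it has been taught, ask the user to demonstrate when its knowledge has a gap, and accept corrections that overwrite incorrect behavior. The names here (TeachableAgent, perform, correct) are illustrative only and do not refer to any existing lab system; a real agent would learn richer structures from voice, text, and demonstrations rather than storing literal strings.

```python
# Hypothetical sketch of a teachable-agent interaction loop.
# Not an existing system; names and behavior are illustrative only.

class TeachableAgent:
    def __init__(self):
        # Learned knowledge: maps a task step to the action the user taught.
        self.knowledge = {}

    def perform(self, step):
        """Do a step if it has been taught; otherwise ask for a demonstration."""
        if step in self.knowledge:
            return self.knowledge[step]
        demo = input(f"I don't know how to '{step}'. Can you show me? ")
        self.knowledge[step] = demo  # learn incrementally from a single example
        return demo

    def correct(self, step, better_action):
        """Let the user overwrite incorrect behavior, as a teacher would."""
        self.knowledge[step] = better_action


if __name__ == "__main__":
    agent = TeachableAgent()
    # First request: the agent asks the user to demonstrate.
    print(agent.perform("file a weekly expense report"))
    # Later request: the agent reuses what it was taught.
    print(agent.perform("file a weekly expense report"))
    # Feedback: the user corrects the behavior, and the agent updates.
    agent.correct("file a weekly expense report", "use the new finance portal")
    print(agent.perform("file a weekly expense report"))
```

The point of the sketch is the division of labor: the user teaches and corrects through natural interaction, and the agent accumulates the knowledge it needs incrementally, from single examples, rather than requiring someone to engineer it in advance.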

Key Themes for the Lab

  • Use-inspired and technology-inspired: We will develop technology that is driven both by real human needs and by new technical capabilities.
  • Human-centered: We will identify and build what is best for human users, not what is easiest to build with current technology (using Wizard-of-Oz prototyping to stand in for capabilities we have not yet built). Our systems should be human relatable, human understandable, and human usable: natural for people to use, requiring no training, and doing what users expect.
  • Human-AI symbiosis: We will leverage what we know about how humans learn, act, and teach, and translate that knowledge into AI systems that people can relate to and understand.
  • Systems-level AI: A big challenge in AI is putting the pieces together, not just building single AI/ML components. We will pursue holistic, unified architectures, asking how individual AI and ML components, and the human and AI pieces, fit together to make intelligent thinking and learning systems.
  • Human-like and human-inspired: Our approaches will be theory based (leveraging what we know about human learning and scientific studies of ML approaches) rather than purely data driven, and we will use our systems to test theories of human thinking and learning and to improve our understanding of people.