eScholarship
Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Applying Probabilistic Models for Knowledge Diagnosis and Educational Game Design

Abstract

Computer-based learning environments offer the potential for innovative assessments of student knowledge and personalized instruction for learners. However, there are a number of challenges to realizing this potential. Many psychological models are not specific enough to deploy directly in instructional systems, and computational challenges can arise when considering the implications of a particular theory of learning. While learners' interactions with virtual environments encode significant information about their understanding, existing statistical tools are insufficient for interpreting these interactions. This research develops computational models of teaching and learning and combines these models with machine learning algorithms to interpret learners' actions and customize instruction based on these interpretations. The approach yields frameworks that can be adapted to a variety of educational domains, cleanly separating components that can be shared across tasks from components that are customized to the educational content. Using this approach, this dissertation addresses three major questions: (1) How can one diagnose learners' knowledge from their behavior in games and virtual laboratories? (2) How can one predict whether a game will be diagnostic of learners' knowledge? and (3) How can one customize instruction in a computer-based tutor based on a model of learning in a domain?

The first question involves automatically assessing student knowledge from observed behavior in complex interactive environments, such as virtual laboratories and games. These environments require students to plan their behavior and take multiple actions to achieve their goals. Unlike in many traditional assessments, students' actions in these environments are not independent given their knowledge, and individual actions cannot be classified as correct or incorrect. To address this issue, I develop a Bayesian inverse planning framework for inferring learners' knowledge from observations of their actions. The framework is a variation of inverse reinforcement learning and uses Markov decision processes to model how people choose actions given their knowledge. Through behavioral experiments, I show that this framework can infer learners' stated beliefs with accuracy similar to that of human observers, and that feedback based on the framework improves learning efficiency. To move beyond the laboratory, I extended the inverse planning framework to diagnose students' algebra skills from worked solutions to linear equations, separating different sources of mathematical errors. I tested the framework by developing an online algebra tutor that gives students the opportunity to practice solving equations and automatically diagnoses their understanding after they have solved a sufficient number of equations. Preliminary experiments demonstrate that Bayesian inverse planning fits the majority of participants' behavior well and that its diagnoses are consistent with the results of a more conventional assessment.
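
To make the idea concrete, the following is a minimal, illustrative sketch of Bayesian inverse planning over a small hypothesis space of knowledge states. The knowledge states, Q-values, and softmax temperature below are assumptions for illustration, not the models used in the dissertation.

```python
"""Sketch: infer a posterior over knowledge states from observed actions,
assuming each candidate knowledge state implies an MDP policy (here, a
noisily-optimal softmax over per-state Q-values)."""
import math

def boltzmann_likelihood(q_row, action, beta=2.0):
    """P(action | state, knowledge) under a softmax ("Boltzmann") policy."""
    z = sum(math.exp(beta * q) for q in q_row.values())
    return math.exp(beta * q_row[action]) / z

def posterior_over_knowledge(trajectory, q_tables, prior, beta=2.0):
    """Bayes' rule: weight each knowledge hypothesis by how well its implied
    policy explains the observed (state, action) pairs."""
    post = dict(prior)
    for state, action in trajectory:
        for k in post:
            post[k] *= boltzmann_likelihood(q_tables[k][state], action, beta)
    total = sum(post.values())
    return {k: p / total for k, p in post.items()}

# Hypothetical two-hypothesis example: a learner who knows the skill prefers
# the correct step in state s0; one who does not is indifferent.
q_tables = {
    "knows_skill": {"s0": {"correct_step": 1.0, "wrong_step": 0.0}},
    "lacks_skill": {"s0": {"correct_step": 0.5, "wrong_step": 0.5}},
}
prior = {"knows_skill": 0.5, "lacks_skill": 0.5}
print(posterior_over_knowledge([("s0", "correct_step")], q_tables, prior))
```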

The preceding studies showed that not all tasks elicit learner behavior that diagnoses knowledge perfectly. In many cases, actions may be ambiguous, resulting in a diagnosis that places some probability on one possible knowledge state and some on another. I developed an optimal game design framework to predict how much information would be gained by observing players' actions in a particular game: the more information a game yields, the less ambiguous the diagnosis. This framework extends optimal experiment design methods from statistics. It can limit the trial and error needed to create games for education and behavioral research by suggesting game design choices, while still leveraging the skills of a human designer to create the initial design. Behavioral results from a concept learning game demonstrate that the predicted information gain is correlated with the actual information gain and that the best designs can yield twice as much information as an uninformed design.
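
The scoring criterion can be sketched as expected information gain about the player's knowledge state. The hypothesis space and the per-design likelihood tables below are illustrative assumptions; in practice the likelihoods would come from a model of play under each candidate design.

```python
"""Sketch: score a candidate game design by the expected reduction in
uncertainty (in bits) about which knowledge state the player holds."""
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_information_gain(prior, likelihood):
    """likelihood[k][obs] = P(observed behavior obs | knowledge k, this design)."""
    prior_h = entropy(prior)
    observations = next(iter(likelihood.values())).keys()
    eig = 0.0
    for obs in observations:
        # Marginal probability of seeing this behavior under the design.
        p_obs = sum(prior[k] * likelihood[k][obs] for k in prior)
        if p_obs == 0:
            continue
        posterior = {k: prior[k] * likelihood[k][obs] / p_obs for k in prior}
        eig += p_obs * (prior_h - entropy(posterior))
    return eig

# Hypothetical comparison: a design whose observations discriminate between
# two candidate rules versus one whose observations are uninformative.
prior = {"rule_A": 0.5, "rule_B": 0.5}
diagnostic = {"rule_A": {"left": 0.9, "right": 0.1},
              "rule_B": {"left": 0.1, "right": 0.9}}
uninformative = {"rule_A": {"left": 0.5, "right": 0.5},
                 "rule_B": {"left": 0.5, "right": 0.5}}
print(expected_information_gain(prior, diagnostic))      # ~0.53 bits
print(expected_information_gain(prior, uninformative))   # 0 bits
```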

The final part of this dissertation considers how to personalize instruction in a computer tutor, relying on knowledge about the domain and an estimate of the learner's knowledge. This builds on the idea of assessing learners' knowledge from their actions and considers more broadly how to sequence assessment and personalized instruction. In a computer-based tutor, time spent on assessment has a cost, since that time could instead be spent letting the learner work through new material; yet assessment can also pay off by giving the tutor the information it needs to choose material more effectively. I show that partially observable Markov decision processes (POMDPs) can be used to model the tutoring process and to decide which pedagogical action to take based on a model of the domain and the learner. The resulting automated instructional policies produce faster learning of numeric concepts than baseline policies.
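
The basic loop can be sketched as follows: maintain a belief over the learner's hidden knowledge, update it after each observation, and select the next pedagogical action from that belief. The observation and transition probabilities, the tiny action set, and the threshold policy (a simple stand-in for a solved POMDP policy) are all illustrative assumptions, not the dissertation's tutor.

```python
"""Sketch: POMDP-style tutoring loop over a single concept, with a belief
b_knows = P(learner knows the concept)."""

def update_belief(b_knows, action, observation):
    """Bayes update of the belief after a pedagogical action."""
    if action == "quiz":
        # Hypothetical observation model: quiz answered correctly with
        # probability 0.9 if the concept is known, 0.2 otherwise.
        like_known = 0.9 if observation == "correct" else 0.1
        like_unknown = 0.2 if observation == "correct" else 0.8
        num = b_knows * like_known
        return num / (num + (1 - b_knows) * like_unknown)
    if action == "teach":
        # Hypothetical transition model: teaching succeeds with probability 0.6.
        return b_knows + (1 - b_knows) * 0.6
    return b_knows

def choose_action(b_knows):
    """Threshold heuristic standing in for a full POMDP policy: teach while
    mastery is unlikely, quiz (assess) when uncertain, advance once likely."""
    if b_knows < 0.4:
        return "teach"
    if b_knows < 0.8:
        return "quiz"
    return "advance"

belief = 0.3
belief = update_belief(belief, "teach", None)
print(choose_action(belief), round(belief, 2))
```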

My research demonstrates that applying a computational modeling approach to a diverse set of problems in computer-assisted learning yields new machine learning algorithms for interpreting and responding to complex behavioral data. The frameworks developed in this research provide a systematic and scalable way to create personalized responses to learners. They show the potential of interactive educational technologies not only to provide content to learners but also to infer their understanding from innovative assessments and to provide personalized guidance and instruction.
