Probabilistic Graph Models for Mastery and Recommendations
Unicon is investing in ways of assisting instructors in the classroom, using analysis of content and assessment relationships to inform instruction. With that goal in mind, this post describes an effort to develop a model that evaluates mastery of a topic and makes content and/or learning activity recommendations based on past student performance.
The objective of this exercise is to identify and recommend optimal paths through content items and activities that learners can use to achieve mastery of a topic. The approach treats mastery as not directly observable (i.e. there is no specific cut score on a test that defines achievement of mastery) and evaluates a sequence of observations to estimate whether mastery has been achieved. To put this into more concrete terms, consider several stakeholder perspectives or user stories:
- As an instructor, I would like a digital assistant to recommend next steps to a student based on their current status of learning a new topic
- As an instructor, I want to know if a student has mastered a new concept or skill
- As a curriculum designer, I would like to know what paths through content are most effective
Without going into a long treatise (see the references below), a Markov Model (also referred to as a Markov Chain or Markov Process) describes a system that has discrete states, with probabilities associated with changing from one state to another. The key assumption of a Markov Model is that the future state depends only on the current state and not on any previous "history" of states. Note: this assumption may raise concern in the context of learning, where moving from one state to another is rarely independent of the past (i.e. skills build on prior skills). However, by restricting the scope to a single topic or small unit of learning, the states can be constructed so the assumption holds within that atomic unit of learning.
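To make the Markov property concrete, here is a minimal sketch; the two-state transition matrix is a hypothetical toy, not the model developed below:

```python
import numpy as np

# A toy two-state Markov chain. Rows are the current state, columns the
# next state; each row is a probability distribution over next states.
P = np.array([[0.7, 0.3],   # from state 0: stay with 0.7, move with 0.3
              [0.1, 0.9]])  # from state 1: move back with 0.1, stay with 0.9

rng = np.random.default_rng(0)
state = 0
for t in range(5):
    # The Markov property: the next state is drawn using only the current
    # state's row of P -- no earlier history is consulted.
    state = rng.choice(2, p=P[state])
    print(t, state)
```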
This blog post covers the conceptual background on the graph model and the algorithms used to develop the model parameters. After parameter estimation, we will use the model to estimate whether mastery has been achieved and to identify a "good" sequence of learning activities. Synthetically generated data will be used to illustrate the approach. If you would like to experiment with the model, the Jupyter Notebooks I used are available to play with.
The Model
Hidden Markov Models (HMMs) have states that cannot be directly observed or measured and must be inferred from the observable states. Consider a fairly simple set of states for learning a topic, where each observable state is a discrete record of the student's incremental activity.
The observable states are:
- Used lecture content, e.g. either attended a lecture or watched a lecture video
- Used text content, e.g. read a section of the textbook or other body of text content
- Played a game (or other interactive or simulation) that covers the associated concept/skill
- Took a test on the topic and scored less than 85% correct
- Took a test on the topic and achieved a score equal to or greater than 85% correct
The hidden states to be inferred are:
- Not mastered - the student is not yet competent with the topic or skill that this unit covers
- Mastered - the student is competent with the skill/topic
The model is represented visually below, with initial estimates of the probabilities of transitioning from one state to the next and of the emission probabilities that connect the hidden states to the observable features of the model. These initial estimates are just semi-educated "best guesses" based on an intuitive understanding of the content items; we'll cover estimating the parameters from data in the next section.
Fig. 1 - Hidden Markov Model
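For readers who want to follow along in code, the model might be encoded as follows using the hmmlearn library (CategoricalHMM, called MultinomialHMM in older releases). The numeric values here are placeholder guesses standing in for the Fig. 1 estimates, not the figure's actual numbers:

```python
import numpy as np
from hmmlearn import hmm

hidden_states = ["Not Mastered", "Mastered"]
observables = ["Lecture", "Text", "Game", "Score < 85", "Score >= 85"]

model = hmm.CategoricalHMM(n_components=2)
model.startprob_ = np.array([0.95, 0.05])   # most students start unmastered
model.transmat_ = np.array([[0.70, 0.30],   # Not Mastered -> {NM, M}
                            [0.05, 0.95]])  # Mastered -> {NM, M}
model.emissionprob_ = np.array([
    [0.30, 0.30, 0.30, 0.09, 0.01],         # emissions from Not Mastered
    [0.05, 0.05, 0.20, 0.20, 0.50],         # emissions from Mastered
])

# Sample a short synthetic activity sequence from the model.
X, Z = model.sample(5, random_state=1)
print([observables[int(o)] for o in X.ravel()])
```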
Application of the Model
Estimating HMM Parameters
Referring back to the objectives or user stories defined in the introduction, we can work towards applying the model. First, while we have some intuitive guesses for the state change probabilities, can we derive or "learn" estimates for these model parameters from data? The Baum-Welch algorithm is a common approach to estimating the HMM parameters given multiple sequences of observations. See the references for a detailed description and derivation of Baum-Welch.
As an example, consider a series of observed sequences like the following (here "content 1" is the lecture content and "content 2" is the text content):
- Took content 1; took content 2; played game; scored >= 85 on test
- Took content 1; played game; scored < 85 on test
- Played game; scored < 85 on test
- Took content 2; took content 1; scored >= 85 on test
- Took content 1; took content 2; scored >= 85 on test
Running a series of 50 of these observed sequences with the same frequency (i.e. 10 repetitions of each sequence above), we can derive the HMM parameters:
The initial state probabilities are:
| Not Mastered | Mastered |
|---|---|
| ~1.00 | ~0.00 |
The hidden state transition probabilities are:
| | Not Mastered | Mastered |
|---|---|---|
| Not Mastered | 0.304 | 0.696 |
| Mastered | 0.0002 | 0.9998 |
The emission probabilities for the observables and associated hidden states are:
| | Took Content 1 | Took Content 2 | Played Game | Scored < 85 | Scored >= 85 |
|---|---|---|---|---|---|
| Not Mastered | 0.499 | 0.221 | 0.274 | 0.006 | 0.0004 |
| Mastered | 0.003 | 0.0115 | 0.147 | 0.323 | 0.411 |
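As a rough sketch of how this estimation might be reproduced (assuming hmmlearn; the integer encoding of the symbols is my own), Baum-Welch runs inside fit(). Because EM is sensitive to initialization, the recovered parameters may not match the tables above exactly:

```python
import numpy as np
from hmmlearn import hmm

# Encode observations as integers:
# 0 = Took Content 1, 1 = Took Content 2, 2 = Played Game,
# 3 = Scored < 85,    4 = Scored >= 85
sequences = [
    [0, 1, 2, 4],   # content 1; content 2; game; passed
    [0, 2, 3],      # content 1; game; failed
    [2, 3],         # game; failed
    [1, 0, 4],      # content 2; content 1; passed
    [0, 1, 4],      # content 1; content 2; passed
] * 10              # 10 copies of each sequence = 50 sequences

X = np.concatenate([np.array(s).reshape(-1, 1) for s in sequences])
lengths = [len(s) for s in sequences]

# fit() runs Baum-Welch (EM); n_iter and random_state are arbitrary choices.
model = hmm.CategoricalHMM(n_components=2, n_iter=200, random_state=42)
model.fit(X, lengths)

print("start:\n", model.startprob_)
print("transitions:\n", model.transmat_)
print("emissions:\n", model.emissionprob_)
```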
Now that we have computed the parameters of the HMM model given a set of observed sequences, we can apply the model to the other use cases.
What is the Mastery Path for a Given Student?
Given a sequence of observations for a given student, what does the state of mastered/not mastered look like throughout the sequence of observations, and did the student achieve mastery by the end of the sequence? The Viterbi algorithm finds the most likely sequence of hidden states given a sequence of observations. It iterates over each time step (or observation), finding the maximum probability of any path that reaches state i at time t and is consistent with the observations up to time t.
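Here is a sketch of the decoding step with hmmlearn, rebuilding a model from the rounded parameters reported above. Because the published values are rounded (and the emission rows must be renormalized to sum to 1), the decoded path can differ slightly from the tables in the scenarios below:

```python
import numpy as np
from hmmlearn import hmm

# Symbols: 0=Content 1, 1=Content 2, 2=Game, 3=Score<85, 4=Score>=85
model = hmm.CategoricalHMM(n_components=2)
model.startprob_ = np.array([1.0, 0.0])
model.transmat_ = np.array([[0.304, 0.696],
                            [0.0002, 0.9998]])
emissions = np.array([
    [0.499, 0.221, 0.274, 0.006, 0.0004],   # Not Mastered
    [0.003, 0.0115, 0.147, 0.323, 0.411],   # Mastered
])
# The published values are rounded and do not sum exactly to 1; renormalize.
model.emissionprob_ = emissions / emissions.sum(axis=1, keepdims=True)

# First scenario: game, fail, content 1, fail, content 2, game, pass.
obs = np.array([2, 3, 0, 3, 1, 2, 4]).reshape(-1, 1)
_, path = model.decode(obs, algorithm="viterbi")
print([["Not Mastered", "Mastered"][s] for s in path])
```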
First Scenario
In this example, the student tries to take the test after just playing the game and fails, then watches the lecture video and fails again. After reading the text content and playing the game, the student is able to pass the test.
| Observed State | Mastery State |
|---|---|
| Played game | Not Mastered |
| < 85% on test | Not Mastered |
| Took content 1 | Not Mastered |
| < 85% on test | Not Mastered |
| Took content 2 | Not Mastered |
| Played game | Not Mastered |
| >= 85% on test | Mastered |
Second Scenario
In this example, the student follows a linear progression through all the unit content, passes the test on the first try, then perhaps takes a break by playing the game one more time.
| Observed State | Mastery State |
|---|---|
| Took content 1 | Not Mastered |
| Took content 2 | Not Mastered |
| Played game | Not Mastered |
| >= 85% on test | Mastered |
| Played game | Mastered |
What is a Good Path to Mastery?
Given an estimate of the HMM parameters as determined above, we can evaluate the probability of a given observation sequence occurring using the HMM forward algorithm. We can then evaluate a number of possible sequences that end with a passing test score and find a locally optimal sequence to use as a recommended path through the content.
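Continuing with the model reconstructed in the Viterbi sketch above, hmmlearn's score() returns the log-likelihood of a sequence computed by the forward algorithm, so exponentiating it gives the sequence probability (again, the rounded parameters mean small differences from the table below):

```python
import numpy as np

def seq_prob(model, symbols):
    """Probability of an observation sequence under the model (forward algorithm)."""
    X = np.array(symbols).reshape(-1, 1)
    return np.exp(model.score(X))   # score() returns a log-likelihood

# `model` is the CategoricalHMM reconstructed in the Viterbi sketch above.
# Candidate paths use the same symbol encoding as the earlier sketches.
candidates = {
    "Scored >= 85":                 [4],
    "Took Content 1; Scored >= 85": [0, 4],
    "Took Content 2; Scored >= 85": [1, 4],
    "Played Game; Scored >= 85":    [2, 4],
}
for name, symbols in candidates.items():
    print(f"{name}: {seq_prob(model, symbols):.4f}")
```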
With the model trained by Baum-Welch above, let's evaluate the probability of occurrence of the possible learning paths (not every combination has been evaluated):
Sequence | Probability |
---|---|
Scored >= 85 | 0.0004 |
Took Content 1; Scored >= 85 | 0.143 |
Took Content 2; Scored >= 85 | 0.063 |
Played Game; Scored >= 85 | 0.078 |
Took Content 1; Played Game; Scored >= 85 | 0.033 |
Took Content 2; Played Game; Scored >= 85 | 0.014 |
Played Game; Took Content 1; Scored >= 85 | 0.012 |
Played Game; Took Content 2; Scored >= 85 | 0.014 |
Took Content 1; Took Content 2; Played Game; Scored >= 85 | 0.005 |
Took Content 2; Took Content 1; Played Game; Scored >= 85 | 0.002 |
Took Content 1; Played Game; Took Content 2; Scored >= 85 | 0.004 |
Played Game; Took Content 1; Took Content 2; Scored >= 85 | 0.002 |
Took Content 2; Played Game; Took Content 1; Scored >= 85 | 0.001 |
Played Game; Took Content 2; Took Content 1; Scored >= 85 | 0.001 |
Based on this model, the best recommendation appears to be to take Content 1 and then take the test.
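Programmatically, the recommendation is simply the highest-probability candidate; continuing the sketch above:

```python
# Pick the candidate path with the highest sequence probability.
best = max(candidates, key=lambda name: seq_prob(model, candidates[name]))
print("Recommended path:", best)
```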
Next Up...
In the next post in this series, I will cover the proof-of-concept implementation, showing how interactions with learning activities are used to update the current student state estimate (mastered or not mastered) and to provide content recommendations based on the current student state and on which activities and/or content have already been used.
Also, look for related posts on integrating Reinforcement Learning-based content recommendations into a classroom digital assistant chatbot from another Unicon colleague.
References
- Introduction to Hidden Markov Models with Python Networkx and Sklearn
- Viterbi Algorithm (Wikipedia)
- Baum-Welch Algorithm (Wikipedia)
- A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models, Bilmes, UC Berkeley, TR-97-021, April 1998
- Sklearn
- Python Hidden Markov Model Implementations
