Institute of Information Theory and Automation

Adaptive Testing using Bayesian Networks (PhD Defense of Martin Plajner)

Defense type: 
Ph.D.
Date of Event: 
2021-04-22
Venue: 
FJFI ČVUT, lecture hall T-201 (Trojanova, Praha 2), 10:00
Status: 
defended

Testing of human skills and abilities is a task that is repeated frequently in the modern world. The testing methodology has remained the same for a long time, but there are ways to potentially improve this process. One of them is computerized adaptive testing (CAT). This concept aims at modeling a student, measuring his/her (unobservable) skills and, based on those measurements, predicting his/her performance in testing. This effort allows us to create a shorter and more precise test, as we are able to ask questions that suit the particular student better.
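The adaptive loop described above can be sketched in a few lines: maintain a belief over the latent skill, pick the question whose answer is expected to reduce uncertainty the most, then update the belief by Bayes' rule. This is a minimal illustration with a single binary skill variable and made-up probabilities, not the model from the thesis.

```python
import math

# Hypothetical discrete skill variable and its prior belief.
SKILLS = ["low", "high"]
prior = {"low": 0.5, "high": 0.5}

# P(correct answer | skill) for each candidate question (illustrative numbers).
questions = {
    "q_easy": {"low": 0.70, "high": 0.95},
    "q_hard": {"low": 0.20, "high": 0.80},
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(belief, q, correct):
    # Bayes update of the skill belief after observing the answer to q.
    like = {s: questions[q][s] if correct else 1 - questions[q][s] for s in SKILLS}
    joint = {s: belief[s] * like[s] for s in SKILLS}
    z = sum(joint.values())
    return {s: joint[s] / z for s in SKILLS}

def expected_entropy(belief, q):
    # Average posterior entropy over both possible answers to q.
    p_correct = sum(belief[s] * questions[q][s] for s in SKILLS)
    return (p_correct * entropy(posterior(belief, q, True))
            + (1 - p_correct) * entropy(posterior(belief, q, False)))

# Ask the question that is expected to reduce skill uncertainty the most.
best = min(questions, key=lambda q: expected_entropy(prior, q))
belief = posterior(prior, best, correct=True)
```

With these numbers the discriminating question `q_hard` is selected over the easy one, which almost everyone answers correctly and which therefore tells us little; in a real CAT system this selection step is repeated after every answer.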

In this dissertation thesis, our research centers on computerized adaptive testing using Bayesian networks as student models. We present a methodology for facilitating an adaptive test with this type of model and verify the added value of the CAT concept over the classical approach. The verification is performed both on artificial data and on two empirical datasets: one collected as a mathematics test at high schools, the other the official results dataset of the Czech National Final High School Exam. Our tests showed that the adaptive approach decreases the length of the test and provides more reliable results. Moreover, the student model lets us extract more information about the student than just the score of a single test.

In our research we use Bayesian networks as student models and evaluate their effectiveness for this task experimentally. We have identified, described and tested the effect of a special condition on these models: monotonicity. The monotonicity condition places constraints on the model's parameters; informally, a higher skill level must never decrease the probability of answering a question correctly. We empirically showed that enforcing this condition improves the quality of a model learned from data, especially when the learning dataset is small. We also derive and present a new method for learning parameters that guarantees the learned models are monotone. In our experiments, these models provide better results than both non-monotone methods and competing monotone methods. Monotonicity thus aids model learning and yields more reliable parameters, and monotone models are more likely to be accepted by end users in areas where monotonicity is expected. The application area of such models is large, as monotonicity is a quite common feature of the modeled reality: beyond CAT, it can help in any domain where learning from small datasets is a common problem.
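The monotonicity constraint can be made concrete with a small sketch: for a question's conditional probability table (CPT), the probability of a correct answer must be non-decreasing along the ordered skill levels. The repair step shown here uses the classic pool-adjacent-violators algorithm as one simple way to project a non-monotone estimate onto a monotone one; it is an illustration of the constraint, not the learning method developed in the thesis. All numbers are hypothetical.

```python
def is_monotone(cpt):
    """cpt: list of P(correct | skill level), ordered from lowest to highest skill."""
    return all(a <= b for a, b in zip(cpt, cpt[1:]))

def project_monotone(cpt):
    # Pool Adjacent Violators: nearest non-decreasing sequence in L2 —
    # one simple way to repair parameters estimated from a small sample.
    blocks = [[v, 1] for v in cpt]  # each block holds [mean, weight]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:
            # Merge the violating pair into one block with their weighted mean.
            w = blocks[i][1] + blocks[i + 1][1]
            m = (blocks[i][0] * blocks[i][1] + blocks[i + 1][0] * blocks[i + 1][1]) / w
            blocks[i:i + 2] = [[m, w]]
            i = max(i - 1, 0)  # merging may create a new violation to the left
        else:
            i += 1
    out = []
    for m, w in blocks:
        out.extend([m] * w)
    return out

# A CPT estimated from few observations may violate monotonicity by chance:
estimated = [0.2, 0.6, 0.4]
repaired = project_monotone(estimated)
```

Here the violating pair (0.6, 0.4) is pooled to 0.5, giving the monotone table [0.2, 0.5, 0.5]; with a small learning dataset such sampling noise is exactly the situation in which enforcing monotonicity helps.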
