pith. machine review for the scientific record.

arxiv: 1904.11738 · v1 · submitted 2019-04-26 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Deep-IRT: Make Deep Learning Based Knowledge Tracing Explainable Using Item Response Theory

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI · stat.ML
keywords: model · item · knowledge tracing · deep learning · student · deep-irt
0 comments
read the original abstract

Deep learning based knowledge tracing models have been shown to outperform traditional knowledge tracing models without the need for human-engineered features, yet their parameters and representations have long been criticized for not being explainable. In this paper, we propose Deep-IRT, a synthesis of the item response theory (IRT) model and a knowledge tracing model based on the deep neural network architecture called dynamic key-value memory network (DKVMN), to make deep learning based knowledge tracing explainable. Specifically, we use the DKVMN model to process the student's learning trajectory and estimate the student ability level and the item difficulty level over time. Then, we use the IRT model to estimate the probability that the student answers an item correctly, given the estimated student ability and item difficulty. Experiments show that the Deep-IRT model retains the performance of the DKVMN model while providing a direct psychological interpretation of both students and items.
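The final step the abstract describes — mapping the estimated ability and difficulty to a probability of a correct answer — is a standard one-parameter IRT (Rasch-style) output layer. A minimal sketch, assuming the DKVMN networks have already produced scalar per-timestep estimates and that the ability term is scaled (the function name `irt_output` and the scale value are illustrative assumptions, not taken verbatim from the paper):

```python
import math

def irt_output(student_ability: float, item_difficulty: float,
               scale: float = 3.0) -> float:
    """IRT output layer: probability the student answers the item correctly.

    student_ability and item_difficulty are assumed to be scalar estimates
    emitted by the DKVMN-based networks at the current timestep; `scale`
    stretches the ability term before it is compared with difficulty.
    """
    logit = scale * student_ability - item_difficulty
    return 1.0 / (1.0 + math.exp(-logit))

# Equal ability and difficulty (both zero) gives a 50% chance;
# higher ability raises the predicted probability.
p_even = irt_output(0.0, 0.0)
p_strong = irt_output(1.0, 0.0)
```

Because ability and difficulty enter the prediction only through this subtraction, each can be read off directly as a psychological quantity — which is the interpretability claim the abstract makes.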

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Explainable Knowledge Tracing via Probabilistic Embeddings and Pattern-based Reasoning

cs.AI · 2026-05 · unverdicted · novelty 6.0

    PLKT models student knowledge with Beta probabilistic embeddings and performs explicit logical reasoning over historical interactions to deliver both accurate predictions and interpretable explanations in knowledge tracing.

  2. Interpretable Difficulty-Aware Knowledge Tracing in Tutor-Student Dialogues

cs.CL · 2026-05 · unverdicted · novelty 6.0

    A difficulty-aware conversational knowledge tracing framework that combines LLMs with Item Response Theory to produce interpretable student performance predictions in tutor dialogues.