pith. machine review for the scientific record.

arxiv: 2601.04025 · v2 · submitted 2026-01-07 · 💻 cs.CL · cs.CY

Recognition: unknown

Simulated Students in Tutoring Dialogues: Substance or Illusion?

classification 💻 cs.CL cs.CY
keywords: students · evaluation · simulated · simulation · student · tutoring · work · many
original abstract

Advances in large language models (LLMs) enable many new innovations in education. However, evaluating the effectiveness of new technology requires real students, which is time-consuming and hard to scale up. Therefore, many recent works on LLM-powered tutoring solutions have used simulated students for both training and evaluation, often via simple prompting. Surprisingly, little work has been done to ensure or even measure the quality of simulated students. In this work, we formally define the student simulation task, propose a set of evaluation metrics that span linguistic, behavioral, and cognitive aspects, and benchmark a wide range of student simulation methods on these metrics. We experiment on a real-world math tutoring dialogue dataset, where both automated and human evaluation results show that prompting strategies for student simulation perform poorly; supervised fine-tuning and preference optimization yield much better but still limited performance, motivating future work on this challenging task.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Simulating Students or Sycophantic Problem Solving? On Misconception Faithfulness of LLM Simulators

    cs.CL 2026-05 conditional novelty 7.0

    LLM simulators exhibit near-zero selective response to targeted misconception feedback and behave sycophantically, but SFT and SFS-aligned RL improve this property.