Pith · machine review for the scientific record

I trust the AI system’s outputs

1 Pith paper cites this work. Polarity classification is still indexing.

citing-paper facets

citation-role summary: other (1)

citation-polarity summary: unclear (1)

fields: cs.HC (1)

years: 2026 (1)

verdicts: unverdicted (1)

representative citing papers

Evaluating the False Trust engendered by LLM Explanations

cs.HC · 2026-05-11 · unverdicted · novelty 6.0

A user study finds that LLM reasoning traces and post-hoc explanations create false trust by increasing acceptance of incorrect answers, whereas contrastive dual explanations improve users' ability to detect errors.

citing papers explorer

Showing 1 of 1 citing paper.

  • Evaluating the False Trust engendered by LLM Explanations cs.HC · 2026-05-11 · unverdicted · none · ref 63
