pith. machine review for the scientific record.

Is Safety Standard Same for Everyone? User-Specific Safety Evaluation of Large Language Models

2 Pith papers cite this work. Polarity classification is still being indexed.


fields

cs.AI (1) · cs.CY (1)

years

2025 (2)

verdicts

UNVERDICTED (2)

representative citing papers

LLM Harms: A Taxonomy and Discussion

cs.CY · 2025-12-05 · unverdicted · novelty 3.0

This paper proposes a taxonomy of LLM harms across five categories and suggests mitigation strategies, along with a dynamic auditing system, for responsible development.

citing papers explorer

Showing 2 of 2 citing papers.

  • Beyond Context: Large Language Models' Failure to Grasp Users' Intent cs.AI · 2025-12-24 · unverdicted · none · ref 69

    LLMs fail to detect hidden harmful intent, allowing systematic bypass of safety mechanisms through framing techniques, with reasoning modes often worsening the issue.

  • LLM Harms: A Taxonomy and Discussion cs.CY · 2025-12-05 · unverdicted · none · ref 244

    This paper proposes a taxonomy of LLM harms across five categories and suggests mitigation strategies, along with a dynamic auditing system, for responsible development.