In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8384–8395.
1 Pith paper cites this work. Polarity classification is still indexing.
Pith papers citing it: 1
Fields: cs.CL (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Representative citing papers: 1
Breaking the Generator Barrier: Disentangled Representation for Generalizable AI-Text Detection
A disentangled representation framework for AI-text detection improves generalization to unseen generators with up to 24.2% accuracy gain on the MAGE benchmark covering 20 LLMs.