pith. machine review for the scientific record.

Think You Have Solved Direct-Answer Question Answering? Try ARC-DA, the Direct-Answer AI2 Reasoning Challenge. CoRR, abs/2102.03315

3 Pith papers cite this work. Polarity classification is still indexing.

3 Pith papers citing it

fields

cs.CL 3

years

2026 2 2024 1

verdicts

UNVERDICTED 3

representative citing papers

RUQuant: Towards Refining Uniform Quantization for Large Language Models

cs.CL · 2026-04-05 · unverdicted · novelty 6.0

RUQuant uses block-wise composite orthogonal matrices built from Householder reflections and Givens rotations, plus a fine-tuned global reflection, to retain 99.8% of full-precision accuracy at W6A6 and 97% at W4A4 on 13B LLMs, completing quantization in about one minute.
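The building blocks the summary names can be illustrated generically: Householder reflections and Givens rotations are both orthogonal, so any product of them is orthogonal too. This is a minimal sketch, not RUQuant's code; `householder` and `givens` are illustrative helpers.

```python
import numpy as np

def householder(v):
    # Householder reflection H = I - 2 vv^T / (v^T v); orthogonal and symmetric.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def givens(n, i, j, theta):
    # Givens rotation in the (i, j) coordinate plane; orthogonal, det = +1.
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

rng = np.random.default_rng(0)
n = 8
# Compose a reflection with two rotations; the product stays orthogonal.
Q = householder(rng.standard_normal(n)) @ givens(n, 0, 3, 0.7) @ givens(n, 2, 5, 1.1)
assert np.allclose(Q.T @ Q, np.eye(n))
```

Because such a `Q` satisfies `Q.T @ Q = I`, multiplying weights by it (and activations by its transpose) leaves a layer's output unchanged while reshaping the weight distribution, which is the general motivation for rotation-based quantization schemes.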

Corrective Retrieval Augmented Generation

cs.CL · 2024-01-29 · unverdicted · novelty 6.0

CRAG improves RAG robustness via a retrieval quality evaluator that triggers web augmentation and a decompose-recompose filter to focus on relevant information, yielding better results on short- and long-form generation tasks.
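The control flow the summary describes, score retrieved documents, fall back to web augmentation when retrieval looks unreliable, then filter down to relevant pieces, can be sketched as follows. All helpers here (`evaluate_quality`, `decompose_recompose`, the `web_search` callable) are hypothetical stand-ins, not CRAG's actual components.

```python
def evaluate_quality(query, text):
    # Stand-in retrieval evaluator: fraction of query terms the text covers.
    terms = set(query.lower().split())
    hits = sum(t in text.lower() for t in terms)
    return hits / max(len(terms), 1)

def decompose_recompose(query, docs, threshold=0.5):
    # Decompose documents into sentences, keep relevant ones, recompose.
    kept = []
    for doc in docs:
        for sent in doc.split("."):
            if sent.strip() and evaluate_quality(query, sent) >= threshold:
                kept.append(sent.strip())
    return ". ".join(kept)

def corrective_retrieve(query, retrieved, web_search, threshold=0.5):
    # If the best retrieved document scores below threshold, the retrieval
    # is judged unreliable and web results are added before filtering.
    scores = [evaluate_quality(query, d) for d in retrieved]
    if not scores or max(scores) < threshold:
        retrieved = retrieved + web_search(query)
    return decompose_recompose(query, retrieved, threshold)

docs = ["Paris is the capital of France. Bananas are yellow."]
context = corrective_retrieve("capital of France", docs, lambda q: [])
```

The term-overlap scorer is only a placeholder for a learned evaluator; the point of the sketch is the branch: augmentation fires only when the evaluator's confidence is low.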

citing papers explorer

Showing 3 of 3 citing papers.

  • RUQuant: Towards Refining Uniform Quantization for Large Language Models cs.CL · 2026-04-05 · unverdicted · none · ref 36

    RUQuant uses block-wise composite orthogonal matrices built from Householder reflections and Givens rotations, plus a fine-tuned global reflection, to retain 99.8% of full-precision accuracy at W6A6 and 97% at W4A4 on 13B LLMs, completing quantization in about one minute.

  • Corrective Retrieval Augmented Generation cs.CL · 2024-01-29 · unverdicted · none · ref 4

    CRAG improves RAG robustness via a retrieval quality evaluator that triggers web augmentation and a decompose-recompose filter to focus on relevant information, yielding better results on short- and long-form generation tasks.

  • SEPTQ: A Simple and Effective Post-Training Quantization Paradigm for Large Language Models cs.CL · 2026-04-11 · unverdicted · none · ref 2

    SEPTQ simplifies LLM post-training quantization to two steps via static global importance scoring and mask-guided column-wise weight updates, claiming superior results over baselines in low-bit settings.
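The two-step shape SEPTQ's summary describes, a static global importance score followed by a mask-guided column-wise update, can be sketched generically. This is an illustrative approximation under assumed details: the importance score here is an activation-energy-weighted column norm (a common proxy, not necessarily SEPTQ's exact criterion), and "update" is simplified to protecting the top columns from round-to-nearest quantization.

```python
import numpy as np

def quantize_rtn(w, bits=4):
    # Symmetric round-to-nearest quantization of a weight block.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax if np.abs(w).max() > 0 else 1.0
    return np.round(w / scale).clip(-qmax - 1, qmax) * scale

def two_step_ptq(W, X, keep_frac=0.1, bits=4):
    # Step 1: static global importance per input column of W (out x in),
    # weighted by mean activation energy from calibration data X (samples x in).
    col_importance = (W ** 2).sum(axis=0) * (X ** 2).mean(axis=0)
    k = max(1, int(keep_frac * W.shape[1]))
    mask = np.zeros(W.shape[1], dtype=bool)
    mask[np.argsort(col_importance)[-k:]] = True
    # Step 2: mask-guided column-wise treatment - quantize unprotected
    # columns, keep the most important columns at full precision.
    Wq = W.copy()
    Wq[:, ~mask] = quantize_rtn(W[:, ~mask], bits)
    return Wq, mask

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 32))
X = rng.standard_normal((64, 32))
Wq, mask = two_step_ptq(W, X, keep_frac=0.25, bits=4)
```

The appeal of such a scheme is that both steps are one-shot: the importance scores are computed once from calibration activations, so no iterative per-layer optimization loop is needed.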