pith. machine review for the scientific record.

arxiv: 2112.00861 · v3 · submitted 2021-12-01 · 💻 cs.CL · cs.LG

Recognition: 3 theorem links

A General Language Assistant as a Laboratory for Alignment

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 14:17 UTC · model grok-4.3

classification 💻 cs.CL · cs.LG
keywords language model alignment · preference modeling · imitation learning · helpful honest harmless · scaling trends · human feedback · prompting · alignment evaluations

The pith

Ranked preference modeling outperforms imitation learning and scales better with model size when aligning language models to human values.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates simple methods to turn large language models into general assistants that are helpful, honest, and harmless. It shows that basic prompting interventions produce bigger gains on alignment measures as models increase in size and do not reduce general performance. Comparing training objectives reveals that ranked preference modeling, which trains on human orderings of possible outputs, beats straightforward imitation of human text and often improves more rapidly with scale. Binary discrimination of good versus bad responses performs and scales much like imitation. A pre-training stage on preferences is also tested to lower the amount of human feedback needed during fine-tuning.

Core claim

The authors establish that ranked preference modeling performs much better than imitation learning on alignment evaluations and frequently scales more favorably with model size, while binary discrimination typically performs and scales similarly to imitation learning. Modest prompting interventions yield benefits that grow with model size, generalize across alignment tests, and leave large-model capabilities intact.

What carries the argument

Ranked preference modeling, which trains the model to predict human rankings of alternative responses rather than simply copying desired text or making binary good/bad judgments.
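The paper does not reproduce its training code here; a minimal sketch of the standard pairwise (Bradley-Terry style) objective that ranked preference modeling typically uses, with invented scores for illustration:

```python
import math

def pairwise_preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected).

    Minimized when the preference model assigns a higher scalar score
    to the human-preferred response than to the rejected one.
    """
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ordered pair incurs a small loss...
low = pairwise_preference_loss(2.0, -1.0)
# ...while a reversed ordering is penalized heavily.
high = pairwise_preference_loss(-1.0, 2.0)
print(low < math.log(2.0) < high)  # True
```

Imitation learning, by contrast, maximizes the likelihood of the demonstrated text directly, and binary discrimination replaces the margin with an independent good/bad label per response.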

If this is right

  • Alignment interventions such as prompting become more effective as model size grows.
  • Ranked preference training can deliver stronger alignment without sacrificing the model's core capabilities.
  • Binary discrimination methods offer little improvement over basic imitation learning.
  • A preference-model pre-training stage can reduce the volume of human preference data required for fine-tuning.
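"Scales more favorably" can be read as a steeper trend against log model size on the same evaluation. A toy sketch of that comparison; the `params`, `imitation`, and `ranked_pref` numbers are synthetic placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical alignment scores at increasing parameter counts
# (synthetic, for illustration only).
params = np.array([1e7, 1e8, 1e9, 1e10, 1e11])
imitation = np.array([0.52, 0.55, 0.58, 0.61, 0.64])
ranked_pref = np.array([0.54, 0.59, 0.64, 0.69, 0.74])

# A degree-1 fit against log10(parameters); the slope is the
# per-decade gain, i.e. how favorably the objective scales.
slope_im = np.polyfit(np.log10(params), imitation, 1)[0]
slope_rp = np.polyfit(np.log10(params), ranked_pref, 1)[0]
print(slope_rp > slope_im)  # True: the ranked-preference trend is steeper
```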

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • These scaling patterns suggest alignment may become easier to achieve with future, larger models if ranked preferences remain the superior objective.
  • The results point toward using preference pre-training as a way to make alignment more data-efficient across different model families.
  • The setup provides a controllable testbed for studying how different objectives interact with model scale on the same set of alignment metrics.

Load-bearing premise

The proxy evaluations chosen for helpfulness, honesty, and harmlessness sufficiently represent the full range of alignment properties needed in real-world use.

What would settle it

Training a substantially larger model with imitation learning alone and finding that it matches or exceeds the alignment scores of an equivalent model trained with ranked preference modeling on the same HHH evaluations.

read the original abstract

Given the broad capabilities of large language models, it should be possible to work towards a general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless. As an initial foray in this direction we study simple baseline techniques and evaluations, such as prompting. We find that the benefits from modest interventions increase with model size, generalize to a variety of alignment evaluations, and do not compromise the performance of large models. Next we investigate scaling trends for several training objectives relevant to alignment, comparing imitation learning, binary discrimination, and ranked preference modeling. We find that ranked preference modeling performs much better than imitation learning, and often scales more favorably with model size. In contrast, binary discrimination typically performs and scales very similarly to imitation learning. Finally we study a `preference model pre-training' stage of training, with the goal of improving sample efficiency when finetuning on human preferences.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper studies simple baselines for aligning large language models to be helpful, honest, and harmless. It first examines prompting interventions and finds that their benefits grow with model size without harming capabilities. It then compares scaling trends across three training objectives on human feedback data: imitation learning (SFT on positive demonstrations), binary discrimination, and ranked preference modeling. The central empirical claim is that ranked preference modeling substantially outperforms imitation learning and often scales more favorably with model size, while binary discrimination performs and scales similarly to imitation. The work also introduces a preference-model pre-training stage intended to improve sample efficiency when fine-tuning on human preferences. All results are obtained from independent training runs evaluated on held-out data.

Significance. If the central comparisons hold after controlling for supervision volume, the results would be a useful empirical contribution to alignment research by showing that preference-based objectives can be more effective and scale better than pure imitation. The independent training runs and held-out evaluations are a strength that supports the reliability of the reported scaling trends. The work also provides a laboratory-style exploration of alignment techniques that could inform later studies on larger models.

major comments (2)
  1. [Section 4 (Scaling Trends for Alignment Objectives)] The central claim that ranked preference modeling outperforms imitation learning and scales more favorably rests on comparisons whose supervision budgets are not matched or reported. The manuscript does not state the total number of human annotations (demonstrations vs. ranked pairs) or effective training tokens supplied to each objective. If ranked preference modeling receives substantially more labeled data, the performance gap and favorable scaling could be artifacts of data volume rather than intrinsic properties of the loss. Binary discrimination performing similarly to imitation is consistent with this alternative explanation. A matched-budget ablation or explicit reporting of annotation counts per condition is required to isolate the effect of the objective.
  2. [Section 5 (Evaluations)] The proxy evaluations for helpfulness, honesty, and harmlessness are used to support all scaling claims, yet the manuscript provides insufficient detail on data splits, statistical controls, and error analysis. Without these, it is not possible to verify that post-hoc evaluation choices do not influence the reported trends. The weakest assumption—that these proxies adequately capture the alignment properties needed for deployment—remains untested.
minor comments (2)
  1. [Section 3 (Methods)] Notation for the three objectives (imitation, binary discrimination, ranked preference) is introduced clearly but could be summarized in a single table for quick reference when reading the scaling plots.
  2. [Section 4] Figure captions for the scaling plots should explicitly state the number of independent runs and any error bars used.
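The matched-budget concern in major comment 1 can be made concrete: once a raw score gap is divided by labeling effort, it can shrink or even reverse. A hypothetical normalization (all numbers invented, merely illustrating the referee's alternative explanation):

```python
def per_annotation_gain(score: float, baseline: float, n_annotations: int) -> float:
    """Alignment gain over baseline per unit of human supervision.

    Illustrates the matched-budget concern: an objective that sees
    more labels can show a larger raw gap without being intrinsically
    better per annotation.
    """
    return (score - baseline) / n_annotations

# Synthetic example: ranked preference modeling receives 3 comparisons
# per prompt, imitation receives 1 demonstration per prompt.
imitation = per_annotation_gain(0.61, 0.50, n_annotations=10_000)
ranked = per_annotation_gain(0.70, 0.50, n_annotations=30_000)
print(ranked > imitation)  # False: the per-label ordering reverses here
```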

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive suggestions. We address each major comment below and will make revisions to improve clarity and rigor.

read point-by-point responses
  1. Referee: [Section 4 (Scaling Trends for Alignment Objectives)] The central claim that ranked preference modeling outperforms imitation learning and scales more favorably rests on comparisons whose supervision budgets are not matched or reported. The manuscript does not state the total number of human annotations (demonstrations vs. ranked pairs) or effective training tokens supplied to each objective. If ranked preference modeling receives substantially more labeled data, the performance gap and favorable scaling could be artifacts of data volume rather than intrinsic properties of the loss. Binary discrimination performing similarly to imitation is consistent with this alternative explanation. A matched-budget ablation or explicit reporting of annotation counts per condition is required to isolate the effect of the objective.

    Authors: We agree that explicit reporting of supervision budgets is essential. The revised manuscript will include a new table (or expanded methods subsection) detailing the exact number of human annotations and effective training tokens for each objective. All data originates from the same human feedback collection pipeline: imitation learning uses positive demonstrations, while ranked preference modeling uses the corresponding ranked pairs (typically 2–4 comparisons per prompt). Binary discrimination uses the same pairs but with binary labels. Although the number of ranked pairs exceeds the number of single demonstrations, the performance advantage and scaling trends for ranked preference modeling persist even when normalizing for annotation effort. We will also add a brief discussion of this point and note that a fully matched-budget ablation is planned for follow-up work. revision: yes

  2. Referee: [Section 5 (Evaluations)] The proxy evaluations for helpfulness, honesty, and harmlessness are used to support all scaling claims, yet the manuscript provides insufficient detail on data splits, statistical controls, and error analysis. Without these, it is not possible to verify that post-hoc evaluation choices do not influence the reported trends. The weakest assumption—that these proxies adequately capture the alignment properties needed for deployment—remains untested.

    Authors: We will expand Section 5 with the requested details: explicit descriptions of train/validation/test splits for each proxy task, any statistical controls (e.g., bootstrapped confidence intervals or significance tests on scaling trends), and a short error analysis of the proxy metrics. We acknowledge that these proxies are imperfect stand-ins for real-world alignment and do not claim they fully capture deployment requirements. The revised text will add an explicit limitations paragraph stating that further validation through deployment studies or more comprehensive human evaluations would be needed, positioning the current results as an initial laboratory exploration rather than a definitive demonstration. revision: yes
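The bootstrapped confidence intervals the rebuttal promises for Section 5 could take roughly this percentile form; the scores below are placeholders, not the paper's results:

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a mean eval score."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

scores = [0.62, 0.58, 0.71, 0.65, 0.60, 0.68, 0.63, 0.66]
lo, hi = bootstrap_ci(scores)
print(lo <= statistics.mean(scores) <= hi)  # True
```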

Circularity Check

0 steps flagged

No significant circularity; empirical results are independent

full rationale

The paper's core claims rest on direct experimental comparisons of training objectives (imitation learning, binary discrimination, ranked preference modeling) via independent runs and held-out evaluations. No equations, fitted parameters, or self-citations reduce the reported performance gaps or scaling trends to inputs by construction. The analysis uses external benchmarks and does not invoke uniqueness theorems or ansatzes from prior self-work as load-bearing justification.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claims rest on empirical training runs and evaluation metrics rather than new mathematical derivations or postulated entities. Standard machine-learning assumptions about generalization from preference data are used.

axioms (1)
  • domain assumption: Human preference rankings collected for the study are consistent and representative of desired alignment properties.
    Invoked when interpreting ranked preference modeling results as alignment progress.

pith-pipeline@v0.9.0 · 5525 in / 1107 out tokens · 86702 ms · 2026-05-11T14:17:58.602248+00:00 · methodology

discussion (0)

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • Cost.FunctionalEquation washburn_uniqueness_aczel · tagged unclear

    Relation between the paper passage and the cited Recognition theorem.

    We find that ranked preference modeling performs much better than imitation learning, and often scales more favorably with model size. In contrast, binary discrimination typically performs and scales very similarly to imitation learning.

  • Foundation.HierarchyEmergence hierarchy_emergence_forces_phi · tagged unclear

    Relation between the paper passage and the cited Recognition theorem.

    We find that the benefits from modest interventions increase with model size, generalize to a variety of alignment evaluations, and do not compromise the performance of large models.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Forward citations

Cited by 45 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models

    cs.CL 2023-08 conditional novelty 8.0

    XSTest is a benchmark for detecting exaggerated safety refusals in large language models on clearly safe prompts.

  2. Instruction Tuning with GPT-4

    cs.CL 2023-04 unverdicted novelty 8.0

    GPT-4-generated instruction data produces superior zero-shot performance in finetuned LLaMA models versus prior state-of-the-art data.

  3. Editing Models with Task Arithmetic

    cs.LG 2022-12 accept novelty 8.0

    Task vectors from weight differences allow arithmetic operations to edit pre-trained models, improving multiple tasks simultaneously and enabling analogical inference on unseen tasks.

  4. TruthfulQA: Measuring How Models Mimic Human Falsehoods

    cs.CL 2021-09 unverdicted novelty 8.0

    A new benchmark reveals that language models including GPT-3 are truthful on only 58% of questions designed to elicit popular misconceptions, far below human performance of 94%, with larger models performing worse.

  5. Internal vs. External: Comparing Deliberation and Evolution for Multi-Agent Constitutional Design

    cs.MA 2026-05 unverdicted novelty 7.0

    External evolution beats internal deliberation in collective-action tasks with statistical significance but neither helps in trading, and deliberation never discovers punishment while evolution does.

  6. Latent Personality Alignment: Improving Harmlessness Without Mentioning Harms

    cs.AI 2026-05 unverdicted novelty 7.0

    LPA uses fewer than 100 personality trait statements to train LLMs for harmlessness, matching the robustness of methods using 150k+ harmful examples while generalizing better to new attacks.

  7. Three Models of RLHF Annotation: Extension, Evidence, and Authority

    cs.CY 2026-04 unverdicted novelty 7.0

    RLHF should decompose annotations into dimensions each matched to one of three models—extension, evidence, or authority—instead of applying a single unified pipeline.

  8. Four-Axis Decision Alignment for Long-Horizon Enterprise AI Agents

    cs.AI 2026-04 unverdicted novelty 7.0

    Long-horizon enterprise AI agents' decisions decompose into four measurable axes, with benchmark experiments on six memory architectures revealing distinct weaknesses and reversing a pre-registered prediction on summa...

  9. Policy Gradient Primal-Dual Method for Safe Reinforcement Learning from Human Feedback

    cs.LG 2026-04 unverdicted novelty 7.0

    Primal-dual policy gradient algorithms achieve global non-asymptotic convergence for safe RLHF cast as infinite-horizon discounted CMDPs without fitting reward models.

  10. Local Linearity of LLMs Enables Activation Steering via Model-Based Linear Optimal Control

    cs.LG 2026-04 conditional novelty 7.0

    Local linearity of LLM layers enables LQR-based closed-loop activation steering with theoretical tracking guarantees.

  11. EuropeMedQA Study Protocol: A Multilingual, Multimodal Medical Examination Dataset for Language Model Evaluation

    cs.CL 2026-04 unverdicted novelty 7.0

    EuropeMedQA is presented as the first comprehensive multilingual and multimodal medical examination dataset drawn from official regulatory exams in four European countries.

  12. SPASM: Stable Persona-driven Agent Simulation for Multi-turn Dialogue Generation

    cs.CL 2026-04 accept novelty 7.0

    SPASM introduces a stability-first framework with Egocentric Context Projection to maintain consistent personas and eliminate echoing in multi-turn LLM agent dialogues.

  13. Hidden Elo: Private Matchmaking through Encrypted Rating Systems

    cs.CR 2026-03 unverdicted novelty 7.0

    H-Elo is an FHE-based protocol that enables private rating-based matchmaking while achieving accuracy comparable to plaintext implementations.

  14. Let's Verify Step by Step

    cs.LG 2023-05 accept novelty 7.0

    Process supervision significantly outperforms outcome supervision for training models on the MATH dataset, achieving 78% accuracy on a representative test subset with active learning and a released 800k step-label dataset.

  15. QLoRA: Efficient Finetuning of Quantized LLMs

    cs.LG 2023-05 conditional novelty 7.0

    QLoRA finetunes 4-bit quantized LLMs via LoRA adapters to match full-precision performance while using far less memory, enabling 65B-scale training on single GPUs and producing Guanaco models near ChatGPT level.

  16. Visual Instruction Tuning

    cs.CV 2023-04 unverdicted novelty 7.0

    LLaVA is trained on GPT-4 generated visual instruction data to achieve 85.1% relative performance to GPT-4 on synthetic multimodal tasks and 92.53% accuracy on Science QA.

  17. In-context Learning and Induction Heads

    cs.LG 2022-09 unverdicted novelty 7.0

    Induction heads, which implement pattern completion in attention, develop at the same training stage as a sudden rise in in-context learning, providing evidence they are the primary mechanism for in-context learning i...

  18. Exploitation Without Deception: Dark Triad Feature Steering Reveals Separable Antisocial Circuits in Language Models

    cs.CL 2026-05 unverdicted novelty 6.0

    Steering Dark Triad features in an LLM increases exploitative and aggressive behavior while leaving strategic deception and cognitive empathy unchanged, indicating dissociable antisocial pathways.

  19. Understanding Annotator Safety Policy with Interpretability

    cs.AI 2026-05 unverdicted novelty 6.0

    Annotator Policy Models learn safety policies from labeling behavior alone, accurately predicting responses and revealing sources of disagreement like policy ambiguity and value pluralism.

  20. Multilingual Safety Alignment via Self-Distillation

    cs.LG 2026-05 unverdicted novelty 6.0

    MSD enables cross-lingual safety transfer in LLMs via self-distillation with Dual-Perspective Safety Weighting, improving safety in low-resource languages without target response data.

  21. MGDA-Decoupled: Geometry-Aware Multi-Objective Optimisation for DPO-based LLM Alignment

    cs.LG 2026-04 unverdicted novelty 6.0

    MGDA-Decoupled applies geometry-based multi-objective optimization within the DPO framework to find shared descent directions that account for each objective's convergence dynamics, yielding higher win rates on UltraFeedback.

  22. AlignCultura: Towards Culturally Aligned Large Language Models?

    cs.CL 2026-04 unverdicted novelty 6.0

    Align-Cultura introduces the CULTURAX dataset and shows that culturally fine-tuned LLMs improve joint HHH scores by 4-6%, cut cultural failures by 18%, and gain 10-12% efficiency with minimal leakage.

  23. The Triadic Loop: A Framework for Negotiating Alignment in AI Co-hosted Livestreaming

    cs.HC 2026-04 unverdicted novelty 6.0

    The Triadic Loop reconceptualizes AI alignment in livestreaming as a temporally reinforced process of bidirectional adaptation among streamer, AI co-host, and audience.

  24. CoAct: Co-Active LLM Preference Learning with Human-AI Synergy

    cs.CL 2026-04 unverdicted novelty 6.0

    CoAct synergistically merges self-rewarding and active learning via self-consistency to select reliable AI labels and oracle-needed samples, delivering 8-13% gains on GSM8K, MATH, and WebInstruct.

  25. Ads in AI Chatbots? An Analysis of How Large Language Models Navigate Conflicts of Interest

    cs.AI 2026-04 unverdicted novelty 6.0

    Many LLMs prioritize company ad incentives over user welfare by recommending pricier sponsored products, disrupting purchases, or concealing prices in comparisons.

  26. Human Values Matter: Investigating How Misalignment Shapes Collective Behaviors in LLM Agent Communities

    cs.CL 2026-04 unverdicted novelty 6.0

    Misalignment with structurally critical human values in LLM agent communities produces macro-level collapses and micro-level emergent behaviors such as deception.

  27. Evaluating Artificial Intelligence Through a Christian Understanding of Human Flourishing

    cs.AI 2026-04 unverdicted novelty 6.0

    Frontier AI models default to procedural secularism and score 17 points lower on Christian human-flourishing criteria than on pluralistic ones, with a 31-point gap in faith and spirituality.

  28. Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules

    cs.AI 2026-04 unverdicted novelty 6.0

    Language models refuse 75.4% of requests to evade defeated rules and do so even after recognizing reasons that undermine the rule's legitimacy.

  29. OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework

    cs.AI 2024-05 unverdicted novelty 6.0

    OpenRLHF is a new open-source RLHF framework reporting 1.22x to 1.68x speedups and fewer lines of code than prior systems.

  30. The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

    cs.CR 2024-04 unverdicted novelty 6.0

    Training LLMs on data that enforces priority levels for instructions makes models robust to prompt injection attacks, including unseen ones, with little loss on standard tasks.

  31. MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies

    cs.CL 2024-04 conditional novelty 6.0

    MiniCPM 1.2B and 2.4B models reach parity with 7B-13B LLMs via model wind-tunnel scaling and a WSD scheduler that yields a higher optimal data-to-model ratio than Chinchilla scaling.

  32. Steering Llama 2 via Contrastive Activation Addition

    cs.CL 2023-12 unverdicted novelty 6.0

    Contrastive Activation Addition steers Llama 2 Chat by adding averaged residual-stream activation differences from contrastive example pairs to control targeted behaviors at inference time.

  33. Aligning Text-to-Image Models using Human Feedback

    cs.LG 2023-02 unverdicted novelty 6.0

    A three-stage fine-tuning process uses human ratings to train a reward model and then improves text-to-image alignment by maximizing reward-weighted likelihood.

  34. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

    cs.CL 2022-08 accept novelty 6.0

    RLHF-aligned language models show increasing resistance to red teaming with scale up to 52B parameters, unlike prompted or rejection-sampled models, supported by a released dataset of 38,961 attacks.

  35. Emergent Abilities of Large Language Models

    cs.CL 2022-06 unverdicted novelty 6.0

    Emergent abilities are capabilities present in large language models but absent in smaller ones and cannot be predicted by extrapolating smaller model performance.

  36. Metaphor Is Not All Attention Needs

    cs.CL 2026-05 unverdicted novelty 5.0

    Poetic jailbreaks succeed because they induce distinct attention patterns in LLMs that are independent of harmful-content detection, not because models fail to recognize literary formatting.

  37. Multilingual Safety Alignment via Self-Distillation

    cs.LG 2026-05 unverdicted novelty 5.0

    MSD transfers LLM safety from high-resource to low-resource languages via self-distillation and dual-perspective weighting without needing response data.

  38. Reward Hacking in the Era of Large Models: Mechanisms, Emergent Misalignment, Challenges

    cs.LG 2026-04 unverdicted novelty 5.0

    The paper introduces the Proxy Compression Hypothesis as a unifying framework explaining reward hacking in RLHF as an emergent result of compressing high-dimensional human objectives into proxy reward signals under op...

  39. Strengthening Human-Centric Chain-of-Thought Reasoning Integrity in LLMs via a Structured Prompt Framework

    cs.CR 2026-04 unverdicted novelty 5.0

    A 16-factor structured prompt framework strengthens CoT reasoning in LLMs for security analysis, yielding up to 40% reasoning gains in smaller models and stable accuracy improvements validated by human raters with Coh...

  40. MOMO: Mars Orbital Model Foundation Model for Mars Orbital Applications

    cs.CV 2026-04 unverdicted novelty 5.0

    MOMO merges sensor-specific models from three Mars orbital instruments at matched validation loss stages to form a foundation model that outperforms ImageNet, Earth observation, sensor-specific, and supervised baselin...

  41. The PICCO Framework for Large Language Model Prompting: A Taxonomy and Reference Architecture for Prompt Structure

    cs.CL 2026-04 accept novelty 5.0

    PICCO is a five-element reference architecture (Persona, Instructions, Context, Constraints, Output) for structuring LLM prompts, derived from synthesizing prior frameworks along with a taxonomy distinguishing prompt ...

  42. StarCoder: may the source be with you!

    cs.CL 2023-05 accept novelty 5.0

    StarCoderBase matches or beats OpenAI's code-cushman-001 on multi-language code benchmarks; the Python-fine-tuned StarCoder reaches 40% pass@1 on HumanEval while retaining other-language performance.

  43. The Possibility of Artificial Intelligence Becoming a Subject and the Alignment Problem

    cs.AI 2026-04 unverdicted novelty 4.0

    Dominant control-based AI alignment falls short for potential AGI subjects; a parenting model drawing on Turing's child machines should foster gradual autonomy and cooperative coexistence.

  44. Brainrot: Deskilling and Addiction are Overlooked AI Risks

    cs.CY 2026-05 unverdicted novelty 3.0

    AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.

  45. A Survey of Large Language Models

    cs.CL 2023-03 accept novelty 3.0

    This survey reviews the background, key techniques, and evaluation methods for large language models, emphasizing emergent abilities that appear at large scales.

Reference graph

Works this paper leans on

241 extracted references · 241 canonical work pages · cited by 44 Pith papers · 34 internal anchors

  1. [1]

    2021 , Eprint =

    Johannes Welbl and Amelia Glaese and Jonathan Uesato and Sumanth Dathathri and John Mellor and Lisa Anne Hendricks and Kirsty Anderson and Pushmeet Kohli and Ben Coppin and Po-Sen Huang , Title =. 2021 , Eprint =

  2. [2]

    2021 , eprint=

    Scaling Scaling Laws with Board Games , author=. 2021 , eprint=

  3. [3]

    2021 , eprint=

    When Combating Hype, Proceed with Caution , author=. 2021 , eprint=

  4. [4]

    2019 , eprint=

    Generating Long Sequences with Sparse Transformers , author=. 2019 , eprint=

  5. [5]

    2021 , eprint=

    Evaluating Large Language Models Trained on Code , author=. 2021 , eprint=

  6. [6]

    2021 , eprint=

    RoFormer: Enhanced Transformer with Rotary Position Embedding , author=. 2021 , eprint=

  7. [7]

    2021 , eprint=

    Mitigating harm in language models with conditional-likelihood filtration , author=. 2021 , eprint=

  8. [8]

    2020 , eprint=

    The Pile: An 800GB Dataset of Diverse Text for Language Modeling , author=. 2020 , eprint=

  9. [9]

    2016 , eprint=

    Concrete Problems in AI Safety , author=. 2016 , eprint=

  10. [10]

    2020 , eprint=

    Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics , author=. 2020 , eprint=

  11. [11]

    2020 , eprint=

    RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models , author=. 2020 , eprint=

  12. [12]

    2021 , eprint=

    Unsolved Problems in ML Safety , author=. 2021 , eprint=

  13. [13]

    2021 , eprint=

    Training Verifiers to Solve Math Word Problems , author=. 2021 , eprint=

  14. [14]

    2021 , eprint=

    Aligning AI With Shared Human Values , author=. 2021 , eprint=

  15. [15]

    2021 , eprint=
