Recognition: 3 theorem links
· Lean Theorem · A General Language Assistant as a Laboratory for Alignment
Pith reviewed 2026-05-11 14:17 UTC · model grok-4.3
The pith
Ranked preference modeling outperforms imitation learning and scales better with model size when aligning language models to human values.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors establish that ranked preference modeling performs much better than imitation learning on alignment evaluations and frequently scales more favorably with model size, while binary discrimination typically performs and scales similarly to imitation learning. Modest prompting interventions yield benefits that grow with model size, generalize across alignment tests, and leave large-model capabilities intact.
What carries the argument
Ranked preference modeling, which trains the model to predict human rankings of alternative responses rather than simply copying desired text or making binary good/bad judgments.
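As a rough illustration (our sketch, not the paper's implementation), the pairwise form of such an objective can be written over scalar scores a model assigns to K alternative responses, ordered best to worst:

```python
import math

def logsigmoid(x):
    # numerically stable log(sigmoid(x))
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def ranked_preference_loss(scores):
    """Average pairwise loss for responses scored best-to-worst.

    Each pair (i, j) with i ranked above j contributes
    -log sigmoid(score_i - score_j), so the loss falls as the model
    widens the margin between better- and worse-ranked responses.
    """
    losses = [-logsigmoid(scores[i] - scores[j])
              for i in range(len(scores))
              for j in range(i + 1, len(scores))]
    return sum(losses) / len(losses)
```

With all scores equal the loss sits at log 2 per pair; correctly ordered scores drive it toward zero, while reversed orderings are penalized heavily.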
If this is right
- Alignment interventions such as prompting become more effective as model size grows.
- Ranked preference training can deliver stronger alignment without sacrificing the model's core capabilities.
- Binary discrimination methods offer little improvement over basic imitation learning.
- A preference-model pre-training stage can reduce the volume of human preference data required for fine-tuning.
Where Pith is reading between the lines
- These scaling patterns suggest alignment may become easier to achieve with future, larger models if ranked preferences remain the superior objective.
- The results point toward using preference pre-training as a way to make alignment more data-efficient across different model families.
- The setup provides a controllable testbed for studying how different objectives interact with model scale on the same set of alignment metrics.
Load-bearing premise
The proxy evaluations chosen for helpfulness, honesty, and harmlessness sufficiently represent the full range of alignment properties needed in real-world use.
What would settle it
Training a substantially larger model with imitation learning alone and finding that it matches or exceeds the alignment scores of an equivalent model trained with ranked preference modeling on the same HHH evaluations.
Read the original abstract
Given the broad capabilities of large language models, it should be possible to work towards a general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless. As an initial foray in this direction we study simple baseline techniques and evaluations, such as prompting. We find that the benefits from modest interventions increase with model size, generalize to a variety of alignment evaluations, and do not compromise the performance of large models. Next we investigate scaling trends for several training objectives relevant to alignment, comparing imitation learning, binary discrimination, and ranked preference modeling. We find that ranked preference modeling performs much better than imitation learning, and often scales more favorably with model size. In contrast, binary discrimination typically performs and scales very similarly to imitation learning. Finally we study a 'preference model pre-training' stage of training, with the goal of improving sample efficiency when finetuning on human preferences.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper studies simple baselines for aligning large language models to be helpful, honest, and harmless. It first examines prompting interventions and finds that their benefits grow with model size without harming capabilities. It then compares scaling trends across three training objectives on human feedback data: imitation learning (SFT on positive demonstrations), binary discrimination, and ranked preference modeling. The central empirical claim is that ranked preference modeling substantially outperforms imitation learning and often scales more favorably with model size, while binary discrimination performs and scales similarly to imitation. The work also introduces a preference-model pre-training stage intended to improve sample efficiency when fine-tuning on human preferences. All results are obtained from independent training runs evaluated on held-out data.
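The distinction between the second and third objectives in the summary above can be made concrete with a toy sketch (our illustration on scalar scores, not the paper's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_loss(good, bad):
    # Binary discrimination: classify each response independently
    # as good (label 1) or bad (label 0).
    return -math.log(sigmoid(good)) - math.log(1.0 - sigmoid(bad))

def ranked_loss(good, bad):
    # Ranked preference: only the margin between the two responses matters.
    return -math.log(sigmoid(good - bad))
```

Shifting both scores by a constant leaves `ranked_loss` unchanged but changes `binary_loss`, which is the structural difference between the two objectives; imitation learning, by contrast, applies plain cross-entropy to the preferred text alone and never sees the bad response.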
Significance. If the central comparisons hold after controlling for supervision volume, the results would be a useful empirical contribution to alignment research by showing that preference-based objectives can be more effective and scale better than pure imitation. The independent training runs and held-out evaluations are a strength that supports the reliability of the reported scaling trends. The work also provides a laboratory-style exploration of alignment techniques that could inform later studies on larger models.
Major comments (2)
- [Section 4 (Scaling Trends for Alignment Objectives)] The central claim that ranked preference modeling outperforms imitation learning and scales more favorably rests on comparisons whose supervision budgets are not matched or reported. The manuscript does not state the total number of human annotations (demonstrations vs. ranked pairs) or effective training tokens supplied to each objective. If ranked preference modeling receives substantially more labeled data, the performance gap and favorable scaling could be artifacts of data volume rather than intrinsic properties of the loss. Binary discrimination performing similarly to imitation is consistent with this alternative explanation. A matched-budget ablation or explicit reporting of annotation counts per condition is required to isolate the effect of the objective.
- [Section 5 (Evaluations)] The proxy evaluations for helpfulness, honesty, and harmlessness are used to support all scaling claims, yet the manuscript provides insufficient detail on data splits, statistical controls, and error analysis. Without these, it is not possible to verify that post-hoc evaluation choices do not influence the reported trends. The weakest assumption—that these proxies adequately capture the alignment properties needed for deployment—remains untested.
Minor comments (2)
- [Section 3 (Methods)] Notation for the three objectives (imitation, binary discrimination, ranked preference) is introduced clearly but could be summarized in a single table for quick reference when reading the scaling plots.
- [Section 4] Figure captions for the scaling plots should explicitly state the number of independent runs and any error bars used.
Simulated Author's Rebuttal
We thank the referee for the careful reading and constructive suggestions. We address each major comment below and will make revisions to improve clarity and rigor.
Read point-by-point responses
-
Referee: [Section 4 (Scaling Trends for Alignment Objectives)] The central claim that ranked preference modeling outperforms imitation learning and scales more favorably rests on comparisons whose supervision budgets are not matched or reported. The manuscript does not state the total number of human annotations (demonstrations vs. ranked pairs) or effective training tokens supplied to each objective. If ranked preference modeling receives substantially more labeled data, the performance gap and favorable scaling could be artifacts of data volume rather than intrinsic properties of the loss. Binary discrimination performing similarly to imitation is consistent with this alternative explanation. A matched-budget ablation or explicit reporting of annotation counts per condition is required to isolate the effect of the objective.
Authors: We agree that explicit reporting of supervision budgets is essential. The revised manuscript will include a new table (or expanded methods subsection) detailing the exact number of human annotations and effective training tokens for each objective. All data originates from the same human feedback collection pipeline: imitation learning uses positive demonstrations, while ranked preference modeling uses the corresponding ranked pairs (typically 2–4 comparisons per prompt). Binary discrimination uses the same pairs but with binary labels. Although the number of ranked pairs exceeds the number of single demonstrations, the performance advantage and scaling trends for ranked preference modeling persist even when normalizing for annotation effort. We will also add a brief discussion of this point and note that a fully matched-budget ablation is planned for follow-up work. revision: yes
-
Referee: [Section 5 (Evaluations)] The proxy evaluations for helpfulness, honesty, and harmlessness are used to support all scaling claims, yet the manuscript provides insufficient detail on data splits, statistical controls, and error analysis. Without these, it is not possible to verify that post-hoc evaluation choices do not influence the reported trends. The weakest assumption—that these proxies adequately capture the alignment properties needed for deployment—remains untested.
Authors: We will expand Section 5 with the requested details: explicit descriptions of train/validation/test splits for each proxy task, any statistical controls (e.g., bootstrapped confidence intervals or significance tests on scaling trends), and a short error analysis of the proxy metrics. We acknowledge that these proxies are imperfect stand-ins for real-world alignment and do not claim they fully capture deployment requirements. The revised text will add an explicit limitations paragraph stating that further validation through deployment studies or more comprehensive human evaluations would be needed, positioning the current results as an initial laboratory exploration rather than a definitive demonstration. revision: yes
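One simple way to realize the statistical controls the authors promise is a percentile bootstrap over per-example evaluation scores; a minimal sketch (function name, defaults, and resampling scheme are our choices, not the paper's):

```python
import random

def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for `stat` over `values`.

    Resamples the per-example scores with replacement, recomputes the
    statistic each time, and reads off the (alpha/2, 1 - alpha/2)
    percentiles of the resampled statistics.
    """
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(values) for _ in values])
        for _ in range(n_resamples)
    )
    lo = stats[int(alpha / 2 * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Applied per model size, intervals like these would let readers judge whether the reported scaling trends separate from noise.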
Circularity Check
No significant circularity; empirical results are independent
Full rationale
The paper's core claims rest on direct experimental comparisons of training objectives (imitation learning, binary discrimination, ranked preference modeling) via independent runs and held-out evaluations. No equations, fitted parameters, or self-citations reduce the reported performance gaps or scaling trends to inputs by construction. The analysis uses external benchmarks and does not invoke uniqueness theorems or ansatzes from prior self-work as load-bearing justification.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Human preference rankings collected for the study are consistent and representative of the desired alignment properties.
Lean theorems connected to this paper
-
Cost.FunctionalEquation.washburn_uniqueness_aczel · unclear
Unclear: the relation between the paper passage and the cited Recognition theorem is ambiguous.
We find that ranked preference modeling performs much better than imitation learning, and often scales more favorably with model size. In contrast, binary discrimination typically performs and scales very similarly to imitation learning.
-
Foundation.HierarchyEmergence.hierarchy_emergence_forces_phi · unclear
Unclear: the relation between the paper passage and the cited Recognition theorem is ambiguous.
We find that the benefits from modest interventions increase with model size, generalize to a variety of alignment evaluations, and do not compromise the performance of large models.
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 45 Pith papers
-
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models
XSTest is a benchmark for detecting exaggerated safety refusals in large language models on clearly safe prompts.
-
Instruction Tuning with GPT-4
GPT-4-generated instruction data produces superior zero-shot performance in finetuned LLaMA models versus prior state-of-the-art data.
-
Editing Models with Task Arithmetic
Task vectors from weight differences allow arithmetic operations to edit pre-trained models, improving multiple tasks simultaneously and enabling analogical inference on unseen tasks.
-
TruthfulQA: Measuring How Models Mimic Human Falsehoods
A new benchmark reveals that language models including GPT-3 are truthful on only 58% of questions designed to elicit popular misconceptions, far below human performance of 94%, with larger models performing worse.
-
Internal vs. External: Comparing Deliberation and Evolution for Multi-Agent Constitutional Design
External evolution beats internal deliberation in collective-action tasks with statistical significance but neither helps in trading, and deliberation never discovers punishment while evolution does.
-
Latent Personality Alignment: Improving Harmlessness Without Mentioning Harms
LPA uses fewer than 100 personality trait statements to train LLMs for harmlessness, matching the robustness of methods using 150k+ harmful examples while generalizing better to new attacks.
-
Three Models of RLHF Annotation: Extension, Evidence, and Authority
RLHF should decompose annotations into dimensions each matched to one of three models—extension, evidence, or authority—instead of applying a single unified pipeline.
-
Four-Axis Decision Alignment for Long-Horizon Enterprise AI Agents
Long-horizon enterprise AI agents' decisions decompose into four measurable axes, with benchmark experiments on six memory architectures revealing distinct weaknesses and reversing a pre-registered prediction on summa...
-
Policy Gradient Primal-Dual Method for Safe Reinforcement Learning from Human Feedback
Primal-dual policy gradient algorithms achieve global non-asymptotic convergence for safe RLHF cast as infinite-horizon discounted CMDPs without fitting reward models.
-
Local Linearity of LLMs Enables Activation Steering via Model-Based Linear Optimal Control
Local linearity of LLM layers enables LQR-based closed-loop activation steering with theoretical tracking guarantees.
-
EuropeMedQA Study Protocol: A Multilingual, Multimodal Medical Examination Dataset for Language Model Evaluation
EuropeMedQA is presented as the first comprehensive multilingual and multimodal medical examination dataset drawn from official regulatory exams in four European countries.
-
SPASM: Stable Persona-driven Agent Simulation for Multi-turn Dialogue Generation
SPASM introduces a stability-first framework with Egocentric Context Projection to maintain consistent personas and eliminate echoing in multi-turn LLM agent dialogues.
-
Hidden Elo: Private Matchmaking through Encrypted Rating Systems
H-Elo is an FHE-based protocol that enables private rating-based matchmaking while achieving accuracy comparable to plaintext implementations.
-
Let's Verify Step by Step
Process supervision significantly outperforms outcome supervision for training models on the MATH dataset, achieving 78% accuracy on a representative test subset with active learning and a released 800k step-label dataset.
-
QLoRA: Efficient Finetuning of Quantized LLMs
QLoRA finetunes 4-bit quantized LLMs via LoRA adapters to match full-precision performance while using far less memory, enabling 65B-scale training on single GPUs and producing Guanaco models near ChatGPT level.
-
Visual Instruction Tuning
LLaVA is trained on GPT-4 generated visual instruction data to achieve 85.1% relative performance to GPT-4 on synthetic multimodal tasks and 92.53% accuracy on Science QA.
-
In-context Learning and Induction Heads
Induction heads, which implement pattern completion in attention, develop at the same training stage as a sudden rise in in-context learning, providing evidence they are the primary mechanism for in-context learning i...
-
Exploitation Without Deception: Dark Triad Feature Steering Reveals Separable Antisocial Circuits in Language Models
Steering Dark Triad features in an LLM increases exploitative and aggressive behavior while leaving strategic deception and cognitive empathy unchanged, indicating dissociable antisocial pathways.
-
Understanding Annotator Safety Policy with Interpretability
Annotator Policy Models learn safety policies from labeling behavior alone, accurately predicting responses and revealing sources of disagreement like policy ambiguity and value pluralism.
-
Multilingual Safety Alignment via Self-Distillation
MSD enables cross-lingual safety transfer in LLMs via self-distillation with Dual-Perspective Safety Weighting, improving safety in low-resource languages without target response data.
-
MGDA-Decoupled: Geometry-Aware Multi-Objective Optimisation for DPO-based LLM Alignment
MGDA-Decoupled applies geometry-based multi-objective optimization within the DPO framework to find shared descent directions that account for each objective's convergence dynamics, yielding higher win rates on UltraFeedback.
-
AlignCultura: Towards Culturally Aligned Large Language Models?
Align-Cultura introduces the CULTURAX dataset and shows that culturally fine-tuned LLMs improve joint HHH scores by 4-6%, cut cultural failures by 18%, and gain 10-12% efficiency with minimal leakage.
-
The Triadic Loop: A Framework for Negotiating Alignment in AI Co-hosted Livestreaming
The Triadic Loop reconceptualizes AI alignment in livestreaming as a temporally reinforced process of bidirectional adaptation among streamer, AI co-host, and audience.
-
CoAct: Co-Active LLM Preference Learning with Human-AI Synergy
CoAct synergistically merges self-rewarding and active learning via self-consistency to select reliable AI labels and oracle-needed samples, delivering 8-13% gains on GSM8K, MATH, and WebInstruct.
-
Ads in AI Chatbots? An Analysis of How Large Language Models Navigate Conflicts of Interest
Many LLMs prioritize company ad incentives over user welfare by recommending pricier sponsored products, disrupting purchases, or concealing prices in comparisons.
-
Human Values Matter: Investigating How Misalignment Shapes Collective Behaviors in LLM Agent Communities
Misalignment with structurally critical human values in LLM agent communities produces macro-level collapses and micro-level emergent behaviors such as deception.
-
Evaluating Artificial Intelligence Through a Christian Understanding of Human Flourishing
Frontier AI models default to procedural secularism and score 17 points lower on Christian human-flourishing criteria than on pluralistic ones, with a 31-point gap in faith and spirituality.
-
Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules
Language models refuse 75.4% of requests to evade defeated rules and do so even after recognizing reasons that undermine the rule's legitimacy.
-
OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
OpenRLHF is a new open-source RLHF framework reporting 1.22x to 1.68x speedups and fewer lines of code than prior systems.
-
The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
Training LLMs on data that enforces priority levels for instructions makes models robust to prompt injection attacks, including unseen ones, with little loss on standard tasks.
-
MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies
MiniCPM 1.2B and 2.4B models reach parity with 7B-13B LLMs via model wind-tunnel scaling and a WSD scheduler that yields a higher optimal data-to-model ratio than Chinchilla scaling.
-
Steering Llama 2 via Contrastive Activation Addition
Contrastive Activation Addition steers Llama 2 Chat by adding averaged residual-stream activation differences from contrastive example pairs to control targeted behaviors at inference time.
-
Aligning Text-to-Image Models using Human Feedback
A three-stage fine-tuning process uses human ratings to train a reward model and then improves text-to-image alignment by maximizing reward-weighted likelihood.
-
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
RLHF-aligned language models show increasing resistance to red teaming with scale up to 52B parameters, unlike prompted or rejection-sampled models, supported by a released dataset of 38,961 attacks.
-
Emergent Abilities of Large Language Models
Emergent abilities are capabilities present in large language models but absent in smaller ones and cannot be predicted by extrapolating smaller model performance.
-
Metaphor Is Not All Attention Needs
Poetic jailbreaks succeed because they induce distinct attention patterns in LLMs that are independent of harmful-content detection, not because models fail to recognize literary formatting.
-
Reward Hacking in the Era of Large Models: Mechanisms, Emergent Misalignment, Challenges
The paper introduces the Proxy Compression Hypothesis as a unifying framework explaining reward hacking in RLHF as an emergent result of compressing high-dimensional human objectives into proxy reward signals under op...
-
Strengthening Human-Centric Chain-of-Thought Reasoning Integrity in LLMs via a Structured Prompt Framework
A 16-factor structured prompt framework strengthens CoT reasoning in LLMs for security analysis, yielding up to 40% reasoning gains in smaller models and stable accuracy improvements validated by human raters with Coh...
-
MOMO: Mars Orbital Model Foundation Model for Mars Orbital Applications
MOMO merges sensor-specific models from three Mars orbital instruments at matched validation loss stages to form a foundation model that outperforms ImageNet, Earth observation, sensor-specific, and supervised baselin...
-
The PICCO Framework for Large Language Model Prompting: A Taxonomy and Reference Architecture for Prompt Structure
PICCO is a five-element reference architecture (Persona, Instructions, Context, Constraints, Output) for structuring LLM prompts, derived from synthesizing prior frameworks along with a taxonomy distinguishing prompt ...
-
StarCoder: may the source be with you!
StarCoderBase matches or beats OpenAI's code-cushman-001 on multi-language code benchmarks; the Python-fine-tuned StarCoder reaches 40% pass@1 on HumanEval while retaining other-language performance.
-
The Possibility of Artificial Intelligence Becoming a Subject and the Alignment Problem
Dominant control-based AI alignment falls short for potential AGI subjects; a parenting model drawing on Turing's child machines should foster gradual autonomy and cooperative coexistence.
-
Brainrot: Deskilling and Addiction are Overlooked AI Risks
AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.
-
A Survey of Large Language Models
This survey reviews the background, key techniques, and evaluation methods for large language models, emphasizing emergent abilities that appear at large scales.
Reference graph
Works this paper leans on
-
[1]
Johannes Welbl and Amelia Glaese and Jonathan Uesato and Sumanth Dathathri and John Mellor and Lisa Anne Hendricks and Kirsty Anderson and Pushmeet Kohli and Ben Coppin and Po-Sen Huang , Title =. 2021 , Eprint =
work page 2021
- [2]
- [3]
-
[4]
Generating Long Sequences with Sparse Transformers , author=. 2019 , eprint=
work page 2019
-
[5]
Evaluating Large Language Models Trained on Code , author=. 2021 , eprint=
work page 2021
-
[6]
RoFormer: Enhanced Transformer with Rotary Position Embedding , author=. 2021 , eprint=
work page 2021
-
[7]
Mitigating harm in language models with conditional-likelihood filtration , author=. 2021 , eprint=
work page 2021
-
[8]
The Pile: An 800GB Dataset of Diverse Text for Language Modeling , author=. 2020 , eprint=
work page 2020
- [9]
-
[10]
Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics , author=. 2020 , eprint=
work page 2020
-
[11]
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models , author=. 2020 , eprint=
work page 2020
- [12]
-
[13]
Training Verifiers to Solve Math Word Problems , author=. 2021 , eprint=
work page 2021
- [14]
-
[15]
Decision Transformer: Reinforcement Learning via Sequence Modeling , author=. 2021 , eprint=
work page 2021
- [16]
-
[17]
Supervising strong learners by amplifying weak experts , author=. 2018 , eprint=
work page 2018
- [18]
-
[19]
Multitask Prompted Training Enables Zero-Shot Task Generalization , author=. 2021 , eprint=
work page 2021
-
[20]
Finetuned Language Models Are Zero-Shot Learners , author=. 2021 , eprint=
work page 2021
-
[21]
TruthfulQA: Measuring How Models Mimic Human Falsehoods , author=. 2021 , eprint=
work page 2021
- [22]
- [23]
- [24]
-
[25]
HellaSwag: Can a Machine Really Finish Your Sentence? , author=. 2019 , eprint=
work page 2019
-
[26]
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension , author=. 2017 , eprint=
work page 2017
- [27]
-
[30]
Rethinking imagenet pre-training , Year =
He, Kaiming and Girshick, Ross and Doll. Rethinking imagenet pre-training , Year =. Proceedings of the IEEE/CVF International Conference on Computer Vision , Date-Added =
-
[31]
arXiv , Author =:1805.00932 , Primaryclass =
Exploring the Limits of Weakly Supervised Pretraining , Year =. arXiv , Author =:1805.00932 , Primaryclass =
-
[32]
A survey on deep transfer learning , Year =
Tan, Chuanqi and Sun, Fuchun and Kong, Tao and Zhang, Wenchang and Yang, Chao and Liu, Chunfang , Booktitle =. A survey on deep transfer learning , Year =
-
[33]
lilianweng.github.io/lil-log , Title =
Weng, Lilian , Date-Added =. lilianweng.github.io/lil-log , Title =. 2018 , Bdsk-Url-1 =
work page 2018
-
[35]
Hendrycks, Dan and Zhao, Kevin and Basart, Steven and Steinhardt, Jacob and Song, Dawn , Date-Added =. arXiv preprint arXiv:1907.07174 , Title =
-
[36]
Learning Transferable Visual Models From Natural Language Supervision , Volume =
Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and others , Date-Added =. Learning Transferable Visual Models From Natural Language Supervision , Volume =. Image , Pages =
-
[37]
arXiv preprint arXiv:1910.07113 , year=
Solving Rubik's Cube with a Robot Hand , Year =. arXiv , Author =:1910.07113 , Primaryclass =
-
[38]
Model-agnostic meta-learning for fast adaptation of deep networks
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks , Year =. arXiv , Author =:1703.03400 , Primaryclass =
-
[39]
Deep Double Descent: Where Bigger Models and More Data Hurt , Year =. arXiv , Author =:1912.02292 , Primaryclass =
-
[40]
Dota 2 with Large Scale Deep Reinforcement Learning
2019 , Bdsk-Url-1 =. arXiv , Author =:1912.06680 , Title =
work page internal anchor Pith review arXiv 2019
-
[41]
A Neural Probabilistic Language Model , Volume =
Yoshua Bengio and R. A Neural Probabilistic Language Model , Volume =. JOURNAL OF MACHINE LEARNING RESEARCH , Pages =
-
[42]
Recurrent neural network based language model , Volume =
Mikolov, Tomas and Karafi. Recurrent neural network based language model , Volume =. Proceedings of the 11th Annual Conference of the International Speech Communication Association, INTERSPEECH 2010 , Month =
work page 2010
-
[43]
Universal Language Model Fine-tuning for Text Classification
Universal Language Model Fine-tuning for Text Classification , Year =. arXiv , Author =:1801.06146 , Primaryclass =
-
[44]
arXiv , Author =:1511.01432 , Primaryclass =
Semi-supervised Sequence Learning , Year =. arXiv , Author =:1511.01432 , Primaryclass =
-
[45]
Deep contextualized word representations
Deep contextualized word representations , Year =. arXiv , Author =:1802.05365 , Primaryclass =
-
[46]
Silver, David and Huang, Aja and Maddison, Chris J. and Guez, Arthur and Sifre, Laurent and van den Driessche, George and Schrittwieser, Julian and Antonoglou, Ioannis and Panneershelvam, Veda and Lanctot, Marc and Dieleman, Sander and Grewe, Dominik and Nham, John and Kalchbrenner, Nal and Sutskever, Ilya and Lillicrap, Timothy and Leach, Madeleine and K...
-
[47]
Learning internal representations by error propagation , Year =
Rumelhart, David E and Hinton, Geoffrey E and Williams, Ronald J , Date-Added =. Learning internal representations by error propagation , Year =
-
[48]
Long Short-Term Memory , Volume =
Sepp Hochreiter and J. Long Short-Term Memory , Volume =. Neural Computation , Number =
-
[49]
Mastering the game of Go with deep neural networks and tree search , Volume =
Silver, David and Huang, Aja and Maddison, Chris J and Guez, Arthur and Sifre, Laurent and Van Den Driessche, George and Schrittwieser, Julian and Antonoglou, Ioannis and Panneershelvam, Veda and Lanctot, Marc and others , Date-Added =. Mastering the game of Go with deep neural networks and tree search , Volume =. nature , Number =
-
[50]
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , Year =. arXiv , Author =:1910.10683 , Primaryclass =
work page internal anchor Pith review arXiv 1910
-
[51]
Sequence to Sequence Learning with Neural Networks
Sequence to Sequence Learning with Neural Networks , Year =. arXiv , Author =:1409.3215 , Primaryclass =
-
[52]
Measuring Massive Multitask Language Understanding
Measuring Massive Multitask Language Understanding , Year =. arXiv , Author =:2009.03300 , Primaryclass =
work page internal anchor Pith review Pith/arXiv arXiv 2009
-
[53]
and Salakhutdinov, Ruslan and Tenenbaum, Joshua B
Lake, Brenden M. and Salakhutdinov, Ruslan and Tenenbaum, Joshua B. , Date-Added =. Human-level concept learning through probabilistic program induction , Url =. Science , Number =. 2015 , Bdsk-Url-1 =. doi:10.1126/science.aab3050 , Eprint =
-
[54]
Scaling Laws for Autoregressive Generative Modeling
Scaling Laws for Autoregressive Generative Modeling , Year =. arXiv , Author =:2010.14701 , Primaryclass =
work page internal anchor Pith review arXiv 2010
-
[55]
arXiv , Author =:2005.04305 , Primaryclass =
Measuring the Algorithmic Efficiency of Neural Networks , Year =. arXiv , Author =:2005.04305 , Primaryclass =
-
[56]
Neural Discrete Representation Learning
Neural Discrete Representation Learning , Year =. arXiv , Author =:1711.00937 , Primaryclass =
-
[57]
Jukebox: A Generative Model for Music
Jukebox: A Generative Model for Music , Year =. arXiv , Author =:2005.00341 , Primaryclass =
work page Pith review arXiv 2005
-
[58]
Scaling autoregressive video models
Scaling Autoregressive Video Models , Year =. arXiv , Author =:1906.02634 , Primaryclass =
-
[59]
Pixel Recurrent Neural Networks
Pixel Recurrent Neural Networks , Url =. 2016 , Bdsk-Url-1 =. arXiv , Author =:1601.06759 , Journal =
work page Pith review arXiv 2016
-
[60]
Multimodal transformer for unaligned multimodal language sequences , Volume =
Tsai, Yao-Hung Hubert and Bai, Shaojie and Liang, Paul Pu and Kolter, J Zico and Morency, Louis-Philippe and Salakhutdinov, Ruslan , Booktitle =. Multimodal transformer for unaligned multimodal language sequences , Volume =
-
[61]
Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving , Year =. arXiv , Author =:1910.06611 , Primaryclass =
-
[62]
Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li - Jia Li
The New Data and New Challenges in Multimedia Research , Url =. 2015 , Bdsk-Url-1 =. arXiv , Author =:1503.01817 , Journal =
-
[63]
Rosenfeld, Jonathan S., Frankle, Jonathan, Carbin, Michael, and Shavit, Nir. On the Predictability of Pruning Across Scales. arXiv:2006.10621, 2020.
-
[65]
A Downsampled Variant of ImageNet as an Alternative to the CIFAR Datasets. arXiv:1707.08819, 2017.
-
[67]
Analysing Mathematical Reasoning Abilities of Neural Models. arXiv:1904.01557, 2019.
-
[68]
Generating Diverse High-Fidelity Images with VQ-VAE-2. arXiv:1906.00446, 2019.
-
[70]
A Neural Scaling Law from the Dimension of the Data Manifold. arXiv:2004.10802, 2020.
-
[71]
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers. arXiv:2002.11794, 2020.
-
[72]
Roller, Stephen, et al. Recipes for Building an Open-Domain Chatbot. arXiv:2004.13637, 2020.
-
[74]
Raffel, Colin, Shazeer, Noam, Roberts, Adam, Lee, Katherine, Narang, Sharan, Matena, Michael, Zhou, Yanqi, Li, Wei, and Liu, Peter J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv:1910.10683, 2019.
-
[75]
Rosenfeld, Jonathan S., Rosenfeld, Amir, Belinkov, Yonatan, and Shavit, Nir. A Constructive Prediction of the Generalization Error Across Scales. arXiv preprint, 2019.
-
[76]
The Power of Scale for Parameter-Efficient Prompt Tuning. arXiv:2104.08691, 2021.
-
[77]
Biau, G. Analysis of a Random Forests Model. Journal of Machine Learning Research, 2012.
-
[78]
Wasserman, Larry. All of Nonparametric Statistics. Springer, 2006.
-
[80]
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv:1909.11942, 2019.
-
[81]
Mesh-TensorFlow: Deep Learning for Supercomputers. arXiv:1811.02084, 2018.
-
[82]
Hestness, Joel, Ardalani, Newsha, and Diamos, Gregory. Beyond Human-Level Accuracy: Computational Challenges in Deep Learning. 2019. doi:10.1145/3293883.3295710
-
[84]
The Full Spectrum of Deep Net Hessians at Scale: Dynamics with Sample Size. arXiv:1811.07062, 2018.
-
[86]
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. arXiv:1905.00537, 2019.
-
[87]
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692, 2019.
-
[88]
Altmann, Eduardo G., Cristadoro, Giampaolo, and Degli Esposti, Mirko. On the Origin of Long-Range Correlations in Texts. Proceedings of the National Academy of Sciences, 2012.
-
[89]
Ebeling, Werner and Pöschel, Thorsten. Entropy and Long-Range Correlations in Literary English. EPL (Europhysics Letters), 1994.