Title resolution pending
2 Pith papers cite this work.
Increased regularization is required for group DRO to achieve good worst-group generalization in overparameterized neural networks.
Citing papers:
- Transformer Feed-Forward Layers Are Key-Value Memories: Transformer feed-forward layers act as key-value memories storing textual patterns and their associated output distributions.
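The key-value reading summarized above can be sketched as follows: the first FFN projection acts as a bank of pattern keys, and the second projection as the associated value vectors. This is a minimal NumPy sketch; the dimensions and random weights are illustrative, not taken from the paper.

```python
import numpy as np

def ffn_as_kv_memory(x, W_k, W_v):
    # Keys: each row of W_k is a pattern detector; the activation
    # f(x @ W_k.T) scores how strongly the input matches each key.
    scores = np.maximum(x @ W_k.T, 0.0)  # ReLU activations = memory coefficients
    # Values: the output is a weighted sum of the value vectors,
    # one value vector (row of W_v) per key.
    return scores @ W_v

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32                    # illustrative sizes only
W_k = rng.normal(size=(d_ff, d_model))   # one key per memory slot
W_v = rng.normal(size=(d_ff, d_model))   # one value per memory slot
x = rng.normal(size=(d_model,))
out = ffn_as_kv_memory(x, W_k, W_v)      # shape: (d_model,)
```

Under this view, the FFN output is a mixture of stored "values" weighted by how well the input matches each stored "key".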
- Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization: Increased regularization is required for group DRO to achieve good worst-group generalization in overparameterized neural networks.
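The worst-group objective that group DRO optimizes can be sketched as a minimal loss computation: instead of averaging loss over all examples, it takes the maximum of the per-group average losses. The per-example losses and group labels below are hypothetical toy data.

```python
import numpy as np

def worst_group_loss(losses, groups, n_groups):
    # Group DRO minimizes the worst (maximum) average loss across groups,
    # not the overall average loss; strong regularization (e.g. weight
    # decay, early stopping) is what makes this transfer to test time
    # in overparameterized models, per the paper's finding.
    group_means = np.array([losses[groups == g].mean() for g in range(n_groups)])
    return group_means.max()

losses = np.array([0.2, 0.4, 1.5, 1.1])  # toy per-example losses
groups = np.array([0, 0, 1, 1])          # toy group labels
worst = worst_group_loss(losses, groups, 2)  # worst group mean is about 1.3
```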