Linearized Attention Cannot Enter the Kernel Regime at Any Practical Width
Linearized attention requires impractically large width m = Ω(κ_d(G)^6 n log n) to enter the NTK (neural tangent kernel) regime; this bound exceeds 10^24 for MNIST and 10^29 for CIFAR-10.
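For a rough sense of scale, a minimal back-of-envelope sketch in Python, assuming the bound is read as m ≈ κ_d(G)^6 · n · log n with all constants dropped. The condition numbers κ_d(G) below are hypothetical, chosen only so the outputs land near the quoted 10^24 and 10^29; the dataset sizes are the standard training-set sizes.

```python
import math

def width_lower_bound(kappa: float, n: int) -> float:
    """Evaluate kappa^6 * n * log(n), the Omega(.) width bound with
    constants dropped. kappa stands in for kappa_d(G)."""
    return kappa ** 6 * n * math.log(n)

# Hypothetical kappa_d(G) values (the listing does not give the paper's
# actual numbers), picked so the results match the quoted magnitudes.
for name, n, kappa in [("MNIST", 60_000, 1e3), ("CIFAR-10", 50_000, 8e3)]:
    print(f"{name}: m >= ~{width_lower_bound(kappa, n):.1e}")
```

With these illustrative inputs the sketch prints roughly 6.6e23 for MNIST and 1.4e29 for CIFAR-10, matching the orders of magnitude claimed above; the κ^6 dependence is what makes the required width blow up so quickly.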