k-Maximum Inner Product Attention for Graph Transformers and the Expressive Power of GraphGPS
k-MIP attention enables linear-complexity graph transformers that approximate full attention arbitrarily closely, and GraphGPS expressivity is bounded via S-SEG-WL.
The point features are the 3D positions and RGB values, and each point is labelled as one of 13 semantic classes.
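To make the mechanism in the TL;DR concrete, below is a minimal PyTorch sketch of k-MIP attention under one natural reading: each query attends only to the k keys with the largest inner products. The function name `kmip_attention` and the `top_k` parameter are illustrative, not the paper's API; the exact dense top-k used here is O(n^2), whereas a linear-complexity variant would substitute an approximate maximum inner product search.

```python
# Hypothetical sketch of k-MIP attention: each query attends only to the
# k keys with the largest inner products. Not the paper's implementation.
import torch
import torch.nn.functional as F

def kmip_attention(q, k, v, top_k=8):
    """q, k, v: (n, d) node features; returns (n, d) attention outputs.

    The dense top-k below is exact but O(n^2); a linear-complexity
    version would replace it with approximate MIP search.
    """
    scores = q @ k.T                                   # (n, n) inner products
    top_scores, top_idx = scores.topk(top_k, dim=-1)   # k largest scores per query
    weights = F.softmax(top_scores, dim=-1)            # softmax over selected keys only
    return torch.einsum("nk,nkd->nd", weights, v[top_idx])  # weighted sum of top-k values

# Example: 6 nodes, 4-dim features, each query attends to its 3 best keys
x = torch.randn(6, 4)
out = kmip_attention(x, x, x, top_k=3)
print(out.shape)  # torch.Size([6, 4])
```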