Mochi: Aligning Pre-training and Inference for Efficient Graph Foundation Models via Meta-Learning
Mochi aligns pre-training with inference via meta-learning to build efficient graph foundation models, matching or exceeding prior models across 25 datasets with 8-27x less training time.
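The summary above does not spell out Mochi's training procedure, but the general idea of meta-learning an initialization that transfers cheaply to new tasks can be sketched with a first-order MAML-style loop. The toy setup below (a scalar linear model on synthetic regression tasks, all names and hyperparameters invented here) is an illustration of meta-learning in general, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Each synthetic task: fit y = a*x with a task-specific slope a.
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def loss_grad(w, x, y):
    # Squared-error loss for the scalar model y_hat = w*x,
    # returned together with its gradient in w.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

# Meta-training: the outer loop learns an initialization w from which
# a single inner gradient step adapts well to any sampled task.
w, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(500):
    x, y = make_task()
    _, g = loss_grad(w, x, y)
    w_adapted = w - inner_lr * g          # inner (per-task) step
    # First-order MAML: update the init with the post-adaptation
    # gradient, ignoring second-order terms for simplicity.
    _, g_post = loss_grad(w_adapted, x, y)
    w -= outer_lr * g_post                # outer (meta) step

# The learned init ends up near the middle of the slope range,
# so one inner step adapts quickly to any task in the family.
```

The point of the sketch is the two-level structure: inference-time adaptation (the inner step) appears inside the training objective, so pre-training directly optimizes for how the model will be used.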