Graph prototypical networks for few-shot learning on attributed networks
2 Pith papers cite this work.
Representative citing papers:
- Mochi: Aligning Pre-training and Inference for Efficient Graph Foundation Models via Meta-Learning. Mochi aligns pre-training with inference via meta-learning for efficient graph foundation models, matching or exceeding prior models on 25 datasets with 8-27x less training time.
- Mira-Embeddings-V1: Domain-Adapted Semantic Reranking for Recruitment via LLM-Synthesized Data. Mira-Embeddings-V1 adapts embeddings for recruitment reranking by synthesizing positive and hard-negative samples with LLMs, then applies JD-JD contrastive and JD-CV triplet training plus a BoundaryHead MLP, lifting Recall@50 from 68.89% to 77.55% and Recall@200 from 59.69% to 70.47%.
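The Mira-Embeddings-V1 summary mentions JD-CV triplet training and a BoundaryHead scoring MLP. As a rough illustration only (the function names, shapes, and the margin value are assumptions, not the paper's actual implementation), a margin-based triplet loss over cosine similarities, plus a minimal linear scoring head, can be sketched as:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss: pulls the JD embedding (anchor) toward a
    matching CV (positive) and away from a hard-negative CV (negative).
    Loss hits zero once the positive is `margin` more similar than the
    negative. Margin value is a placeholder, not taken from the paper."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

def boundary_head(pair_features, W, b):
    """Hypothetical one-layer 'BoundaryHead'-style scorer: maps concatenated
    JD/CV features to a single match logit via an affine transform."""
    return float(W @ pair_features + b)
```

For example, with an anchor identical to its positive and orthogonal to its negative, the triplet loss is already zero; swapping positive and negative yields a positive loss that a trainer would push back down.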