Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: cs.CV (2) · Years: 2026 (2) · Verdicts: UNVERDICTED (2)
Representative citing papers
- Large-scale Codec Avatars: The Unreasonable Effectiveness of Large-scale Avatar Pretraining
  Pretraining on 1M in-the-wild videos followed by post-training on curated data yields high-fidelity feedforward 3D avatars that generalize across identities, clothing, and lighting, with emergent relightability and loose-garment support (a hedged sketch of this two-stage schedule follows the list).
- MPerS: Dynamic MLLM MixExperts Perception-Guided Remote Sensing Scene Segmentation
  MPerS dynamically mixes semantic guidance from MLLM-generated remote sensing (RS) captions with DINOv3 features via MixExperts and Linguistic Query Guided Attention, achieving superior semantic segmentation on three public remote sensing datasets (a hedged sketch of this fusion pattern follows the list).
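The first TL;DR describes a two-stage recipe: large-scale pretraining on uncurated wild footage, then post-training on a smaller curated set. The sketch below shows only that schedule; the model, datasets, losses, and hyperparameters are placeholder assumptions, not the paper's actual avatar pipeline.

```python
# Hypothetical two-stage schedule: pretrain on a large uncurated set, then
# post-train the same model on a small curated set at a lower learning rate.
# Every component here is a stand-in, not the paper's architecture or data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder regressor: maps an input frame to a vector of avatar parameters.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
loss_fn = nn.MSELoss()

def run_stage(dataset, epochs, lr):
    """One training stage: same model, different data mix and learning rate."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for frames, targets in loader:
            opt.zero_grad()
            loss_fn(model(frames), targets).backward()
            opt.step()

# Stage 1: pretraining on a large, uncurated "wild" frame collection (dummy tensors).
wild = TensorDataset(torch.randn(512, 3, 32, 32), torch.randn(512, 128))
run_stage(wild, epochs=1, lr=1e-4)

# Stage 2: post-training on a smaller curated set, typically at a lower rate.
curated = TensorDataset(torch.randn(64, 3, 32, 32), torch.randn(64, 128))
run_stage(curated, epochs=3, lr=1e-5)
```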
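The MPerS TL;DR names a concrete fusion pattern: caption-derived semantics gated by a mixture of experts and injected into dense visual features through text-query-guided attention. A minimal sketch of that pattern follows, under assumptions: the caption embeddings stand in for MLLM-generated RS caption features, the patch tokens stand in for DINOv3 features, and all module internals, names, and sizes are illustrative rather than the paper's implementation.

```python
# Hypothetical fusion of caption embeddings (MLLM stand-in) with dense visual
# patch tokens (DINOv3 stand-in) via a mixture-of-experts gate and a
# text-guided cross-attention, followed by a toy per-patch classifier.
import torch
from torch import nn

class MixExpertsGate(nn.Module):
    """Soft mixture over a few expert projections of the caption tokens."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.router = nn.Linear(dim, num_experts)

    def forward(self, text):                                         # text: (B, T, D)
        weights = self.router(text).softmax(dim=-1)                  # (B, T, E)
        outs = torch.stack([e(text) for e in self.experts], dim=-1)  # (B, T, D, E)
        return torch.einsum("btde,bte->btd", outs, weights)          # (B, T, D)

class TextGuidedAttention(nn.Module):
    """Visual patch tokens attend to caption tokens used as keys/values."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patches, text):                                # (B, P, D), (B, T, D)
        guided, _ = self.attn(query=patches, key=text, value=text)
        return self.norm(patches + guided)                           # residual fusion

class SegHead(nn.Module):
    """Toy segmentation head over the fused patch tokens."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.gate = MixExpertsGate(dim)
        self.fuse = TextGuidedAttention(dim)
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, patches, captions):
        fused = self.fuse(patches, self.gate(captions))
        return self.classify(fused)                                  # per-patch logits

# Dummy usage: 196 visual patches, 32 caption tokens, 768-d features, 8 classes.
patches, captions = torch.randn(2, 196, 768), torch.randn(2, 32, 768)
print(SegHead(768, num_classes=8)(patches, captions).shape)          # torch.Size([2, 196, 8])
```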