Jacobs, Michael I
Two Pith papers cite this work. Polarity classification is still indexing.
Representative citing papers:
- HI-MoE: Hierarchical Instance-Conditioned Mixture-of-Experts for Object Detection
  HI-MoE introduces hierarchical scene-then-instance routing in a Mixture-of-Experts detector, yielding gains over dense DINO and flat MoE variants on COCO, especially for small objects.
- In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer
  ICEdit achieves state-of-the-art instructional image editing in Diffusion Transformers via in-context generation, requiring only 0.1% of prior training data and 1% trainable parameters.