pith. machine review for the scientific record.

arxiv: 2605.06088 · v1 · submitted 2026-05-07 · 💻 cs.CV

Recognition: unknown

OpenGaFF: Open-Vocabulary Gaussian Feature Field with Codebook Attention

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 14:04 UTC · model grok-4.3

classification 💻 cs.CV
keywords open-vocabulary · 3D Gaussian Splatting · feature field · codebook attention · semantic consistency · scene understanding · 3D segmentation

The pith

OpenGaFF models semantics as a continuous function of 3D Gaussian geometry to achieve spatially coherent open-vocabulary scene understanding.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops OpenGaFF, a framework for open-vocabulary 3D scene understanding built on 3D Gaussian Splatting. The approach constructs a Gaussian Feature Field that treats semantics as a continuous function of geometry and appearance. By conditioning predictions on geometric structure, it aims for stronger spatial coherence in 3D. A structured codebook and an attention mechanism maintain object-level consistency and support language-feature matching. Tests on standard benchmarks show gains in segmentation quality and consistency over previous methods.

Core claim

OpenGaFF constructs a Gaussian Feature Field that models semantics as a continuous function of Gaussian geometry and appearance. Explicitly conditioning semantic predictions on geometric structure strengthens the coupling between geometry and semantics. This leads to improved spatial coherence across similar structures in 3D space. A structured codebook serves as shared semantic primitives, and codebook-guided attention retrieves language features through similarity matching to enable robust open-vocabulary reasoning while reducing feature variance within objects.
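The abstract does not pin down an architecture, but the conditioning idea can be made concrete. Below is a minimal, hypothetical PyTorch rendering of a Gaussian Feature Field head: per-Gaussian semantics are predicted from geometry (position, scale, rotation) and appearance inputs rather than stored as free per-Gaussian parameters. The input layout, dimensions, and MLP shape are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class GaussianFeatureField(nn.Module):
    """Predicts a per-Gaussian semantic feature from geometry and appearance."""

    def __init__(self, geom_dim: int = 10, app_dim: int = 3, feat_dim: int = 32):
        super().__init__()
        # geom_dim = 3 (position) + 3 (log-scale) + 4 (rotation quaternion);
        # app_dim = 3 (RGB) -- both are assumed input layouts, not the paper's.
        self.mlp = nn.Sequential(
            nn.Linear(geom_dim + app_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, geometry: torch.Tensor, appearance: torch.Tensor) -> torch.Tensor:
        # geometry: (N, geom_dim), appearance: (N, app_dim).
        # Because semantics are a function of geometry, Gaussians with similar
        # spatial structure receive coherent features by construction.
        return self.mlp(torch.cat([geometry, appearance], dim=-1))

# Shape check with N random Gaussians.
field = GaussianFeatureField()
feats = field(torch.randn(1024, 10), torch.rand(1024, 3))
print(feats.shape)  # torch.Size([1024, 32])
```

The design point is that coherence across similar structures falls out of the shared parameterization itself, rather than from per-Gaussian regularization.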

What carries the argument

The Gaussian Feature Field combined with a structured codebook and codebook-guided attention: semantics are modeled as a continuous function of geometry and appearance, while attention matches query embeddings to codebook entries to enforce object-level consistency.
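The retrieval step can likewise be sketched. This hedged example implements similarity matching between query embeddings and learned codebook entries as softmax attention over cosine similarities; the dimensions, the temperature, and the existence of a per-entry language table are assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def codebook_attention(queries, codebook, lang_table, temperature=0.07):
    """queries: (N, d) per-Gaussian embeddings; codebook: (K, d) learned
    entries; lang_table: (K, c) language feature per entry (e.g., CLIP-space)."""
    sim = F.normalize(queries, dim=-1) @ F.normalize(codebook, dim=-1).T  # (N, K)
    attn = F.softmax(sim / temperature, dim=-1)
    # A low temperature peaks the attention, pulling each Gaussian toward one
    # shared entry -- this is what reduces feature variance within an object.
    return attn @ lang_table  # (N, c)

N, K, d, c = 1024, 64, 32, 512
lang = codebook_attention(torch.randn(N, d), torch.randn(K, d), torch.randn(K, c))
print(lang.shape)  # torch.Size([1024, 512])
```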

If this is right

  • Improved segmentation quality on standard 2D and 3D open-vocabulary benchmarks.
  • Stronger 3D semantic consistency across multi-view observations.
  • Reduced intra-object feature variance through codebook attention.
  • A semantically interpretable codebook offering insights into the learned representation.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Applying the geometry-conditioning principle to other 3D reconstruction methods could test its broader applicability beyond Gaussian splatting.
  • The codebook's interpretability may allow users to inspect and refine semantic primitives for specific applications.
  • Stronger spatial coherence might improve performance in downstream tasks like 3D object detection or scene editing.

Load-bearing premise

Explicitly conditioning semantic predictions on geometric structure will strengthen the coupling between geometry and semantics and improve spatial coherence across similar structures in 3D space.

What would settle it

A controlled experiment showing that ablating the geometric conditioning results in equivalent or better 3D semantic consistency metrics would falsify the claim that this conditioning is key to the improvement.
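Under the assumptions of the sketches above, that ablation is a one-line toggle: zero out the geometric inputs while leaving the codebook attention untouched, then compare 3D semantic consistency between the two runs. Hypothetical wiring, not the authors' code.

```python
import torch

def semantic_features(field, geometry, appearance, use_geometry=True):
    # Appearance-only ablation: replacing geometry with zeros removes the
    # field's ability to condition on spatial structure while keeping the
    # rest of the pipeline (codebook attention included) identical.
    if not use_geometry:
        geometry = torch.zeros_like(geometry)
    return field(geometry, appearance)
```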

Figures

Figures reproduced from arXiv: 2605.06088 by Federico Tombari, Kunyi Li, Michael Niemeyer, Nassir Navab, Sen Wang, Stefano Gasperini.

Figure 1. OpenGaFF is an open-vocabulary 3D scene understanding method that achieves precise segmentation and consistently high vision-language similarity scores in both 2D and 3D evaluations.

Figure 2. Overview of OpenGaFF. We first preprocess RGB images using foundation models to generate ground-truth language features. These features are clustered for structured language codebook initialization, while PCA is applied to per-view feature maps to obtain low-dimensional representations for supervising the Gaussian Feature Field. In Stage 1, the Gaussian Feature Field is trained by 2D feature distillation.

Figure 3. Qualitative evaluation of 2D and 3D open-vocabulary query on LERF-OVS [28]. We visualize vision-language similarity as 2D heatmaps. For the shown 3D results, we perform open-vocabulary segmentation directly in 3D and render the selected Gaussians. Our method achieves more precise and consistent segmentation in both 2D and 3D.

Figure 4. Qualitative evaluation of 3D open-vocabulary query on ScanNet-v2 [33]. We highlight the Gaussians corresponding to the query text. Our approach produces more spatially consistent responses, while LangSplatV2 [10] exhibits significant noise and inconsistent activations.

Figure 5. What does the codebook capture? LangSplatV2 [10] encodes semantics in a distributed manner, where a single entry may represent multiple objects or an object may span multiple entries. In contrast, our method learns disentangled semantic units.

Figure 6. Ablation studies. We conduct comprehensive ablation studies to demonstrate the effect of the different proposed contributions and report the mIoU of 3D OVS on the whole scene.

Figure 7. Ablation on entropy loss. We report the mIoU of 3D OVS on the Figurines scene. Larger λ_entropy values encourage stricter object-level bindings but may over-specialize entries, hurting rare-object learning due to limited observations.

Figure 8. Illustration of object-level feature consistency. Compared with the language feature maps predicted by LangSplatV2 [10], ours are more consistent and clearer, demonstrating superior segmentation performance.

Figure 9. Visualization of codebook entries. We present per-entry heatmaps and their corresponding masked RGB images to visualize the regions each codebook entry attends to. These results demonstrate that our codebook effectively captures disentangled and semantically meaningful units.

Figure 10. Qualitative evaluation of 2D open-vocabulary segmentation on MipNeRF360 [32]. Our method predicts more precise and consistent segmentation in both 2D and 3D.

Figure 11. Additional qualitative evaluation of 2D and 3D open-vocabulary segmentation on LERF-OVS [28]. Our method predicts more precise and consistent segmentation in both 2D and 3D.

Figure 12. Additional qualitative evaluation of 3D open-vocabulary segmentation on ScanNet-v2 [33]. We visualize the language feature point cloud. Our method predicts cleaner and more consistent language features.
read the original abstract

Understanding open-vocabulary 3D scenes with Gaussian-based representations remains challenging due to fragmented and spatially inconsistent semantic predictions across multi-view observations. In this paper, we present OpenGaFF, a novel framework for open-vocabulary 3D scene understanding built upon 3D Gaussian Splatting. At the core of our method is a Gaussian Feature Field that models semantics as a continuous function of Gaussian geometry and appearance. By explicitly conditioning semantic predictions on geometric structure, this formulation strengthens the coupling between geometry and semantics, leading to improved spatial coherence across similar structures in 3D space. To further enforce object-level semantic consistency, we introduce a structured codebook that serves as a set of shared semantic primitives. Furthermore, a codebook-guided attention mechanism is proposed to retrieve language features via similarity matching between query embeddings and learned codebook entries, enabling robust open-vocabulary reasoning while reducing intra-object feature variance. Extensive experiments on standard 2D and 3D open-vocabulary benchmarks demonstrate that our method consistently outperforms prior approaches, achieving improved segmentation quality, stronger 3D semantic consistency and a semantically interpretable codebook that provides insight into the learned representation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces OpenGaFF, a framework for open-vocabulary 3D scene understanding built on 3D Gaussian Splatting. It defines a Gaussian Feature Field that models semantics as a continuous function explicitly conditioned on Gaussian geometry and appearance to improve spatial coherence. A structured codebook of shared semantic primitives is combined with a codebook-guided attention mechanism that retrieves language features via similarity matching to reduce intra-object variance and support open-vocabulary reasoning. The authors report that experiments on standard 2D and 3D benchmarks show consistent outperformance over prior methods in segmentation quality, 3D semantic consistency, and codebook interpretability.

Significance. If the reported gains hold under rigorous evaluation, the work offers a concrete advance in coupling geometry and semantics within explicit 3D Gaussian representations, addressing a known source of view-inconsistent predictions in open-vocabulary settings. The structured codebook and attention design also provide a potentially interpretable primitive set that could aid analysis of learned representations.

major comments (2)
  1. [Abstract] The abstract states that 'extensive experiments... demonstrate that our method consistently outperforms prior approaches' yet provides no numerical results, ablation tables, or error bars. The full experimental section must supply these quantitative comparisons (including baselines, metrics, and statistical significance) for the central performance claim to be evaluable.
  2. [Method (Gaussian Feature Field formulation)] The weakest modeling assumption—that explicitly conditioning semantic predictions on geometric structure will strengthen geometry-semantics coupling and thereby improve 3D coherence—is presented without a formal derivation or controlled isolation experiment. A targeted ablation that removes the geometric conditioning while retaining the codebook attention would be required to substantiate that this choice is load-bearing for the reported consistency gains.
minor comments (2)
  1. [Abstract] The abstract introduces the term 'Gaussian Feature Field' without a one-sentence definition; adding a brief inline gloss would improve immediate readability for readers unfamiliar with the construction.
  2. [Method] Notation for the codebook entries and attention similarity function should be introduced once in the method section and used consistently thereafter to avoid ambiguity when describing the retrieval step.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and positive recommendation for minor revision. We address each major comment below and will make the necessary revisions to strengthen the paper.

read point-by-point responses
  1. Referee: [Abstract] The abstract states that 'extensive experiments... demonstrate that our method consistently outperforms prior approaches' yet provides no numerical results, ablation tables, or error bars. The full experimental section must supply these quantitative comparisons (including baselines, metrics, and statistical significance) for the central performance claim to be evaluable.

    Authors: We agree that including key quantitative results directly in the abstract would make the central claims more immediately evaluable. In the revised version, we will update the abstract to report specific metrics (e.g., mIoU on ScanNet and Replica, improvements over baselines). The experimental section already contains the full set of quantitative comparisons, baselines, and metrics; we will add error bars to all tables and report statistical significance (e.g., via paired t-tests across scenes) to further support the performance claims. revision: yes

  2. Referee: [Method (Gaussian Feature Field formulation)] The weakest modeling assumption—that explicitly conditioning semantic predictions on geometric structure will strengthen geometry-semantics coupling and thereby improve 3D coherence—is presented without a formal derivation or controlled isolation experiment. A targeted ablation that removes the geometric conditioning while retaining the codebook attention would be required to substantiate that this choice is load-bearing for the reported consistency gains.

    Authors: We acknowledge the value of a controlled ablation to isolate the contribution of geometric conditioning. While the manuscript motivates the Gaussian Feature Field through its formulation and end-to-end results, we agree an explicit isolation experiment is warranted. In the revision, we will add a targeted ablation that disables geometric conditioning (relying only on appearance features) while retaining the codebook attention, and report the resulting drop in 3D semantic consistency metrics. This will directly substantiate that the conditioning is load-bearing. A full formal derivation is not provided as the design follows established conditioning practices in neural fields; we will instead expand the motivation section with additional intuition. revision: yes

Circularity Check

0 steps flagged

No significant circularity detected in derivation chain

full rationale

The paper presents OpenGaFF as a new construction on top of 3D Gaussian Splatting: a Gaussian Feature Field that explicitly conditions semantics on geometry and appearance, plus a structured codebook with attention for consistency. These are introduced as direct modeling decisions whose benefits are then measured via experiments on standard 2D/3D benchmarks. No equation or claim reduces by construction to its own inputs, no fitted parameter is relabeled as a prediction, and no load-bearing step relies on self-citation chains or imported uniqueness theorems. The logical flow from conditioning choice to claimed coherence gains is independent and externally validated, making the derivation self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 2 invented entities

The central claim rests on the effectiveness of two newly introduced constructs whose performance is asserted but not independently validated in the provided abstract.

axioms (1)
  • domain assumption: 3D Gaussian Splatting provides an accurate continuous scene representation
    Invoked as the base representation without re-derivation.
invented entities (2)
  • Gaussian Feature Field (no independent evidence)
    purpose: Models semantics as a continuous function of Gaussian geometry and appearance
    New entity introduced to couple geometry and semantics; no independent evidence supplied.
  • structured codebook (no independent evidence)
    purpose: Serves as shared semantic primitives to enforce object-level consistency
    New entity introduced to reduce intra-object feature variance; no independent evidence supplied.

pith-pipeline@v0.9.0 · 5515 in / 1328 out tokens · 47344 ms · 2026-05-08T14:04:10.168160+00:00 · methodology


Reference graph

Works this paper leans on

36 extracted references · 7 canonical work pages · 1 internal anchor

  1. [1] Drivedreamer: Towards real-world-driven world models for autonomous driving
     Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Jiagang Zhu, and Jiwen Lu. In European Conference on Computer Vision, pages 55–72. Springer, 2024.

  2. [2] Driving into the future: Multiview visual forecasting and planning with world model for autonomous driving
     Yuqi Wang, Jiawei He, Lue Fan, Hongxin Li, Yuntao Chen, and Zhaoxiang Zhang. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14749–14759, 2024.

  3. [3] Openemma: Open-source multimodal model for end-to-end autonomous driving
     Shuo Xing, Chengyuan Qian, Yuping Wang, Hongyuan Hua, Kexin Tian, Yang Zhou, and Zhengzhong Tu. In Proceedings of the Winter Conference on Applications of Computer Vision, pages 1001–1009, 2025.

  4. [4] Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments
     Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674–3683, 2018.

  5. [5] Magma: A foundation model for multimodal ai agents
     Jianwei Yang, Reuben Tan, Qianhui Wu, Ruijie Zheng, Baolin Peng, Yongyuan Liang, Yu Gu, Mu Cai, Seonghyeon Ye, Joel Jang, et al. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 14203–14214, 2025.

  6. [6] A comprehensive survey of scene graphs: Generation and application
     Xiaojun Chang, Pengzhen Ren, Pengfei Xu, Zhihui Li, Xiaojiang Chen, and Alex Hauptmann. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):1–26, 2021.

  7. [7] Foundations of spatial perception for robotics: Hierarchical representations and real-time systems
     Nathan Hughes, Yun Chang, Siyi Hu, Rajat Talak, Rumaia Abdulhai, Jared Strader, and Luca Carlone. The International Journal of Robotics Research, 43(10):1457–1505, 2024.

  8. [8] Openscene: 3d scene understanding with open vocabularies
     Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 815–824, 2023.

  9. [9] Langsplat: 3d language gaussian splatting
     Minghan Qin, Wanhua Li, Jiawei Zhou, Haoqian Wang, and Hanspeter Pfister. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20051–20060, 2024.

  10. [10] Langsplatv2: High-dimensional 3d language gaussian splatting with 450+ fps
      Wanhua Li, Yujie Zhao, Minghan Qin, Yang Liu, Yuanhao Cai, and Chuang Gan. In Annual Conference on Neural Information Processing Systems, 2025.

  11. [11] 3d gaussian splatting for real-time radiance field rendering
      Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. ACM Trans. Graph., 42(4):139–1, 2023.

  12. [12] Language-driven semantic segmentation
      Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and René Ranftl. arXiv preprint arXiv:2201.03546, 2022.

  13. [13] Opengaussian: Towards point-level 3d gaussian-based open vocabulary understanding
      Yanmin Wu, Jiarui Meng, Haijie Li, Chenming Wu, Yahao Shi, Xinhua Cheng, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, et al. Advances in Neural Information Processing Systems, 37:19114–19138, 2024.

  14. [14] Gala: Guided attention with language alignment for open vocabulary gaussian splatting
      Elena Alegret, Kunyi Li, Sen Wang, Siyun Liang, Michael Niemeyer, Stefano Gasperini, Nassir Navab, and Federico Tombari. arXiv preprint arXiv:2508.14278, 2025.

  15. [15] Supergseg: Open-vocabulary 3d segmentation with structured super-gaussians
      Siyun Liang, Sen Wang, Kunyi Li, Michael Niemeyer, Stefano Gasperini, Hendrik Lensch, Nassir Navab, and Federico Tombari. arXiv preprint arXiv:2412.10231, 2024.

  16. [16] Visibility-aware language aggregation for open-vocabulary segmentation in 3d gaussian splatting
      Sen Wang, Kunyi Li, Siyun Liang, Elena Alegret, Jing Ma, Nassir Navab, and Stefano Gasperini. arXiv preprint arXiv:2509.05515, 2025.

  17. [17] Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields
      Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, and Achuta Kadambi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21676–21685, 2024.

  18. [18] Gags: Granularity-aware feature distillation for language gaussian splatting
      Yuning Peng, Haiping Wang, Yuan Liu, Chenglu Wen, Zhen Dong, and Bisheng Yang. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 40, pages 8376–8384, 2026.

  19. [19] Dr. splat: Directly referring 3d gaussian splatting via direct language embedding registration
      Kim Jun-Seong, GeonU Kim, Kim Yu-Ji, Yu-Chiang Frank Wang, Jaesung Choe, and Tae-Hyun Oh. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 14137–14146, 2025.

  20. [20] Occam's lgs: An efficient approach for language gaussian splatting
      Jiahuan Cheng, Jan-Nico Zaech, Luc Van Gool, and Danda Pani Paudel. arXiv preprint arXiv:2412.01807, 2024.

  21. [21] Goi: Find 3d gaussians of interest with an optimizable open-vocabulary semantic-space hyperplane
      Yansong Qu, Shaohui Dai, Xinyang Li, Jianghang Lin, Liujuan Cao, Shengchuan Zhang, and Rongrong Ji. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 5328–5337, 2024.

  22. [22] Ccl-lgs: Contrastive codebook learning for 3d language gaussian splatting
      Lei Tian, Xiaomin Li, Liqian Ma, Hefei Huang, Zirui Zheng, Hao Yin, Taiqing Li, Huchuan Lu, and Xu Jia. arXiv preprint arXiv:2505.20469, 2025.

  23. [23] Dinov2: Learning robust visual features without supervision
      Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Transactions on Machine Learning Research, 2024.

  24. [24] Grounding dino: Marrying dino with grounded pre-training for open-set object detection
      Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. In European Conference on Computer Vision, pages 38–55. Springer, 2024.

  25. [25] Segment anything
      Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.

  26. [26] Learning transferable visual models from natural language supervision
      Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

  27. [27] SigLIP 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features
      Michael Tschannen, Alexey Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alabdulmohsin, Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, et al. arXiv preprint arXiv:2502.14786, 2025.

  28. [28] Lerf: Language embedded radiance fields
      Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19729–19739, 2023.

  29. [29] Garfield: Group anything with radiance fields
      Chung Min Kim, Mingxuan Wu, Justin Kerr, Ken Goldberg, Matthew Tancik, and Angjoo Kanazawa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21530–21539, 2024.

  30. [30] Omniseg3d: Omniversal 3d segmentation via hierarchical contrastive learning
      Haiyang Ying, Yixuan Yin, Jinzhi Zhang, Fan Wang, Tao Yu, Ruqi Huang, and Lu Fang. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20612–20622, 2024.

  31. [31] Nerf: Representing scenes as neural radiance fields for view synthesis
      Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Communications of the ACM, 65(1):99–106, 2021.

  32. [32] Mip-nerf 360: Unbounded anti-aliased neural radiance fields
      Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470–5479, 2022.

  33. [33] Scannet: Richly-annotated 3d reconstructions of indoor scenes
      Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.

  34. [34] gsplat: An open-source library for gaussian splatting
      Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, et al. Journal of Machine Learning Research, 26(34):1–17, 2025.

  35. [35] Cluster quality analysis using silhouette score
      Ketan Rajshekhar Shahapure and Charles Nicholas. In 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pages 747–748. IEEE, 2020.

  36. [36] Language embedded 3d gaussians for open-vocabulary scene understanding
      Jin-Chuan Shi, Miao Wang, Hao-Bin Duan, and Shao-Hua Guan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5333–5343, 2024.