pith. machine review for the scientific record.

arxiv: 2602.21105 · v3 · submitted 2026-02-24 · 💻 cs.CV

Recognition: 2 theorem links · Lean Theorem

BrepGaussian: CAD reconstruction from Multi-View Images with Gaussian Splatting

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 19:43 UTC · model grok-4.3

classification 💻 cs.CV
keywords B-Rep reconstruction · Gaussian Splatting · CAD models · Multi-view images · Parametric 3D representation · Computer vision · Deep learning for graphics

The pith

BrepGaussian learns clean boundary representation models directly from multi-view 2D images using a two-stage Gaussian splatting process.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents BrepGaussian as a framework to recover explicit 3D boundary representations of solids from unstructured 2D images. It uses Gaussian splatting with learnable features in two stages: first to capture overall geometry and edges, then to refine features for coherent patches. This avoids reliance on dense point clouds that limit previous deep learning approaches and improves generalization to new shapes. A sympathetic reader would care because successful B-Rep recovery from images could streamline reverse engineering and CAD workflows in design and manufacturing.
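To make the staged design concrete, here is a minimal sketch of what such a two-stage schedule could look like: geometry and edge parameters are optimized first and then frozen while per-Gaussian patch features are refined. The module, losses, and hyperparameters below are illustrative placeholders, not the authors' implementation.

```python
import torch

class ToyGaussians(torch.nn.Module):
    def __init__(self, n=1000, feat_dim=16):
        super().__init__()
        self.means = torch.nn.Parameter(0.1 * torch.randn(n, 3))               # Gaussian centers (geometry)
        self.edge_logits = torch.nn.Parameter(torch.zeros(n))                  # per-Gaussian edge evidence
        self.patch_feats = torch.nn.Parameter(0.01 * torch.randn(n, feat_dim)) # per-Gaussian instance features

def stage1_loss(m):
    # Stand-in for a photometric + edge-map rendering loss (stage 1 supervision).
    return m.means.pow(2).mean() + m.edge_logits.sigmoid().mean()

def stage2_loss(m):
    # Stand-in for a patch-feature rendering / grouping loss (stage 2 supervision).
    return m.patch_feats.pow(2).mean()

model = ToyGaussians()

# Stage 1: optimize geometry and edge parameters only.
opt1 = torch.optim.Adam([model.means, model.edge_logits], lr=1e-2)
for _ in range(100):
    opt1.zero_grad()
    stage1_loss(model).backward()
    opt1.step()

# Stage 2: freeze geometry, refine only the per-Gaussian patch features.
model.means.requires_grad_(False)
model.edge_logits.requires_grad_(False)
opt2 = torch.optim.Adam([model.patch_feats], lr=1e-2)
for _ in range(100):
    opt2.zero_grad()
    stage2_loss(model).backward()
    opt2.step()
```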

Core claim

BrepGaussian is a framework that couples a Gaussian Splatting renderer with learnable features to a dedicated fitting strategy; a two-stage learning process first captures geometry and edges, then refines patch features, yielding clean geometry and coherent instance representations from multi-view images.

What carries the argument

The two-stage Gaussian Splatting framework with learnable features that disentangles geometry reconstruction from feature learning.
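The "learnable features" can be pictured as a feature vector attached to each Gaussian and alpha-composited into the image the same way color is, in the spirit of the feature-splatting work listed in the references. A toy illustration of front-to-back compositing at a single pixel, purely for intuition and not the paper's renderer:

```python
import numpy as np

def composite_features(feats, alphas):
    """Front-to-back alpha compositing of per-Gaussian features at one pixel.

    feats:  (N, D) feature vector of each Gaussian hit by the ray, sorted near-to-far.
    alphas: (N,)   opacity contribution of each Gaussian at this pixel.
    Returns the accumulated D-dimensional pixel feature.
    """
    out = np.zeros(feats.shape[1])
    transmittance = 1.0
    for f, a in zip(feats, alphas):
        out += transmittance * a * f
        transmittance *= (1.0 - a)
    return out

# Toy example: three Gaussians carrying 4-D one-hot features.
feats = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
alphas = np.array([0.6, 0.5, 0.9])
print(composite_features(feats, alphas))
```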

If this is right

  • Reconstructs B-Rep models without requiring dense and clean point cloud inputs.
  • Achieves cleaner geometry and more coherent instance representations compared to prior methods.
  • Generalizes better to novel shapes in CAD reconstruction tasks.
  • Demonstrates superior performance on standard benchmarks for multi-view 3D reconstruction.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • This approach might enable B-Rep extraction from real-world multi-view photos taken with consumer cameras.
  • The two-stage disentanglement could be adapted to other 3D representation learning problems beyond B-Rep.
  • Integration with existing CAD software pipelines could become feasible if the fitting strategy proves robust.

Load-bearing premise

The two-stage process reliably disentangles geometry reconstruction and feature learning from multi-view images to produce clean B-Rep geometry without post-hoc adjustments.

What would settle it

Experiments on a dataset of multi-view images showing frequent failures in producing valid manifold B-Rep models or requiring manual cleanup would falsify the claim of reliable reconstruction.
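One concrete way to run such a test would be to batch-check the reconstructed outputs for watertightness and consistent winding, a necessary (though not sufficient) condition for a valid solid boundary. The sketch below assumes the reconstructions have been exported as mesh files and uses trimesh; the file layout and the tooling are editorial assumptions, not part of the paper.

```python
# Hypothetical validity audit: count reconstructions that fail basic solid checks.
import glob
import trimesh

failures = 0
paths = glob.glob("reconstructions/*.obj")  # assumed export location, illustrative only
for path in paths:
    mesh = trimesh.load(path, force="mesh")
    # A valid solid boundary should at minimum be watertight with consistent winding.
    if not (mesh.is_watertight and mesh.is_winding_consistent):
        failures += 1

if paths:
    print(f"{failures}/{len(paths)} reconstructions fail basic solid checks")
```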

Figures

Figures reproduced from arXiv: 2602.21105 by Dongyang Ren, Hangyu Xu, Jiaxing Yu, Jie Guo, Yanwen Guo, Yuanqi Li, Zhengkang Zhou, Zhouyuxiao Yang.

Figure 1: Given multi-view images, our pipeline reconstructs a CAD model through feature-aware Gaussian Splatting and parametric …
Figure 2: Overall pipeline of BrepGaussian. Given multi-view RGB images of a CAD object, we extract edge and patch views using existing edge detection and segmentation models. These views drive a two-stage Gaussian Splatting model that predicts edge and patch labels on the reconstructed point cloud. The fitted primitives are globally optimized to obtain the final B-Rep model. Stage 2 training — learning 3D patch ins…
Figure 3: Illustration of optimized 2D Gaussians. Flat regions use …
Figure 4: Qualitative comparison on patch segmentation. Our BrepGaussian produces cleaner and more consistent patch segmentation.
Figure 5: Qualitative comparison on CAD reconstruction. Our BrepGaussian exhibits more accurate geometry.
Figure 6: From left to right, we present 2D images, point cloud …
Figure 7: Experiments on real-world scenes from the ABO dataset […]
Original abstract

The boundary representation (B-Rep) models a 3D solid as its explicit boundaries: trimmed corners, edges, and faces. Recovering B-Rep representation from unstructured data is a challenging and valuable task of computer vision and graphics. Recent advances in deep learning have greatly improved the recovery of 3D shape geometry, but still depend on dense and clean point clouds and struggle to generalize to novel shapes. We propose B-Rep Gaussian Splatting (BrepGaussian), a novel framework that learns 3D parametric representations from 2D images. We employ a Gaussian Splatting renderer with learnable features, followed by a specific fitting strategy. To disentangle geometry reconstruction and feature learning, we introduce a two-stage learning framework that first captures geometry and edges and then refines patch features to achieve clean geometry and coherent instance representations. Extensive experiments demonstrate the superior performance of our approach to state-of-the-art methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces BrepGaussian, a novel framework for reconstructing boundary representation (B-Rep) CAD models from multi-view 2D images. It employs Gaussian Splatting with learnable features in a two-stage learning process: the first stage captures geometry and edges, while the second refines patch features to produce clean geometry and coherent instance representations. The authors claim this yields superior performance over state-of-the-art methods without requiring dense point-cloud input.

Significance. If the two-stage disentanglement holds and produces clean B-Rep outputs, the work could advance CAD reconstruction in computer vision by enabling direct parametric recovery from images and improving generalization to novel shapes, addressing limitations of prior methods reliant on point clouds.

major comments (2)
  1. [Abstract] The central claim of 'superior performance' over state-of-the-art methods is unsupported by quantitative metrics, dataset details, ablation studies, or error analysis; this evidence is load-bearing for verifying the framework's effectiveness.
  2. [Two-stage learning framework] No loss formulation, architectural isolation, or regularization is provided to enforce disentanglement between geometry/edge capture and patch-feature refinement; Gaussian primitives are continuous and may entangle edge sharpness with surface features, risking non-clean B-Rep outputs without post-hoc adjustments.
minor comments (1)
  1. [Method] The description of the 'specific fitting strategy' following Gaussian Splatting is vague and lacks equations or pseudocode for how trimmed faces and edges are obtained from the splatted representation.
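As an editorial illustration of what a "specific fitting strategy" typically involves, primitive fitting over labeled points is often built on RANSAC-style estimation (Fischler and Bolles's RANSAC appears in the paper's reference list). The sketch below fits a single plane to a point set; it is a generic example, not the authors' method.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.01, rng=None):
    """Fit a plane (unit normal n, offset d with n.x + d = 0) to noisy points via RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = int((dist < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Toy usage: noisy samples of the plane z = 0.
pts = np.random.default_rng(0).uniform(-1, 1, size=(500, 3))
pts[:, 2] = 0.005 * np.random.default_rng(1).standard_normal(500)
(plane_normal, plane_d), count = ransac_plane(pts)
print(plane_normal, plane_d, count)
```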

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the thoughtful and constructive feedback. We address each major comment below and will revise the manuscript to strengthen the presentation of results and technical details.

read point-by-point responses
  1. Referee: [Abstract] The central claim of 'superior performance' over state-of-the-art methods is unsupported by quantitative metrics, dataset details, ablation studies, or error analysis; this evidence is load-bearing for verifying the framework's effectiveness.

    Authors: We agree that the abstract would benefit from greater specificity. The full manuscript contains quantitative comparisons (including metrics such as Chamfer distance and edge accuracy), dataset descriptions, ablation studies, and error analysis in the Experiments section. In the revision we will expand the abstract to briefly cite the key performance gains and reference the supporting experiments, while preserving its concise nature. revision: yes

  2. Referee: [Two-stage learning framework] No loss formulation, architectural isolation, or regularization is provided to enforce disentanglement between geometry/edge capture and patch-feature refinement; Gaussian primitives are continuous and may entangle edge sharpness with surface features, risking non-clean B-Rep outputs without post-hoc adjustments.

    Authors: We appreciate this observation. The manuscript describes the two-stage process at a high level; we will add the explicit loss functions for each stage (geometry/edge term in stage 1, patch-feature refinement term in stage 2), the architectural isolation mechanism (parameter freezing and separate optimizers), and any regularization terms used to promote disentanglement. We will also clarify how the subsequent B-Rep fitting strategy, combined with the learned edge features, produces clean parametric outputs and will include additional analysis or ablations demonstrating that entanglement is limited without relying on extensive post-processing. revision: yes
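For reference, the Chamfer distance named in response 1 is usually computed as the symmetric mean of nearest-neighbor distances between a predicted and a ground-truth point sample. The sketch below shows the common unsquared variant; the manuscript's exact convention is not specified in the material above.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3).

    Uses unsquared nearest-neighbor distances; some works use squared distances.
    """
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # for each predicted point, nearest GT point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # for each GT point, nearest predicted point
    return d_pred_to_gt.mean() + d_gt_to_pred.mean()

# Toy usage with random point clouds.
rng = np.random.default_rng(0)
pred = rng.uniform(size=(1000, 3))
gt = rng.uniform(size=(1200, 3))
print(chamfer_distance(pred, gt))
```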

Circularity Check

0 steps flagged

No circularity: two-stage pipeline presented as independent without self-referential reductions

full rationale

The provided manuscript text describes a two-stage framework (first capturing geometry and edges via Gaussian Splatting, then refining patch features) and a fitting strategy to produce B-Rep outputs from multi-view images. No equations, loss formulations, or derivation steps are shown that reduce any claimed prediction or result to a fitted parameter or input quantity by construction. No self-citations are invoked as load-bearing uniqueness theorems, no ansatzes are smuggled via prior work, and no known results are merely renamed. The separation of stages is asserted as a design choice without mathematical reduction to the method's own outputs, leaving the derivation chain self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the premise that Gaussian Splatting with learnable features plus a staged fitting procedure can recover explicit B-Rep boundaries from images alone; no free parameters or invented entities are named in the abstract.

axioms (1)
  • domain assumption: Multi-view images contain enough geometric information to recover explicit trimmed faces, edges, and corners of a solid.
    Implicit in the task formulation and the choice of input modality.

pith-pipeline@v0.9.0 · 5480 in / 1140 out tokens · 18523 ms · 2026-05-15T19:43:21.699000+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages

  1. Md Ferdous Alam et al. Gencad: Image-conditioned computer-aided design generation with transformer-based contrastive representation and diffusion priors. Trans. Mach. Learn. Res., 2025.
  2. Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Int. Conf. Comput. Vis., 2021.
  3. Dena Bazazian and M. Eulàlia Parés. Edc-net: Edge detection capsule network for 3d point clouds. In Applied Sciences, 2021.
  4. Jiazhong Cen, Jiemin Fang, Chen Yang, Lingxi Xie, Xiaopeng Zhang, Wei Shen, and Qi Tian. Segment any 3d gaussians. In AAAI, 2025.
  5. Y. Chen et al. Capri-net: Learning compact cad shapes with adaptive primitive assembly. In IEEE Conf. Comput. Vis. Pattern Recog., 2022.
  6. Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F. Yago Vicente, Thomas Dideriksen, Himanshu Arora, Matthieu Guillaumin, and Jitendra Malik. Abo: Dataset and benchmarks for real-world 3d object understanding. In IEEE Conf. Comput. Vis. Pattern Recog., 2022.
  7. Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981.
  8. Zhirui Gao, Renjiao Yi, Yaqiao Dai, Xuening Zhu, Wei Chen, Chenyang Zhu, and Kai Xu. Curve-aware gaussian splatting for 3d parametric curve reconstruction. In Int. Conf. Comput. Vis., 2025.
  9. Zhirui Gao, Renjiao Yi, Yuhang Huang, Wei Chen, Chenyang Zhu, and Kai Xu. Self-supervised learning of hybrid part-aware 3d representations of 2d gaussians and superquadrics. In Int. Conf. Comput. Vis., 2025.
  10. Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In IEEE Conf. Comput. Vis. Pattern Recog., 2024.
  11. Haoxiang Guo, Shilin Liu, Hao Pan, Yang Liu, Xin Tong, and Baining Guo. Complexgen: Cad reconstruction by b-rep chain complex generation. In ACM SIGGRAPH Annual Conference, 2022.
  12. Jun Guo, Xiaojian Ma, Yue Fan, Huaping Liu, and Qing Li. Semantic gaussians: Open-vocabulary scene understanding with 3d gaussian splatting. ArXiv, abs/2403.15624, 2024.
  13. Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH Annual Conference, 2024.
  14. Kacper Kania, Maciej Zięba, and Tomasz Kajdanowicz. Ucsg-net: Unsupervised discovering of constructive solid geometry tree. In Adv. Neural Inform. Process. Syst., 2020.
  15. Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. In ACM SIGGRAPH Annual Conference, 2023.
  16. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In Int. Conf. Comput. Vis., 2023.
  17. Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In IEEE Conf. Comput. Vis. Pattern Recog., 2019.
  18. Joseph G. Lambourne, Karl D. D. Willis, Pradeep Kumar Jayaraman, Aditya Sanghi, Peter Meltzer, and Hooman Shayani. Brepnet: A topological message passing system for solid models. In IEEE Conf. Comput. Vis. Pattern Recog.
  19. Eric-Tuan Lê, Minhyuk Sung, Duygu Ceylan, Radomír Měch, Tamy Boubekeur, and Niloy J. Mitra. Cpfn: Cascaded primitive fitting networks for high-resolution point clouds. In Int. Conf. Comput. Vis., 2021.
  20. Lingxiao Li, Minhyuk Sung, Anastasia Dubrovina, Li Yi, and Leonidas Guibas. Supervised fitting of geometric primitives to 3d point clouds. In IEEE Conf. Comput. Vis. Pattern Recog., 2019.
  21. Pu Li, Jianwei Guo, Xiaopeng Zhang, and Dong-Ming Yan. Secad-net: Self-supervised cad reconstruction by learning sketch-extrude operations. In IEEE Conf. Comput. Vis. Pattern Recog., 2023.
  22. Pu Li, Jianwei Guo, Huibin Li, Bedrich Benes, and Dong-Ming Yan. Sfmcad: Unsupervised cad reconstruction by learning sketch-based feature modeling operations. In IEEE Conf. Comput. Vis. Pattern Recog., 2024.
  23. Yangyan Li, Xiaokun Wu, Yiorgos Chrysathou, Andrei Sharf, Daniel Cohen-Or, and Niloy J. Mitra. Globfit: Consistently fitting primitives by discovering global relations. ACM Trans. Graph., 2011.
  24. Yuanqi Li, Shun Liu, Xinran Yang, Jianwei Guo, Jie Guo, and Yanwen Guo. Sed-net: Surface and edge detection for primitive fitting of point clouds. In ACM SIGGRAPH Annual Conference, 2023.
  25. Yuan Li, Cheng Lin, Yuan Liu, Xiaoxiao Long, Chenxu Zhang, Ningna Wang, Xin Li, Wenping Wang, and Xiaohu Guo. Caddreamer: Cad object generation from single-view images. In IEEE Conf. Comput. Vis. Pattern Recog., 2025.
  26. Yuanqi Li, Hongshen Wang, Yansong Liu, Jingcheng Huang, Shun Liu, and Chenyu Huang. Deep point cloud edge reconstruction via surface patch segmentation. IEEE Transactions on Visualization and Computer Graphics, 2025.
  27. Yilin Liu, Jiale Chen, Shanshan Pan, Daniel Cohen-Or, Hao Zhang, and Hui Huang. Split-and-fit: Learning b-reps via structure-aware voronoi partitioning. In ACM SIGGRAPH Annual Conference, 2024.
  28. Yujia Liu, Anton Obukhov, Jan Dirk Wegner, and Konrad Schindler. Point2cad: Reverse engineering cad models from 3d point clouds. In IEEE Conf. Comput. Vis. Pattern Recog.
  29. Yilin Liu et al. Hola: B-rep generation using a holistic latent representation. In SIGGRAPH, 2025.
  30. Albert Matveev, Ruslan Rakhimov, Alexey Artemov, Gleb Bobrovskikh, Vage Egiazarian, Emil Bogomolov, Daniele Panozzo, Denis Zorin, and Evgeny Burnaev. Def: Deep estimation of sharp geometric features in 3d shapes. In ACM SIGGRAPH Annual Conference, 2022.
  31. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis., 2020.
  32. Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. In ACM SIGGRAPH Annual Conference, pages 102:1–102:15, 2022.
  33. Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. In ACM Trans. Graph., 2017.
  34. Daxuan Ren, Jianmin Zheng, Jianfei Cai, Jiatong Li, Haiyong Jiang, Zhongang Cai, Junzhe Zhang, Liang Pan, Mingyuan Zhang, Haiyu Zhao, and Shuai Yi. Csg-stump: A learning friendly csg-like representation for interpretable shape parsing. In Int. Conf. Comput. Vis., 2021.
  35. Gopal Sharma, Rishabh Goyal, Difan Liu, Evangelos Kalogerakis, and Subhransu Maji. Csgnet: Neural shape parser for constructive solid geometry. In IEEE Conf. Comput. Vis. Pattern Recog., 2018.
  36. Gopal Sharma, Difan Liu, Subhransu Maji, Evangelos Kalogerakis, Siddhartha Chaudhuri, and Radomír Měch. Parsenet: A parametric surface fitting network for 3d point clouds. In Eur. Conf. Comput. Vis., 2020.
  37. Mikaela Angelina Uy, Yen-Yu Chang, Minhyuk Sung, Purvi Goel, Joseph Lambourne, Tolga Birdal, and Leonidas Guibas. Point2cyl: Reverse engineering 3d objects from point clouds to extrusion cylinders. In IEEE Conf. Comput. Vis. Pattern Recog., 2022.
  38. Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021.
  39. Xiaogang Wang, Yuelang Xu, Kai Xu, Andrea Tagliasacchi, Bin Zhou, Ali Mahdavi-Amiri, and Hao Zhang. Pie-net: Parametric inference of point cloud edges. In Adv. Neural Inform. Process. Syst., 2020.
  40. Siming Yan, Zhenpei Yang, Chongyang Ma, Haibin Huang, Etienne Vouga, and Qixing Huang. Hpnet: Deep primitive segmentation using hybrid representations. In Int. Conf. Comput. Vis., 2021.
  41. Xinran Yang, Donghao Ji, Yuanqi Li, Jie Guo, Yanwen Guo, and Junyuan Xie. Sgcr: Spherical gaussians for efficient 3d curve reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., 2025.
  42. Yunfan Ye, Renjiao Yi, Zhirui Gao, Chenyang Zhu, Zhiping Cai, and Kai Xu. Nef: Neural edge fields for 3d parametric curve reconstruction from multi-view images. In IEEE Conf. Comput. Vis. Pattern Recog., 2023.
  43. Haiyang Ying and Matthias Zwicker. Sketchsplat: 3d edge reconstruction via differentiable multi-view sketch splatting. In Int. Conf. Comput. Vis., 2025.
  44. Yang You, Mikaela Angelina Uy, Jiaqi Han, Rahul Thomas, Haotong Zhang, Yi Du, Hansheng Chen, Francis Engelmann, Suya You, and Leonidas Guibas. Img2cad: Reverse engineering 3d cad models from images through vlm-assisted conditional factorization. In SIGGRAPH Asia, 2025.
  45. Fenggen Yu, Qimin Chen, Maham Tanveer, Ali Mahdavi-Amiri, and Hao Zhang. Dualcsg: Learning dual csg trees for general and compact cad modeling. ArXiv, abs/2301.11497.
  46. Shijie Zhou, Haoran Chang, Sicheng Jiang, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, and Achuta Kadambi. Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields. In IEEE Conf. Comput. Vis. Pattern Recog., 2023.
  47. Xiangyu Zhu, Dong Du, Weikai Chen, Zhiyou Zhao, Yinyu Nie, and Xiaoguang Han. Nerve: Neural volumetric edges for parametric curve extraction from point clouds. In IEEE Conf. Comput. Vis. Pattern Recog., 2023.