TOPOS: High-Fidelity and Efficient Industry-Grade 3D Head Generation
Pith reviewed 2026-05-15 01:28 UTC · model grok-4.3
The pith
TOPOS generates single-image 3D heads locked to one fixed studio topology so every output shares identical vertices for rigging and animation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
TOPOS recovers both geometry and appearance for heads under one fixed reference topology. It first trains TOPOS-VAE, whose Perceiver Resampler maps diverse point clouds into that topology; it then trains TOPOS-DiT, a rectified flow transformer that generates new heads from a single image inside the learned latent space; and it finally attaches TOPOS-Texture, which outputs relightable UV maps that remain spatially registered to the mesh.
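The rectified-flow objective behind TOPOS-DiT can be sketched in a few lines. This is an assumption about what the training step presumably looks like (the paper's actual code and conditioning are not shown here); a lambda stands in for the transformer, and `latents` are fake head latents.

```python
import numpy as np

rng = np.random.default_rng(0)

def rectified_flow_loss(predict_v, x1):
    """One step of the rectified-flow objective: sample a noise endpoint
    x0 and a time t, interpolate along the straight path
    xt = (1 - t) * x0 + t * x1, and regress the predicted velocity onto
    the constant target x1 - x0."""
    x0 = rng.standard_normal(x1.shape)      # noise endpoint
    t = rng.random((x1.shape[0], 1))        # uniform time in [0, 1)
    xt = (1.0 - t) * x0 + t * x1            # straight-line path
    v_target = x1 - x0                      # constant velocity target
    v_pred = predict_v(xt, t)
    return float(((v_pred - v_target) ** 2).mean())

# toy stand-in for the DiT: predicts zero velocity everywhere
latents = rng.standard_normal((4, 8))       # hypothetical head latents
loss = rectified_flow_loss(lambda xt, t: np.zeros_like(xt), latents)
```

Because the interpolation paths are straight, a well-trained velocity model can be integrated in very few steps at sampling time, which is what makes rectified flow attractive for efficient generation.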
What carries the argument
TOPOS-VAE: a variational autoencoder that uses the Perceiver Resampler to map point clouds sampled from arbitrary head meshes onto one fixed reference topology while preserving vertex correspondence and geometric detail.
If this is right
- All generated heads share exact vertex ordering, so skinning weights and blend shapes transfer directly between any pair without additional correspondence solving.
- The structured latent space lets the rectified flow transformer produce new heads at high resolution while remaining compatible with existing studio asset libraries.
- UV textures generated by TOPOS-Texture stay pixel-aligned to the fixed mesh, supporting immediate use in relighting and shader pipelines.
- The method outperforms both classical single-image reconstruction and topology-agnostic 3D generative models on metrics that matter for digital-human production.
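The first consequence above, direct skinning-weight transfer, reduces to an array copy once two heads share the reference topology. A minimal sketch, with hypothetical array names; a real pipeline would also validate vertex ordering, not just counts:

```python
import numpy as np

def transfer_skinning(weights_src, verts_dst):
    """Copy per-vertex skinning weights between two heads that share the
    fixed reference topology. Because vertex ordering is identical, the
    transfer is a plain copy; only the vertex counts need validating."""
    if weights_src.shape[0] != verts_dst.shape[0]:
        raise ValueError("meshes do not share the reference topology")
    return weights_src.copy()

rng = np.random.default_rng(0)
rig_weights = rng.dirichlet(np.ones(3), size=5)  # 5 vertices, 3 bones, rows sum to 1
new_head = rng.random((5, 3))                    # a freshly generated head's vertices
transferred = transfer_skinning(rig_weights, new_head)
```

For topology-agnostic generators, the equivalent step requires a correspondence solve per mesh, which is exactly the cost the fixed topology removes.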
Where Pith is reading between the lines
- The same resampler-plus-flow design could be applied to full-body or hand generation if a canonical topology is first defined for those domains.
- Fixed-topology outputs would reduce the compute cost of batch asset creation in virtual production environments that require thousands of consistent characters.
- Because the topology is enforced at generation time, downstream neural rendering or real-time avatar systems could cache shared rig data across many generated heads.
Load-bearing premise
The Perceiver Resampler can map point clouds taken from head meshes of many different topologies onto the single target topology while keeping the geometric accuracy and semantic vertex labels needed for later rigging.
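The mechanism that makes this premise plausible is that a Perceiver-style resampler uses a fixed set of learned queries, so its output size is decoupled from its input size. The sketch below is an assumption about the design, not the paper's code: a single cross-attention layer with one query per target-topology vertex, decoding each query to an xyz position.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ResamplerSketch:
    """One cross-attention layer in the spirit of the Perceiver Resampler:
    fixed learned queries (one per reference-topology vertex) attend over
    a point cloud of any size, so the output always has the reference
    topology's vertex count."""
    def __init__(self, n_target_verts, dim, rng):
        self.queries = rng.standard_normal((n_target_verts, dim))
        self.w_out = rng.standard_normal((dim, 3))   # decode each query to xyz

    def __call__(self, points):                       # points: (N, dim), N varies
        scores = self.queries @ points.T / np.sqrt(points.shape[1])
        attn = softmax(scores, axis=-1)               # (V, N) attention weights
        mixed = attn @ points                         # (V, dim) pooled features
        return mixed @ self.w_out                     # (V, 3) vertex positions

rng = np.random.default_rng(0)
resampler = ResamplerSketch(n_target_verts=100, dim=16, rng=rng)
v1 = resampler(rng.standard_normal((500, 16)))    # sparse input cloud
v2 = resampler(rng.standard_normal((2048, 16)))   # dense input cloud
```

Whether this mapping preserves semantic vertex labels (query 17 always landing on the same eye corner, say) is exactly the load-bearing question; the architecture makes it possible, not guaranteed.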
What would settle it
Generate a batch of heads, load them into a standard animation package with the reference rig, and observe whether skinning weights transfer without manual fixes or whether animation artifacts appear at the same vertex locations across all outputs.
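Before any animation-package test, the cheap automated half of this check can be scripted: confirm every generated head has the reference vertex count and that semantic landmarks do not scatter wildly across the batch. The landmark indices and the spread threshold below are hypothetical placeholders.

```python
import numpy as np

def check_batch_consistency(meshes, landmark_ids, max_spread=1.0):
    """Sanity check for a batch of generated heads: identical vertex
    counts (shared topology), and landmark vertices (fixed indices into
    that topology) staying within a plausible spread across the batch."""
    if len({m.shape[0] for m in meshes}) != 1:
        return False                                   # topology mismatch
    lm = np.stack([m[landmark_ids] for m in meshes])   # (batch, landmarks, 3)
    return bool(float(lm.std(axis=0).max()) < max_spread)

rng = np.random.default_rng(0)
batch = [rng.random((10, 3)) * 0.01 for _ in range(3)]      # near-identical toy heads
ok = check_batch_consistency(batch, landmark_ids=[0, 4, 9])
bad = check_batch_consistency(batch + [rng.random((11, 3))], [0, 4, 9])
```

Passing this script is necessary but not sufficient: only loading the heads against the reference rig reveals whether the weights animate cleanly.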
Original abstract
High-fidelity 3D head generation plays a crucial role in the film, animation and video game industries. In industrial pipelines, studios typically enforce a fixed reference topology across all head assets, as such a clean and uniform topology is a prerequisite for production-level rigging, skinning and animation. In this paper, we present TOPOS, a framework tailored for single image conditioned 3D head generation that jointly recovers geometry and appearance under such an industry-standard topology. In contrast to general 3D generative models which produce triangle meshes with inconsistent topology and numerous vertices, hindering semantic correspondence and asset-level reuse, TOPOS generates head meshes with a fixed, studio-style topology, enabling consistent vertex-level correspondence across all generated heads. To model heads under this unified topology, we propose a novel variational autoencoder structure, termed TOPOS-VAE. Inspired by multimodal large language models (MLLMs), our TOPOS-VAE leverages the Perceiver Resampler to convert input point clouds sampled from head meshes of diverse topologies into the target reference topology. Building upon TOPOS-VAE's structured latent space, we train a rectified flow transformer, TOPOS-DiT, to efficiently generate high-fidelity head meshes from a single image. We further present TOPOS-Texture, an end-to-end module that produces relightable UV texture maps from the same portrait image via fine-tuning a multimodal image generative model. The generated textures are spatially aligned with the underlying mesh geometry and faithfully preserve high-frequency appearance details. Extensive experiments demonstrate that TOPOS achieves state-of-the-art performance on 3D head generation, surpassing both classical face reconstruction methods and general 3D object generative models, highlighting its effectiveness for digital human creation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces TOPOS, a single-image conditioned framework for generating 3D head meshes with a fixed industry-standard topology suitable for rigging and animation. It proposes TOPOS-VAE, which uses a Perceiver Resampler to map point clouds from arbitrary head topologies into a reference topology, followed by TOPOS-DiT (a rectified flow transformer) for latent-space generation and TOPOS-Texture for aligned relightable UV maps. The central claim is that this yields state-of-the-art performance on 3D head generation, outperforming both classical face reconstruction methods and general 3D generative models.
Significance. If the performance claims and topology-preservation properties hold, the work would be significant for industrial digital-human pipelines in film, animation, and games, where fixed-topology assets are a prerequisite for downstream skinning and animation without additional retopology steps. The combination of VAE-based topology unification with a DiT generator and texture module addresses a practical gap between academic 3D generation and production requirements.
major comments (2)
- [Abstract / TOPOS-VAE] The claim that the Perceiver Resampler reliably converts point clouds sampled from diverse head topologies into the fixed reference topology while preserving both low-frequency geometry and high-frequency semantic landmarks (eye corners, lip vertices) required for rigging is load-bearing for all downstream claims of consistent vertex correspondence and SOTA performance, yet the manuscript provides no quantitative validation such as per-vertex RMSE, landmark correspondence rates, or an ablation comparing resampled versus ground-truth fixed-topology meshes.
- [Experiments] The abstract asserts state-of-the-art results surpassing classical reconstruction and general 3D models, but without visible quantitative tables in the experimental section (presumably §4 or §5), ablation studies on the resampler, or error analysis on topology fidelity, the support for the central performance claim cannot be evaluated.
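The metrics the first major comment asks for are straightforward once meshes share the reference topology. A sketch under that assumption; the landmark indices are hypothetical, and `pred`/`gt` stand for a resampled mesh and its ground-truth fixed-topology counterpart:

```python
import numpy as np

def per_vertex_rmse(pred, gt):
    """RMSE over corresponded vertices; meaningful only because both
    meshes share the reference topology (same count, same ordering)."""
    assert pred.shape == gt.shape
    return float(np.sqrt(((pred - gt) ** 2).sum(axis=1).mean()))

def landmark_error(pred, gt, landmark_ids):
    """Mean Euclidean error at semantic landmark vertices
    (e.g. eye corners, lip vertices; indices here are illustrative)."""
    d = np.linalg.norm(pred[landmark_ids] - gt[landmark_ids], axis=1)
    return float(d.mean())

gt = np.zeros((6, 3))
pred = gt.copy()
pred[:, 0] += 1.0                       # shift every vertex by 1 along x
rmse = per_vertex_rmse(pred, gt)        # 1.0 for a unit shift
lm_err = landmark_error(pred, gt, [0, 3])
```

An ablation would report these numbers for resampled meshes versus meshes registered by a classical fitting pipeline, which is what the referee says is missing.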
minor comments (2)
- [Method] Clarify the exact input/output dimensions and training objective of the Perceiver Resampler within TOPOS-VAE; the current high-level description leaves the mechanism for semantic correspondence ambiguous.
- [Related Work] Add explicit comparison to prior topology-aware generation works (e.g., those using fixed-topology templates or mesh autoencoders) in the related-work section to better position the novelty of the Perceiver-based unification.
Simulated Author's Rebuttal
We thank the referee for the constructive review and for acknowledging the practical relevance of fixed-topology 3D head generation for industrial pipelines. We agree that stronger quantitative support is required for the Perceiver Resampler claims and for the SOTA assertions. Below we respond to each major comment and commit to the necessary revisions.
Point-by-point responses
- Referee: [Abstract / TOPOS-VAE] The claim that the Perceiver Resampler reliably converts point clouds sampled from diverse head topologies into the fixed reference topology while preserving both low-frequency geometry and high-frequency semantic landmarks (eye corners, lip vertices) required for rigging is load-bearing for all downstream claims of consistent vertex correspondence and SOTA performance, yet the manuscript provides no quantitative validation such as per-vertex RMSE, landmark correspondence rates, or an ablation comparing resampled versus ground-truth fixed-topology meshes.
  Authors: We acknowledge that the manuscript does not currently include explicit quantitative metrics (per-vertex RMSE, landmark error rates, or a direct ablation against ground-truth fixed-topology meshes) for the Perceiver Resampler. While the VAE training objective and qualitative results support topology unification, these metrics are indeed necessary to substantiate the load-bearing claims. We will add a dedicated quantitative evaluation subsection that reports per-vertex RMSE, landmark correspondence accuracy on semantic points, and an ablation comparing resampled meshes to ground-truth reference-topology meshes. These additions will appear in the revised Experiments section. (revision: yes)
- Referee: [Experiments] The abstract asserts state-of-the-art results surpassing classical reconstruction and general 3D models, but without visible quantitative tables in the experimental section (presumably §4 or §5), ablation studies on the resampler, or error analysis on topology fidelity, the support for the central performance claim cannot be evaluated.
  Authors: The current experimental section contains quantitative comparisons, yet we agree that the tables, resampler-specific ablations, and topology-fidelity error analysis are not presented with sufficient clarity or completeness to fully support the SOTA claims. We will revise the Experiments section to feature expanded quantitative tables (geometry and texture metrics against both classical and generative baselines), explicit ablations isolating the resampler, and a focused error analysis on topology fidelity. These changes will make the performance evidence directly evaluable. (revision: yes)
Circularity Check
No circularity in TOPOS derivation chain
Full rationale
The paper's chain proceeds from a design choice (fixed industry topology) to a Perceiver-Resampler VAE that maps diverse point clouds into that topology, followed by a rectified-flow DiT conditioned on image latents and a fine-tuned texture generator. No equations, fitted parameters, or self-citations are shown that make any claimed output (geometry, correspondence, or SOTA metrics) equivalent to the inputs by construction. The resampler is presented as an adaptation of prior MLLM work rather than a self-referential fit; downstream performance is asserted via experiments, not by renaming training objectives. The derivation therefore remains independent of its own fitted values.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: a fixed reference topology is a prerequisite for production-level rigging, skinning and animation in industrial pipelines.
invented entities (3)
- TOPOS-VAE (no independent evidence)
- TOPOS-DiT (no independent evidence)
- TOPOS-Texture (no independent evidence)