Instant neural graphics primitives with a multiresolution hash encoding
13 Pith papers cite this work.
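The cited paper's core idea is a multiresolution hash encoding: a coordinate is looked up at several grid resolutions, each grid's vertices index into a fixed-size hash table of trainable features, and the interpolated features from all levels are concatenated and fed to a small MLP. A minimal 2D sketch, assuming NumPy; the constants, the `hash_coords`/`encode` names, and the plain-Python loops are illustrative and differ in many details from the paper's fused CUDA implementation:

```python
import numpy as np

# Illustrative constants (the paper tunes these per scene).
L = 4            # number of resolution levels
T = 2**14        # hash table size per level
F = 2            # feature dimensions per table entry
N_min, N_max = 16, 128
b = (N_max / N_min) ** (1 / (L - 1))   # per-level resolution growth factor

rng = np.random.default_rng(0)
tables = rng.uniform(-1e-4, 1e-4, size=(L, T, F))  # trainable in practice

PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # spatial-hash primes

def hash_coords(ix, iy):
    """Hash integer grid coordinates into [0, T)."""
    return int((np.uint64(ix) * PRIMES[0]) ^ (np.uint64(iy) * PRIMES[1])) % T

def encode(x, y):
    """Concatenate bilinearly interpolated features from every level."""
    feats = []
    for lvl in range(L):
        N = int(N_min * b**lvl)          # grid resolution at this level
        gx, gy = x * N, y * N
        ix, iy = int(gx), int(gy)
        fx, fy = gx - ix, gy - iy        # bilinear interpolation weights
        f = np.zeros(F)
        for dx in (0, 1):
            for dy in (0, 1):
                w = (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
                f += w * tables[lvl, hash_coords(ix + dx, iy + dy)]
        feats.append(f)
    return np.concatenate(feats)         # fed to a small MLP in the paper

print(encode(0.3, 0.7).shape)  # (L * F,) = (8,)
```

Because hash collisions are resolved implicitly by gradient descent rather than by probing, table lookups stay O(1) per level, which is what makes the encoding fast enough for interactive training.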
representative citing papers
-
Vector Scaffolding: Inter-Scale Orchestration for Differentiable Image Vectorization
Vector Scaffolding uses Interior Gradient Aggregation, Progressive Stratification, and Rapid Inflation Scheduling to achieve 2.5x faster optimization and up to 1.4 dB higher PSNR in differentiable vectorization.
-
HairGPT: Strand-as-Language Autoregressive Modeling for Realistic 3D Hairstyle Synthesis
HairGPT reframes 3D hairstyle synthesis as dual-decoupled autoregressive strand sequence modeling with geometric tokenization for semantic control and rare style generation.
-
RobotPan: A 360° Surround-View Robotic Vision System for Embodied Perception
RobotPan predicts metric-scaled compact 3D Gaussians from calibrated multi-view inputs via spherical coordinates and hierarchical voxel priors for real-time 360° robotic perception and reconstruction.
-
LRM: Large Reconstruction Model for Single Image to 3D
LRM is a large transformer that predicts a NeRF directly from a single image after training on a million-object multi-view dataset.
-
Probability-Flow Distillation: Exact Wasserstein Gradient Flow for High-Fidelity 3D Generation
Probability-Flow Distillation exactly matches the Wasserstein gradient flow of the target distribution when distilling 2D diffusion priors into 3D models, yielding higher-fidelity results than SDS or SDI.
-
You Only Gaussian Once: Controllable 3D Gaussian Splatting for Ultra-Densely Sampled Scenes
YOGO reformulates stochastic 3D Gaussian Splatting into a deterministic budget-aware system and supplies an ultra-dense dataset to enforce physical fidelity over viewpoint interpolation.
-
Physics-Aware Query-Conditioned Graph Attention Networks for Radio Map Estimation
A physics-aware query-conditioned hierarchical graph attention network estimates point-wise transmitter-resolved radio maps from sparse measurements and outperforms baselines on DeepMIMO simulations across direct, residual, and gated prediction variants.
-
Habitat-GS: A High-Fidelity Navigation Simulator with Dynamic Gaussian Splatting
Habitat-GS integrates 3D Gaussian Splatting scene rendering and Gaussian avatars into Habitat-Sim, yielding agents with stronger cross-domain generalization and effective human-aware navigation.
-
NeuVolEx: Implicit Neural Features for Volume Exploration
NeuVolEx extracts robust spatial features during INR training via a structural encoder and a multi-task scheme, enabling accurate ROI classification with limited supervision and unsupervised viewpoint clustering for volume exploration.
-
ANTIC: Adaptive Neural Temporal In-situ Compressor
ANTIC reduces storage for large-scale PDE simulations by orders of magnitude through adaptive temporal snapshot selection combined with continual neural-field residual compression while preserving physics accuracy.
-
TouchAnything: Diffusion-Guided 3D Reconstruction from Sparse Robot Touches
TouchAnything reconstructs accurate 3D object geometries from only a few tactile contacts by optimizing for consistency with a pretrained visual diffusion prior.
-
Implicit neural representations as a coordinate-based framework for continuous environmental field reconstruction from sparse ecological observations
Implicit neural representations enable stable, resolution-independent reconstruction of continuous environmental fields from sparse and irregular ecological data.
-
Low-Cost Neural Radiance Fields
A comparative study of DS-NeRF, TensoRF, and HashNeRF with depth supervision and architectural variants finds that no method conclusively outperforms the others under equal training time, but identifies which design choices transfer to low-data, low-compute regimes.
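Several entries above (NeuVolEx, ANTIC, the environmental-field paper) build on coordinate-based implicit neural representations: a network maps a continuous coordinate to a field value, so the fit can be queried at any resolution. A minimal sketch of that idea, assuming NumPy; it uses a closed-form random-Fourier-feature ridge regression as a stand-in for the MLP-based methods in the cited papers, and every hyperparameter and name (`true_field`, `features`, `field`) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def true_field(x):                      # stand-in for the unknown field
    return np.sin(4 * x) + 0.5 * np.cos(9 * x)

# Sparse, irregular observations of the field.
xs = np.sort(rng.uniform(0, np.pi, size=40))
ys = true_field(xs) + 0.01 * rng.normal(size=xs.size)

# Coordinate encoding: random Fourier features make the model a smooth
# function of the continuous coordinate, independent of any grid.
B = rng.normal(scale=4.0, size=32)      # random frequencies

def features(x):
    x = np.atleast_1d(x)
    return np.concatenate([np.sin(np.outer(x, B)),
                           np.cos(np.outer(x, B))], axis=1)

# Closed-form ridge fit of the feature weights to the sparse samples.
Phi = features(xs)
lam = 1e-6                              # small ridge term for stability
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ ys)

def field(x):                           # query at any coordinate/resolution
    return features(x) @ w

grid = np.linspace(0, np.pi, 1000)      # dense reconstruction from 40 samples
err = np.max(np.abs(field(grid) - true_field(grid)))
print(f"max reconstruction error on dense grid: {err:.3f}")
```

The resolution independence that L55's entry highlights comes from `field` being an ordinary function of `x`: the 40-sample fit is evaluated here on a 1000-point grid with no resampling or interpolation step.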