Neural Enhancement of Analytical Appearance Models
Pith reviewed 2026-05-07 17:39 UTC · model grok-4.3
The pith
Neural enhancement replaces selected nodes in analytical appearance models with small multi-layer perceptrons to improve accuracy on real data.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We present neural enhancement, a novel framework to boost an input analytical appearance model, by identifying and replacing its key computational nodes/operators with small-scale multi-layer perceptrons. This allows us to leverage the computational graph structure of the original model, while improving its expressiveness at a modest cost. To make the enhancement computationally tractable, we propose a hypercube-based search to automatically and efficiently identify the node(s) and/or operator(s) to be replaced towards maximal gain in a differentiable fashion. We enhance a number of common analytical BRDF models. The results are at once accurate, compact, and efficient, and compare favorably with state-of-the-art work on fitting measured reflectance and bidirectional texture functions.
What carries the argument
Neural enhancement framework that uses hypercube-based search to select and replace key nodes or operators in an analytical model's graph with small multi-layer perceptrons
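The replacement idea can be sketched in a few lines. Everything below is illustrative: the toy one-lobe BRDF graph, the choice of which node to swap, and the MLP size are our assumptions, not the paper's actual models or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def specular_node(cos_r):
    # Original analytical node: a Phong-style specular lobe (toy example).
    return np.maximum(cos_r, 0.0) ** 32

class TinyMLP:
    """Small 1-in/1-out MLP standing in for a replaced node (size assumed)."""
    def __init__(self, hidden=8):
        self.w1 = rng.normal(scale=0.5, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):
        h = np.tanh(x[:, None] @ self.w1 + self.b1)
        return (h @ self.w2 + self.b2)[:, 0]

def brdf(cos_r, kd=0.3, ks=0.7, node=specular_node):
    # The rest of the graph (diffuse term, lobe weights) stays analytical;
    # only the selected node is swapped for the MLP.
    return kd + ks * node(cos_r)

cos_r = np.linspace(0.0, 1.0, 5)
analytic = brdf(cos_r)                 # unmodified analytical model
neural = brdf(cos_r, node=TinyMLP())   # same graph, one node enhanced
```

The point of the sketch is that the enhanced model keeps the original evaluation path and parameter semantics; only the swapped node's behavior becomes learnable.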
If this is right
- The enhanced models achieve higher accuracy when fitting measured reflectance data while retaining the original analytical model's structure and efficiency.
- Results remain compatible with any standard rasterization or ray-tracing pipeline without modification.
- The same approach improves fitting of bidirectional texture functions as well as single-point BRDFs.
- A modest number of extra parameters from the small MLPs is sufficient to close most of the accuracy gap to larger pure neural models.
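The "modest number of extra parameters" claim is easy to sanity-check with arithmetic. The layer sizes below are our assumptions, chosen to be representative of a tiny per-node MLP versus a standalone neural BRDF of a typical published scale; they are not figures from the paper.

```python
def mlp_param_count(layer_sizes):
    """Weights + biases of a fully connected MLP with the given layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A per-node MLP at the scale the abstract implies (sizes assumed):
tiny = mlp_param_count([2, 16, 16, 1])        # 2 inputs -> two hidden layers -> scalar
# A standalone neural BRDF at a typical published scale (also assumed):
big = mlp_param_count([6, 256, 256, 256, 3])  # full direction pair -> RGB

print(tiny, big)  # 337 vs. 134147: two to three orders of magnitude apart
```

Even a handful of such per-node MLPs would add only hundreds to low thousands of parameters, which is the regime where the compactness claim is plausible.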
Where Pith is reading between the lines
- The method could be applied to other analytical models such as those for subsurface scattering or participating media without redesigning the search procedure.
- The hypercube search itself may serve as a general tool for deciding where to inject neural capacity inside any fixed computational graph.
- In real-time rendering, the resulting models could reduce the need for precomputed texture tables or expensive tabulation while still matching captured material appearance.
- One could test whether the same gains appear when the replacement MLPs are constrained to even fewer layers or neurons.
Load-bearing premise
Replacing a few key nodes with small MLPs will measurably improve fit to physical data without losing the original model's compactness or speed, and the hypercube search will locate those nodes efficiently.
What would settle it
The claim would be undermined if the enhanced models showed no reduction in fitting error on held-out measured BRDF or BTF datasets relative to the unmodified analytical baselines, or if their per-sample evaluation time increased by more than a small constant factor.
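Stated as a concrete decision rule (the function names and the slowdown threshold are ours, purely illustrative):

```python
import numpy as np

def rmse(pred, ref):
    """Root-mean-square error between predicted and reference reflectance."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def claim_holds(err_base, err_enh, t_base, t_enh, max_slowdown=2.0):
    """The core claim survives only if the enhanced model is more accurate on
    held-out data AND its per-sample cost grows by at most a small factor."""
    return err_enh < err_base and t_enh / t_base <= max_slowdown

# Illustrative numbers, not measurements from the paper:
print(claim_holds(err_base=0.10, err_enh=0.05, t_base=1.0, t_enh=1.5))  # survives
print(claim_holds(err_base=0.10, err_enh=0.12, t_base=1.0, t_enh=1.5))  # fails
```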
Original abstract
Traditional analytical reflectance models, while compact and interpretable, lack the capacity to accurately represent physical measurements. Recent neural models, which closely fit input data, are less generalizable and often more expensive to store and evaluate. To combine the strengths and overcome the limitations of these two classes of models, we present neural enhancement, a novel framework to boost an input analytical appearance model, by identifying and replacing its key computational nodes/operators with small-scale multi-layer perceptrons. This allows us to leverage the computational graph structure of the original model, while improving its expressiveness at a modest cost. To make the enhancement computationally tractable, we propose a hypercube-based search to automatically and efficiently identify the node(s) and/or operator(s) to be replaced towards maximal gain in a differentiable fashion. We enhance a number of common analytical BRDF models. The results are, at once accurate, compact and efficient, and compare favorably with state-of-the-art work on fitting measured reflectance and bidirectional texture functions. Finally, our models are fully compatible with any standard rasterization or ray-tracing pipeline.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces 'neural enhancement,' a framework that augments analytical appearance models (primarily BRDFs) by automatically identifying and replacing selected computational nodes or operators in their graphs with small-scale multi-layer perceptrons. A hypercube-based search is used to determine the replacements in a differentiable manner for maximal gain. The enhanced models are applied to several standard analytical BRDFs and evaluated on fitting measured reflectance data and bidirectional texture functions, with claims of improved accuracy at modest additional cost while preserving compactness, efficiency, and compatibility with standard rasterization and ray-tracing pipelines.
Significance. If the central claims are substantiated by the experiments, the work would be significant for computer graphics by providing a practical hybrid that leverages the structure and efficiency of analytical models while gaining the fitting power of neural components. The hypercube search for node selection is a potentially useful technical device for making such enhancements tractable without exhaustive search. Strengths include the focus on compatibility with existing renderers and the attempt to quantify trade-offs between analytical and neural approaches.
major comments (2)
- [Abstract] Abstract: The repeated claim that the enhanced models remain 'compact and efficient' and incur only 'modest cost' while comparing favorably to SOTA is load-bearing for the contribution, yet the abstract provides no quantitative support such as parameter counts, storage sizes, evaluation timings, or error metrics (e.g., RMSE on measured data) relative to the unmodified analytical baselines or pure neural models. Without these, it is impossible to verify whether MLP replacements preserve the stated advantages or whether overhead accumulates as the skeptic notes.
- [Method] The hypercube search is presented as making enhancement 'computationally tractable,' but no analysis of its scaling (combinatorial cost, number of evaluations needed) or empirical runtime is referenced. If this upfront cost is high for graphs with many nodes, it undermines the practicality of the framework for complex appearance models.
minor comments (2)
- [Abstract] The abstract states that 'a number of common analytical BRDF models' were enhanced but does not enumerate them; listing the specific models (e.g., in a table) would clarify the scope and generality of the results.
- Notation for the hypercube search and node identification could be made more precise (e.g., formal definition of the search space and objective) to aid reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive review and for recognizing the potential of neural enhancement as a hybrid approach. We address each major comment below and will revise the manuscript to strengthen the presentation of quantitative evidence and analysis.
Point-by-point responses
-
Referee: [Abstract] Abstract: The repeated claim that the enhanced models remain 'compact and efficient' and incur only 'modest cost' while comparing favorably to SOTA is load-bearing for the contribution, yet the abstract provides no quantitative support such as parameter counts, storage sizes, evaluation timings, or error metrics (e.g., RMSE on measured data) relative to the unmodified analytical baselines or pure neural models. Without these, it is impossible to verify whether MLP replacements preserve the stated advantages or whether overhead accumulates as the skeptic notes.
Authors: We agree that the abstract lacks explicit quantitative metrics, which limits immediate verification of the compactness and efficiency claims. The manuscript body contains detailed comparisons (including parameter counts, storage sizes, evaluation timings, and RMSE values against analytical baselines and neural alternatives) in the results and evaluation sections. To address this, we will revise the abstract to include concise quantitative highlights, such as typical error reductions and overhead percentages, while respecting length constraints. revision: yes
-
Referee: [Method] The hypercube search is presented as making enhancement 'computationally tractable,' but no analysis of its scaling (combinatorial cost, number of evaluations needed) or empirical runtime is referenced. If this upfront cost is high for graphs with many nodes, it undermines the practicality of the framework for complex appearance models.
Authors: The hypercube search reduces the combinatorial space compared to exhaustive enumeration by structuring the search over node subsets in a differentiable manner. The manuscript shows its successful use on standard BRDF graphs, but we acknowledge the absence of an explicit scaling analysis or reported search runtimes. We will add a dedicated paragraph in the method section discussing the combinatorial complexity (linear in the hypercube dimension rather than exponential in node count) and include empirical timings for the search process on the evaluated models to demonstrate practicality. revision: yes
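One plausible reading of a differentiable hypercube search is a continuous gate per node: the binary selection vector lives on the vertices of {0,1}^n, and relaxing it to [0,1]^n lets the gates be optimized jointly with the MLPs and rounded afterward, with cost linear in n. The sketch below is our interpretation of that idea, not the authors' algorithm; the toy graph and candidate MLPs are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_forward(x, node_fns, mlp_fns, logits):
    """Blend each analytical node with its candidate MLP via a gate in [0, 1].

    Rounding the trained gates recovers a vertex of the hypercube, i.e. a
    discrete choice of which nodes to replace.
    """
    gates = sigmoid(logits)  # relaxed selection vector in [0, 1]^n
    out = x
    for g, node, mlp in zip(gates, node_fns, mlp_fns):
        out = (1.0 - g) * node(out) + g * mlp(out)
    return out

# Two-node toy graph: square, then scale (node/MLP choices are assumptions).
nodes = [lambda v: v ** 2, lambda v: 0.5 * v]
mlps = [lambda v: np.tanh(v), lambda v: 0.4 * v + 0.01]

# Saturated logits ~ a hypercube vertex: keep node 0, replace node 1.
y = gated_forward(np.array([0.5, 1.0]), nodes, mlps,
                  logits=np.array([-10.0, 10.0]))
```

Because the gates are plain sigmoids, gradients flow to both the selection logits and the MLP weights in one backward pass, which is what would make the search "differentiable" rather than combinatorial.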
Circularity Check
No significant circularity; the framework is validated against external measured data rather than its own outputs
Full rationale
The derivation introduces a hypercube search over an analytical model's computational graph to select nodes for replacement by small MLPs, followed by fitting to measured reflectance data. No equation or claim reduces by construction to its own inputs (e.g., no parameter fitted on a subset then relabeled a prediction of a related quantity). No load-bearing self-citations, uniqueness theorems, or ansatzes imported from prior author work are invoked. Results are validated by direct comparison to measured BTF/BRDF data and SOTA baselines, confirming the chain remains independent of the target outputs.
Axiom & Free-Parameter Ledger
free parameters (1)
- MLP weights and biases
axioms (2)
- domain assumption Analytical models have identifiable computational nodes that can be replaced while preserving overall structure.
- standard math Differentiable search is possible via the hypercube method.
Reference graph
Works this paper leans on
-
[1]
Digital Modeling of Material Appearance
J. Dorsey, H. Rushmeier, and F. Sillion, Digital Modeling of Material Appearance. Elsevier, 2010
2010
-
[2]
Directional reflectance and emissivity of an opaque surface,
F. E. Nicodemus, “Directional reflectance and emissivity of an opaque surface,” Applied optics, vol. 4, no. 7, pp. 767–775, 1965
1965
-
[3]
A data-driven reflectance model,
W. Matusik, H. Pfister, M. Brand, and L. McMillan, “A data-driven reflectance model,” ACM Trans. Graph., vol. 22, p. 759–769, jul 2003
2003
-
[4]
Illumination for computer generated pictures,
B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM, vol. 18, p. 311–317, jun 1975
1975
-
[5]
Measuring and modeling anisotropic reflection,
G. J. Ward, “Measuring and modeling anisotropic reflection,” in Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pp. 265–272, 1992
1992
-
[6]
Non-linear approximation of reflectance functions,
E. P. Lafortune, S.-C. Foo, K. E. Torrance, and D. P. Greenberg, “Non-linear approximation of reflectance functions,” in Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 117–126, 1997
1997
-
[7]
A reflectance model for computer graphics,
R. L. Cook and K. E. Torrance, “A reflectance model for computer graphics,” ACM Transactions on Graphics (ToG), vol. 1, no. 1, pp. 7–24, 1982
1982
-
[8]
Generalization of lambert’s reflectance model,
M. Oren and S. K. Nayar, “Generalization of lambert’s reflectance model,” in Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pp. 239–246, 1994
1994
-
[9]
An anisotropic phong brdf model,
M. Ashikhmin and P. Shirley, “An anisotropic phong brdf model,” Journal of graphics tools, vol. 5, no. 2, pp. 25–32, 2000
2000
-
[10]
Experimental analysis of brdf models.,
A. Ngan, F. Durand, and W. Matusik, “Experimental analysis of brdf models.,” Rendering Techniques, vol. 2005, no. 16th, p. 2, 2005
2005
-
[11]
Neural brdf representation and importance sampling,
A. Sztrajman, G. Rainer, T. Ritschel, and T. Weyrich, “Neural brdf representation and importance sampling,” in Computer Graphics Forum, vol. 40, pp. 332–346, Wiley Online Library, 2021
2021
-
[12]
Neural layered brdfs,
J. Fan, B. Wang, M. Hasan, J. Yang, and L.-Q. Yan, “Neural layered brdfs,” in ACM SIGGRAPH 2022 Conference Proceedings, SIGGRAPH ’22, (New York, NY, USA), Association for Computing Machinery, 2022
2022
-
[13]
Microfacet models for refraction through rough surfaces,
B. Walter, S. R. Marschner, H. Li, and K. E. Torrance, “Microfacet models for refraction through rough surfaces,” in Proceedings of the 18th Eurographics conference on Rendering Techniques, pp. 195–206, 2007
2007
-
[14]
An adaptive parameterization for efficient material acquisition and rendering,
J. Dupuy and W. Jakob, “An adaptive parameterization for efficient material acquisition and rendering,” ACM Transactions on graphics (TOG), vol. 37, no. 6, pp. 1–14, 2018
2018
-
[15]
genbrdf: Discovering new analytic brdfs with genetic programming,
A. Brady, J. Lawrence, P. Peers, and W. Weimer, “genbrdf: Discovering new analytic brdfs with genetic programming,” ACM Transactions on Graphics (TOG), vol. 33, no. 4, pp. 1–11, 2014
2014
-
[16]
Neural biplane representation for btf rendering and acquisition,
J. Fan, B. Wang, M. Hasan, J. Yang, and L.-Q. Yan, “Neural biplane representation for btf rendering and acquisition,” in ACM SIGGRAPH 2023 Conference Proceedings, SIGGRAPH ’23, (New York, NY, USA), Association for Computing Machinery, 2023
2023
-
[17]
Deep inverse rendering for high-resolution svbrdf estimation from an arbitrary number of images.,
D. Gao, X. Li, Y. Dong, P. Peers, K. Xu, and X. Tong, “Deep inverse rendering for high-resolution svbrdf estimation from an arbitrary number of images.,” ACM Trans. Graph., vol. 38, no. 4, pp. 134–1, 2019
2019
-
[18]
Match: differentiable material graphs for procedural material capture,
L. Shi, B. Li, M. Hašan, K. Sunkavalli, T. Boubekeur, R. Mech, and W. Matusik, “Match: differentiable material graphs for procedural material capture,” ACM Trans. Graph., vol. 39, Nov. 2020
2020
-
[19]
Matformer: a generative model for procedural materials,
P. Guerrero, M. Hašan, K. Sunkavalli, R. Měch, T. Boubekeur, and N. J. Mitra, “Matformer: a generative model for procedural materials,” ACM Trans. Graph., vol. 41, July 2022
2022
-
[20]
Advances in geometry and reflectance acquisition,
M. Weinmann and R. Klein, “Advances in geometry and reflectance acquisition,” in SIGGRAPH Asia Courses, pp. 1:1–1:71, 2015
2015
-
[21]
Principles of appearance acquisition and representation,
T. Weyrich, J. Lawrence, H. P. A. Lensch, S. Rusinkiewicz, and T. Zickler, “Principles of appearance acquisition and representation,” Found. Trends. Comput. Graph. Vis., vol. 4, no. 2, pp. 75–
2015
-
[22]
Brdf representation and acquisition,
D. Guarnera, G. C. Guarnera, A. Ghosh, C. Denk, and M. Glencross, “Brdf representation and acquisition,” in Computer Graphics Forum, vol. 35, pp. 625–650, Wiley Online Library, 2016
2016
-
[23]
Deep appearance modeling: A survey,
Y. Dong, “Deep appearance modeling: A survey,” Visual Informatics, vol. 3, no. 2, pp. 59–68, 2019
2019
-
[24]
Deep svbrdf acquisition and modelling: A survey,
B. Kavoosighafi, S. Hajisharif, E. Miandji, G. Baravdish, W. Cao, and J. Unger, “Deep svbrdf acquisition and modelling: A survey,” in Computer Graphics Forum, vol. 43, p. e15199, Wiley Online Library, 2024
2024
-
[25]
The Scattering of Electromagnetic Waves from Rough Surfaces
P. Beckmann and A. Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces. Oxford: Pergamon Press, 1963. [distributed in the Western Hemisphere by Macmillan, New York]
1963
-
[26]
Theory for off-specular reflection from roughened surfaces,
K. E. Torrance and E. M. Sparrow, “Theory for off-specular reflection from roughened surfaces,” Journal of the Optical Society of America, vol. 57, no. 9, pp. 1105–1114, 1967
1967
-
[27]
Models of light reflection for computer synthesized pictures,
J. F. Blinn, “Models of light reflection for computer synthesized pictures,” in Proceedings of the 4th annual conference on Computer graphics and interactive techniques, pp. 192–198, 1977
1977
-
[28]
An inexpensive brdf model for physically-based rendering,
C. Schlick, “An inexpensive brdf model for physically-based rendering,” in Computer graphics forum, vol. 13, pp. 233–246, Wiley Online Library, 1994
1994
-
[29]
Geometrical shadowing of a random rough surface,
B. Smith, “Geometrical shadowing of a random rough surface,” IEEE transactions on antennas and propagation, vol. 15, no. 5, pp. 668–671, 1967
1967
-
[30]
A microfacet-based brdf generator,
M. Ashikhmin, “A microfacet-based brdf generator,” Proc. ACM SIGGRAPH 2000, New Orleans, USA, July, 2000
2000
-
[31]
Average irregularity representation of a rough surface for ray reflection,
T. Trowbridge and K. P. Reitz, “Average irregularity representation of a rough surface for ray reflection,” Journal of the Optical Society of America, vol. 65, no. 5, pp. 531–536, 1975
1975
-
[32]
A sparse parametric mixture model for btf compression, editing and rendering,
H. Wu, J. Dorsey, and H. Rushmeier, “A sparse parametric mixture model for btf compression, editing and rendering,” in Computer Graphics Forum, vol. 30, pp. 465–473, Wiley Online Library, 2011
2011
-
[33]
An anisotropic brdf model for fitting and monte carlo rendering,
M. Kurt, L. Szirmay-Kalos, and J. Křivánek, “An anisotropic brdf model for fitting and monte carlo rendering,” ACM SIGGRAPH Computer Graphics, vol. 44, no. 1, pp. 1–15, 2010
2010
-
[34]
Accurate fitting of measured reflectances using a shifted gamma micro-facet distribution,
M. M. Bagher, C. Soler, and N. Holzschuch, “Accurate fitting of measured reflectances using a shifted gamma micro-facet distribution,” in Computer Graphics Forum, vol. 31, pp. 1509–1518, Wiley Online Library, 2012
2012
-
[35]
Brdf models for accurate and efficient rendering of glossy surfaces,
J. Löw, J. Kronander, A. Ynnerman, and J. Unger, “Brdf models for accurate and efficient rendering of glossy surfaces,” ACM Transactions on Graphics (TOG), vol. 31, no. 1, pp. 1–14, 2012
2012
-
[36]
Neural btf compression and interpolation,
G. Rainer, W. Jakob, A. Ghosh, and T. Weyrich, “Neural btf compression and interpolation,” in Computer Graphics Forum, vol. 38, pp. 235–244, Wiley Online Library, 2019
2019
-
[37]
Efficient structuring of the latent space for controllable data reconstruction and compression,
E. Trunz, M. Weinmann, S. Merzbach, and R. Klein, “Efficient structuring of the latent space for controllable data reconstruction and compression,” Graphics and Visual Computing, vol. 7, p. 200059, 2022
2022
-
[38]
Hypernetworks for generalizable brdf representation,
F. Gokbudak, A. Sztrajman, C. Zhou, F. Zhong, R. Mantiuk, and C. Oztireli, “Hypernetworks for generalizable brdf representation,” in European Conference on Computer Vision, pp. 73–89, Springer, 2024
2024
-
[39]
Real-time neural appearance models,
T. Zeltner*, F. Rousselle*, A. Weidlich*, P. Clarberg*, J. Novák*, B. Bitterli*, A. Evans, T. Davidovič, S. Kallweit, and A. Lefohn, “Real-time neural appearance models,” ACM Transactions on Graphics, vol. 43, no. 3, pp. 1–17, 2024
2024
-
[40]
Deepbrdf: A deep representation for manipulating measured brdf,
B. Hu, J. Guo, Y. Chen, M. Li, and Y. Guo, “Deepbrdf: A deep representation for manipulating measured brdf,” in Computer Graphics Forum, vol. 39, pp. 157–166, Wiley Online Library, 2020
2020
-
[41]
Unified neural encoding of btfs,
G. Rainer, A. Ghosh, W. Jakob, and T. Weyrich, “Unified neural encoding of btfs,” in Computer Graphics Forum, vol. 39, pp. 167– 178, Wiley Online Library, 2020
2020
-
[42]
Neumip: Multi-resolution neural materials,
A. Kuznetsov, “Neumip: Multi-resolution neural materials,” ACM Transactions on Graphics (TOG), vol. 40, no. 4, 2021
2021
-
[43]
A hierarchical architecture for neural materials,
B. Xue, S. Zhao, H. W. Jensen, and Z. Montazeri, “A hierarchical architecture for neural materials,” Computer Graphics Forum, vol. 43, no. 6, p. e15116, 2024
2024
-
[44]
Metappearance: Meta-learning for visual appearance reproduction,
M. Fischer and T. Ritschel, “Metappearance: Meta-learning for visual appearance reproduction,” ACM Transactions on Graphics (TOG), vol. 41, no. 6, pp. 1–13, 2022
2022
-
[45]
A compact representation of measured brdfs using neural processes,
C. Zheng, R. Zheng, R. Wang, S. Zhao, and H. Bao, “A compact representation of measured brdfs using neural processes,” ACM Trans. Graph., vol. 41, nov 2021
2021
-
[46]
Learning generative models for rendering specular microgeometry,
A. Kuznetsov, M. Hašan, Z. Xu, L.-Q. Yan, B. Walter, N. K. Kalantari, S. Marschner, and R. Ramamoorthi, “Learning generative models for rendering specular microgeometry,” ACM Trans. Graph., vol. 38, Nov. 2019
2019
-
[47]
Learning-based inverse bi-scale material fitting from tabular brdfs,
W. Shi, J. Dorsey, and H. Rushmeier, “Learning-based inverse bi-scale material fitting from tabular brdfs,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 4, pp. 1810–1823, 2022
2022
-
[48]
Distilling free-form natural laws from experimental data,
M. Schmidt and H. Lipson, “Distilling free-form natural laws from experimental data,” science, vol. 324, no. 5923, pp. 81–85, 2009
2009
-
[49]
Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients,
B. K. Petersen, M. Landajuela, T. N. Mundhenk, C. P. Santiago, S. K. Kim, and J. T. Kim, “Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients,” arXiv preprint arXiv:1912.04871, 2019
2019
-
[50]
Ai feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity,
S.-M. Udrescu, A. Tan, J. Feng, O. Neto, T. Wu, and M. Tegmark, “Ai feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity,” Advances in Neural Information Processing Systems, vol. 33, pp. 4860–4871, 2020
2020
-
[51]
Neural symbolic regression that scales,
L. Biggio, T. Bendinelli, A. Neitz, A. Lucchi, and G. Parascandolo, “Neural symbolic regression that scales,” in International Conference on Machine Learning, pp. 936–945, PMLR, 2021
2021
-
[52]
Discrete mathematics with applications
T. Koshy, Discrete mathematics with applications. Elsevier, 2004
2004
-
[53]
Opensvbrdf: A database of measured spatially-varying reflectance,
X. Ma, X. Xu, L. Zhang, K. Zhou, and H. Wu, “Opensvbrdf: A database of measured spatially-varying reflectance,” ACM Transactions on Graphics (TOG), vol. 42, no. 6, pp. 1–14, 2023
2023
-
[54]
Photorealistic Surface Rendering with Microfacet Theory
J. Dupuy, Photorealistic Surface Rendering with Microfacet Theory. Theses, Université Claude Bernard - Lyon I; Université de Montréal, Nov. 2015
2015
-
[55]
Neusample: Importance sampling for neural materials,
B. Xu, L. Wu, M. Hasan, F. Luan, I. Georgiev, Z. Xu, and R. Ramamoorthi, “Neusample: Importance sampling for neural materials,” in ACM SIGGRAPH 2023 Conference Proceedings, SIGGRAPH ’23, (New York, NY, USA), Association for Computing Machinery, 2023
2023
-
[56]
Standard shader ball: A modern and feature-rich render test scene,
A. Mazzone and C. Rydalch, “Standard shader ball: A modern and feature-rich render test scene,” in SIGGRAPH Asia 2023 Technical Communications, SA ’23, (New York, NY, USA), Association for Computing Machinery, 2023
2023
-
[57]
Perceived Quality of BRDF Models,
B. Kavoosighafi, R. K. Mantiuk, S. Hajisharif, E. Miandji, and J. Unger, “Perceived Quality of BRDF Models,” Computer Graphics Forum, 2025
2025
-
[58]
A new change of variables for efficient brdf representation,
S. M. Rusinkiewicz, “A new change of variables for efficient brdf representation,” in Rendering Techniques’98: Proceedings of the Eurographics Workshop in Vienna, Austria, June 29—July 1, 1998 9, pp. 11–22, Springer, 1998
1998
-
[59]
Efficient reflectance capture with a deep gated mixture-of-experts,
X. Ma, Y. Yu, H. Wu, and K. Zhou, “Efficient reflectance capture with a deep gated mixture-of-experts,” IEEE TVCG, pp. 1–12, 2023
2023
-
[60]
Material classification based on training data synthesized using a btf database,
M. Weinmann, J. Gall, and R. Klein, “Material classification based on training data synthesized using a btf database,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III 13, pp. 156–171, Springer, 2014
2014
-
[61]
Mitsuba 3 renderer
W. Jakob, S. Speierer, N. Roussel, M. Nimier-David, D. Vicini, T. Zeltner, B. Nicolet, M. Crespo, V. Leroy, and Z. Zhang, “Mitsuba 3 renderer.” https://mitsuba-renderer.org, 2022
2022
-
[62]
Template-based sampling of anisotropic brdfs,
J. Filip and R. Vávra, “Template-based sampling of anisotropic brdfs,” in Computer Graphics Forum, vol. 33, pp. 91–99, Wiley Online Library, 2014
2014
-
[63]
On the spectral bias of neural networks,
N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, and A. Courville, “On the spectral bias of neural networks,” in International conference on machine learning, pp. 5301–5310, PMLR, 2019
2019
-
[64]
Nerf: Representing scenes as neural radiance fields for view synthesis,
B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “Nerf: Representing scenes as neural radiance fields for view synthesis,” Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021
2021
-
[65]
A microfacet-based brdf generator,
M. Ashikhmin, S. Premože, and P. Shirley, “A microfacet-based brdf generator,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 65–74, 2000
2000
-
[66]
Materialgan: Reflectance capture using a generative svbrdf model,
Y. Guo, C. Smith, M. Hašan, K. Sunkavalli, and S. Zhao, “Materialgan: Reflectance capture using a generative svbrdf model,” arXiv preprint arXiv:2010.00114, 2020
2020
-
[67]
KAN: Kolmogorov-Arnold Networks
Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljačić, T. Y. Hou, and M. Tegmark, “Kan: Kolmogorov-arnold networks,” arXiv preprint arXiv:2404.19756, 2024
2024