pith. machine review for the scientific record.

arxiv: 2605.06214 · v1 · submitted 2026-05-07 · 💻 cs.CV

Differentiable Adaptive 4D Structured Illumination for Joint Capture of Shape and Reflectance

Pith reviewed 2026-05-08 13:40 UTC · model grok-4.3

classification 💻 cs.CV
keywords adaptive illumination · structured light · shape reconstruction · reflectance estimation · differentiable optimization · depth uncertainty · 4D light patterns · joint capture

The pith

A differentiable framework adaptively selects 4D illumination patterns to jointly capture object shape and reflectance with one camera.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a system that automatically picks the next set of 4D structured-light patterns based on current uncertainty at each pixel, so that shape and reflectance can be recovered together in fewer steps than fixed-pattern methods. A histogram model tracks probability distributions over possible depth and reflectance values per pixel, and the choice of illumination is made differentiable so that a loss can directly minimize expected depth uncertainty. Captured images update the histograms, and a final optimization aligns all real measurements against a forward simulation of the same patterns to produce the output maps. If the approach works as described, it would simplify high-quality 3D scanning that also yields surface reflectance parameters without requiring multiple camera positions or separate hardware.
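To make the loop concrete, here is a minimal sketch of the selection step. It is not the authors' implementation: the toy forward model, the bin counts, the Gaussian noise level, and every name in it are our own illustrative assumptions standing in for the paper's 4D spatial-angular light transport.

```python
# Hedged sketch (not the paper's code): choose the next illumination pattern
# by gradient descent on the expected per-pixel depth entropy.
import torch

torch.manual_seed(0)
n_pixels, n_bins, n_leds = 8, 32, 16
depth_bins = torch.linspace(0.5, 1.5, n_bins)          # candidate depths (m)

# Per-pixel log-histogram over depth hypotheses (here just a noisy prior).
log_hist = 0.5 * torch.randn(n_pixels, n_bins)

freqs = torch.arange(1.0, n_leds + 1)                  # toy per-LED response

def render(pattern, depths):
    """Toy stand-in for the light-transport simulation: predicted pixel
    intensity under `pattern` for each candidate depth."""
    return (pattern[:, None] * torch.sin(freqs[:, None] * depths[None, :])).sum(0)

def expected_entropy(log_hist, pattern, sigma=0.05):
    """Selection loss: expected posterior entropy over depth after observing
    the image produced by `pattern`, marginalized over the current histogram
    (each hypothesis is scored at its noiseless observation, a common
    approximation of expected information gain)."""
    p = log_hist.softmax(-1)                           # (n_pixels, n_bins)
    preds = render(pattern, depth_bins)                # (n_bins,)
    # Log-likelihood of the observation for true bin k under hypothesis j.
    loglik = -(preds[:, None] - preds[None, :]) ** 2 / (2 * sigma ** 2)
    log_post = (log_hist[:, None, :] + loglik[None, :, :]).log_softmax(-1)
    entropy = -(log_post.exp() * log_post).sum(-1)     # (n_pixels, n_bins)
    return (p * entropy).sum(-1).mean()

# Differentiable selection: optimize the LED weights of the next pattern.
raw = torch.randn(n_leds, requires_grad=True)
opt = torch.optim.Adam([raw], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = expected_entropy(log_hist, raw.sigmoid())   # intensities in [0, 1]
    loss.backward()
    opt.step()
print(f"expected depth entropy after selection: {loss.item():.3f}")
```

In the full system, the chosen pattern would then be cast physically, the real image measured, and each pixel's histogram reweighted by the measurement likelihood before the next selection round.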

Core claim

We present a differentiable framework to adaptively compute 4D illumination conditions with respect to an object, for efficient, high-quality simultaneous acquisition of its shape and reflectance, with a unified spatial-angular structured light and a single camera. Using a simple histogram-based pixel-level probability model for depth and reflectance, we differentiably link the next illumination condition(s) with a loss that encourages the reduction in depth uncertainty. As new structured illumination is cast, corresponding image measurements are used to update the uncertainty at each pixel. Finally, a fine-tuning-based approach reconstructs the depth map and reflectance parameter maps, by minimizing the differences between all physical measurements and their simulated counterparts.

What carries the argument

Histogram-based pixel-level probability model that differentiably links illumination selection to reduction of depth uncertainty.
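A minimal sketch of how such a histogram could absorb a new measurement, assuming a Gaussian image-noise model (the notation is ours, not the paper's): with forward prediction $f(w, d_k)$ for pattern $w$ and depth hypothesis $d_k$, an observed intensity $y$ reweights each pixel's bins as

$$p_k \;\leftarrow\; \frac{p_k \, \exp\!\big(-(y - f(w, d_k))^2 / 2\sigma^2\big)}{\sum_j p_j \, \exp\!\big(-(y - f(w, d_j))^2 / 2\sigma^2\big)},$$

which keeps the update differentiable in $w$, the property the selection loss depends on.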

If this is right

  • Fewer total illuminations suffice for high-quality depth maps compared with non-adaptive sequences.
  • Shape and reflectance are recovered in one unified capture process without separate rigs.
  • Depth results on varied physical objects match or exceed current state-of-the-art techniques.
  • Reflectance parameter maps remain consistent with real photographs under the same lighting.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The uncertainty-driven selection could be applied to other single-camera sensing tasks such as polarization or fluorescence imaging if similar forward models exist.
  • If the per-pixel histogram updates can be performed in real time, the method might support dynamic scenes with moving objects.
  • Performance on scenes dominated by interreflections or subsurface scattering would test the limits of the current single-bounce simulation assumption.

Load-bearing premise

The histogram model at each pixel accurately represents the true uncertainties in depth and reflectance so that minimizing the derived loss produces useful adaptive illumination choices.
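In symbols, the premise is that a selection loss of roughly this form (our hedged reconstruction, not a quoted equation) is a faithful proxy for informativeness:

$$\mathcal{L}(w) \;=\; \sum_{\text{pixels}} \mathbb{E}_{d \sim p}\Big[ H\big(p(\cdot \mid y(w, d))\big) \Big],$$

where $y(w, d)$ is the image a pixel at depth $d$ would produce under pattern $w$. Gradient steps on $w$ reduce true uncertainty only if the histogram $p$, over which the expectation is taken, tracks the actual posterior.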

What would settle it

Showing that a fixed, non-adaptive collection of the same number of 4D illumination patterns yields equal or better depth accuracy and reflectance fidelity on the identical set of physical test objects would falsify the benefit of the adaptive differentiable selection.

Figures

Figures reproduced from arXiv:2605.06214 by Hongzhi Wu, Huakeng Ding, Kun Zhou, Yaowen Chen.

Figure 1: Our acquisition setup. It consists of a camera, an LED …
Figure 2: Our pipeline consists of two stages. First, for a physical object, we compute the next light/mask pattern(s) by minimizing the cross …
Figure 3: Graphical illustration of our probability model for depth.
Figure 4: Visualization of various parts in adaptive acquisition.
Figure 5: Reflectance results represented as GGX BRDF parame…
Figure 6: Comparisons with state-of-the-art techniques on shape and reflectance capture. From the left column to right: depth reconstruction …
Figure 7: Comparison with a single-source structured light […]
Figure 12: Impact of n_bin over the depth quality (panels: 512×256, 254×128, 127×64).
Figure 9: Impact of n_sample over the depth quality (panels: n_batch = 2, 3, 6).
Figure 10: Impact of the number of simultaneously optimized next …
read the original abstract

We present a differentiable framework to adaptively compute 4D illumination conditions with respect to an object, for efficient, high-quality simultaneous acquisition of its shape and reflectance, with a unified spatial-angular structured light and a single camera. Using a simple histogram-based pixel-level probability model for depth and reflectance, we differentiably link the next illumination condition(s) with a loss that encourages the reduction in depth uncertainty. As new structured illumination is cast, corresponding image measurements are used to update the uncertainty at each pixel. Finally, a fine-tuning-based approach reconstructs the depth map and reflectance parameter maps, by minimizing the differences between all physical measurements and their simulated counterparts. The effectiveness of our framework is demonstrated on physical objects with wide variations in shape and appearance. Our depth results compare favorably with state-of-the-art techniques, while our reflectance results are comparable when validated against photographs.
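The "fine-tuning-based approach" in the final step reads as a standard analysis-by-synthesis objective. In our own notation (an assumption, not the paper's quoted equation), with images $y_t$ captured under the chosen patterns $w_t$ and a differentiable forward simulator $\hat{y}$:

$$\min_{d,\,\theta} \;\sum_{t} \big\| y_t - \hat{y}(d, \theta;\, w_t) \big\|^2,$$

where $d$ is the per-pixel depth map and $\theta$ the reflectance parameter maps (Figure 5 suggests a GGX BRDF parameterization).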

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a differentiable framework for adaptively computing 4D structured illumination patterns (spatial-angular) to enable efficient joint capture of an object's shape (depth) and reflectance using a single camera. It introduces a simple histogram-based per-pixel probability model for depth and reflectance uncertainty, uses differentiation to select the next illumination(s) via a loss that reduces depth uncertainty, updates the model with new measurements, and performs final reconstruction of depth and reflectance maps by minimizing simulation-to-measurement differences in a fine-tuning optimization. Effectiveness is shown via physical demonstrations on objects with varying shape and appearance, with depth results claimed to compare favorably to SOTA and reflectance validated against photographs.

Significance. If the central claims hold under quantitative scrutiny, the work could advance efficient, unified acquisition pipelines for geometry and appearance in computer vision and graphics, reducing the need for separate shape and reflectance scans. The differentiable link between uncertainty model and illumination selection is a technical strength that enables adaptive, measurement-driven optimization, and the physical validation on real objects adds practical relevance. However, the decoupling of adaptation (depth-only) from reflectance reconstruction limits the 'joint' and 'simultaneous' aspects of the contribution.

major comments (2)
  1. [Abstract / Method (adaptive selection paragraph)] Abstract and method overview: the adaptive illumination selection is driven solely by back-propagation of a depth-uncertainty loss ('differentiably link the next illumination condition(s) with a loss that encourages the reduction in depth uncertainty'), while reflectance parameter maps are recovered only afterward via non-adaptive fine-tuning optimization that matches all accumulated measurements to simulations. This separation means illumination patterns are never informed by reflectance uncertainty gradients, weakening the central claim of a unified adaptive framework for joint shape-and-reflectance capture.
  2. [Abstract / §3 (probability model)] Abstract and §3 (histogram model description): the pixel-independent histogram probability model for joint depth/reflectance uncertainty is used to guide adaptation, yet the text notes it is 'simple' and does not explicitly incorporate view-dependent BRDF effects or inter-reflections. Without such handling, the uncertainty estimates (and thus the selected patterns) may be inaccurate for specular or complex reflectance, directly affecting the reliability of the joint-capture pipeline.
minor comments (2)
  1. [Abstract / Results] The abstract claims 'favorable' depth comparisons and 'comparable' reflectance results but provides no quantitative metrics, error analysis, or specific baselines; the full results section should include tables with RMSE, PSNR, or similar values across multiple objects and illumination counts to support these statements.
  2. [Method] Notation for the 4D illumination space and the exact form of the differentiable loss could be clarified with an equation or pseudocode to make the adaptation step reproducible.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below with point-by-point responses, including planned revisions where appropriate.

read point-by-point responses
  1. Referee: Abstract and method overview: the adaptive illumination selection is driven solely by back-propagation of a depth-uncertainty loss ('differentiably link the next illumination condition(s) with a loss that encourages the reduction in depth uncertainty'), while reflectance parameter maps are recovered only afterward via non-adaptive fine-tuning optimization that matches all accumulated measurements to simulations. This separation means illumination patterns are never informed by reflectance uncertainty gradients, weakening the central claim of a unified adaptive framework for joint shape-and-reflectance capture.

    Authors: We appreciate the referee's observation on the design of our adaptive selection process. The framework uses a single set of 4D illumination patterns, selected adaptively to reduce depth uncertainty, for the joint capture and subsequent reconstruction of both shape and reflectance. Depth accuracy is prioritized in adaptation because it directly improves the fidelity of the simulation-to-measurement optimization used for reflectance parameter recovery. While reflectance uncertainty gradients are not back-propagated during selection, the resulting measurements enhance the joint reconstruction quality. We will revise the abstract and §3 to clarify this rationale and the precise scope of the 'joint' and 'unified' aspects of the contribution. revision: partial

  2. Referee: Abstract and §3 (histogram model description): the pixel-independent histogram probability model for joint depth/reflectance uncertainty is used to guide adaptation, yet the text notes it is 'simple' and does not explicitly incorporate view-dependent BRDF effects or inter-reflections. Without such handling, the uncertainty estimates (and thus the selected patterns) may be inaccurate for specular or complex reflectance, directly affecting the reliability of the joint-capture pipeline.

    Authors: The referee accurately identifies that our per-pixel histogram model is intentionally simple and does not model view-dependent BRDF effects or inter-reflections. This choice supports efficient differentiability and practical real-time adaptation on physical hardware. Our experiments include objects with varying appearances, some exhibiting specular highlights, where the method yields usable results. We acknowledge the limitation for highly complex reflectance scenarios. We will expand the discussion in §3 and the conclusion to explicitly state the model's assumptions and outline potential future extensions incorporating advanced reflectance models. revision: yes

Circularity Check

0 steps flagged

No circularity: adaptation driven by real measurements updating independent uncertainty model; reconstruction optimizes against data separately.

full rationale

The derivation proceeds by maintaining a per-pixel histogram probability model whose parameters are updated directly from new physical image measurements after each adaptive illumination choice. The loss for selecting the next pattern is computed from the current uncertainty state (post-update), and the final depth/reflectance maps are obtained by a separate optimization that minimizes simulation-to-measurement residuals over the entire accumulated set. No equation reduces to its own input by construction, no fitted parameter is relabeled as a prediction, and no load-bearing step relies on self-citation or an imported uniqueness theorem. The framework's outputs are therefore checked against external measurements rather than against its own intermediate products.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the accuracy of the histogram-based probability model and the assumption that simulation-to-measurement differences can be minimized to recover true depth and reflectance.

axioms (1)
  • domain assumption: histogram-based pixel-level probability model for depth and reflectance, used to differentiably link illumination conditions to uncertainty reduction.

pith-pipeline@v0.9.0 · 5452 in / 1046 out tokens · 59592 ms · 2026-05-08T13:40:58.931134+00:00 · methodology

Reference graph

Works this paper leans on

48 extracted references

  [1] Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. Practical SVBRDF capture in the frequency domain. ACM Trans. Graph., 32(4), 2013.
  [2] Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, and Hongzhi Wu. GS³: Efficient relighting with triple Gaussian splatting. In SIGGRAPH Asia 2024 Conference Papers, 2024.
  [3] Guojun Chen, Yue Dong, Pieter Peers, Jiawan Zhang, and Xin Tong. Reflectance scanning: Estimating shading frame and BRDF with generalized linear light sources. ACM Trans. Graph., 33(4), 2014.
  [4] Wenzheng Chen, Parsa Mirdehghan, Sanja Fidler, and Kiriakos N. Kutulakos. Auto-tuning structured light by optical stochastic gradient descent. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5969–5979, 2020.
  [5] Seokjun Choi, Seungwoo Yoon, Giljoo Nam, Seungyong Lee, and Seung-Hwan Baek. Differentiable display photometric stereo. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11831–11840, 2024.
  [6] Kristin J. Dana, Bram van Ginneken, Shree K. Nayar, and Jan J. Koenderink. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics (TOG), 18(1):1–34, 1999.
  [7] Yue Dong. Deep appearance modeling: A survey. Visual Informatics, 3(2):59–68, 2019.
  [8] Jonathan Dupuy and Wenzel Jakob. An adaptive parameterization for efficient material acquisition and rendering. ACM Trans. Graph., 37(6), 2018.
  [9] Sean Ryan Fanello, Christoph Rhemann, Vladimir Tankovich, Adarsh Kowdle, Sergio Orts Escolano, David Kim, and Shahram Izadi. HyperDepth: Learning depth from structured light without matching. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5441–5450, 2016.
  [10] Sean Ryan Fanello, Julien Valentin, Christoph Rhemann, Adarsh Kowdle, Vladimir Tankovich, Philip Davidson, and Shahram Izadi. UltraStereo: Efficient learning-based matching for active stereo systems. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6535–6544, 2017.
  [11] Jiří Filip, Radomír Vávra, Michal Haindl, Pavel Žid, Mikuláš Krupička, and Vlastimil Havran. BRDF slices: Accurate adaptive anisotropic appearance acquisition. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 1468–1473, 2013.
  [12] Martin Fuchs, Volker Blanz, Hendrik P. A. Lensch, and Hans-Peter Seidel. Adaptive sampling of reflectance fields. ACM Trans. Graph., 26:10, 2007.
  [13] Andrew Gardner, Chris Tchou, Tim Hawkins, and Paul Debevec. Linear light source reflectometry. ACM Trans. Graph., 22(3):749–758, 2003.
  [14] Abhijeet Ghosh, Tongbo Chen, Pieter Peers, Cyrus A. Wilson, and Paul Debevec. Estimating specular roughness and anisotropy from second order spherical gradient illumination. In Proceedings of the Twentieth Eurographics Conference on Rendering, pages 1161–1170, Goslar, DEU, 2009. Eurographics Association.
  [15] Dar'ya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross. BRDF representation and acquisition. Computer Graphics Forum, 35, 2016.
  [16] Mohit Gupta and Shree K. Nayar. Micro phase shifting. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 813–820, 2012.
  [17] Kaizhang Kang, Zimin Chen, Jiaping Wang, Kun Zhou, and Hongzhi Wu. Efficient reflectance capture using an autoencoder. ACM Trans. Graph., 37(4), 2018.
  [18] Behnaz Kavoosighafi, Saghi Hajisharif, Ehsan Miandji, Gabriel Baravdish, Wen Cao, and Jonas Unger. Deep SVBRDF acquisition and modelling: A survey. Computer Graphics Forum, 43(6):e15199, 2024.
  [19] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4015–4026, 2023.
  [20] T. P. Koninckx and L. Van Gool. Real-time range acquisition by adaptive structured light. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3):432–445, 2006.
  [21] Sanjeev J. Koppal, Shuntaro Yamazaki, and Srinivasa G. Narasimhan. Exploiting DLP illumination dithering for reconstruction and photography of high-speed scenes. International Journal of Computer Vision, 96:125–144, 2011.
  [22] Jason Lawrence, Aner Ben-Artzi, Christopher DeCoro, Wojciech Matusik, Hanspeter Pfister, Ravi Ramamoorthi, and Szymon Rusinkiewicz. Inverse shade trees for non-parametric material representation and editing. ACM Transactions on Graphics (TOG), 25(3):735–745, 2006.
  [23] Hendrik P. A. Lensch, Jochen Lang, Asla Medeiros Sá, and Hans-Peter Seidel. Planned sampling of spatially varying BRDFs. Computer Graphics Forum, 22, 2003.
  [24] Marc Levoy, Kari Pulli, Brian Curless, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean Anderson, James Davis, Jeremy Ginsberg, Jonathan Shade, and Duane Fulk. The digital Michelangelo project: 3D scanning of large statues. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 131–144, USA, …
  [25] Qiang Li, Moyuresh Biswas, Mark R. Pickering, and Michael R. Frater. Dense depth estimation using adaptive structured light and cooperative algorithm. In CVPR 2011 Workshops, pages 21–28, 2011.
  [26] Chen Liu, Michael Fischer, and Tobias Ritschel. Learning to learn and sample BRDFs. Computer Graphics Forum, 42(2):201–211, 2023.
  [27] Xiaohe Ma, Kaizhang Kang, Ruisheng Zhu, Hongzhi Wu, and Kun Zhou. Free-form scanning of non-planar appearance with neural trace photography. ACM Trans. Graph., 40(4), 2021.
  [28] Xiaohe Ma, Xianmin Xu, Leyao Zhang, Kun Zhou, and Hongzhi Wu. OpenSVBRDF: A database of measured spatially-varying reflectance. ACM Trans. Graph., 42(6), 2023.
  [29] Xiaohe Ma, Yaxin Yu, Hongzhi Wu, and Kun Zhou. Efficient reflectance capture with a deep gated mixture-of-experts. IEEE Transactions on Visualization and Computer Graphics, 30(7):4246–4256, 2024.
  [30] Jérôme Martin and James L. Crowley. Experimental comparison of correlation techniques. 2007.
  [31] Xavier Maurice, Pierre Graebling, and Christophe Doignon. Real-time structured light coding for adaptive patterns. J. Real-Time Image Process., 8(2):169–178, 2013.
  [32] Parsa Mirdehghan, Wenzheng Chen, and Kiriakos N. Kutulakos. Optimal structured light à la carte. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6248–6257, 2018.
  [33] Daniel Moreno, Kilho Son, and Gabriel Taubin. Embedded phase shifting: Robust phase shifting with embedded signals. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2301–2309, 2015.
  [34] Rukun Qiao, Hiroshi Kawasaki, and Hongbin Zha. Depth reconstruction with neural signed distance fields in structured light systems. In 2024 International Conference on 3D Vision (3DV), pages 770–779, 2024.
  [35] Guy Rosman, Daniela Rus, and John W. Fisher. Information-driven adaptive structured-light scanners. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 874–883, 2016.
  [36] Joaquim Salvi, Jordi Pagès, and Joan Batlle. Pattern codification strategies in structured light systems. Pattern Recognition, 37(4):827–849, 2004.
  [37] D. Scharstein and R. Szeliski. High-accuracy stereo depth maps using structured light. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings, pages I–I, 2003.
  [39] Shining3D. EinScan Pro 2X Plus handheld industrial scanner. https://www.einscan.com/handheld-3d-scanner/2x-plus/, 2024.
  [40] Varun Sundar, Sizhuo Ma, Aswin C. Sankaranarayanan, and Mohit Gupta. Single-photon structured light. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17844–17854, 2022.
  [41] Borom Tunwattanapong, Graham Fyffe, Paul Graham, Jay Busch, Xueming Yu, Abhijeet Ghosh, and Paul Debevec. Acquiring reflectance and shape from continuous spherical harmonic illumination. ACM Trans. Graph., 32(4), 2013.
  [42] Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E. Torrance. Microfacet models for refraction through rough surfaces. In Proceedings of the 18th Eurographics Conference on Rendering Techniques, pages 195–206, Goslar, DEU, 2007. Eurographics Association.
  [43] Michael Weinmann, Fabian Langguth, Michael Goesele, and Reinhard Klein. Advances in geometry and reflectance acquisition. In Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: Tutorials, Goslar, DEU, 2016. Eurographics Association.
  [44] Xianmin Xu, Yuxin Lin, Haoyang Zhou, Chong Zeng, Yaxin Yu, Kun Zhou, and Hongzhi Wu. A unified spatial-angular structured light for single-view acquisition of shape and reflectance. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 206–215, 2023.
  [45] Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, and Xin Tong. Relighting neural radiance fields with shadow and highlight hints. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, pages 1–11. ACM, 2023.
  [46] Yixin Zeng, Zoubin Bi, Mingrui Yin, Xiang Feng, Kun Zhou, and Hongzhi Wu. Real-time acquisition and reconstruction of dynamic volumes with neural structured illumination. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20186–20195, 2024.
  [47] Yueyi Zhang, Zhiwei Xiong, Pengyu Cong, and Feng Wu. Robust depth sensing with adaptive structured light illumination. Journal of Visual Communication and Image Representation, 25(4):649–658, 2014.
  [48] Zhiqian Zhou, Cheng Zhang, Zhao Dong, Carl Marshall, and Shuang Zhao. Estimating uncertainty in appearance acquisition. In Eurographics Symposium on Rendering. The Eurographics Association, 2024.