Markerless Head Tracking for Accurate and Accessible Neuronavigation
Pith reviewed 2026-05-16 07:37 UTC · model grok-4.3
The pith
Markerless camera tracking agrees with marker-based neuronavigation to a median of 2.32 mm and 2.01°, sufficient accuracy for transcranial magnetic stimulation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Markerless approaches replace traditional marker-based neuronavigation with low-cost cameras and algorithmic facial geometry modeling, yielding a median tracking discrepancy of 2.32 mm and 2.01 degrees in 50 subjects that is sufficient for transcranial magnetic stimulation, with multi-sensor fusion suggested to improve results further.
What carries the argument
Markerless head tracking via stereo and depth-sensing cameras combined with algorithmic facial geometry modeling.
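At its core, the facial-geometry step reduces to rigid pose estimation: 3D facial landmarks observed by the stereo/depth cameras are aligned to a reference face model to recover head rotation and translation. A minimal sketch of that alignment via the Kabsch/Umeyama method, assuming corresponding landmark sets are already available (the paper's actual landmark detector and pipeline are not reproduced here):

```python
import numpy as np

def rigid_pose_from_landmarks(model_pts, observed_pts):
    """Estimate rotation R and translation t aligning a reference facial
    landmark model to landmarks observed by a depth camera (Kabsch).
    Both inputs are (N, 3) arrays of corresponding 3D points; returns
    R, t such that observed ~= model @ R.T + t."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

With exact correspondences the recovery is exact up to floating point; in practice landmark noise and non-rigid facial motion are what degrade this step, which is why the robustness questions below matter.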
If this is right
- Eliminates expensive hardware, physical markers, and manual registration steps.
- Reduces patient discomfort from subject-mounted markers.
- Lowers overall setup cost and complexity for neuronavigation.
- Expands neuronavigation access to more clinical and research sites.
- Multi-sensor data integration can raise overall tracking precision beyond single-camera results.
Where Pith is reading between the lines
- Real-time correction for head motion during stimulation becomes feasible without marker drift.
- The same camera setup could support other head-positioned interventions such as focused ultrasound or electrode placement.
- Deployment on portable devices might allow neuronavigation in outpatient or resource-limited environments.
Load-bearing premise
The facial geometry modeling and multi-sensor fusion maintain reported accuracy when patients move, change expressions, or experience lighting shifts during actual procedures.
What would settle it
Direct comparison of markerless versus marker-based tracking error during a real transcranial magnetic stimulation session that includes patient head movement and variable room lighting.
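To make such a comparison concrete, the per-sample discrepancy between two tracker outputs can be summarized as a translation distance and a rotation angle, with medians taken over the session. A sketch under the assumption that both trackers report rigid poses (R, t) in a common coordinate frame; this is a standard construction, not the paper's exact metric:

```python
import numpy as np

def pose_discrepancy(R_a, t_a, R_b, t_b):
    """Translation error (units of t, e.g. mm) and rotation error
    (degrees) between two rigid poses reported for the same instant."""
    trans_err = float(np.linalg.norm(t_a - t_b))
    R_rel = R_a.T @ R_b  # rotation taking pose A's orientation to pose B's
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = float(np.degrees(np.arccos(cos_theta)))
    return trans_err, rot_err

def median_discrepancy(stream_a, stream_b):
    """Median translation/rotation discrepancy over paired pose streams,
    each a list of (R, t) tuples sampled at the same times."""
    errs = [pose_discrepancy(Ra, ta, Rb, tb)
            for (Ra, ta), (Rb, tb) in zip(stream_a, stream_b)]
    trans, rot = zip(*errs)
    return float(np.median(trans)), float(np.median(rot))
```

Running this on pose streams recorded during a real TMS session, with deliberate head movement and lighting changes, would directly test whether the 2.32 mm / 2.01° medians survive dynamic conditions.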
original abstract
Neuronavigation is widely used in biomedical research and interventions to guide the precise placement of instruments around the head to support procedures such as transcranial magnetic stimulation. Traditional systems, however, rely on subject-mounted markers that require manual registration, may shift during procedures, and can cause discomfort. We introduce and evaluate markerless approaches that replace expensive hardware and physical markers with low-cost visible and infrared light cameras incorporating stereo and depth sensing, combined with algorithmic modeling of the facial geometry. Validation with 50 human subjects yielded a median tracking discrepancy of only 2.32 mm and 2.01° for the best markerless algorithm compared to a conventional marker-based system, which indicates sufficient accuracy for transcranial magnetic stimulation and a substantial improvement over prior markerless results. The study also suggests that integration of the data from the various camera sensors can improve the overall accuracy further. The proposed markerless neuronavigation methods can reduce setup cost and complexity, improve patient comfort, and expand access to neuronavigation in clinical and research settings.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces markerless head tracking for neuronavigation using low-cost visible and infrared cameras with stereo/depth sensing and facial geometry modeling. It reports validation on 50 human subjects yielding a median tracking discrepancy of 2.32 mm and 2.01° for the best algorithm versus a conventional marker-based system, claiming this accuracy suffices for transcranial magnetic stimulation (TMS) while reducing cost, complexity, and patient discomfort.
Significance. If the accuracy holds under real clinical conditions, the work could meaningfully expand access to neuronavigation by removing physical markers and expensive hardware. The empirical head-to-head comparison on 50 subjects and suggestion of multi-sensor fusion benefits provide a concrete baseline; however, the validation design limits the strength of the clinical-sufficiency claim.
major comments (2)
- [Results] Results section (validation with 50 subjects): the headline median error of 2.32 mm / 2.01° is established solely by agreement with the marker-based reference system. No independent ground-truth metrology (e.g., phantom with known fiducials or optical tracking) is reported, leaving open the possibility of correlated errors from skin deformation or registration drift.
- [Methods] Methods/Experimental protocol: no tests are described under dynamic clinical conditions (induced head motion, facial expressions, lighting changes) that would occur during actual TMS sessions. The multi-sensor fusion and facial-geometry model are therefore unproven in the regime where the sufficiency claim must hold.
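The correlated-error concern can be probed with exactly the phantom experiment the referee suggests: with fiducial positions known independently (e.g., from CT of a phantom), each tracker's error is measurable on its own, and correlation between the two trackers' error magnitudes would expose a shared error source that tracker-vs-tracker agreement hides. A hypothetical sketch (the arrays and the correlation threshold are illustrative, not from the paper):

```python
import numpy as np

def shared_bias_check(gt, tracker_a, tracker_b):
    """Given independent ground-truth fiducial positions `gt` (N, 3) and
    the positions reported by two trackers, return each tracker's mean
    error magnitude and the correlation between their per-fiducial error
    magnitudes. High correlation suggests a shared (correlated) error
    source that comparing the trackers only to each other would hide."""
    mag_a = np.linalg.norm(tracker_a - gt, axis=1)
    mag_b = np.linalg.norm(tracker_b - gt, axis=1)
    corr = float(np.corrcoef(mag_a, mag_b)[0, 1])
    return float(mag_a.mean()), float(mag_b.mean()), corr
```

If both trackers, for instance, ride on the same deforming skin surface, their errors against the phantom ground truth would correlate strongly even while their mutual discrepancy stays small.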
minor comments (1)
- [Abstract] Abstract: the phrase 'indicates sufficient accuracy' should be qualified by the validation limitations (comparison only to marker-based system, static conditions).
Simulated Author's Rebuttal
We thank the referee for the constructive comments that highlight important aspects of our validation design. We address each major point below with honest assessment of what the current study can and cannot support.
point-by-point responses
- Referee: [Results] Results section (validation with 50 subjects): the headline median error of 2.32 mm / 2.01° is established solely by agreement with the marker-based reference system. No independent ground-truth metrology (e.g., phantom with known fiducials or optical tracking) is reported, leaving open the possibility of correlated errors from skin deformation or registration drift.
Authors: We agree that agreement with the marker-based system alone cannot rule out correlated errors such as skin deformation or registration drift. The marker-based system remains the clinical reference standard, so demonstrating comparable performance is still informative for practical adoption. We have added an explicit limitations paragraph in the revised Discussion acknowledging this gap and recommending phantom-based or optical-tracking ground truth in future work. This is a partial revision consisting of textual clarification; we cannot add new independent metrology to the existing 50-subject dataset. revision: partial
- Referee: [Methods] Methods/Experimental protocol: no tests are described under dynamic clinical conditions (induced head motion, facial expressions, lighting changes) that would occur during actual TMS sessions. The multi-sensor fusion and facial-geometry model are therefore unproven in the regime where the sufficiency claim must hold.
Authors: The protocol was intentionally limited to controlled, static conditions to establish a clean baseline accuracy comparison across 50 subjects. No induced motion, expression, or lighting variation tests were performed. We have revised the Methods and Discussion sections to state the controlled scope explicitly and to qualify the sufficiency claim for TMS by noting that robustness under dynamic conditions remains to be demonstrated. This is a partial revision via added caveats; new dynamic experiments lie outside the current study. revision: partial
Circularity Check
No circularity: empirical validation against external reference system
full rationale
The paper introduces markerless head-tracking methods based on multi-camera sensing and facial geometry modeling, then reports direct empirical results from a head-to-head comparison on 50 human subjects against a conventional marker-based neuronavigation system. The reported medians (2.32 mm, 2.01°) are measured discrepancies, not quantities derived from equations that reduce to fitted parameters, self-referential definitions, or self-citation chains. No load-bearing uniqueness theorems, ansatzes smuggled via prior work, or renaming of known results appear in the validation chain. The central claim therefore remains an independent experimental outcome relative to the chosen reference.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: stereo and depth camera calibration remains accurate under typical clinical lighting and distances.
- Domain assumption: head position can be reliably inferred from facial surface geometry without markers.
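The first assumption is quantifiable: for a pinhole stereo pair, depth follows Z = f·B/d, so a fixed disparity or calibration error grows quadratically with distance, bounding where mm-level tracking is attainable. A back-of-envelope sketch (focal length, baseline, and error values are illustrative, not the paper's hardware):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_uncertainty(f_px: float, baseline_m: float, depth_m: float,
                      disparity_err_px: float) -> float:
    """First-order propagation of a disparity error into depth:
    dZ ~= Z**2 * dd / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (f_px * baseline_m)
```

At an assumed 600 px focal length and 5 cm baseline, a quarter-pixel disparity error at 1 m working distance already costs about 8 mm of depth, which illustrates why stable calibration is load-bearing for a 2.32 mm median accuracy claim.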