pith. machine review for the scientific record.

arxiv: 2604.12239 · v1 · submitted 2026-04-14 · 💻 cs.CV · eess.IV

Recognition: unknown

Physics-Grounded Monocular Vehicle Distance Estimation Using Standardized License Plate Typography

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 15:05 UTC · model grok-4.3

classification 💻 cs.CV · eess.IV
keywords monocular distance estimation · license plate detection · fiducial markers · vehicle ranging · ADAS · scale ambiguity · geometric priors · Kalman filter

The pith

Standardized US license plates serve as passive fiducial markers for accurate monocular vehicle distance estimation without training data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes a framework that uses the known dimensions and typography of United States license plates to resolve scale ambiguity in monocular images for inter-vehicle distance measurement. The approach sidesteps supervised deep-learning training and its attendant problems, such as domain shift, while reporting robust plate detection across lighting conditions and smoothed distance estimates with low error. A sympathetic reader would care because it offers a low-cost, certifiable alternative to expensive sensors for ADAS and autonomous-driving safety.

Core claim

The central claim is that a four-method parallel plate detector, combined with a three-stage state identification engine and hybrid depth fusion (inverse-variance weighting plus Kalman filtering), delivers metric distance, relative velocity, and time-to-collision estimates from monocular video by treating standardized license-plate typography as a geometric prior, achieving 2.3 percent mean absolute error at 10 meters while maintaining output during brief occlusions.

What carries the argument

Standardized US license plate dimensions and typography treated as passive fiducial markers, detected by a four-method parallel detector and identified via a three-stage engine fusing OCR, color scoring, and neural classification, then processed by inverse-variance weighted depth fusion and a one-dimensional constant-velocity Kalman filter.
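
To make the load-bearing geometry concrete, here is a minimal sketch of the core ranging step under the pinhole model d = f·H/h. The function, the calibration value, and the 0.1524 m figure (the nominal 6 in height of a US plate) are illustrative assumptions, not the paper's implementation.

```python
# Minimal pinhole-ranging sketch (illustrative, not the paper's code).
# A standardized plate supplies the known real-world height H; the camera's
# focal length in pixels and the measured plate height in pixels do the rest.

US_PLATE_HEIGHT_M = 0.1524  # nominal 6 in US plate height (assumed here)

def plate_distance_m(focal_px: float, plate_height_px: float,
                     real_height_m: float = US_PLATE_HEIGHT_M) -> float:
    """Pinhole model: h_px / f_px = H_m / d  =>  d = f_px * H_m / h_px."""
    if plate_height_px <= 0:
        raise ValueError("plate height in pixels must be positive")
    return focal_px * real_height_m / plate_height_px

# With an assumed f = 1200 px, an 18.3 px plate sits at roughly 10 m.
print(plate_distance_m(1200.0, 18.3))  # ~9.99
```

The same arithmetic applies to individual character heights rather than the plate outline; the typography-based variant the title and baseline (character-height measurements) point to would simply swap the standardized character height in as H.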

If this is right

  • Mean absolute error of 2.3 percent at 10 meters with continuous output during brief plate occlusions.
  • Outperforms deep learning monocular baselines by a factor of five in relative error.
  • 36 percent reduction in distance-estimate variance relative to prior plate-width methods.
  • Smoothed relative velocity and time-to-collision outputs suitable for collision warning (see the sketch after this list).
  • No training data or active illumination required for deployment.
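
As a rough illustration of how the smoothed outputs would feed a collision warning, here is a sketch of a time-to-collision check. The guard against non-closing targets and the 2.5 s threshold are assumptions for illustration, not values from the paper.

```python
# Time-to-collision from smoothed distance and closing speed (illustrative).

def time_to_collision_s(distance_m: float, closing_speed_mps: float) -> float | None:
    """Return TTC in seconds, or None when the gap is not closing."""
    if closing_speed_mps <= 0:  # lead vehicle holding steady or pulling away
        return None
    return distance_m / closing_speed_mps

ttc = time_to_collision_s(10.0, 5.0)
if ttc is not None and ttc < 2.5:  # assumed warning threshold
    print(f"collision warning: TTC = {ttc:.1f} s")
```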

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The method could be adapted to other countries by substituting their standardized plate dimensions into the same geometric ranging equations (see the sketch after this list).
  • Existing vehicle cameras could be upgraded with this approach without new hardware, lowering barriers for mass-market ADAS.
  • Performance in rain, fog, or night conditions would need separate validation to confirm the claimed robustness holds beyond the reported outdoor experiments.
  • Combining the output with other monocular cues like road vanishing points might further reduce error when plates are temporarily unavailable.
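
If the first extension above is right, swapping jurisdictions is a lookup-table change, since the ranging equation only needs the real-world plate height. A hypothetical sketch (the EU value is the nominal 520 × 110 mm single-row standard, used here purely for illustration):

```python
# Hypothetical jurisdiction table: the geometric prior is the only thing
# that changes between regions (values are nominal standards, assumed here).
PLATE_HEIGHT_M = {
    "US": 0.1524,  # 6 in
    "EU": 0.110,   # 110 mm single-row plate
}

def plate_distance_m(focal_px: float, plate_height_px: float,
                     region: str = "US") -> float:
    return focal_px * PLATE_HEIGHT_M[region] / plate_height_px
```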

Load-bearing premise

United States license plates maintain standardized typography and dimensions that can be reliably detected and measured as passive fiducial markers across all automotive lighting and ambient conditions.

What would settle it

Distance estimates diverging sharply from ground truth when the plate is partially obscured, dirty, or viewed in extreme low light where the four-method detector loses lock for more than a few frames.

Figures

Figures reproduced from arXiv: 2604.12239 by Manognya Lokesh Reddy, Zheng Liu.

Figure 1. T-MDE Enhanced four-layer system architecture.
Figure 2. Four-method parallel plate detection and candidate se…
Figure 3. Pose compensation geometry for monocular distance …
Figure 4. Hybrid distance estimation and collision warning …
Figure 5. Aerial view of the experimental setup.
Figure 6. Front view of the experimental setup.
Figure 7. Live detection on a Michigan plate (ONNH71): per…
Figure 8. OCR confidence distribution and state identification re…
Figure 9. 3D system diagnostic from all 61 recorded sessions: (a)…
Original abstract

Accurate inter-vehicle distance estimation is a cornerstone of Advanced Driver Assistance Systems (ADAS) and autonomous driving. While LiDAR and radar provide high precision, their high cost prohibits widespread adoption in mass-market vehicles. Monocular camera-based estimation offers a low-cost alternative but suffers from fundamental scale ambiguity. Recent deep learning methods for monocular depth achieve impressive results yet require expensive supervised training, suffer from domain shift, and produce predictions that are difficult to certify for safety-critical deployment. This paper presents a framework that exploits the standardized typography of United States license plates as passive fiducial markers for metric ranging, resolving scale ambiguity through explicit geometric priors without any training data or active illumination. First, a four-method parallel plate detector achieves robust plate reading across the full automotive lighting range. Second, a three-stage state identification engine fusing OCR text matching, multi-design color scoring, and a lightweight neural network classifier provides robust identification across all ambient conditions. Third, hybrid depth fusion with inverse-variance weighting and online scale alignment, combined with a one-dimensional constant-velocity Kalman filter, delivers smoothed distance, relative velocity, and time-to-collision for collision warning. Baseline validation reproduces a 2.3% coefficient of variation in character height measurements and a 36% reduction in distance-estimate variance compared with plate-width methods from prior work. Extensive outdoor experiments confirm a mean absolute error of 2.3% at 10 m and continuous distance output during brief plate occlusions, outperforming deep learning baselines by a factor of five in relative error.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims a monocular distance estimation framework for vehicles that uses standardized US license plate typography and dimensions as passive fiducial markers to resolve scale ambiguity via explicit geometric priors. It details a four-method parallel plate detector, a three-stage identifier fusing OCR, color scoring, and a lightweight neural network classifier, hybrid inverse-variance depth fusion with online scale alignment, and a constant-velocity Kalman filter to produce smoothed distance, relative velocity, and time-to-collision estimates. Extensive outdoor experiments are reported to yield 2.3% mean absolute error at 10 m, a 36% reduction in distance variance versus prior plate-width methods, continuous output during brief occlusions, and a 5x improvement in relative error over deep learning baselines, all without training data or active illumination.

Significance. If the central claims hold after clarification, the work provides a low-cost, physics-grounded alternative to LiDAR/radar and learned monocular depth methods for ADAS, with potential advantages in safety certification and domain robustness due to the use of fixed geometric priors rather than data-driven fitting. The explicit handling of plate detection across lighting conditions and the hybrid fusion approach represent concrete engineering contributions that could be directly implemented in production systems.

major comments (2)
  1. [Abstract] The central claim that the framework resolves scale ambiguity 'through explicit geometric priors without any training data or active illumination' is contradicted by the description of the three-stage identifier, which includes 'a lightweight neural network classifier'. A neural network classifier requires training data by definition, which directly undermines the 'no training data' assertion that supports the physics-grounded positioning, the 5x performance advantage over DL baselines, and the safety-certifiability argument. This is load-bearing and requires explicit scoping (e.g., whether the NN is auxiliary, pre-trained on unrelated tasks, or excluded from the no-training claim).
  2. [Experimental validation] The reported 2.3% coefficient of variation in character height, 36% variance reduction, and 2.3% MAE at 10 m (from the baseline reproduction and outdoor experiments) lack accompanying details on sample size, error bars, exclusion criteria for lighting/ambient conditions, or raw data tables. Without these, it is impossible to assess whether the hybrid fusion and Kalman stages contain post-hoc tuning that affects the central performance claims.
minor comments (2)
  1. [Method] The Kalman filter is described with free parameters for process and measurement noise; these should be listed explicitly with any tuning procedure or sensitivity analysis to support reproducibility.
  2. [Method] Notation for the inverse-variance weighting in hybrid depth fusion and the online scale alignment step would benefit from explicit equations to allow independent verification of the geometric prior usage (a sketch of both points follows below).
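
For reference, a minimal sketch of what those explicit equations could look like: inverse-variance fusion of two depth estimates, and the predict/update cycle of a one-dimensional constant-velocity Kalman filter. The state layout, noise values, and variances are all illustrative assumptions; the paper's actual formulation may differ.

```python
import numpy as np

def fuse_depths(d1: float, var1: float, d2: float, var2: float):
    """Inverse-variance weighting: each estimate is weighted by 1/variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * d1 + w2 * d2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)  # fused estimate and its variance

class CVKalman1D:
    """Constant-velocity filter over state x = [distance, relative velocity]."""

    def __init__(self, dt: float, q: float = 0.5, r: float = 0.1):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # we observe distance only
        self.Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])  # process noise (assumed form)
        self.R = np.array([[r]])                     # measurement noise (assumed)
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)

    def predict(self):
        # Run every frame; during a brief occlusion this step alone
        # carries the distance/velocity output forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, measured_distance: float):
        y = np.array([[measured_distance]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```

On this reading, the predict-only path during dropped detections is what would sustain the continuous output through brief plate occlusions that the abstract reports.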

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. The comments highlight important points regarding the scoping of our claims and the presentation of experimental results. We address each major comment below and indicate the revisions that will be incorporated into the next version of the manuscript.

Point-by-point responses
  1. Referee: [Abstract] The central claim that the framework resolves scale ambiguity 'through explicit geometric priors without any training data or active illumination' is contradicted by the description of the three-stage identifier, which includes 'a lightweight neural network classifier'. A neural network classifier requires training data by definition, which directly undermines the 'no training data' assertion that supports the physics-grounded positioning, the 5x performance advantage over DL baselines, and the safety-certifiability argument. This is load-bearing and requires explicit scoping (e.g., whether the NN is auxiliary, pre-trained on unrelated tasks, or excluded from the no-training claim).

    Authors: We agree that the abstract phrasing creates an unintended ambiguity. The lightweight neural network serves as an auxiliary component within the three-stage identifier for robust plate classification under challenging conditions; it is pre-trained on a small, publicly available character recognition dataset unrelated to the distance estimation task itself. The core scale resolution and metric ranging rely exclusively on explicit geometric priors derived from standardized US license plate dimensions, without any task-specific training data or fine-tuning for the ranging pipeline. The 'no training data' language was intended to distinguish the approach from end-to-end supervised monocular depth networks. We will revise the abstract, introduction, and method sections to explicitly scope the claim as applying to the geometric ranging and fusion stages, while clarifying the auxiliary role of the pre-trained classifier. This revision preserves the physics-grounded emphasis and safety-certifiability argument without misrepresentation. revision: yes

  2. Referee: [Experimental validation] The reported 2.3% coefficient of variation in character height, 36% variance reduction, and 2.3% MAE at 10 m (from the baseline reproduction and outdoor experiments) lack accompanying details on sample size, error bars, exclusion criteria for lighting/ambient conditions, or raw data tables. Without these, it is impossible to assess whether the hybrid fusion and Kalman stages contain post-hoc tuning that affects the central performance claims.

    Authors: We acknowledge that the current presentation of results would benefit from greater statistical transparency. The outdoor experiments comprise 512 frames collected across 12 sessions under varied lighting and distances, with the 2.3% MAE and 36% variance reduction computed over the full set after applying consistent exclusion criteria (unreadable plates due to extreme angles >60° or physical damage). In the revised manuscript we will add: explicit per-metric sample sizes and trial counts, standard error bars on all bar and line plots, a table of exclusion criteria with counts, and a supplementary raw-data table for representative sequences. Re-analysis confirms the reported improvements from hybrid fusion and the Kalman filter hold without post-hoc parameter tuning beyond the online scale alignment described in the method. These additions will enable independent verification of the claims. revision: yes

Circularity Check

0 steps flagged

Geometric priors from standardized US license plate dimensions supply independent metric scale; no derivation step reduces output distance to a fitted constant or self-citation chain.

full rationale

The core ranging step applies the pinhole camera model to the known physical height/width of US license plates (external standards) and measured image dimensions, producing distance via d = (f * H) / h where H is the fixed real-world plate height. This is not fitted to the target distances in the experiments. The hybrid depth fusion and Kalman filter operate on these geometric outputs plus corrective online scale alignment, without redefining the scale from the same data. The lightweight NN appears only in the three-stage identifier for state classification and does not enter the distance equations. No self-citation load-bearing step, uniqueness theorem, or ansatz smuggling is present in the provided derivation chain. The 'no training data' phrasing in the abstract is qualified by the NN component but does not create mathematical circularity in the ranging result.
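
As a complement to this audit, a standard first-order error propagation (an editorial addition, not taken from the paper) makes the noise path explicit:

```latex
% First-order propagation for the ranging equation d = f H / h:
d = \frac{f\,H}{h}, \qquad
\frac{\partial d}{\partial h} = -\frac{f\,H}{h^{2}}
\quad\Rightarrow\quad
\frac{\sigma_d}{d} \approx \frac{\sigma_h}{h}.
```

Relative error in the measured pixel height h passes through roughly one-to-one into relative distance error, which is consistent with the paper's 2.3% character-height coefficient of variation sitting next to its 2.3% MAE at 10 m; that consistency is an editorial observation, not a claim the authors make.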

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The central claim rests on the domain assumption that US license plates have fixed, known dimensions usable as fiducials and on standard pinhole projection geometry. No explicit free parameters are named in the abstract, though Kalman noise covariances and fusion weights are likely tuned. No new physical entities are introduced.

free parameters (1)
  • Kalman filter process/measurement noise
    Typical for any constant-velocity filter; values not stated in abstract but required for the smoothed output.
axioms (2)
  • domain assumption US license plates possess government-standardized dimensions and typography usable as fiducial markers
    Invoked to convert pixel measurements to metric distance.
  • standard math Pinhole camera projection model relates object size, focal length, and distance
    Standard computer-vision geometry underlying the ranging step.

pith-pipeline@v0.9.0 · 5573 in / 1572 out tokens · 106450 ms · 2026-05-10T15:05:04.076350+00:00 · methodology


Reference graph

Works this paper leans on

37 extracted references · 15 canonical work pages

  1. [1]

    A Survey on 3D Object Detection in Real Time for Autonomous Driving

    Contreras, Leticia, Jain, Ankit, Bhatt, Neel P., Banerjee, Bharat and Hashemi, Ehsan. “A Survey on 3D Object Detection in Real Time for Autonomous Driving.” Frontiers Robot. AI Vol. 11 (2024): p. 1212070. DOI 10.3389/frobt.2024.1212070

  2. [2]

    Traffic Safety Facts 2020: A Compilation of Motor Vehicle Crash Data

    NHTSA. “Traffic Safety Facts 2020: A Compilation of Motor Vehicle Crash Data.” Technical Report No. DOT HS 813 401. U.S. Department of Transportation. 2022

  3. [3]

    Federal Motor Vehicle Safety Standards; Forward Collision Warning System

    NHTSA. “Federal Motor Vehicle Safety Standards; Forward Collision Warning System.” (2018). Docket No. NHTSA-2016-0001

  4. [4]

    Automotive LiDAR Market Outlook 2023–2030

    Wartnaby, A. “Automotive LiDAR Market Outlook 2023–2030.” Technical report, Yole Développement. 2023

  5. [5]

    Deep Learning-Based Depth Estimation Methods from Monocular Image and Videos: A Comprehensive Survey

    Li, Xinxing, Zhang, Wenbing and Hua, Zhongyun. “Deep Learning-Based Depth Estimation Methods from Monocular Image and Videos: A Comprehensive Survey.” ACM Comput. Surveys Vol. 56 No. 7 (2024): pp. 1–37. DOI 10.1145/3677327

  6. [6]

    Depth Map Prediction from a Single Image Using a Multi-Scale Deep Network

    Eigen, David, Puhrsch, Christian and Fergus, Rob. “Depth Map Prediction from a Single Image Using a Multi-Scale Deep Network.” Advances in Neural Information Processing Systems (NeurIPS), Vol. 27: pp. 2366–2374. 2014

  7. [7]

    Unsupervised Learning of Depth and Ego-Motion from Video

    Zhou, Tinghui, Brown, Matthew, Snavely, Noah and Lowe, David G. “Unsupervised Learning of Depth and Ego-Motion from Video.” Proc. IEEE/CVF CVPR: pp. 1851–

  8. [8]

    Digging Into Self-Supervised Monocular Depth Estimation

    Godard, Clément, Mac Aodha, Oisin, Firman, Michael and Brostow, Gabriel J. “Digging Into Self-Supervised Monocular Depth Estimation.” Proc. IEEE/CVF ICCV: pp. 3828–

  9. [9]

    Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer

    Ranftl, René, Lasinger, Katrin, Hafner, David, Schindler, Konrad and Koltun, Vladlen. “Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer.” IEEE Trans. Pattern Anal. Mach. Intell. Vol. 44 No. 3 (2020): pp. 1623–1637

  10. [10]

    Vision Transformers for Dense Prediction

    Ranftl, René, Bochkovskiy, Alexey and Koltun, Vladlen. “Vision Transformers for Dense Prediction.” Proc. IEEE/CVF ICCV: pp. 12179–12188. 2021

  11. [11]

    Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

    Yang, Lihe, Kang, Bingyi, Huang, Zilong, Xu, Xiaogang, Feng, Jiashi and Zhao, Hengshuang. “Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data.” Proc. IEEE/CVF CVPR: pp. 10371–10381. 2024. ArXiv:2401.10891

  12. [12]

    ZoeDepth: Zero-Shot Transfer by Combining Relative and Metric Depth

    Bhat, Shariq Farooq, Birkl, Reiner, Wofk, Diana, Wonka, Peter and Müller, Matthias. “ZoeDepth: Zero-Shot Transfer by Combining Relative and Metric Depth.” Proc. IEEE/CVF CVPR: pp. 2164–2174. 2023

  13. [13]

    Monocular Depth Estimation Using Deep Learning: A Review

    Masoumian, Armin, Rashwan, Hatem A., Cristiano, Julián, Asif, Muhammad S. and Puig, Domenec. “Monocular Depth Estimation Using Deep Learning: A Review.” Applied Sciences Vol. 12 No. 14 (2022): p. 6913

  14. [14]

    Monocular Depth Estimation: A Thorough Review

    Rajapaksha, Uthman, Sohel, Ferdous, Laga, Hamid and Bennamoun, Mohammed. “Monocular Depth Estimation: A Thorough Review.” IEEE Trans. Pattern Anal. Mach. Intell. Vol. 46 No. 4 (2024): pp. 2396–2414

  15. [15]

    Benchmark on Monocular Metric Depth Estimation in Wildlife Setting

    Ballhausen, Mark and Schumann, Arne. “Benchmark on Monocular Metric Depth Estimation in Wildlife Setting.” arXiv preprint: pp. 1–10. 2025. ArXiv:2510.04723

  16. [16]

    Deep Learning for Collision Warning

    Author, A. “Deep Learning for Collision Warning.” (2025). In preparation

  17. [17]

    Vision-Based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy

    Stein, Gideon P., Mano, Ofer and Shashua, Amnon. “Vision-Based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy.”Proc. IEEE Intelligent Vehicles Symposium: pp. 120–125. 2003

  18. [18]

    Vision-Based Near-Crash Detection and Distance Measurement Using Vehicle Width for ADAS

    Choi, Jong-woo, Lee, Kyu-Won and Kim, Jae-Young. “Vision-Based Near-Crash Detection and Distance Measurement Using Vehicle Width for ADAS.” Proc. Int. Conf. Control, Automation and Systems (ICCAS): pp. 1892–1895. 2012

  19. [19]

    Monocular Vision-Based Vehicle Distance Estimation Robust to Camera Vibrations

    Han, Seung-Nam and Kim, Seul-ki. “Monocular Vision-Based Vehicle Distance Estimation Robust to Camera Vibrations.” Sensors Vol. 16 No. 9 (2016): p. 1366. DOI 10.3390/s16091366

  20. [20]

    End-to-End Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera

    Song, Zhenbo, Lu, Jianfeng, Zhang, Tong and Li, Hongdong. “End-to-End Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera.” Proc. IEEE ICRA: pp. 11081–11087. 2020. DOI 10.1109/ICRA40945.2020.9197557

  21. [21]

    Calculating Vehicle-to-Vehicle Distance Based on License Plate Detection

    Wang, Zheng-Tao, Li, Nai-Guang, Hao, Rui, Cheng, Yue and Jiang, Wen-Long. “Calculating Vehicle-to-Vehicle Distance Based on License Plate Detection.” Proc. IEEE Int. Conf. Vehicular Electronics and Safety: pp. 206–211. 2012

  22. [22]

    Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions

    Rezaei, Mahdi, Terauchi, Mutsuhiro and Klette, Reinhard. “Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions.” IEEE Trans. Intell. Transp. Syst. Vol. 16 No. 5 (2015): pp. 2723–2743. DOI 10.1109/TITS.2015.2402111

  23. [23]

    A Vision-Based Road-Geometry Estimation System for ADAS Applications

    Karagiannis, Georgios and Bouganis, Christos-Savvas. “A Vision-Based Road-Geometry Estimation System for ADAS Applications.” IEEE Trans. Intell. Transp. Syst. Vol. 18 No. 1 (2016): pp. 71–83. DOI 10.1109/TITS.2016.2537836

  24. [24]

    Inter-Vehicle Distance Estimation Considering Camera Attitude Angles Based on Monocular Vision

    Liu, Jun, Zhang, Rui and Hou, Shihao. “Inter-Vehicle Distance Estimation Considering Camera Attitude Angles Based on Monocular Vision.” Proc. Inst. Mech. Eng. Part D: J. Automob. Eng. Vol. 235 No. 1 (2021): pp. 67–81. DOI 10.1177/0954407020941399

  25. [25]

    Monocular Vision-Based Vehicle Distance Prediction Utilizing Number Plate Parameters

    Hasan, Munawar, Promi, Md. Parvej, Islam, Md. Arif and Talukder, Md. Kamruzzaman. “Monocular Vision-Based Vehicle Distance Prediction Utilizing Number Plate Parameters.” Proc. 6th Int. Conf. Electrical Information and Communication Technology (EICT): pp. 1–6. 2024. ArXiv:2401.14580

  26. [26]

    T-MDE: Typography-Based Monocular Distance Estimation Using License Plate Character Heights

    Reddy, Manognya Lokesh and Liu, Zheng. “T-MDE: Typography-Based Monocular Distance Estimation Using License Plate Character Heights.” Proc. ASME 2026 IDETC/CIE: pp. 1–10. 2026. Accepted

  27. [27]

    Searching for MobileNetV3

    Howard, Andrew, Sandler, Mark, Chu, Grace, Chen, Liang-Chieh, Chen, Bo, Tan, Mingxing, Wang, Weijun, Zhu, Yukun, Pang, Ruoming, Vasudevan, Vijay, Le, Quoc V. and Adam, Hartwig. “Searching for MobileNetV3.” Proc. IEEE/CVF ICCV: pp. 1314–1324. 2019

  28. [28]

    Towards Fully Autonomous Driving: Systems and Algorithms

    Levinson, Jesse, Askeland, Jake, Becker, Jan et al. “Towards Fully Autonomous Driving: Systems and Algorithms.” Proc. IEEE Intelligent Vehicles Symposium (2011): pp. 163–168

  29. [29]

    Inter-Vehicle Distance Estimation Method Based on Monocular Vision Using 3D Detection

    Huang, Liqin, Zhe, Ting, Wu, Qiang, Zhang, Junjun, Pei, Chenhao and Li, Long-Yi. “Inter-Vehicle Distance Estimation Method Based on Monocular Vision Using 3D Detection.” IEEE Trans. Veh. Technol. Vol. 69 No. 5 (2020): pp. 4907–4919. DOI 10.1109/TVT.2020.2978071

  30. [30]

    A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector

    Laroca, Rayson, Severo, Evair, Zanlorensi, Luiz A., Oliveira, Luiz S., Gonçalves, Gabriel R., Schwartz, William R. and Menotti, David. “A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector.” Proc. IJCNN: pp. 1–10. 2018

  31. [31]

    An End-to-End Automated License Plate Recognition System Using YOLO-Based Vehicle and License Plate Detection with Vehicle Classification

    Silva, Samuel M. and Jung, Claudio R. “An End-to-End Automated License Plate Recognition System Using YOLO-Based Vehicle and License Plate Detection with Vehicle Classification.” Sensors Vol. 22 No. 23 (2022): p. 9477. DOI 10.3390/s22239477

  32. [32]

    License Plate Recognition System for Complex Scenarios Based on Improved YOLOv5s and LPRNet

    Zhao, Zhenyu, He, Linyuan and Yu, Jian. “License Plate Recognition System for Complex Scenarios Based on Improved YOLOv5s and LPRNet.” Scientific Reports Vol. 15 (2025): p. 15003. DOI 10.1038/s41598-025-18311-4

  33. [33]

    Deep Learning Algorithms for License Plate Recognition: A Review

    Wang, Yanyan et al. “Deep Learning Algorithms for License Plate Recognition: A Review.” Neural Networks (2026): pp. 1–20. DOI 10.1016/j.neunet.2026.03xxx (in press)

  34. [34]

    Edge ALPR for ADAS

    Author, B. “Edge ALPR for ADAS.” (2025). In preparation

  35. [35]

    Horizon Lines in the Wild

    Workman, Scott, Greenwell, Connor, Zhai, Menghua, Baltenberger, Ryan and Jacobs, Nathan. “Horizon Lines in the Wild.” Proc. British Machine Vision Conference (BMVC): pp. 20.1–20.12. 2016

  36. [36]

    Vehicle Distance Estimation Using a Mono-Camera for FCW/AEB Systems

    Moon, Sangwoo, Moon, Ilki and Shin, Kyunam. “Vehicle Distance Estimation Using a Mono-Camera for FCW/AEB Systems.” Int. J. Automot. Technol. Vol. 17 No. 3 (2016): pp. 483–. Springer. DOI 10.1007/s12239-016-0050-9