pith. machine review for the scientific record.

arxiv: 2604.14208 · v1 · submitted 2026-04-04 · ⚛️ physics.optics · cs.LG · math.OC · physics.comp-ph

Recognition: 2 Lean theorem links

ML-based approach to classification and generation of structured light propagation in turbulent media

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 17:01 UTC · model grok-4.3

classification ⚛️ physics.optics · cs.LG · math.OC · physics.comp-ph
keywords structured light · turbulent propagation · machine learning classification · generative diffusion model · Bregman distance minimization · stochastic paraxial equation · speckle disturbances · high-frequency modes

The pith

Machine learning classifies structured light in turbulence using diffusion models with Bregman minimization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops convolutional neural networks to classify structured light beams that acquire random speckle patterns while propagating through turbulent atmospheres. Propagation is simulated via the stochastic paraxial equation, and the networks are trained as classifiers with one-hot encoded labels. When training data is limited, a prediction-based generative diffusion model supplies additional examples. The key improvement comes from minimizing a Bregman distance in the learning step, which enhances the quality of high-frequency modes in the generated data. A reader would care if this leads to optical systems that remain robust despite atmospheric distortion.
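The classification setup described in the pith (one-hot targets over beam classes, trained with a standard softmax cross-entropy) can be sketched minimally. The four-class setup and logit values below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def softmax_cross_entropy(logits, target_one_hot):
    """Standard classification loss for one-hot targets."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -(target_one_hot * log_probs).sum()

# Hypothetical 4-class structured-light problem: the loss is small when the
# logit for the true class dominates, and large for an uninformative output.
target = one_hot(2, 4)
confident = softmax_cross_entropy(np.array([0.0, 0.0, 5.0, 0.0]), target)
uniform = softmax_cross_entropy(np.array([0.0, 0.0, 0.0, 0.0]), target)
```

The gap between the confident and uninformative losses is the signal the classifier's gradient descent follows.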

Core claim

Convolutional neural networks can accurately classify structured light beams despite turbulence-induced speckle, and a generative diffusion model trained with Bregman distance minimization produces additional data that improves classifier performance, especially on high-frequency features.

What carries the argument

Convolutional neural networks for classification and a prediction-based generative diffusion model augmented by Bregman distance minimization for data generation.
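The Bregman distance doing the work here is the standard construction D_ρ(x, y) = ρ(x) − ρ(y) − ⟨∇ρ(y), x − y⟩ (the paper's quoted passage writes the inner product as Re⟨ξ_y, x − y⟩). A minimal sketch with two common generators follows; the paper's actual choice of ρ is not stated in the text above, so both generators are illustrative.

```python
import numpy as np

def bregman_distance(rho, grad_rho, x, y):
    """D_rho(x, y) = rho(x) - rho(y) - <grad rho(y), x - y>."""
    return rho(x) - rho(y) - np.dot(grad_rho(y), x - y)

# Generator 1: squared Euclidean norm -> Bregman distance is squared distance.
sq = lambda v: np.dot(v, v)
grad_sq = lambda v: 2.0 * v

# Generator 2: negative entropy on probability vectors -> Bregman distance
# reduces to the Kullback-Leibler divergence.
negent = lambda p: np.sum(p * np.log(p))
grad_negent = lambda p: np.log(p) + 1.0

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])
d_sq = bregman_distance(sq, grad_sq, x, y)          # equals ||x - y||^2
d_kl = bregman_distance(negent, grad_negent, x, y)  # equals KL(x || y)
```

Different generators penalize different error structure, which is why the choice of ρ can shift emphasis toward particular (e.g. high-frequency) components of the generated data.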

If this is right

  • Improved classification of light beams in turbulent conditions.
  • Better generation of high-frequency modes through Bregman minimization.
  • Ability to train effective classifiers with limited initial data by supplementing with generated examples.
  • The stochastic paraxial equation simulation provides usable training data for real-world applications.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Such classifiers could support free-space optical communication links that adapt to turbulence in real time.
  • The generative approach might apply to other random media propagation problems like underwater acoustics.
  • Validation would require comparing performance on experimental data collected in actual turbulent environments.
  • Extending the model to predict beam evolution over longer distances or varying turbulence strengths could be a next step.

Load-bearing premise

The numerical simulation of the stochastic paraxial equation accurately models real turbulent propagation, allowing the generated data to improve classifier generalization to real conditions.
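The premise concerns a split-step Fourier solver of the kind the paper's quoted passages mention. The sketch below is a generic version of that scheme, not the authors' solver; the grid size, beam width, step length, and phase-screen statistics are placeholder values.

```python
import numpy as np

def split_step_paraxial(u0, dx, wavelength, dz, screens):
    """Propagate a field through len(screens) slabs of thickness dz.

    Each step applies vacuum paraxial (Fresnel) diffraction in the Fourier
    domain, then a thin random phase screen modeling turbulence in that slab.
    """
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    # Paraxial transfer function for one propagation step of length dz.
    H = np.exp(-1j * np.pi * wavelength * dz * fx2)
    u = u0.astype(complex)
    for phase in screens:
        u = np.fft.ifft2(np.fft.fft2(u) * H)  # diffraction half of the step
        u = u * np.exp(1j * phase)            # turbulence phase kick
    return u

# Placeholder setup: a Gaussian beam and weak uncorrelated phase screens.
rng = np.random.default_rng(0)
n = 64
x = (np.arange(n) - n // 2) * 1e-3
gauss = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (5e-3) ** 2)
screens = [0.1 * rng.standard_normal((n, n)) for _ in range(10)]
u = split_step_paraxial(gauss, dx=1e-3, wavelength=1e-6, dz=10.0, screens=screens)
```

Because the diffraction multiplier and each phase screen have unit modulus, total power Σ|u|² is conserved step by step, a cheap sanity check on any implementation of this scheme.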

What would settle it

Collect real experimental data of structured light propagating through turbulence and measure whether the classifier trained with generated data performs significantly better than one trained only on simulations.

Figures

Figures reproduced from arXiv: 2604.14208 by Anjali Nair, Aokun Wang, Guillaume Bal, Zhongjian Wang.

Figure 1. Structured-light dataset overview.
Figure 2. Classifier architecture diagrams. Each block indicates the main computational stages from input to logits. In panel (b), the inset expands a residual BasicBlock; s denotes the convolution stride.
Figure 3. Normalized confusion matrix for the best ResNet18 checkpoint under generative augmentation.
Figure 4. Frequency-domain resolution check for the OAM code.
Figure 5. Autocorrelation-based resolution diagnostics for the OAM code.
Figure 6. Plane-wave scintillation diagnostics under the default setting.
Figure 7. Fig. 1-style overview comparing simulated and generated propagated patterns.
Figure 8. Representative generated samples across the controlled diffusion-model configurations.
Original abstract

This work develops machine learning approaches to classify structured light wave beams developing random speckle disturbances as they propagate through turbulent atmospheres. Beam propagation is modeled by the numerical simulation of a stochastic paraxial equation. We design convolutional neural networks tailored for this specific application and use them for a classification model with one-hot encoding. To address the challenge of potentially limited available data, we develop a prediction-based generative diffusion model to provide additional data during classifier training. We show that a Bregman distance minimization during the learning step improves the quality of the generation of high-frequency modes.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript develops CNN-based classifiers for structured light beams subject to speckle from turbulent propagation, with the propagation modeled exclusively by numerical integration of the stochastic paraxial equation. To mitigate limited training data, the authors introduce a diffusion generative model whose training incorporates Bregman-distance minimization, claiming this step improves fidelity of high-frequency modes and thereby aids downstream classification accuracy.

Significance. If the simulated turbulence statistics prove representative of laboratory or field conditions, the pipeline could offer a practical route to data-augmented classification in optics. The technical device of Bregman regularization inside the diffusion objective is a concrete, testable contribution that could be adopted elsewhere; however, the absence of any experimental beam-propagation benchmark leaves the headline performance claims conditional on an unverified modeling assumption.

major comments (2)
  1. [Beam propagation model and data generation sections] The central claim that Bregman minimization improves high-frequency mode generation (and thereby classifier generalization) rests entirely on data generated by the stochastic paraxial solver. No quantitative comparison of simulated speckle statistics, scintillation index, or spatial-frequency content against laboratory or field measurements is provided; this modeling assumption is load-bearing for every reported accuracy or fidelity number.
  2. [Results and generative-model training] The abstract states that Bregman-distance minimization improves generation quality, yet the provided text supplies neither numerical values for the claimed improvement, error bars, nor ablation tables that isolate the Bregman term from other training choices. Without these, the strength of the improvement cannot be assessed.
minor comments (2)
  1. [Methods] Notation for the one-hot encoding scheme and the precise form of the Bregman divergence should be written explicitly rather than left to standard references.
  2. [Figures] Figure captions for the generated speckle patterns should state the turbulence strength parameter (e.g., C_n^2 or Rytov variance) used in each panel.
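The scintillation index the referee asks to see validated is the standard normalized intensity variance σ_I² = ⟨I²⟩/⟨I⟩² − 1. A minimal sketch follows; the uniform-beam and fully developed speckle inputs are illustrative limiting cases, not data from the paper.

```python
import numpy as np

def scintillation_index(intensity):
    """sigma_I^2 = <I^2> / <I>^2 - 1 over an ensemble or aperture."""
    m = intensity.mean()
    return (intensity ** 2).mean() / m ** 2 - 1.0

rng = np.random.default_rng(1)
uniform_beam = np.ones((128, 128))          # no fluctuations at all
# Fully developed speckle has exponentially distributed intensity,
# for which the theoretical scintillation index is 1.
speckle = rng.exponential(scale=1.0, size=10 ** 6)
s0 = scintillation_index(uniform_beam)
s1 = scintillation_index(speckle)
```

These two limits (σ_I² = 0 for an undistorted beam, σ_I² ≈ 1 for saturated speckle) are the benchmarks against which simulated turbulence statistics are usually checked.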

Simulated Authors' Rebuttal

2 responses · 1 unresolved

We thank the referee for the constructive and detailed feedback. We address the major comments point by point below, indicating the revisions we will incorporate to strengthen the manuscript while remaining faithful to the scope of our numerical study.

read point-by-point responses
  1. Referee: [Beam propagation model and data generation sections] The central claim that Bregman minimization improves high-frequency mode generation (and thereby classifier generalization) rests entirely on data generated by the stochastic paraxial solver. No quantitative comparison of simulated speckle statistics, scintillation index, or spatial-frequency content against laboratory or field measurements is provided; this modeling assumption is load-bearing for every reported accuracy or fidelity number.

    Authors: We agree that the stochastic paraxial equation is a modeling assumption whose fidelity to real turbulence is central to interpreting all results. While our work is a purely numerical study and we do not have access to new laboratory or field data, the stochastic paraxial solver is a standard model in the optics literature whose statistics (scintillation index, speckle contrast, and spatial-frequency content) have been validated against experiments in multiple prior works. In the revised manuscript we will (i) expand the beam-propagation section with explicit citations to those validation studies, (ii) report the specific turbulence parameters (e.g., refractive-index structure constant, inner/outer scale) used in our simulations, and (iii) add a limitations subsection that quantifies how deviations from real-world statistics would affect the reported classifier accuracies. This will make the modeling assumptions transparent without overstating the scope of the present contribution. revision: partial

  2. Referee: [Results and generative-model training] The abstract states that Bregman-distance minimization improves generation quality, yet the provided text supplies neither numerical values for the claimed improvement, error bars, nor ablation tables that isolate the Bregman term from other training choices. Without these, the strength of the improvement cannot be assessed.

    Authors: We accept this criticism. The revised manuscript will include: (a) concrete numerical values (with standard deviations over at least five independent training runs) for the improvement in high-frequency mode fidelity when the Bregman term is active versus inactive; (b) error bars on all reported generation-quality and downstream classification metrics; and (c) a dedicated ablation table that isolates the Bregman-distance minimization from other training hyperparameters. These additions will be placed in the generative-model training and results sections so that the magnitude of the claimed improvement can be directly evaluated. revision: yes

standing simulated objections not resolved
  • Direct experimental validation of the simulated turbulence statistics against laboratory or field measurements is outside the scope of the present numerical study and cannot be supplied in the revision.

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper's core pipeline begins with independent numerical integration of the stochastic paraxial equation to produce synthetic speckle data, which then trains both the CNN classifier and the diffusion generative model. Bregman distance minimization is introduced as a standard training objective to improve high-frequency fidelity in the generator; this is an optimization choice applied to the learned model rather than a redefinition or refitting of the input simulation itself. No self-citations, uniqueness theorems, or ansatzes are invoked to justify the central claims, and the reported improvements are evaluated directly on held-out simulated data. The derivation therefore remains self-contained: simulation statistics determine the training distribution, and ML performance metrics follow from that distribution without reducing back to fitted parameters by construction.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

The approach relies on standard ML techniques and physics simulation assumptions; no new entities invented. Free parameters include NN hyperparameters and diffusion model parameters chosen to fit the task.

free parameters (2)
  • CNN architecture hyperparameters
    Number of layers, filters, learning rates chosen to fit the classification task on simulated speckle data.
  • Diffusion model training parameters
    Parameters in the generative diffusion model tuned during training to produce high-frequency modes.
axioms (1)
  • domain assumption: The stochastic paraxial equation models beam propagation in turbulent media
    Invoked in the beam propagation modeling section of the abstract.

pith-pipeline@v0.9.0 · 5397 in / 1164 out tokens · 34310 ms · 2026-05-13T17:01:17.849232+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • Cost.FunctionalEquation washburn_uniqueness_aczel (relation: unclear)

    Relation between the paper passage and the cited Recognition theorem.

    Paper passage: We use a hybrid objective L_total = L_pixel + λ L_freq ... D_ρ(x, y) = ρ(x) − ρ(y) − Re⟨ξ_y, x − y⟩ ... Theorem 1 (Consistency of Bregman Divergence Minimization)

  • Foundation.RealityFromDistinction reality_from_one_distinction (relation: unclear)

    Relation between the paper passage and the cited Recognition theorem.

    Paper passage: Beam propagation is modeled by the numerical simulation of a stochastic paraxial equation ... split-step Fourier method

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

37 extracted references · 37 canonical work pages · 2 internal anchors
