Recognition: 2 theorem links
Adversarial Robustness of Time-Series Classification for Crystal Collimator Alignment
Pith reviewed 2026-05-10 19:45 UTC · model grok-4.3
The pith
Adversarial fine-tuning improves robust accuracy of an LHC crystal-collimator CNN by up to 18.6 percentage points without lowering clean accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that the adversarial robustness of a CNN classifying BLM time series for crystal-collimator alignment at CERN can be improved by up to 18.6% through adversarial fine-tuning, without loss in clean accuracy, after formalizing a local robustness property under a real-world threat model and implementing a preprocessing-aware wrapper that enables gradient-based attacks on the deployed pipeline. The paper further shows that adversarial sequences can serve as counterexamples to a temporal robustness requirement over full scans.
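One conventional way to state such a local robustness property is sketched below. This is a hedged reconstruction, not the paper's formal definition: f denotes the CNN, pre(·) the deployed preprocessing (normalization and padding), and Δ(x) the structured perturbation set, here assumed to sit inside an L∞ ball of radius ε.

```latex
% Sketch only: the paper's actual \Delta(x) encodes its structured,
% physically plausible perturbations; the L_\infty bound is an assumption.
\mathrm{Robust}(f, x) \;:\Longleftrightarrow\;
\forall \delta \in \Delta(x):\;
  f\bigl(\mathrm{pre}(x + \delta)\bigr) = f\bigl(\mathrm{pre}(x)\bigr),
\qquad
\Delta(x) \subseteq \{\delta : \lVert \delta \rVert_\infty \le \varepsilon\}.
```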
What carries the argument
- A preprocessing-aware differentiable wrapper that encodes time-series normalization, padding constraints, and structured perturbations in front of the CNN, allowing existing gradient-based robustness frameworks to operate on the full deployed pipeline (a minimal sketch follows).
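A minimal PyTorch sketch of such a wrapper, assuming per-window z-normalization and right-padding as the deployed preprocessing; the class and parameter names (PreprocessingAwareWrapper, window_len) are hypothetical, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreprocessingAwareWrapper(nn.Module):
    """Differentiable pipeline: raw series -> z-normalization -> padding -> CNN.

    Exposing the raw (pre-normalization) series as the attack input lets
    gradient-based frameworks perturb the quantity an attacker could touch,
    while normalization and padding stay inside the differentiated graph.
    """

    def __init__(self, cnn: nn.Module, window_len: int):
        super().__init__()
        self.cnn = cnn
        self.window_len = window_len

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        # raw: (batch, time) un-normalized BLM window, possibly shorter
        # than the fixed CNN input length.
        mean = raw.mean(dim=-1, keepdim=True)
        std = raw.std(dim=-1, keepdim=True) + 1e-8
        x = (raw - mean) / std          # data-dependent, hence nonlinear
        if x.shape[-1] < self.window_len:
            x = F.pad(x, (0, self.window_len - x.shape[-1]))  # right-pad
        return self.cnn(x.unsqueeze(1))  # (batch, 1, window_len) -> logits
```

Because the attack optimizes over the raw series, any perturbation it finds has already passed through the exact normalization and padding the deployed system applies, which is what makes the resulting adversarial examples pipeline-valid.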
Load-bearing premise
The chosen adversarial threat model based on real-world plausibility for structured perturbations to BLM time series during crystal rotation accurately captures the inputs an attacker could realistically supply to the deployed system.
What would settle it
Finding a perturbation within the allowed bounds that changes the fine-tuned CNN's classification on a real BLM time series recorded during crystal rotation would show that the reported robustness gain does not hold.
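Such a search is exactly what a gradient-based attack performs. A hedged sketch using the Foolbox API (the value range, epsilon, and the choice of an L∞ PGD attack are illustrative assumptions, not taken from the paper):

```python
import foolbox as fb

def find_counterexamples(wrapper, windows, labels, eps=0.1):
    # wrapper: the preprocessing-aware model above; bounds are an assumed
    # valid range for raw BLM readings, not a value from the paper.
    fmodel = fb.PyTorchModel(wrapper.eval(), bounds=(-10.0, 10.0))
    attack = fb.attacks.LinfPGD()
    _, adv, success = attack(fmodel, windows, labels, epsilons=eps)
    # success[i] is True iff an in-bounds perturbation flips the label of
    # window i; a single True on real rotation data would refute the claim.
    return adv[success]
```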
Original abstract
In this paper, we analyze and improve the adversarial robustness of a convolutional neural network (CNN) that assists crystal-collimator alignment at CERN's Large Hadron Collider (LHC) by classifying a beam-loss monitor (BLM) time series during crystal rotation. We formalize a local robustness property for this classifier under an adversarial threat model based on real-world plausibility. Building on established parameterized input-transformation patterns used for transformation- and semantic-perturbation robustness, we instantiate a preprocessing-aware wrapper for our deployed time-series pipeline: we encode time-series normalization, padding constraints, and structured perturbations as a lightweight differentiable wrapper in front of the CNN, so that existing gradient-based robustness frameworks can operate on the deployed pipeline. For formal verification, data-dependent preprocessing such as per-window z-normalization introduces nonlinear operators that require verifier-specific abstractions. We therefore focus on attack-based robustness estimates and pipeline-checked validity by benchmarking robustness with the frameworks Foolbox and ART. Adversarial fine-tuning of the resulting CNN improves robust accuracy by up to 18.6 % without degrading clean accuracy. Finally, we extend robustness on time-series data beyond single windows to sequence-level robustness for sliding-window classification, introduce adversarial sequences as counterexamples to a temporal robustness requirement over full scans, and observe attack-induced misclassifications that persist across adjacent windows.
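The sequence-level extension in the final sentence can be made concrete as follows; this is our illustration under assumed window and stride values, not the paper's definition of temporal robustness:

```python
import torch

def sliding_windows(scan, win=256, stride=64):
    # scan: 1-D tensor holding one full BLM trace from a crystal rotation.
    return scan.unfold(0, win, stride)  # (num_windows, win)

def temporally_robust(wrapper, clean_scan, adv_scan, win=256, stride=64):
    # The perturbed scan is an adversarial sequence (a counterexample to
    # temporal robustness) if predictions flip on adjacent windows.
    with torch.no_grad():
        clean = wrapper(sliding_windows(clean_scan, win, stride)).argmax(-1)
        adv = wrapper(sliding_windows(adv_scan, win, stride)).argmax(-1)
    flipped = clean != adv
    persistent = flipped[:-1] & flipped[1:]  # misclassification that persists
    return not bool(persistent.any())
```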
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript analyzes the adversarial robustness of a CNN for classifying beam loss monitor (BLM) time series during crystal rotation for collimator alignment at the LHC. It formalizes local robustness under a threat model based on real-world plausibility, develops a differentiable wrapper encoding normalization, padding, and structured perturbations so that the Foolbox and ART frameworks can be applied, reports that adversarial fine-tuning boosts robust accuracy by up to 18.6% with no clean-accuracy degradation, and extends the analysis to sequence-level robustness using adversarial sequences for sliding-window classification.
Significance. If the threat model is realistic, this provides a practical demonstration of improving robustness in a deployed time-series classifier for a high-stakes scientific application. The use of existing attack libraries via a wrapper and the extension to temporal robustness across sequences are useful contributions. Concrete numerical improvements are reported from standard benchmarks.
Major comments (1)
- [Threat model (abstract and §2)] The adversarial threat model is described as based on real-world plausibility for structured perturbations to BLM time series, but the manuscript does not include comparisons to recorded LHC beam-loss traces, physics-based collimator misalignment simulators, or measured sensor noise characteristics (see abstract and threat model description). This leaves open whether the perturbation family matches realistic attacker capabilities, which is load-bearing for the transferability of the reported 18.6% robust accuracy gain from adversarial fine-tuning.
Minor comments (2)
- [Abstract] The claim of 'up to 18.6 %' improvement would benefit from specifying the corresponding attack parameters, baseline accuracy, and dataset details for immediate context.
- [Experimental evaluation] Details on data splits, number of samples, and statistical significance testing for the reported accuracy improvements should be clarified to support the empirical claims.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on the threat model. We address the single major comment below and propose targeted revisions to improve clarity without altering the core contributions.
Point-by-point responses
Referee: [Threat model (abstract and §2)] The adversarial threat model is described as based on real-world plausibility for structured perturbations to BLM time series, but the manuscript does not include comparisons to recorded LHC beam-loss traces, physics-based collimator misalignment simulators, or measured sensor noise characteristics (see abstract and threat model description). This leaves open whether the perturbation family matches realistic attacker capabilities, which is load-bearing for the transferability of the reported 18.6% robust accuracy gain from adversarial fine-tuning.
Authors: We acknowledge that the manuscript does not contain direct empirical comparisons to recorded LHC traces, collimator simulators, or measured noise statistics. The threat model in §2 is instead derived from domain knowledge of the BLM system and plausible attack surfaces (e.g., sensor tampering or control-system injection during crystal rotation), with perturbation families chosen to respect physical constraints such as temporal continuity and the deployed normalization/padding pipeline. This choice enables the differentiable wrapper and use of Foolbox/ART while remaining relevant to the high-stakes LHC setting. We will revise §2 and the abstract to (i) cite relevant literature on accelerator sensor security and (ii) explicitly state the assumptions and limitations of the chosen perturbation family, thereby clarifying the scope of the 18.6% robust-accuracy claim. We cannot, however, add the requested empirical matching because operational LHC data are proprietary and not available for this study.
Revision: partial
Not addressed: direct empirical validation of the perturbation family against recorded LHC beam-loss traces, physics-based simulators, or measured sensor noise, due to lack of access to proprietary operational data.
Circularity Check
No significant circularity detected
Full rationale
The paper reports an empirical result: adversarial fine-tuning on a CNN for BLM time-series classification yields up to 18.6% robust-accuracy gain with no clean-accuracy loss. This is obtained by instantiating a differentiable wrapper around normalization/padding/structured perturbations and running gradient attacks via external libraries (Foolbox, ART). No equations, self-definitional loops, or load-bearing self-citations appear in the provided text that would reduce the accuracy gain to a fitted parameter or input by construction. The threat model is presented as an assumption justified by real-world plausibility rather than derived from prior results of the same authors. The central claim therefore rests on standard adversarial-training methodology applied to an external pipeline and does not collapse into tautology.
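For reference, the "standard adversarial-training methodology" invoked here is PGD-style inner maximization in the sense of Madry et al. [21]. A minimal fine-tuning sketch, with eps, step size, and iteration count chosen purely for illustration:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=0.1, alpha=0.02, steps=10):
    # Projected gradient descent on the input, confined to the eps-ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def fine_tune_step(model, optimizer, x, y):
    # One adversarial fine-tuning step: attack the current model, then
    # train on the adversarial windows (optionally mixed with clean ones).
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```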
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: the adversarial threat model based on real-world plausibility for time-series perturbations is appropriate for the LHC crystal-collimator application.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel (tag: unclear)
  Unclear: the relation between the paper passage and the cited Recognition theorem.
  Passage: "We instantiate a preprocessing-aware wrapper … encode time-series normalization, padding constraints, and structured perturbations as a lightweight differentiable wrapper … adversarial fine-tuning … improves robust accuracy by up to 18.6 %"
- IndisputableMonolith/Foundation/RealityFromDistinction.lean, theorem reality_from_one_distinction (tag: unclear)
  Unclear: the relation between the paper passage and the cited Recognition theorem.
  Passage: "We formalize a local robustness property … threat model based on real-world plausibility … sequence-level robustness for sliding-window classification"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing Robust Adversarial Examples. In: ICML. Proceedings of Machine Learning Research, vol. 80, pp. 284–. PMLR (2018)
- [2] Bagnall, A.J., Dau, H.A., Lines, J., Flynn, M., Large, J., Bostrom, A., Southam, P., Keogh, E.J.: The UEA multivariate time series classification archive, 2018. CoRR abs/1811.00075 (2018)
- [3] Bai, T., Luo, J., Zhao, J., Wen, B., Wang, Q.: Recent Advances in Adversarial Training for Adversarial Robustness. In: IJCAI. pp. 4312–4321. ijcai.org (2021)
- [4] Bartocci, E., Deshmukh, J.V., Donzé, A., Fainekos, G., Maler, O., Nickovic, D., Sankaranarayanan, S.: Specification-Based Monitoring of Cyber-Physical Systems: A Survey on Theory, Tools and Applications. In: Lectures on Runtime Verification. Lecture Notes in Computer Science, vol. 10457, pp. 135–175. Springer (2018)
- [5] Belkhouja, T., Doppa, J.R.: Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features (Extended Abstract). In: IJCAI. pp. 6845–6850. ijcai.org (2023)
- [6] Benedikt, M., Bartmann, W., Burnet, J.P., Carli, C., Chance, A., Craievich, P., Giovannozzi, M., Grojean, C., Gutleber, J., Hanke, K., Henriques, A., Janot, P., Lourenco, C., Mangano, M., Otto, T., Poole, J.H., Rajagopalan, S., Raubenheimer, T., Todesco, E., Ulrici, L., Watson, T.P., Wilkinson, G., Zimmermann, F.: Future Circular Collider Feasibility St...
- [7] Brix, C., Müller, M.N., Bak, S., Johnson, T.T., Liu, C.: First three years of the international verification of neural networks competition (VNN-COMP). Int. J. Softw. Tools Technol. Transf. 25(3), 329–339 (2023)
- [8] Carlini, N., Wagner, D.A.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In: IEEE Symposium on Security and Privacy Workshops. pp. 1–7. IEEE Computer Society (2018)
- [9] Cohen, J., Rosenfeld, E., Kolter, J.Z.: Certified Adversarial Robustness via Randomized Smoothing. In: ICML. Proceedings of Machine Learning Research, vol. 97, pp. 1310–1320. PMLR (2019)
- [10]
- [11] Ding, D., Zhang, M., Feng, F., Huang, Y., Jiang, E., Yang, M.: Black-Box Adversarial Attack on Time Series Classification. In: AAAI. pp. 7358–7368. AAAI Press (2023)
- [12] Dix, M., Manca, G., Okafor, K.C., Borrison, R., Kirchheim, K., Sharma, D., R, C.K., Maduskar, D., Ortmeier, F.: Measuring the Robustness of ML Models Against Data Quality Issues in Industrial Time Series Data. In: INDIN. pp. 1–8. IEEE (2023)
- [13] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust Physical-World Attacks on Deep Learning Visual Classification. In: CVPR. pp. 1625–1634. Computer Vision Foundation / IEEE Computer Society (2018)
- [14] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and Harnessing Adversarial Examples. In: ICLR (Poster) (2015)
- [15] Han, X., Hu, Y., Foschini, L., Chinitz, L., Jankelson, L., Ranganath, R.: Deep learning models for electrocardiograms are susceptible to adversarial attack. Nature Medicine 26(3), 360–363 (2020). https://doi.org/10.1038/s41591-020-0791-x
- [16] Holzer, E., Dehning, B., Effinger, E., Emery, J., Ferioli, G., Gonzalez, J., Gschwendtner, E., Guaglio, G., Hodgson, M., Kramer, D., Leitner, R., Ponce, L., Prieto, V., Stockner, M., Zamantzas, C.: Beam loss monitoring system for the LHC. In: IEEE Nuclear Science Symposium Conference Record, 2005. vol. 2, pp. 1052–1056 (2005). https://doi.org/10.1109/N...
- [17] Jiang, L., Ma, X., Chen, S., Bailey, J., Jiang, Y.G.: Black-box Adversarial Attacks on Video Recognition Models. In: ACM Multimedia. pp. 864–872. ACM (2019)
- [18] Katz, G., Barrett, C.W., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In: CAV (1). Lecture Notes in Computer Science, vol. 10426, pp. 97–117. Springer (2017)
- [19] Li, H., Cui, Y., Wang, S., Liu, J., Qin, J., Yang, Y.: Multivariate Financial Time-Series Prediction With Certified Robustness. IEEE Access 8, 109133–109143 (2020)
- [20] Lopez-Miguel, I.D., Adiego, B.F., Ghawash, F., Viñuela, E.B.: Verification of Neural Networks Meets PLC Code: An LHC Cooling Tower Control System at CERN. In: EANN. Communications in Computer and Information Science, vol. 1826, pp. 420–432. Springer (2023)
- [21] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards Deep Learning Models Resistant to Adversarial Attacks. In: ICLR (Poster). OpenReview.net (2018)
- [22] Malara, A., ATLAS and CMS Collaborations: Exploring jets: substructure and flavour tagging in CMS and ATLAS. In: Proceedings of 12th Large Hadron Collider Physics Conference — PoS(LHCP2024). p. 150. Sissa Medialab, Boston, USA (2024). https://doi.org/10.22323/1.478.0150
- [23] Mode, G.R., Hoque, K.A.: Adversarial Examples in Deep Learning for Multivariate Time Series Regression. In: AIPR. pp. 1–10. IEEE (2020)
- [24] Mohapatra, J., Weng, T.W., Chen, P.Y., Liu, S., Daniel, L.: Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations. In: CVPR. pp. 241–249. Computer Vision Foundation / IEEE (2020)
- [25] Nicolae, M.I., Sinn, M., Tran, M.N., Buesser, B., Rawat, A., Wistuba, M., Zantedeschi, V., Baracaldo, N., Chen, B., Ludwig, H., Molloy, I.M., Edwards, B.: Adversarial Robustness Toolbox v1.0.0 (2019). https://doi.org/10.48550/arXiv.1807.01069, arXiv:1807.01069 [cs]
- [26] Papernot, N., McDaniel, P.D., Swami, A., Harang, R.E.: Crafting adversarial input sequences for recurrent neural networks. In: MILCOM. pp. 49–54. IEEE (2016)
- [27] Paterson, C., Wu, H., Grese, J., Calinescu, R., Pasareanu, C.S., Barrett, C.W.: DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers. In: SAFECOMP. Lecture Notes in Computer Science, vol. 12852, pp. 3–17. Springer (2021)
- [28] Rauber, J., Zimmermann, R., Bethge, M., Brendel, W.: Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. J. Open Source Softw. 5(53), 2607 (2020)
- [29] Redaelli, S., Aberle, O., Abramov, A., Bruce, R., Cai, R., Calviani, M., D’Andrea, M., Demassieux, Q., Dewhurst, K., Di Castro, M., Esposito, L., Gilardoni, S., Hermes, P., Lindström, B., Lechner, A., Masi, A., Matheson, E., Mirarchi, D., Potoine, J.B., Ricci, G., Rodin, V., Seidenbinder, R., Paiva, S.S., Bandiera, L., ...
- [30] Ricci, G., D’Andrea, M., Di Castro, M., Matheson, E., Mirarchi, D., Mostacci, A., Redaelli, S.: Machine learning based crystal collimator alignment optimization. Physical Review Accelerators and Beams 27(9), 093001 (2024). https://doi.org/10.1103/PhysRevAccelBeams.27.093001
- [31] Sarkar, U., CMS Collaboration: Run 3 performance and advances in heavy-flavor jet tagging in CMS. In: Proceedings of 42nd International Conference on High Energy Physics — PoS(ICHEP2024). p. 992. Sissa Medialab, Prague, Czech Republic (2025). https://doi.org/10.22323/1.476.0992
- [32] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. In: ICLR (Poster) (2014)
- [33] Uesato, J., O’Donoghue, B., Kohli, P., Oord, A.v.d.: Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. In: ICML. Proceedings of Machine Learning Research, vol. 80, pp. 5032–5041. PMLR (2018)
- [34] Wang, S., Zhang, H., Xu, K., Lin, X., Jana, S., Hsieh, C.J., Kolter, J.Z.: Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification (2021). https://doi.org/10.48550/arXiv.2103.06624, arXiv:2103.06624 [cs]
- [35] Wang, Z., Yan, W., Oates, T.: Time series classification from scratch with deep neural networks: A strong baseline. In: IJCNN. pp. 1578–1585. IEEE (2017)
- [36] Wu, H., Isac, O., Zeljic, A., Tagomori, T., Daggitt, M.L., Kokke, W., Refaeli, I., Amir, G., Julian, K., Bassan, S., Huang, P., Lahav, O., Wu, M., Zhang, M., Komendantskaya, E., Katz, G., Barrett, C.W.: Marabou 2.0: A Versatile Formal Analyzer of Neural Networks. In: CAV (2). Lecture Notes in Computer Science, vol. 14682, pp. 249–264. Springer (2024)
- [37] Wu, T., Wang, X., Qiao, S., Xian, X., Liu, Y., Zhang, L.: Small perturbations are enough: Adversarial attacks on time series prediction. Inf. Sci. 587, 794–812 (2022)
- [38] Zügner, D., Akbarnejad, A., Günnemann, S.: Adversarial Attacks on Neural Networks for Graph Data. In: IJCAI. pp. 6246–6250. ijcai.org (2019)