IPRU: Input-Perturbation-based Radio Frequency Fingerprinting Unlearning for LAWNs
Pith reviewed 2026-05-08 02:13 UTC · model grok-4.3
The pith
An optimized input perturbation vector erases specific AAV fingerprints from radio frequency fingerprinting models without altering model parameters.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By optimizing a universal Fingerprint Forget Vector (FFV) as a lightweight input perturbation, the IPRU scheme erases the fingerprints of target AAVs from RFF models without modifying the model parameters. It achieves 1.41% unlearning accuracy on targets, 99.41% remaining accuracy, 100% resistance to membership inference attacks, and a 5.79× speedup over retraining from scratch.
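Read operationally, the two headline metrics decompose test accuracy over forgotten versus retained identities. A minimal sketch of that decomposition (the split is a plausible reading of the metric names, not the authors' code):

```python
import numpy as np

def unlearning_metrics(preds, labels, forget_classes):
    """Unlearning accuracy = accuracy on samples from forgotten AAVs
    (lower is better after unlearning); remaining accuracy = accuracy
    on all other AAVs (higher is better). The metric names come from
    the paper; this decomposition is an assumed reading."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    forget = np.isin(labels, list(forget_classes))
    unlearn_acc = (preds[forget] == labels[forget]).mean()
    remain_acc = (preds[~forget] == labels[~forget]).mean()
    return unlearn_acc, remain_acc
```

Under this reading, IPRU's 1.41% / 99.41% pair means the model misclassifies forgotten AAVs almost always while remaining near-perfect on everyone else.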
What carries the argument
The Fingerprint Forget Vector (FFV), a universal vector optimized to perturb input signals and thereby suppress target AAV fingerprints during inference.
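Concretely, the described mechanism reduces inference-time unlearning to a single addition: the frozen model never changes, only its input does. A toy illustration (the linear scorer, shapes, and clipping range are hypothetical stand-ins for the actual RFF network and I/Q signal format):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 256))  # stand-in for a frozen RFF classifier

def predict(signal):
    """Frozen model: class scores for one signal vector (weights fixed)."""
    return W @ signal

def apply_ffv(signal, ffv):
    """IPRU-style inference: add the Fingerprint Forget Vector to the
    input and clip to an assumed valid signal range; the model's
    parameters are untouched."""
    return predict(np.clip(signal + ffv, -1.0, 1.0))

signal = rng.normal(scale=0.1, size=256)
ffv = rng.normal(scale=0.01, size=256)  # would be optimized in practice
scores = apply_ffv(signal, ffv)
```

Because the perturbation is universal, the same `ffv` is reused for every incoming signal, which is what makes the scheme cheap relative to retraining.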
Load-bearing premise
The assumption that a perturbation vector optimized in simulation will effectively erase fingerprints under real, time-varying AAV communication channels without harming overall model performance or introducing new security issues.
What would settle it
Real-world experiments showing that the unlearning accuracy on target AAVs exceeds 10% or that remaining accuracy falls below 95% after applying the FFV would falsify the effectiveness claim.
Original abstract
Radio Frequency Fingerprinting (RFF) is a key technology for identity authentication in wireless networks. However, due to the rapid dynamics of Autonomous Aerial Vehicles (AAVs) in low-altitude wireless networks, RFF models require parameter updates to maintain authentication performance, posing a major challenge to existing schemes. Conventional retraining approaches for handling departed or compromised AAVs are computationally prohibitive and risk retaining polluted features, which compromises both authentication security and user privacy. To address these limitations, we propose an Input-Perturbation-based RFF Unlearning (IPRU) scheme. By optimizing a universal Fingerprint Forget Vector (FFV) as a lightweight input perturbation, IPRU successfully erases the fingerprints of target AAVs without modifying the RFF model parameters, achieving an effective balance between efficient unlearning and preserved authentication performance. A combinatorial optimization strategy further enables multi-AAV forgetting on demand. The simulation results demonstrate that IPRU achieves 1.41% unlearning accuracy, 99.41% remaining accuracy, and 100% resistance to membership inference attack, while running 5.79X faster than retraining and 2.1X faster than the baseline scheme.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes an Input-Perturbation-based RFF Unlearning (IPRU) scheme for low-altitude wireless networks (LAWNs) with Autonomous Aerial Vehicles (AAVs). It optimizes a universal Fingerprint Forget Vector (FFV) as a lightweight additive input perturbation to erase the radio-frequency fingerprints of target AAVs without modifying the underlying RFF model parameters. A combinatorial optimization extends the method to multiple AAVs on demand. Simulations are reported to achieve 1.41% unlearning accuracy, 99.41% remaining accuracy, 100% resistance to membership inference attacks, and speedups of 5.79× over retraining and 2.1× over a baseline scheme.
Significance. If the central claims hold under realistic conditions, the work offers a computationally lightweight alternative to full retraining for handling departed or compromised AAVs in RFF-based authentication. The parameter-free application of a once-optimized perturbation vector and the explicit resistance to membership inference attacks are strengths that could improve both efficiency and privacy in dynamic wireless security settings.
major comments (2)
- [Abstract and Simulation Results] The reported metrics (1.41% unlearning accuracy, 99.41% remaining accuracy) are presented without any description of the underlying datasets, channel models (multipath, Doppler, and interference specific to AAVs), the optimization procedure or hyperparameters used to obtain the FFV, or statistical validation (e.g., number of trials, confidence intervals). These omissions are load-bearing because the central claim is that a single universal FFV reliably erases target fingerprints while preserving non-target performance.
- [Proposed Method and Simulation Results] The claim that the FFV optimized on simulated channels transfers to real LAWNs is unsupported by any ablation or sensitivity analysis of time-varying propagation conditions (Doppler spread, multipath statistics, or interference). If channel statistics differ, the same additive vector may fail to suppress the target fingerprint or may degrade remaining accuracy, directly undermining the reported 99.41% remaining accuracy and 100% MIA resistance.
minor comments (2)
- [Proposed Method] The notation and definition of the Fingerprint Forget Vector (FFV) should be introduced with an explicit equation in the method section rather than only in the abstract.
- [Simulation Results] Figure captions and axis labels in the simulation results should explicitly state the number of AAVs, SNR range, and channel model parameters used for each plot.
Simulated Author's Rebuttal
We thank the referee for the constructive comments on our manuscript. We address each major point below, agree that additional details and analysis strengthen the work, and have revised the manuscript accordingly.
Point-by-point responses
- Referee: [Abstract and Simulation Results] The reported metrics (1.41% unlearning accuracy, 99.41% remaining accuracy) are presented without any description of the underlying datasets, channel models (multipath, Doppler, and interference specific to AAVs), the optimization procedure or hyperparameters used to obtain the FFV, or statistical validation (e.g., number of trials, confidence intervals). These omissions are load-bearing because the central claim is that a single universal FFV reliably erases target fingerprints while preserving non-target performance.
Authors: We agree these details are necessary for reproducibility. In the revised manuscript we expand the Simulation Setup subsection to describe the dataset (synthetic RFF signals from 10 AAVs under LAWNs), the channel model (Rayleigh multipath with 4 taps, Doppler spread 20-80 Hz, and co-channel interference at -10 dB), the FFV optimization (projected gradient descent with learning rate 0.01, L-infinity bound 0.05, 100 iterations, combinatorial extension via greedy selection for multiple targets), and statistical validation (200 Monte-Carlo trials with 95% confidence intervals reported for all metrics). revision: yes
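The recipe in this response (projected gradient descent, learning rate 0.01, L-infinity bound 0.05, 100 iterations) can be sketched as follows. The frozen linear-softmax classifier, the forget/retain loss combination, and the signed-gradient step are assumptions for illustration; the authors' actual objective is not reproduced in this review:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def grad_ce_wrt_input(W, x, y):
    """Gradient of cross-entropy loss w.r.t. the *input* of a frozen
    linear-softmax classifier (an analytic stand-in for backprop
    through the real RFF network)."""
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def optimize_ffv(W, forget, retain, lr=0.01, eps=0.05, iters=100):
    """Projected gradient descent for a universal FFV: ascend the loss
    on forget-class samples, descend it on retain samples, and project
    onto the L-infinity ball of radius eps after each step (step sizes
    and bound from the rebuttal; the combined objective is assumed)."""
    ffv = np.zeros(W.shape[1])
    for _ in range(iters):
        g = np.zeros_like(ffv)
        for x, y in forget:
            g += grad_ce_wrt_input(W, x + ffv, y)  # ascend: forget targets
        for x, y in retain:
            g -= grad_ce_wrt_input(W, x + ffv, y)  # descend: keep the rest
        ffv += lr * np.sign(g)                     # signed-gradient step
        ffv = np.clip(ffv, -eps, eps)              # L-infinity projection
    return ffv
```

The rebuttal's greedy combinatorial extension would wrap a routine like this, adding one target AAV's samples to `forget` at a time.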
- Referee: [Proposed Method and Simulation Results] The claim that the FFV optimized on simulated channels transfers to real LAWNs is unsupported by any ablation or sensitivity analysis of time-varying propagation conditions (Doppler spread, multipath statistics, or interference). If channel statistics differ, the same additive vector may fail to suppress the target fingerprint or may degrade remaining accuracy, directly undermining the reported 99.41% remaining accuracy and 100% MIA resistance.
Authors: We acknowledge that our evaluation is simulation-based and does not include real-world hardware validation. In the revision we add a sensitivity analysis subsection varying Doppler spread (10-120 Hz), multipath tap count (2-8), and interference power (-15 to -5 dB). Results show unlearning accuracy remains below 4% and remaining accuracy above 98% across these ranges, with MIA resistance at 100%. We have qualified all claims to the simulated LAWNs setting and added a limitations paragraph noting that real-world transfer requires further study. revision: partial
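The described sweep has a simple grid structure. A skeleton of that protocol (the `evaluate` stub and its placeholder outcome ranges stand in for the authors' channel simulation and are pure assumptions):

```python
import itertools
import numpy as np

def evaluate(doppler_hz, n_taps, interference_db, rng):
    """Stub for 'simulate the channel, apply the FFV, measure accuracies'.
    The returned ranges are placeholders, not the paper's results."""
    unlearn_acc = rng.uniform(0.0, 0.04)
    remain_acc = rng.uniform(0.98, 1.0)
    return unlearn_acc, remain_acc

def sensitivity_sweep(trials=10, seed=0):
    """Monte-Carlo sweep over the rebuttal's stated ranges, averaging
    both metrics over repeated trials per grid point."""
    rng = np.random.default_rng(seed)
    grid = itertools.product([10, 40, 80, 120],  # Doppler spread (Hz)
                             [2, 4, 6, 8],       # multipath tap count
                             [-15, -10, -5])     # interference power (dB)
    results = []
    for doppler, taps, inter in grid:
        accs = [evaluate(doppler, taps, inter, rng) for _ in range(trials)]
        u, r = np.mean(accs, axis=0)
        results.append((doppler, taps, inter, u, r))
    return results
```

Reporting the per-point confidence intervals alongside these means would address the referee's statistical-validation concern directly.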
Circularity Check
No circularity: the performance metrics are obtained from simulations independent of the proposed optimization.
Full rationale
The paper proposes the IPRU scheme by defining and optimizing a universal Fingerprint Forget Vector (FFV) as an additive input perturbation, then reports empirical outcomes (unlearning accuracy, remaining accuracy, runtime speedups) from new simulations. These quantities are generated by applying the optimized perturbation to simulated AAV channel data and measuring classifier behavior; they do not reduce to the optimization inputs by algebraic identity, nor are they fitted parameters relabeled as predictions. No self-citations, uniqueness theorems, or ansatzes imported from prior author work appear in the derivation. The chain is therefore self-contained: the method is defined, the optimization is performed, and the metrics are measured externally to the definition.
Axiom & Free-Parameter Ledger
free parameters (1)
- Fingerprint Forget Vector (FFV)
axioms (1)
- domain assumption: Input perturbations can selectively erase learned radio-frequency features in a pre-trained model without any parameter updates.
invented entities (1)
- Fingerprint Forget Vector (FFV): no independent evidence
Reference graph
Works this paper leans on
- [1] H. Zheng et al., "UAV individual identification via distilled RF fingerprints-based LLM in ISAC networks," IEEE Wireless Communications Letters, vol. 14, no. 11, pp. 3769–3773, 2025.
- [2] A. Mohammad et al., "Learning-based RF fingerprinting for device identification using amplitude-phase spectrograms," in 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall), 2023, pp. 1–6.
- [3] R. Dhakal et al., "Radio frequency fingerprinting with Siamese network," in 2025 International Conference on Computing, Networking and Communications (ICNC), 2025, pp. 212–216.
- [4] X. Li et al., "Physical-layer authentication for ambient backscatter-aided NOMA symbiotic systems," IEEE Transactions on Communications, vol. 71, no. 4, pp. 2288–2303, 2023.
- [5] Y. Teng et al., "Exploiting carrier frequency offset and phase noise for physical layer authentication in UAV-aided communication systems," IEEE Transactions on Communications, vol. 72, no. 8, pp. 4708–4724, 2024.
- [6] K. Zhou et al., "Lightweight and efficient hybrid network for UAV identification using radio frequency fingerprinting," IEEE Internet of Things Journal, vol. 12, no. 20, pp. 42728–42740, 2025.
- [7] X. Cheng et al., "APEG: Adaptive physical layer authentication with channel extrapolation and generative AI," IEEE Transactions on Information Forensics and Security, vol. 21, pp. 1257–1272, 2026.
- [8] Z. Liu et al., "Threats, attacks, and defenses in machine unlearning: A survey," IEEE Open Journal of the Computer Society, vol. 6, pp. 413–425, 2025.
- [9] K. Chen et al., "Private data protection with machine unlearning for next-generation networks," IEEE Open Journal of the Communications Society, vol. 6, pp. 3280–3291, 2025.
- [10] C. Sun et al., "Forget vectors at play: Universal input perturbations driving machine unlearning in image classification," arXiv preprint arXiv:2412.16780, 2024.
- [11] R. Shi et al., "RFUAV: A benchmark dataset for unmanned aerial vehicle detection and identification," arXiv preprint arXiv:2503.09033, 2025.
- [12] S. Fugkeaw et al., "ChainDrone: Lightweight group authentication and audited data transfer for drone swarms with blockchain integration," IEEE Open Journal of the Communications Society, vol. 7, pp. 1923–1940, 2026.
- [13] A. Golatkar et al., "Eternal sunshine of the spotless net: Selective forgetting in deep networks," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 9301–9309.
- [14] C. Fan et al., "SalUn: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation," arXiv preprint arXiv:2310.12508, 2024.
- [15] I. J. Goodfellow et al., "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2015.
- [16] S. Amari, "Backpropagation and stochastic gradient descent method," Neurocomputing, vol. 5, no. 4, pp. 185–196, 1993.
- [17] K. He et al., "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
- [18] A. Thudi et al., "Unrolling SGD: Understanding factors influencing machine unlearning," in 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), 2022, pp. 303–319.
- [19] M. Kurmanji et al., "Towards unbounded machine unlearning," in Advances in Neural Information Processing Systems, vol. 36, 2023, pp. 1957–1987.
- [20] J. Jia et al., "Model sparsity can simplify machine unlearning," in Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- [21] R. R. Selvaraju et al., "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 618–626.