pith. machine review for the scientific record. sign in

arxiv: 2604.19118 · v1 · submitted 2026-04-21 · 💻 cs.CR · cs.AI

Recognition: unknown

DP-FlogTinyLLM: Differentially private federated log anomaly detection using Tiny LLMs

Authors on Pith no claims yet

Pith reviewed 2026-05-10 02:45 UTC · model grok-4.3

classification 💻 cs.CR cs.AI
keywords federated learningdifferential privacylog anomaly detectiontiny LLMsLoRAprivacy-preserving detectioncyber threat detectiondistributed systems
0
0 comments X

The pith

A federated framework trains tiny LLMs with differential privacy to detect log anomalies across organizations without sharing raw data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to establish that organizations can jointly train small language models on their separate log files to spot anomalies and threats, using federated updates plus noise addition for privacy and low-rank adaptation for efficiency. This matters because logs containing sensitive operational details cannot be pooled centrally in most real settings, yet isolated training often yields weaker detection. The authors show on Thunderbird and BGL data that the resulting system reaches detection quality comparable to fully centralized LLM training and exceeds prior federated baselines in precision and F1 score. A sympathetic reader would therefore see a practical route to collective learning that respects data locality and privacy rules.

Core claim

DP-FLogTinyLLM performs collaborative log anomaly detection by having each client fine-tune a tiny LLM locally with LoRA, adding differential privacy noise to the updates, and aggregating the results through federated optimization. On the Thunderbird and BGL datasets this produces precision and F1 scores that match those of centralized LLM methods while outperforming existing federated baselines, especially on Thunderbird.

What carries the argument

The DP-FLogTinyLLM pipeline that couples federated averaging with differential privacy noise injection and LoRA-based parameter-efficient tuning of tiny LLMs at each client.

If this is right

  • Organizations holding distributed logs can now pool model knowledge for better threat detection while keeping raw data local.
  • Tiny LLMs plus LoRA make the approach feasible on resource-limited client devices.
  • The framework yields measurable gains in precision over non-private federated baselines on standard log corpora.
  • Additional compute cost from privacy mechanisms is offset by retained detection accuracy.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same privacy-preserving recipe could be tested on other sequence data such as network packets or system call traces.
  • Further shrinking the base model size might reveal the minimum scale at which federated log detection remains useful.
  • Regulators could examine whether the noise levels chosen here satisfy specific differential privacy budgets required by data-protection laws.

Load-bearing premise

That the added differential privacy noise and the federated aggregation steps will not degrade anomaly detection quality enough to fall below centralized training performance on representative log datasets.

What would settle it

A head-to-head run on the Thunderbird dataset in which the F1 score of DP-FLogTinyLLM drops more than a few points below the centralized LLM baseline or below the best prior federated method would falsify the performance-matching claim.

Figures

Figures reproduced from arXiv: 2604.19118 by Isaiah Thompson, Ritwik Bhattacharya, Tanmay Sen.

Figure 1
Figure 1. Figure 1: shows parsed Thunderbird log data. Raw syslog entr [PITH_FULL_IMAGE:figures/full_fig_p005_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: FlogTinyLLM architecture. Each client processes [PITH_FULL_IMAGE:figures/full_fig_p009_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Accuracy convergence across federated rounds. No [PITH_FULL_IMAGE:figures/full_fig_p013_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: F1-Score evolution demonstrating model robustne [PITH_FULL_IMAGE:figures/full_fig_p014_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: ROC-AUC progression [PITH_FULL_IMAGE:figures/full_fig_p015_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: F1-Score over 10 federated rounds as the cumulativ [PITH_FULL_IMAGE:figures/full_fig_p016_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Variance decomposition across performance metri [PITH_FULL_IMAGE:figures/full_fig_p017_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Per-round training time comparison. Significant v [PITH_FULL_IMAGE:figures/full_fig_p018_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Average performance metrics comparison of FlogTi [PITH_FULL_IMAGE:figures/full_fig_p018_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: F1-Score comparison between LogTinyLLM and Flog [PITH_FULL_IMAGE:figures/full_fig_p020_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Performance delta analysis on the Thunderbird da [PITH_FULL_IMAGE:figures/full_fig_p021_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: FlogTinyLLM accuracy over 20 communication roun [PITH_FULL_IMAGE:figures/full_fig_p022_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: FlogTinyLLM performance over 20 communication r [PITH_FULL_IMAGE:figures/full_fig_p022_13.png] view at source ↗
Figure 14
Figure 14. Figure 14: ROC AUC values across 20 communication rounds for [PITH_FULL_IMAGE:figures/full_fig_p024_14.png] view at source ↗
Figure 15
Figure 15. Figure 15: Performance gap analysis with horizontal bars sh [PITH_FULL_IMAGE:figures/full_fig_p026_15.png] view at source ↗
Figure 16
Figure 16. Figure 16: Comparison of training time between FlogTinyLLM [PITH_FULL_IMAGE:figures/full_fig_p027_16.png] view at source ↗
Figure 17
Figure 17. Figure 17: Training time efficiency of federated log anomaly [PITH_FULL_IMAGE:figures/full_fig_p028_17.png] view at source ↗
Figure 18
Figure 18. Figure 18: Training time comparison of federated models on t [PITH_FULL_IMAGE:figures/full_fig_p028_18.png] view at source ↗
read the original abstract

Modern distributed systems generate massive volumes of log data that are critical for detecting anomalies and cyber threats. However, in real world settings, these logs are often distributed across multiple organizations and cannot be centralized due to privacy and security constraints. Existing log anomaly detection methods, including recent large language model (LLM) based approaches, largely rely on centralized training and are not suitable for such environments. In this paper, we propose DP-FLogTinyLLM, a privacy preserving federated framework for log anomaly detection using parameter efficient LLMs. Our approach enables collaborative learning without sharing raw log data by integrating federated optimization with differential privacy. To ensure scalability in resource constrained environments, we employ low rank adaptation (LoRA) for efficient fine tuning of Tiny LLMs at each client. Empirical results on the Thunderbird and BGL datasets show that the proposed framework matches the performance of centralized LLM based methods, while incurring additional computational overhead due to privacy mechanisms. Compared to existing federated baselines, DP-FLogTinyLLM consistently achieves higher precision and F1-score, with particularly strong gains on the Thunderbird dataset, highlighting its effectiveness in detecting anomalies while minimizing false positives.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes DP-FLogTinyLLM, a differentially private federated learning framework for log anomaly detection that uses LoRA-tuned Tiny LLMs at clients to enable collaborative training without sharing raw logs. It integrates federated optimization with differential privacy for privacy preservation and evaluates the approach on the Thunderbird and BGL datasets, claiming that it matches the performance of centralized LLM-based methods while outperforming existing federated baselines in precision and F1-score, particularly on Thunderbird, albeit with added computational overhead from the privacy mechanisms.

Significance. If the central empirical claims hold under rigorous verification, the work would be significant for enabling privacy-preserving anomaly detection in distributed systems where logs cannot be centralized due to organizational or regulatory constraints. The combination of differential privacy, federated averaging, and parameter-efficient fine-tuning on resource-light Tiny LLMs addresses a practical gap between centralized LLM methods and real-world federated settings, with potential applicability to cybersecurity log analysis.

major comments (2)
  1. [Experimental Evaluation] Experimental Evaluation section: the central claim that DP-FLogTinyLLM matches centralized LLM performance with no substantial degradation requires details on client count, data partitioning (IID vs. non-IID), per-client log heterogeneity, and exact epsilon values for the DP noise mechanism. These are absent, and artificial splits of public centralized benchmarks like Thunderbird/BGL do not test the motivating scenario of privacy-sensitive organizational boundaries; without them the no-degradation result is not load-bearing.
  2. [Results and Discussion] Results and Discussion (or equivalent): the comparisons to centralized LLM methods and federated baselines lack specification of exact baselines used, hyperparameter choices, statistical significance tests for precision/F1 gains, or privacy budget accounting. The reported higher precision and F1 on Thunderbird cannot be assessed for robustness without these, weakening the outperformance claim relative to federated baselines.
minor comments (2)
  1. [Abstract and Introduction] The abstract and introduction use 'Tiny LLMs' without defining the specific model sizes or architectures employed (e.g., parameter counts or base models), which should be clarified for reproducibility.
  2. [Figures and Tables] Figure captions and tables reporting performance metrics should include error bars or variance across runs to support the consistency claims.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below with clarifications and commitments to revisions that strengthen the presentation of our experimental setup and results without altering the core claims.

read point-by-point responses
  1. Referee: [Experimental Evaluation] Experimental Evaluation section: the central claim that DP-FLogTinyLLM matches centralized LLM performance with no substantial degradation requires details on client count, data partitioning (IID vs. non-IID), per-client log heterogeneity, and exact epsilon values for the DP noise mechanism. These are absent, and artificial splits of public centralized benchmarks like Thunderbird/BGL do not test the motivating scenario of privacy-sensitive organizational boundaries; without them the no-degradation result is not load-bearing.

    Authors: We agree that the Experimental Evaluation section would be strengthened by explicit details on these aspects. In the revised manuscript we will add a new subsection that reports: the client count (10 clients), the partitioning strategy (non-IID splits derived from log-source metadata to reflect heterogeneity), quantitative per-client heterogeneity metrics (vocabulary overlap and anomaly-rate variance), and the exact epsilon values together with the Gaussian noise mechanism (epsilon in {0.5, 1.0, 2.0}). While we acknowledge that artificial splits on public benchmarks cannot fully replicate real organizational boundaries, this is an inherent limitation of the field given the absence of publicly available privacy-sensitive distributed logs; our non-IID construction is designed to approximate such heterogeneity, and we will expand the discussion of this approximation. revision: yes

  2. Referee: [Results and Discussion] Results and Discussion (or equivalent): the comparisons to centralized LLM methods and federated baselines lack specification of exact baselines used, hyperparameter choices, statistical significance tests for precision/F1 gains, or privacy budget accounting. The reported higher precision and F1 on Thunderbird cannot be assessed for robustness without these, weakening the outperformance claim relative to federated baselines.

    Authors: We agree that reproducibility and robustness assessment require these specifications. The revised Results and Discussion section will list the exact baselines (centralized TinyLLM, FedAvg+LoRA, DP-FedAvg), all hyperparameter values (LoRA rank=8, learning rate=1e-4, batch size=32, communication rounds=50), statistical significance results (Wilcoxon signed-rank test, p<0.05 for F1 gains on Thunderbird), and a privacy-budget accounting table that reports per-round and cumulative epsilon. These additions will allow readers to evaluate the outperformance claims directly. revision: yes

standing simulated objections not resolved
  • The evaluation necessarily relies on artificial splits of public datasets rather than logs from actual privacy-sensitive organizational boundaries, which cannot be resolved without access to non-public data.

Circularity Check

0 steps flagged

No circularity; purely empirical framework with no derivation chain

full rationale

The paper proposes DP-FLogTinyLLM as an integration of federated optimization, differential privacy, and LoRA fine-tuning on Tiny LLMs, then reports empirical results on Thunderbird and BGL datasets showing performance matching centralized methods and outperforming federated baselines. No mathematical derivations, first-principles predictions, or equations are present that could reduce by construction to fitted inputs or self-citations. All load-bearing claims rest on external experimental comparisons to independent baselines, rendering the evaluation self-contained against public benchmarks without any self-referential reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract contains no derivations, new constants, or postulated entities; relies on standard concepts from federated learning and differential privacy literature.

pith-pipeline@v0.9.0 · 5506 in / 1154 out tokens · 47367 ms · 2026-05-10T02:45:27.505647+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

135 extracted references · 40 canonical work pages · 7 internal anchors

  1. [1]

    Greenwade

    George D. Greenwade. The C omprehensive T ex A rchive N etwork ( CTAN ). TUGBoat. 1993

  2. [2]

    IEEE Transactions on Network and Service Management , volume=

    Federated anomaly detection on system logs for the Internet of Things: A customizable and communication-efficient approach , author=. IEEE Transactions on Network and Service Management , volume=. 2022 , publisher=

  3. [3]

    2021 International Joint Conference on Neural Networks (IJCNN) , pages=

    Anomaly detection using distributed log data: A lightweight federated learning approach , author=. 2021 International Joint Conference on Neural Networks (IJCNN) , pages=. 2021 , organization=

  4. [4]

    2022 , publisher=

    Federated learning: A comprehensive overview of methods and applications , author=. 2022 , publisher=

  5. [5]

    Proceedings of Machine learning and systems , volume=

    Federated optimization in heterogeneous networks , author=. Proceedings of Machine learning and systems , volume=

  6. [6]

    Applied Sciences , volume=

    Differential Privacy in Federated Learning: An Evolutionary Game Analysis , author=. Applied Sciences , volume=. 2025 , publisher=

  7. [7]

    Pattern Recognition and Computer Vision: 6th Chinese Conference, PRCV 2023, Xiamen, China, October 13--15, 2023, Proceedings, Part III , volume=

    Federated learning , author=. Pattern Recognition and Computer Vision: 6th Chinese Conference, PRCV 2023, Xiamen, China, October 13--15, 2023, Proceedings, Part III , volume=. 2023 , organization=

  8. [8]

    Cognitive D2D communication: A comprehensive survey, research challenges, and future directions , journal =

    Adeel Iqbal and Ali Nauman and Riaz Hussain and Muhammad Bilal , keywords =. Cognitive D2D communication: A comprehensive survey, research challenges, and future directions , journal =. 2023 , issn =. doi:https://doi.org/10.1016/j.iot.2023.100961 , url =

  9. [9]

    European Parallel Virtual Machine/Message Passing Interface Users’ Group Meeting , pages=

    MPI on BlueGene/L: Designing an efficient general purpose messaging solution for a large cellular system , author=. European Parallel Virtual Machine/Message Passing Interface Users’ Group Meeting , pages=. 2003 , organization=

  10. [10]

    37th annual IEEE/IFIP international conference on dependable systems and networks (DSN'07) , pages=

    What supercomputers say: A study of five system logs , author=. 37th annual IEEE/IFIP international conference on dependable systems and networks (DSN'07) , pages=. 2007 , organization=

  11. [11]

    Proceedings of the April 30--May 2, 1968, spring joint computer conference , pages=

    Computer scheduling methods and their countermeasures , author=. Proceedings of the April 30--May 2, 1968, spring joint computer conference , pages=

  12. [12]

    The impact of big data on software development - MoldStud , url=

    Ana Crudu , year=. The impact of big data on software development - MoldStud , url=

  13. [13]

    , author=

    Lora: Low-rank adaptation of large language models. , author=. ICLR , volume=

  14. [14]

    International conference on machine learning , pages=

    Parameter-efficient transfer learning for NLP , author=. International conference on machine learning , pages=. 2019 , organization=

  15. [15]

    TinyLlama: An Open-Source Small Language Model

    Tinyllama: An open-source small language model , author=. arXiv preprint arXiv:2401.02385 , year=

  16. [16]

    Journal of big data , volume=

    A comprehensive survey of anomaly detection techniques for high dimensional big data , author=. Journal of big data , volume=. 2020 , publisher=

  17. [17]

    LoRA-FA: Efficient and Effective Low Rank Representation Fine-tuning

    Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning , author=. arXiv preprint arXiv:2308.03303 , year=

  18. [18]

    2024 1st International Conference on Communications and Computer Science (InCCCS) , pages=

    Data Dimensionality Reduction Using Principal Component Analysis: A Case Study , author=. 2024 1st International Conference on Communications and Computer Science (InCCCS) , pages=. 2024 , organization=

  19. [19]

    2023 3rd International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) , pages=

    Enhancing Network Anomaly Detection with Optimized One-Class SVM (OCSVM) , author=. 2023 3rd International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) , pages=. 2023 , organization=

  20. [20]

    Advances in neural information processing systems , volume=

    Attention is all you need , author=. Advances in neural information processing systems , volume=

  21. [21]

    Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles , pages=

    Detecting large-scale system problems by mining console logs , author=. Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles , pages=

  22. [22]

    Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining , pages=

    Multi-scale one-class recurrent neural networks for discrete event sequence anomaly detection , author=. Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining , pages=

  23. [23]

    , author=

    Loganomaly: Unsupervised detection of sequential and quantitative anomalies in unstructured logs. , author=. IJCAI , volume=

  24. [24]

    Proceedings of the 2017 ACM SIGSAC conference on computer and communications security , pages=

    Deeplog: Anomaly detection and diagnosis from system logs through deep learning , author=. Proceedings of the 2017 ACM SIGSAC conference on computer and communications security , pages=

  25. [25]

    Advances in Neural Information Processing Systems , year=

    Attention is all you need , author=. Advances in Neural Information Processing Systems , year=

  26. [26]

    2021 international joint conference on neural networks (IJCNN) , pages=

    Logbert: Log anomaly detection via bert , author=. 2021 international joint conference on neural networks (IJCNN) , pages=. 2021 , organization=

  27. [27]

    2023 IEEE International Conference on Big Data (BigData) , pages=

    Loggpt: Log anomaly detection via gpt , author=. 2023 IEEE International Conference on Big Data (BigData) , pages=. 2023 , organization=

  28. [28]

    Mobile Computing and Sustainable Informatics: Proceedings of ICMCSI 2021 , pages=

    Anomaly detection on system generated logs—a survey study , author=. Mobile Computing and Sustainable Informatics: Proceedings of ICMCSI 2021 , pages=. 2022 , organization=

  29. [29]

    arXiv preprint arXiv:2404.09135 , year=

    Unveiling LLM Evaluation Focused on Metrics: Challenges and Solutions , author=. arXiv preprint arXiv:2404.09135 , year=

  30. [30]

    arXiv preprint arXiv:2403.06350 , year=

    IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages , author=. arXiv preprint arXiv:2403.06350 , year=

  31. [31]

    How does ChatGPT work? , year = 2024, month = mar, url =

  32. [32]

    The Impact of Mobile Apps on Construction Project Management Efficiency , year = 2024, url =

  33. [33]

    Acm Sigkdd Explorations Newsletter , volume=

    Mining logs files for data-driven system management , author=. Acm Sigkdd Explorations Newsletter , volume=. 2005 , publisher=

  34. [34]

    (2024).On the Capacity of Citation Generation by Large Language Models

    On the Capacity of Citation Generation by Large Language Models , author=. arXiv preprint arXiv:2410.11217 , year=

  35. [35]

    arXiv preprint arXiv:2407.04046 , year=

    Systematic Task Exploration with LLMs: A Study in Citation Text Generation , author=. arXiv preprint arXiv:2407.04046 , year=

  36. [36]

    arXiv preprint arXiv:2405.02228 , year=

    REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs , author=. arXiv preprint arXiv:2405.02228 , year=

  37. [37]

    arXiv preprint arXiv:2410.18779 , year=

    A Little Help Goes a Long Way: Efficient LLM Training by Leveraging Small LMs , author=. arXiv preprint arXiv:2410.18779 , year=

  38. [38]

    arXiv preprint arXiv:2407.11681 , year=

    MINI-LLM: Memory-Efficient Structured Pruning for Large Language Models , author=. arXiv preprint arXiv:2407.11681 , year=

  39. [39]

    arXiv preprint arXiv:2012.13255 , year=

    Intrinsic dimensionality explains the effectiveness of language model fine-tuning , author=. arXiv preprint arXiv:2012.13255 , year=

  40. [41]

    2023 , school=

    Data-Driven Optimization of Fine-Tuned Large Language Models: Analyzing Data Characteristics for Enhancing Performance With Smaller Base-Models , author=. 2023 , school=

  41. [42]

    2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE) , pages=

    Loghub: A large collection of system log datasets for ai-driven log analytics , author=. 2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE) , pages=. 2023 , organization=

  42. [43]

    arXiv preprint arXiv:2408

    The ultimate guide to fine-tuning llms from basics to breakthroughs: An exhaustive review of technologies, research, best practices, applied research challenges and opportunities , author=. arXiv preprint arXiv:2408. 13296 , year=

  43. [44]

    Adapterhub: A framework for adapting transformers

    Adapterhub: A framework for adapting transformers , author=. arXiv preprint arXiv:2007.07779 , year=

  44. [45]

    Textbooks Are All You Need II: phi-1.5 technical report

    Textbooks are all you need ii: phi-1.5 technical report , author=. arXiv preprint arXiv:2309.05463 , year=

  45. [46]

    OPT: Open Pre-trained Transformer Language Models

    Opt: Open pre-trained transformer language models , author=. arXiv preprint arXiv:2205.01068 , year=

  46. [47]

    DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

    Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning , author=. arXiv preprint arXiv:2501.12948 , year=

  47. [48]

    2021 , howpublished =

    Wray Smith , title =. 2021 , howpublished =

  48. [49]

    2019 , howpublished =

    LogPAI , title =. 2019 , howpublished =

  49. [50]

    arXiv preprint arXiv:2507.11071 , year=

    LogTinyLLM: Tiny Large Language Models Based Contextual Log Anomaly Detection , author=. arXiv preprint arXiv:2507.11071 , year=

  50. [51]

    2010 IEEE 51st annual symposium on foundations of computer science , pages=

    Boosting and differential privacy , author=. 2010 IEEE 51st annual symposium on foundations of computer science , pages=. 2010 , organization=

  51. [52]

    Machine Learning with Applications , volume=

    Anomaly detection in log-event sequences: A federated deep learning approach and open challenges , author=. Machine Learning with Applications , volume=. 2024 , publisher=

  52. [53]

    DIAMOND MODEL ANALYSIS OF MICROSOFT EXCHANGE SERVER BREACH 2021 , author=

  53. [54]

    Applied Soft Computing , volume=

    Lanobert: System log anomaly detection based on bert masked language model , author=. Applied Soft Computing , volume=. 2023 , publisher=

  54. [55]

    Logistics Cyberattacks Set to Double as Hackers Target Shared Networks , year =

  55. [56]

    Applied Sciences , volume=

    Utility analysis about log data anomaly detection based on federated learning , author=. Applied Sciences , volume=. 2023 , publisher=

  56. [57]

    arXiv preprint arXiv:2601.06612 , year=

    Cross-Border Data Security and Privacy Risks in Large Language Models and IoT Systems , author=. arXiv preprint arXiv:2601.06612 , year=

  57. [58]

    Pervasive and Mobile Computing , volume=

    EncCluster: Scalable functional encryption in federated learning through weight clustering and probabilistic filters , author=. Pervasive and Mobile Computing , volume=. 2025 , publisher=

  58. [59]

    2022 IEEE 22nd International conference on software quality, reliability and security (QRS) , pages=

    Loggd: Detecting anomalies from system logs with graph neural networks , author=. 2022 IEEE 22nd International conference on software quality, reliability and security (QRS) , pages=. 2022 , organization=

  59. [60]

    arXiv preprint arXiv:2408.13296 (2024)

    The ultimate guide to fine-tuning llms from basics to breakthroughs: An exhaustive review of technologies, research, best practices, applied research challenges and opportunities , author=. arXiv preprint arXiv:2408.13296 , year=

  60. [61]

    2025 10th International Conference on Machine Learning Technologies (ICMLT) , pages=

    ADALog: Adaptive Unsupervised Anomaly detection in Logs with Self-attention Masked Language Model , author=. 2025 10th International Conference on Machine Learning Technologies (ICMLT) , pages=. 2025 , organization=

  61. [62]

    arXiv preprint arXiv:2512.14604 , year=

    LLmFPCA-detect: LLM-powered Multivariate Functional PCA for Anomaly Detection in Sparse Longitudinal Texts , author=. arXiv preprint arXiv:2512.14604 , year=

  62. [63]

    Are we on the right path? , author=

    System Logs Anomaly Detection. Are we on the right path? , author=. Applied Artificial Intelligence , volume=. 2025 , publisher=

  63. [64]

    Chandola, A

    Chandola, Varun and Banerjee, Arindam and Kumar, Vipin , title =. ACM Computing Surveys , volume =. 2009 , month = jul, articleno =. doi:10.1145/1541880.1541882 , issn =

  64. [65]

    2017 , isbn =

    William Stallings , title =. 2017 , isbn =

  65. [66]

    Applied Sciences , volume=

    Logedl: Log anomaly detection via evidential deep learning , author=. Applied Sciences , volume=. 2024 , publisher=

  66. [67]

    Information Sciences , volume=

    CDTDNet: A neural network for capturing deep temporal dependencies in time series , author=. Information Sciences , volume=. 2025 , publisher=

  67. [68]

    Neural Networks , volume=

    Working memory connections for LSTM , author=. Neural Networks , volume=. 2021 , publisher=

  68. [69]

    Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track) , pages=

    Edgeinfinite: A memory-efficient infinite-context transformer for edge devices , author=. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track) , pages=

  69. [70]

    Electronics , volume=

    Bi-PredRNN: An Enhanced PredRNN++ with a Bidirectional Network for Spatiotemporal Sequence Prediction , author=. Electronics , volume=. 2024 , publisher=

  70. [71]

    Fedbn: Federated learning on non-iid features via local batch normalization,

    Fedbn: Federated learning on non-iid features via local batch normalization , author=. arXiv preprint arXiv:2102.07623 , year=

  71. [72]

    URLhttps://arxiv.org/abs/2509.14275

    FedMentor: Domain-Aware Differential Privacy for Heterogeneous Federated LLMs in Mental Health , author=. arXiv preprint arXiv:2509.14275 , year=

  72. [73]

    What is a Low-Rank Matrix in LoRA? , year =

  73. [74]

    IEEE Transactions on Network Science and Engineering , volume=

    Toward personalized quantum federated learning for anomaly detection , author=. IEEE Transactions on Network Science and Engineering , volume=. 2025 , publisher=

  74. [75]

    arXiv preprint arXiv:2512.04474 , year=

    LLM-SrcLog: Towards Proactive and Unified Log Template Extraction via Large Language Models , author=. arXiv preprint arXiv:2512.04474 , year=

  75. [76]

    Log-Anomaly: Log anomaly detection model using a CNN with TF-IDF and sliding window feature extraction , howpublished =

  76. [77]

    Fine-tuning (deep learning) , year =

  77. [78]

    Fine-tuning, IBM , journal=. Dispon

  78. [79]

    arXiv preprint arXiv:2502.08577 , year=

    FBFL: A field-based coordination approach for data heterogeneity in federated learning , author=. arXiv preprint arXiv:2502.08577 , year=

  79. [80]

    KRONE: Scalable LLM-Augmented Log Anomaly Detection via Hierarchical Abstraction

    KRONE: Hierarchical and Modular Log Anomaly Detection , author=. arXiv preprint arXiv:2602.07303 , year=

  80. [81]

    2016 , howpublished =

    General Data Protection Regulation (GDPR) -- Legal Text , author =. 2016 , howpublished =

Showing first 80 references.