pith · machine review for the scientific record

arxiv: 2604.07360 · v1 · submitted 2026-03-31 · 💻 cs.AR · cs.AI · cs.LG

Recognition: no theorem link

Position Paper: From Edge AI to Adaptive Edge AI

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 23:52 UTC · model grok-4.3

classification 💻 cs.AR · cs.AI · cs.LG
keywords edge ai · adaptive systems · model drift · resource constraints · runtime reconfiguration · predictive reliability · agent-system-environment · long-horizon deployment

The pith

Edge AI in long-running deployments must adapt its computation and model state; otherwise it will either violate time-varying budgets or lose predictive reliability.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Edge AI is often treated as a one-time compression and deployment problem under fixed constraints. This paper argues that realistic, long-horizon operation makes fixed configurations unsustainable: as data and conditions shift, the system must either breach latency, energy, thermal, connectivity, or privacy limits or suffer degraded accuracy and calibration. The authors introduce an Agent-System-Environment lens to define precisely what changes, what is observed, what can be reconfigured, and which constraints must stay satisfied. They then outline ten research challenges needed to turn this adaptive view into working systems.

Core claim

A fixed Edge AI configuration faces a fundamental failure mode over time: evolving data and operating conditions force either violations of time-varying budgets (latency, energy, thermal, connectivity, privacy) or loss of predictive reliability (accuracy and calibration), with risk highest in transients and rare intervals. Without the ability to reconfigure computation and, when needed, model state, the system reduces to static embedded inference and cannot deliver sustained utility. The Agent-System-Environment lens makes adaptivity operational by specifying the four elements above, and the paper uses it to frame ten open challenges spanning theoretical guarantees, dynamic architectures, hybrid transitions between data-driven and model-based components, fault- and anomaly-driven targeted updates, System-1/System-2 decompositions, modularity, validation under scarce labels, and lifecycle evaluation protocols.

What carries the argument

The Agent-System-Environment (ASE) lens, which specifies what changes, what is observed, what can be reconfigured, and which constraints must remain satisfied over time.
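A minimal sketch of how the four ASE elements could be held in one runtime object follows; the field names, example knobs, and constraint checks are assumptions for illustration, not the paper's notation.

```python
# Illustrative sketch only: the ASE lens is defined conceptually in the paper;
# the field names, example knobs, and constraint checks below are assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ASEConfiguration:
    # (i) what changes: named sources of drift in the data or operating conditions
    drift_sources: List[str] = field(default_factory=lambda: ["input_distribution", "ambient_temperature"])
    # (ii) what is observed: signals the deployed agent can actually measure at runtime
    observables: List[str] = field(default_factory=lambda: ["latency_ms", "confidence", "battery_level"])
    # (iii) what can be reconfigured: knobs the system may change online
    reconfigurable: List[str] = field(default_factory=lambda: ["bit_width", "early_exit_threshold", "model_head"])
    # (iv) which constraints must remain satisfied, checked against time-varying budgets
    constraints: Dict[str, Callable[[float, float], bool]] = field(default_factory=lambda: {
        "latency_ms": lambda value, budget: value <= budget,
        "energy_mj": lambda value, budget: value <= budget,
    })

    def satisfied(self, measurements: Dict[str, float], budgets: Dict[str, float]) -> bool:
        """Check every tracked constraint against the budget in force at this moment."""
        return all(check(measurements[k], budgets[k]) for k, check in self.constraints.items())

cfg = ASEConfiguration()
# The same measurements can pass under a slack budget and fail once the budget tightens.
print(cfg.satisfied({"latency_ms": 18.0, "energy_mj": 4.0}, {"latency_ms": 25.0, "energy_mj": 5.0}))  # True
print(cfg.satisfied({"latency_ms": 18.0, "energy_mj": 4.0}, {"latency_ms": 10.0, "energy_mj": 5.0}))  # False
```

The point of the sketch is that "adaptivity" becomes a checkable property once budgets are allowed to change over time while the constraint set stays fixed.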

If this is right

  • Theoretical guarantees are required for systems whose architecture and parameters evolve while remaining within evolving constraints.
  • Dynamic architectures must support seamless transitions between data-driven and model-based components.
  • Fault and anomaly detection must trigger targeted, low-overhead model updates rather than full retraining (see the sketch after this list).
  • Evaluation protocols must quantify lifecycle efficiency, recovery time, and stability under drift and external interventions.
  • System-1/System-2 decompositions become necessary to deliver anytime intelligence under changing resource limits.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • This framing implies that current benchmarking practices focused on average-case accuracy will systematically understate the operational risk of static deployments.
  • The ASE lens could be applied to other constrained domains such as autonomous vehicles or medical monitoring devices that face similar long-term drift.
  • Successful adaptive edge systems would reduce the frequency of manual model redeployments, lowering total cost of ownership for large device fleets.

Load-bearing premise

Any required reconfiguration of computation and model state can be performed without introducing new budget violations or new reliability failures.
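One way to make this premise testable is to fold the adaptation loop's own costs into the same budget it is meant to protect. A small accounting sketch, with hypothetical field names and numbers, might look like this:

```python
# Hypothetical accounting sketch: adaptation is admissible only if the cost of the
# adaptation loop itself (monitoring + decision + transition) still fits the budget.
from dataclasses import dataclass

@dataclass
class AdaptationCost:
    monitoring_mj: float   # energy spent observing drift signals
    decision_mj: float     # energy spent deciding whether/what to reconfigure
    transition_mj: float   # energy spent swapping weights or reshaping the pipeline

def adaptation_is_admissible(inference_mj_per_window: float,
                             energy_budget_mj_per_window: float,
                             cost: AdaptationCost) -> bool:
    """True only if inference plus the full adaptation overhead stays within the window's budget."""
    overhead = cost.monitoring_mj + cost.decision_mj + cost.transition_mj
    return inference_mj_per_window + overhead <= energy_budget_mj_per_window

# In a transient regime the budget tightens exactly when the transition is needed,
# which is where the paper says the risk concentrates.
cost = AdaptationCost(monitoring_mj=0.3, decision_mj=0.1, transition_mj=1.5)
print(adaptation_is_admissible(3.0, 6.0, cost))   # True: steady state, slack available
print(adaptation_is_admissible(3.0, 4.0, cost))   # False: tightened budget, adaptation itself breaks it
```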

What would settle it

A non-adaptive Edge AI model deployed for months or years that maintains its accuracy and calibration targets and stays within all time-varying resource budgets without any reconfiguration, even under documented shifts in data distribution or operating conditions.

read the original abstract

Edge AI is often framed as model compression and deployment under tight constraints. We argue a stronger operational thesis: Edge AI in realistic deployments is necessarily adaptive. In long-horizon operation, a fixed (non-adaptive) configuration faces a fundamental failure mode: as data and operating conditions evolve and change in time, it must either (i) violate time-varying budgets (latency/energy/thermal/connectivity/privacy) or (ii) lose predictive reliability (accuracy and, critically, calibration), with risk concentrating in transient regimes and rare time intervals rather than in average performance. If a deployed system cannot reconfigure its computation - and, when required, its model state - under evolving conditions and constraints, it reduces to static embedded inference and cannot provide sustained utility. This position paper introduces a minimal Agent-System-Environment (ASE) lens that makes adaptivity precise at the edge by specifying (i) what changes, (ii) what is observed, (iii) what can be reconfigured, and (iv) which constraints must remain satisfied over time. Building on this framing, we formulate ten research challenges for the next decade, spanning theoretical guarantees for evolving systems, dynamic architectures and hybrid transitions between data-driven and model-based components, fault/anomaly-driven targeted updates, System-1/System-2 decompositions (anytime intelligence), modularity, validation under scarce labels, and evaluation protocols that quantify lifecycle efficiency and recovery/stability under drift and interventions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that Edge AI in realistic long-horizon deployments is necessarily adaptive: fixed configurations must either violate time-varying budgets (latency/energy/thermal/connectivity/privacy) or lose predictive reliability (accuracy and calibration) as data and conditions evolve, with risk concentrating in transients. It introduces a minimal Agent-System-Environment (ASE) lens specifying what changes, what is observed, what can be reconfigured, and which constraints must hold, then enumerates ten research challenges spanning guarantees, dynamic architectures, hybrid transitions, fault-driven updates, System-1/2 decompositions, modularity, validation under scarce labels, and lifecycle evaluation protocols.

Significance. If the necessity argument holds, the work reframes Edge AI from static compression/deployment to sustained lifecycle management of evolving systems. The ASE lens supplies a compact vocabulary for integrating resource constraints with model/state adaptation, which could organize research on drift-robust edge intelligence and influence evaluation standards that emphasize recovery and stability rather than average-case metrics.
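As one hedged illustration of what such lifecycle-oriented evaluation could compute, the sketch below measures recovery time after a known drift onset from a per-window accuracy trace; the tolerance, the windowing, and the trace itself are assumptions, not a protocol taken from the paper.

```python
# Illustrative lifecycle metric: recovery time after a drift event, in windows.
# The tolerance and the notion of "recovered" are assumptions, not the paper's protocol.
from typing import List, Optional

def recovery_time(accuracy_per_window: List[float], drift_onset: int,
                  tolerance: float = 0.02) -> Optional[int]:
    """Windows elapsed after drift_onset until accuracy returns to within
    `tolerance` of its pre-drift mean; None if it never recovers in the trace."""
    baseline = sum(accuracy_per_window[:drift_onset]) / drift_onset
    for i, acc in enumerate(accuracy_per_window[drift_onset:]):
        if acc >= baseline - tolerance:
            return i
    return None

# Example trace: accuracy drops at window 5 and climbs back by window 9.
trace = [0.91, 0.90, 0.92, 0.91, 0.90, 0.72, 0.78, 0.84, 0.88, 0.90, 0.91]
print(recovery_time(trace, drift_onset=5))  # 4 windows to recover
```

A metric of this shape rewards fast, stable recovery rather than high average accuracy, which is the shift in evaluation emphasis the report describes.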

major comments (2)
  1. [ASE lens definition] The central necessity claim (abstract and opening sections) that non-adaptive systems face a fundamental failure mode hinges on the unstated, load-bearing premise that ASE-specified reconfiguration (observation, decision, model/state update) can be executed without new violations of the same time-varying budgets. No bound, model, or argument is supplied showing that monitoring cost, decision latency, or transition energy remains feasible, especially in the transient regimes where risk is said to concentrate.
  2. [ASE lens formulation] The formulation of the ASE lens (specifying what changes, what is observed, what can be reconfigured, and constraints) does not address overheads of the adaptation process itself. This omission directly weakens the claim that adaptivity is required to avoid budget violations, as the skeptic concern notes.
minor comments (2)
  1. [Research challenges section] The ten research challenges are listed without prioritization, interdependencies, or mapping back to specific ASE components, which reduces their utility as an actionable roadmap.
  2. [Introduction] As a position paper, the generalizations would be strengthened by one or two brief concrete examples (e.g., a deployed vision or sensor system) illustrating the failure mode or ASE application.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the constructive feedback. The comments correctly identify that the necessity argument for adaptivity rests on the feasibility of the adaptation process itself. Because this is a position paper, we frame the problem and enumerate challenges rather than resolve all feasibility questions, but we will partially revise to make this explicit and to connect the ASE lens more directly to the listed research challenges on guarantees and lifecycle evaluation.

read point-by-point responses
  1. Referee: [ASE lens definition] The central necessity claim (abstract and opening sections) that non-adaptive systems face a fundamental failure mode hinges on the unstated, load-bearing premise that ASE-specified reconfiguration (observation, decision, model/state update) can be executed without new violations of the same time-varying budgets. No bound, model, or argument is supplied showing that monitoring cost, decision latency, or transition energy remains feasible, especially in the transient regimes where risk is said to concentrate.

    Authors: We agree that a rigorous necessity proof would require demonstrating feasible adaptation overheads. The paper does not supply such bounds because it is a position paper whose purpose is to define the ASE lens and surface open problems. Challenge 1 (theoretical guarantees for evolving systems) and Challenge 10 (lifecycle evaluation protocols) are explicitly intended to address overhead accounting, stability under transients, and whether adaptation can itself remain within budgets. We will revise the abstract and Section 2 to state that the constraint set in the ASE formulation must encompass monitoring, decision, and transition costs, and that showing such costs are manageable is a core open question. revision: partial

  2. Referee: [ASE lens formulation] The formulation of the ASE lens (specifying what changes, what is observed, what can be reconfigured, and constraints) does not address overheads of the adaptation process itself. This omission directly weakens the claim that adaptivity is required to avoid budget violations, as the skeptic concern notes.

    Authors: The minimal ASE lens is deliberately abstract to provide vocabulary rather than a concrete mechanism; overheads of the adaptation loop are therefore left as an open modeling question. This does not weaken the position but highlights why the ten challenges (particularly dynamic architectures, hybrid transitions, and validation protocols) are needed. We will add a short clarifying sentence in the ASE section noting that any concrete instantiation must fold adaptation overheads into the time-varying constraints, and that demonstrating non-violation during reconfiguration is part of the research agenda. revision: partial

Circularity Check

0 steps flagged

No circularity: conceptual thesis without derivations or self-referential reductions

full rationale

The paper is a position paper advancing the thesis that realistic Edge AI deployments require adaptivity because fixed configurations face failure modes under evolving data and constraints. No equations, fitted parameters, or mathematical derivations appear in the provided text. The ASE lens is introduced as a definitional framing to make adaptivity precise, not derived from or reduced to quantities defined inside the paper. Claims rest on general statements about operational realities and long-horizon behavior rather than any closed-loop construction, self-citation chain, or renaming of known results. The argument is self-contained as a conceptual proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central thesis rests on the domain assumption that non-adaptive edge systems will inevitably encounter the stated failure modes under realistic drift; the ASE lens is an invented framing device with no independent empirical grounding supplied.

axioms (1)
  • domain assumption: Fixed non-adaptive configurations must either violate time-varying budgets or lose predictive reliability under evolving data and operating conditions
    This is the load-bearing premise stated in the abstract as the reason adaptivity is necessary.
invented entities (1)
  • Agent-System-Environment (ASE) lens (no independent evidence)
    purpose: To make adaptivity precise by specifying what changes, what is observed, what can be reconfigured, and which constraints must remain satisfied
    New conceptual framing introduced by the authors to organize the discussion of edge adaptivity.

pith-pipeline@v0.9.0 · 5554 in / 1410 out tokens · 28914 ms · 2026-05-13T23:52:29.948094+00:00 · methodology

discussion (0)

