pith. machine review for the scientific record.

arxiv: 2604.09799 · v1 · submitted 2026-04-10 · 💻 cs.LG · cs.AI


Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

Catherine Chen, Ismail Uysal, Mainak Kundu, Ria Kanjilal, Rifatul Islam


Pith reviewed 2026-05-10 17:12 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords explainable artificial intelligence · human activity recognition · XAI-HAR · taxonomy · sensor data · multimodal sensing · interpretability · deep learning

The pith

A review separates conceptual dimensions of explainability from algorithmic mechanisms to organize XAI methods for human activity recognition.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reviews techniques that make deep learning models for recognizing human activities from sensors more understandable to users. It claims that earlier surveys mix high-level ideas about what makes a system explainable with the concrete algorithms that produce explanations. Drawing a clear line between these lets the authors build a taxonomy organized around the mechanisms themselves, such as attention, prototypes, and perturbation-based approaches. The review shows how these mechanisms are applied across wearable, ambient, physiological, and multimodal sensor data while addressing time sequences, multiple data types, and semantic meaning. It also covers current ways of judging the quality of explanations and identifies obstacles to creating HAR systems that people can trust and act on.

Core claim

We introduce a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms, reducing ambiguities in prior surveys. Building on this distinction, we present a mechanism-centric taxonomy of XAI-HAR methods covering major explanation paradigms that address the temporal, multimodal, and semantic complexities of sensor-based activity recognition.

What carries the argument

The mechanism-centric taxonomy, which classifies explanation methods by their algorithmic paradigms after first distinguishing those paradigms from broader conceptual dimensions of explainability.
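To make the separation concrete, here is a minimal sketch, in Python, of how a mechanism-centric index of methods might be organized once conceptual dimensions are held apart. The schema fields, category names, and example entries are illustrative assumptions, not the paper's actual taxonomy.

```python
from dataclasses import dataclass

# Hypothetical schema: the paper's actual taxonomy fields may differ.
@dataclass
class XaiHarMethod:
    name: str
    mechanism: str           # algorithmic paradigm: how the explanation is produced
    explanation_target: str  # what is explained: features, time steps, concepts, ...
    modality: str            # wearable, ambient, physiological, or multimodal

# Conceptual dimensions (what "explainable" means) live apart from mechanisms.
CONCEPTUAL_DIMENSIONS = [
    "scope (local vs. global)",
    "stage (ante-hoc vs. post-hoc)",
    "audience (developer vs. end user)",
]

# Mechanism-centric index: methods grouped by the algorithm that yields the explanation.
taxonomy = {
    "perturbation": [XaiHarMethod("LIME-on-EEG", "perturbation", "feature attribution", "physiological")],
    "attention":    [XaiHarMethod("attention-HAR", "attention", "time-step saliency", "wearable")],
    "prototype":    [XaiHarMethod("X-CHAR-style", "prototype/concept", "concept activation", "multimodal")],
}

for mechanism, methods in taxonomy.items():
    print(mechanism, "->", [m.name for m in methods])
```

The point of such a design is that each method files under exactly one mechanism, while conceptual dimensions such as scope or stage describe any method and so live outside the index.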

If this is right

  • Explanation methods are grouped by how they generate outputs, such as through attention maps, prototype examples, or model-agnostic perturbations applied to time-series sensor streams (see the sketch after this list).
  • Current approaches show clear gaps when explaining long temporal dependencies or fusing data from multiple sensor types.
  • Evaluation of explanations in HAR still lacks standardized measures tied to human decision-making and real-world deployment.
  • Trustworthy activity recognition systems will require explanations that directly support monitoring, assistance, and interaction tasks.
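As a concrete instance of the perturbation paradigm named in the first bullet, the sketch below occludes one sensor channel at a time in a time-series window and scores each channel by the resulting drop in predicted class probability. This is a minimal sketch: the window shape, the stand-in classifier, and the zero-valued baseline are assumptions, and published perturbation methods differ in how the perturbation is designed.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(window: np.ndarray) -> float:
    """Stand-in classifier: probability of one activity class.
    Illustrative only; a real model would be a trained network."""
    # Score rises with mean signal level on channel 0.
    return float(1.0 / (1.0 + np.exp(-window[:, 0].mean() * 5.0)))

def occlusion_attribution(window: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Channel-level attribution: drop in class probability when a channel
    is replaced by a baseline value (zero here, an assumed choice)."""
    p_full = predict_proba(window)
    scores = np.zeros(window.shape[1])
    for ch in range(window.shape[1]):
        perturbed = window.copy()
        perturbed[:, ch] = baseline
        scores[ch] = p_full - predict_proba(perturbed)
    return scores

# Toy window: 128 time steps x 3 accelerometer channels, with channel 0 informative.
window = rng.normal(size=(128, 3)) + np.array([0.8, 0.0, 0.0])
print(occlusion_attribution(window))  # channel 0 should dominate
```

A deletion-style faithfulness check, one of the evaluation measures the third bullet says HAR still lacks standards for, reuses the same machinery: occlude channels in decreasing attribution order and verify that the predicted probability falls fastest for the top-ranked ones.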

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same conceptual-mechanism split could be tested on XAI methods for video-based activity recognition or continuous health monitoring to see if it reduces similar confusion.
  • Future surveys in other sensor domains might adopt the taxonomy to check whether it prevents redundant classifications.
  • Regulatory standards for AI in healthcare could reference this distinction when requiring explainability for activity data used in diagnosis or alerts.

Load-bearing premise

That cleanly separating conceptual dimensions from algorithmic mechanisms will capture the full range of temporal, multimodal, and semantic issues in HAR without creating new overlaps or leaving important methods out.

What would settle it

A collection of recent XAI-HAR papers that cannot be placed consistently into the proposed taxonomy categories or that continue to show the same classification ambiguities the separation was intended to resolve.

Figures

Figures reproduced from arXiv: 2604.09799 by Catherine Chen, Ismail Uysal, Mainak Kundu, Ria Kanjilal, Rifatul Islam.

Figure 1. Canonical explainable HAR pipeline illustrating the separation between training and inference and two complementary explainability paradigms.
Figure 2. Structural overview of XAI-HAR illustrating two complementary layers: (a) the conceptual layer, which defines what aspects of model behavior …
Figure 3. Visualization of the local contribution of EEG features using the LIME model for classifying a single test instance (predicted class = Working activity).
Figure 4. SHAP interaction plot showing the most influential features and their …
Figure 5. (a) Overview of the EfficientGCN pipeline illustrating the variables used to compute faithfulness and stability, with perturbations applied during the …
Figure 6. (a) Output of the original GNNExplainer, limited to arc importance, …
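Figure 3 applies LIME to EEG features for a single test instance. For readers unfamiliar with the paradigm, the sketch below shows the general shape of such an analysis using the lime package; the synthetic data, feature names, and classifier here are placeholders, not the authors' experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Placeholder data: 200 instances x 4 made-up EEG band-power features.
feature_names = ["delta_power", "theta_power", "alpha_power", "beta_power"]
X = rng.normal(size=(200, 4))
y = (X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["Resting", "Working"], mode="classification")

# Local explanation for one test instance, as in a Figure 3-style plot.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, signed local contribution) pairs
```

A Figure 4-style SHAP analysis has the same overall shape, with shap.Explainer in place of LimeTabularExplainer and signed Shapley values in place of local surrogate weights.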
read the original abstract

Human activity recognition (HAR) has become a key component of intelligent systems for healthcare monitoring, assistive living, smart environments, and human-computer interaction. Although deep learning has substantially improved HAR performance on multivariate sensor data, the resulting models often remain opaque, limiting trust, reliability, and real-world deployment. Explainable artificial intelligence (XAI) has therefore emerged as a critical direction for making HAR systems more transparent and human-centered. This paper presents a comprehensive review of explainable HAR methods across wearable, ambient, physiological, and multimodal sensing settings. We introduce a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms, reducing ambiguities in prior surveys. Building on this distinction, we present a mechanism-centric taxonomy of XAI-HAR methods covering major explanation paradigms. The review examines how these methods address the temporal, multimodal, and semantic complexities of HAR, and summarizes their interpretability objectives, explanation targets, and limitations. In addition, we discuss current evaluation practices, highlight key challenges in achieving reliable and deployable XAI-HAR, and outline directions toward trustworthy activity recognition systems that better support human understanding and decision-making.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 2 minor

Summary. The paper presents a comprehensive review of explainable human activity recognition (XAI-HAR) methods for wearable, ambient, physiological, and multimodal sensing. It introduces a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms to reduce ambiguities in prior surveys, and builds on this to propose a mechanism-centric taxonomy covering major explanation paradigms. The review examines how methods address temporal, multimodal, and semantic complexities in HAR, summarizes interpretability objectives, explanation targets, and limitations, discusses evaluation practices, highlights challenges for reliable XAI-HAR, and outlines directions for trustworthy activity recognition systems.

Significance. If the proposed separation of conceptual dimensions from mechanisms and the resulting taxonomy hold without substantial overlaps or omissions, the review would provide a valuable organizational framework for the XAI-HAR literature. This synthesis of limitations, evaluation practices, and future directions could help guide research toward more transparent and deployable HAR systems in healthcare and assistive applications. As a review paper, its strength lies in conceptual clarification and coverage rather than new empirical results or derivations.

minor comments (2)
  1. Abstract: The claim that the unified perspective reduces 'ambiguities in prior surveys' would benefit from a brief concrete example of an ambiguity resolved (e.g., a specific prior survey's conflation of dimensions and mechanisms) to make the contribution more tangible to readers.
  2. The manuscript would be strengthened by explicitly stating the inclusion criteria or search strategy used to select the reviewed XAI-HAR papers, as is standard for systematic reviews in the field.

Simulated Authors' Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive and accurate summary of our work, as well as the recommendation for minor revision. We appreciate the recognition that the separation of conceptual dimensions from algorithmic mechanisms and the resulting taxonomy can provide a valuable organizational framework for the XAI-HAR literature. No specific major comments were raised in the report, so we have no point-by-point rebuttals to provide at this stage. We are happy to address any minor suggestions or clarifications that may arise.

Circularity Check

0 steps flagged

No significant circularity in proposed taxonomy

full rationale

This review paper proposes an organizational separation of conceptual dimensions of explainability from algorithmic mechanisms and builds a mechanism-centric taxonomy of existing XAI-HAR methods. No equations, fitted parameters, or derivations appear in the abstract or described claims. The contribution is a synthesis and classification scheme that cites prior surveys; the central claims do not reduce by construction to self-defined quantities or self-citation chains. The taxonomy is presented as a proposed perspective rather than a falsifiable prediction derived from the paper's own inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

As a literature review the paper introduces no new free parameters, axioms, or invented entities; it relies entirely on concepts and methods drawn from the prior surveys and papers it cites.

pith-pipeline@v0.9.0 · 5508 in / 1102 out tokens · 44869 ms · 2026-05-10T17:12:21.235156+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

63 extracted references · 8 canonical work pages · 1 internal anchor

  1. S. Davidashvilly, M. Hssayeni, C. Chi, J. Jimenez-Shahed, and B. Ghoraani, "Activity recognition in Parkinson's patients from motion data using a CNN model trained by healthy subjects," in 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2022, pp. 3199–3202.
  2. F. Khaliq, J. Oberhauser, D. Wakhloo, and S. Mahajani, "Decoding degeneration: the implementation of machine learning for clinical detection of neurodegenerative disorders," Neural Regeneration Research, vol. 18, no. 6, pp. 1235–1242, 2023.
  3. M. Fiori, D. Mor, G. Civitarese, and C. Bettini, "GNN-XAR: A graph neural network for explainable activity recognition in smart homes," in International Conference on Mobile and Ubiquitous Systems: Computing, Networking, and Services. Springer, 2024, pp. 341–360.
  4. D. Das, Y. Nishimura, R. P. Vivek, N. Takeda, S. T. Fish, T. Ploetz, and S. Chernova, "Explainable activity recognition for smart home systems," ACM Transactions on Interactive Intelligent Systems, vol. 13, no. 2, pp. 1–39, 2023.
  5. L. Benos, D. Tsaopoulos, A. C. Tagarakis, D. Kateris, P. Busato, and D. Bochtis, "Explainable AI-enhanced human activity recognition for human–robot collaboration in agriculture," Applied Sciences, vol. 15, no. 2, 2025.
  6. D. Hendry, K. Chai, A. Campbell, L. Hopper, P. O'Sullivan, and L. Straker, "Development of a human activity recognition system for ballet tasks," Sports Medicine-Open, vol. 6, no. 1, p. 10, 2020.
  7. F. Robinson and G. Nejat, "A deep learning human activity recognition framework for socially assistive robots to support reablement of older adults," in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 6160–6167.
  8. F. Doshi-Velez and B. Kim, "Towards a rigorous science of interpretable machine learning," arXiv preprint arXiv:1702.08608, 2017.
  9. C. Molnar, Interpretable Machine Learning. Lulu.com, 2020.
  10. O. Bastani, C. Kim, and H. Bastani, "Interpretability via model extraction," arXiv preprint arXiv:1706.09773, 2017.
  11. L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, "Explaining explanations: An overview of interpretability of machine learning," in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 2018, pp. 80–89.
  12. B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas et al., "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)," in International Conference on Machine Learning. PMLR, 2018, pp. 2668–2677.
  13. A. A. Ismail, M. Gunady, H. Corrada Bravo, and S. Feizi, "Benchmarking deep learning interpretability in time series predictions," Advances in Neural Information Processing Systems, vol. 33, pp. 6441–6452, 2020.
  14. K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep inside convolutional networks: Visualising image classification models and saliency maps," arXiv preprint arXiv:1312.6034, 2013.
  15. J. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, "Striving for simplicity: The all convolutional net," arXiv preprint arXiv:1412.6806, 2015.
  16. M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?' Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
  17. S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," Advances in Neural Information Processing Systems, vol. 30, 2017.
  18. M. Sundararajan, A. Taly, and Q. Yan, "Axiomatic attribution for deep networks," in International Conference on Machine Learning. PMLR, 2017, pp. 3319–3328.
  19. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," International Journal of Computer Vision, vol. 128, no. 2, pp. 336–359, 2020.
  20. S. Wachter, B. Mittelstadt, and C. Russell, "Counterfactual explanations without opening the black box: Automated decisions and the GDPR," Harvard Journal of Law & Technology, vol. 31, p. 841, 2017.
  21. T. Miller, "Explanation in artificial intelligence: Insights from the social sciences," Artificial Intelligence, vol. 267, pp. 1–38, 2019.
  22. D. Gunning, M. Stefik, J. Choi, T. Miller, S. Stumpf, and G.-Z. Yang, "XAI—Explainable artificial intelligence," Science Robotics, vol. 4, no. 37, p. eaay7120, 2019.
  23. A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins et al., "Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI," Information Fusion, vol. 58, pp. 82–115, 2020.
  24. U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. Moura, and P. Eckersley, "Explainable machine learning in deployment," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 648–657.
  25. A. Rawal, J. McCoy, D. B. Rawat, B. M. Sadler, and R. S. Amant, "Recent advances in trustworthy explainable artificial intelligence: Status, challenges, and perspectives," IEEE Transactions on Artificial Intelligence, vol. 3, no. 6, pp. 852–866, 2021.
  26. C. Sil, P. Ghosh, S. Das, and M. A. Mondal, "Challenges and future perspectives in explainable AI: a roadmap for new scholars," in 2025 3rd International Conference on Intelligent Systems, Advanced Computing and Communication (ISACC). IEEE, 2025, pp. 1214–1219.
  27. Y. Rong, T. Leemann, T.-T. Nguyen, L. Fiedler, P. Qian, V. Unhelkar, T. Seidel, G. Kasneci, and E. Kasneci, "Towards human-centered explainable AI: A survey of user studies for model explanations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 4, pp. 2104–2122, 2023.
  28. M. Z. Uddin and A. Soylu, "Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-based neural structured learning," Scientific Reports, vol. 11, no. 1, p. 16455, 2021.
  29. K. N. Pellano, I. Strümke, and E. A. Ihlen, "From movements to metrics: Evaluating explainable AI methods in skeleton-based human activity recognition," Sensors, vol. 24, no. 6, p. 1940, 2024.
  30. R. Kanjilal and I. Uysal, "Rich learning representations for human activity recognition: How to empower deep feature learning for biological time series," Journal of Biomedical Informatics, vol. 134, p. 104180, 2022.
  31. R. Kanjilal, M. F. Kucuk, and I. Uysal, "Human activity recognition: A review of RFID and wearable sensor technologies powered by AI," IEEE Journal of Radio Frequency Identification, 2025.
  32. V. Arul, P. Karthikeyan, and E. Ramanujam, "Revealing the importance of local and global interpretability in smartphone based human activity recognition," in 2024 IEEE Students Conference on Engineering and Systems (SCES). IEEE, 2024, pp. 1–6.
  33. C. Liu, T. Perumal, J. Cheng, and Y. Xie, "Enhanced human activity recognition framework for wearable devices based on explainable AI," in 2024 IEEE International Symposium on Consumer Technology (ISCT). IEEE, 2024, pp. 385–391.
  34. O. D. Lara and M. A. Labrador, "A survey on human activity recognition using wearable sensors," IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1192–1209, 2012.
  35. E. Ramanujam, T. Perumal, and S. Padmavathi, "Human activity recognition with smartphone and wearable sensors using deep learning techniques: A review," IEEE Sensors Journal, vol. 21, no. 12, pp. 13029–13040, 2021.
  36. J. V. Jeyakumar, A. Sarker, L. A. Garcia, and M. Srivastava, "X-CHAR: A concept-based explainable complex human activity recognition model," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 7, no. 1, pp. 1–28, 2023.
  37. D. Garcia-Gonzalez, D. Rivero, E. Fernandez-Blanco, and M. R. Luaces, "Deep learning models for real-life human activity recognition from smartphone sensor data," Internet of Things, vol. 24, p. 100925, 2023.
  38. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.
  39. I. Hussain, R. Jany, R. Boyer, A. Azad, S. A. Alyami, S. J. Park, M. M. Hasan, and M. A. Hossain, "An explainable EEG-based human activity recognition model using machine-learning approach and LIME," Sensors, vol. 23, no. 17, p. 7452, 2023.
  40. Y. Sun, N. Pai, V. V. Ramesh, M. Aldeer, and J. Ortiz, "GeXSe (Generative Explanatory Sensor System): An interpretable deep generative model for human activity recognition in smart spaces," arXiv preprint arXiv:2306.15857, 2023.
  41. D. Y. De Silva, S. Wickramanayake, D. Meedeniya, and S. Rasnayaka, "SEZ-HARN: Self-explainable zero-shot human activity recognition network," arXiv preprint arXiv:2507.00050, 2025.
  42. G. Aquino, M. G. F. Costa, and C. F. F. C. Filho, "Explaining and visualizing embeddings of one-dimensional convolutional models in human activity recognition tasks," Sensors, vol. 23, no. 9, p. 4409, 2023.
  43. S. Mekruksavanich and A. Jitpattanakul, "Efficient and explainable human activity recognition using deep residual network with squeeze-and-excitation mechanism," Applied System Innovation, vol. 8, no. 3, p. 57, 2025.
  44. V. Bijalwan, A. M. Khan, H. Baek, S. Jeon, and Y. Kim, "Interpretable human activity recognition with temporal convolutional networks and model-agnostic explanations," IEEE Sensors Journal, vol. 24, no. 17, pp. 27607–27617, 2024.
  45. A. R. Javed, H. U. Khan, M. K. B. Alomari, M. U. Sarwar, M. Asim, A. S. Almadhor, and M. Z. Khan, "Toward explainable AI-empowered cognitive health assessment," Frontiers in Public Health, vol. 11, p. 1024195, 2023.
  46. F. Tempel, E. A. F. Ihlen, L. Adde, and I. Strümke, "Explaining human activity recognition with SHAP: validating insights with perturbation and quantitative measures," Computers in Biology and Medicine, vol. 188, p. 109838, 2025.
  47. M. T. Ribeiro, S. Singh, and C. Guestrin, "Anchors: High-precision model-agnostic explanations," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
  48. P. Tokas, V. B. Semwal, and S. Jain, "A lightweight and explainable hybrid deep learning model for wearable sensor-based human activity recognition," IEEE Sensors Journal, 2025.
  49. J. Kim and B. Yoo, "Granular and explainable human activity recognition through sound segmentation and deep learning," Journal of Computational Design and Engineering, vol. 12, no. 8, pp. 252–269, 2025.
  50. A. Waghumbare and U. Singh, "DIAT-Separable-CNN-ECA-HARNet: a lightweight and explainable model for efficient human activity recognition," Signal, Image and Video Processing, vol. 19, no. 3, p. 245, 2025.
  51. I. Lamaakal, C. Yahyati, Y. Maleh, K. El Makkaoui, I. Ouahbi, A. A. A. El-Latif, M. Zomorodi, and B. A. El-Rahiem, "A tiny inertial transformer for human activity recognition via multimodal knowledge distillation and explainable AI," Scientific Reports, vol. 15, no. 1, p. 42335, 2025.
  52. Y. Huang, Y. Zhou, H. Zhao, T. Riedel, and M. Beigl, "Explainable deep learning framework for human activity recognition," arXiv preprint arXiv:2408.11552, 2024.
  53. M. Craven and J. Shavlik, "Extracting tree-structured representations of trained networks," Advances in Neural Information Processing Systems, vol. 8, 1995.
  54. E. Angelino, N. Larus-Stone, D. Alabi, M. Seltzer, and C. Rudin, "Learning certifiably optimal rule lists for categorical data," Journal of Machine Learning Research, vol. 18, no. 234, pp. 1–78, 2018.
  55. C. Bettini, G. Civitarese, and M. Fiori, "Explainable activity recognition over interpretable models," in 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 2021, pp. 32–37.
  56. C. Cao, C. Yang, R. Zhang, and S. Li, "Discovering intrinsic spatial-temporal logic rules to explain human actions," Advances in Neural Information Processing Systems, vol. 36, pp. 67948–67959, 2023.
  57. T. L'Yvonnet, E. De Maria, S. Moisan, and J.-P. Rigault, "Probabilistic model checking for human activity recognition in medical serious games," Science of Computer Programming, vol. 206, p. 102629, 2021.
  58. Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, "GNNExplainer: Generating explanations for graph neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
  59. P. E. Pope, S. Kolouri, M. Rostami, C. E. Martin, and H. Hoffmann, "Explainability methods for graph convolutional neural networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  60. C.-H. Chang, E. Creager, A. Goldenberg, and D. Duvenaud, "Explaining image classifiers by counterfactual generation," arXiv preprint arXiv:1807.08024, 2018.
  61. P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, and P. Liang, "Concept bottleneck models," in Proceedings of the 37th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 5338–5348. [Online]. Available: https://proceedings.mlr.press/v119/koh20a.html
  62. H. AbdelRaouf, M. Abouyoussef, and M. I. Ibrahem, "Leveraging multi-head attention and counterfactual explanations for precise and efficient activity recognition and heart attack detection," IEEE Internet of Things Journal, 2025.
  63. L. Arrotta, G. Civitarese, and C. Bettini, "DEXAR: Deep explainable sensor-based activity recognition in smart-home environments," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6, no. 1, pp. 1–30, 2022.