pith. machine review for the scientific record.

arxiv: 2602.21874 · v2 · submitted 2026-02-25 · 💻 cs.HC

Recognition: no theorem link

Interactive Augmented Reality-enabled Outdoor Scene Visualization For Enhanced Real-time Disaster Response

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 19:36 UTC · model grok-4.3

classification 💻 cs.HC
keywords augmented reality · disaster response · 3D Gaussian splatting · world in miniature · points of interest · user evaluation · situational awareness

The pith

An augmented reality interface for disaster response uses 3D Gaussian Splatting and lightweight interactions to improve real-time coordination.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a user-centered AR interface designed for visualizing outdoor disaster scenes in real time. It reconstructs environments using 3D Gaussian Splatting and supports interaction through a world-in-miniature navigation system paired with filterable semantic points of interest. This setup aims to keep cognitive load low while allowing users to maintain situational awareness and make quick decisions. User evaluations indicate the interface is easy to use with high acceptance rates, suggesting it can aid responders in chaotic environments.
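The paper publishes no code, but the filterable semantic POI idea it describes can be sketched minimally. All names below (POI, filter_pois, the example categories) are hypothetical illustrations, not the authors' implementation:

```python
from dataclasses import dataclass

# Illustrative sketch only: the paper does not publish code, and these
# names and categories are invented for the example.

@dataclass(frozen=True)
class POI:
    label: str       # e.g. "collapsed bridge"
    category: str    # semantic class, e.g. "hazard", "victim", "asset"
    position: tuple  # world-space (x, y, z)

def filter_pois(pois, active_categories):
    """Keep only POIs whose semantic category the responder has toggled on,
    mimicking the interface's filterable layers for reducing overload."""
    return [p for p in pois if p.category in active_categories]

scene = [
    POI("collapsed bridge", "hazard", (12.0, 0.0, -3.5)),
    POI("trapped person", "victim", (8.2, 1.1, 4.0)),
    POI("water truck", "asset", (-5.0, 0.0, 9.3)),
]

# A responder focused on rescue might hide logistics assets:
visible = filter_pois(scene, {"hazard", "victim"})
```

Filtering by category rather than deleting POIs means hidden layers can be toggled back on without re-querying the scene, which is one plausible way to keep interaction lightweight.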

Core claim

The developed AR interface, which visualizes detailed scene reconstructions via 3D Gaussian Splatting and combines lightweight World-in-Miniature navigation with filterable semantic Points of Interest, achieves strong usability and high user acceptance in preliminary evaluations of disaster response tasks.

What carries the argument

The lightweight WIM-plus-POI interaction approach supported by a streaming architecture for evolving 3D Gaussian Splatting reconstructions.
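The streaming architecture for evolving reconstructions is described only at a high level in the paper. As an assumption-laden sketch of one way such incremental updates could work (class and method names are hypothetical), the client could track per-chunk revisions and apply only newer updates:

```python
# Hypothetical sketch, not the authors' implementation: a client keeps
# the reconstruction as revisioned chunks and applies streamed updates
# in place, so the view evolves without a full reload.

class SplatSceneClient:
    def __init__(self):
        self.chunks = {}  # chunk_id -> (revision, payload)

    def apply_update(self, chunk_id, revision, payload):
        """Apply a streamed chunk update if it is newer than what we hold."""
        current = self.chunks.get(chunk_id)
        if current is None or revision > current[0]:
            self.chunks[chunk_id] = (revision, payload)
            return True   # chunk refreshed
        return False      # stale or duplicate update, ignored
```

Revision checks like this make the client robust to out-of-order delivery, which matters if reconstructions are updated continuously from field imagery.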

If this is right

  • Responders gain better situational awareness from detailed, updatable 3D visualizations.
  • The design facilitates real-time coordination and fast decision-making in context.
  • Filterable POIs reduce information overload during high-stress operations.
  • High usability supports potential adoption in actual disaster response scenarios.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This could extend to other high-stakes outdoor AR uses like search and rescue operations.
  • Automatic POI updates from additional sensors might further improve responsiveness.
  • Training programs for responders could incorporate this interface to reduce learning curves.

Load-bearing premise

3D Gaussian Splatting reconstructions remain accurate and updatable in real-time under outdoor disaster conditions, and the WIM-plus-POI interaction keeps cognitive load low for responders.

What would settle it

A field experiment in a simulated disaster environment: the claim would fail if the 3D reconstructions proved inaccurate or stopped updating in real time, or if users reported high cognitive load while using the interface for decision-making.

Figures

Figures reproduced from arXiv: 2602.21874 by Dimitrios Apostolakis, Georgios Angelidis, Georgios Th. Papadopoulos, Panagiotis Sarigiannidis, Vasileios Argyriou.

Figure 1
Figure 1: Overview of the AR-enabled disaster scene.
Figure 2
Figure 2: Detailed view of the functionalities.
Figure 3
Figure 3: Snapshot of the 1.144 million debug points of the 3DGS.
read the original abstract

A user-centered AR interface for disaster response is presented in this work that uses 3D Gaussian Splatting (3DGS) to visualize detailed scene reconstructions, while maintaining situational awareness and keeping cognitive load low. The interface relies on a lightweight interaction approach, combining World-in-Miniature (WIM) navigation with semantic Points of Interest (POIs) that can be filtered as needed, and it is supported by an architecture designed to stream updates as reconstructions evolve. User feedback from a preliminary evaluation indicates that this design is easy to use and supports real-time coordination, with participants highlighting the value of interaction and POIs for fast decision-making in context. Thorough user-centric performance evaluation demonstrates strong usability of the developed interface and high acceptance ratios.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The manuscript presents a user-centered augmented reality (AR) interface for disaster response that employs 3D Gaussian Splatting (3DGS) to create detailed outdoor scene reconstructions. It features a lightweight interaction design combining World-in-Miniature (WIM) navigation with filterable semantic Points of Interest (POIs), supported by an architecture for streaming updates. The authors claim that user feedback from a preliminary evaluation shows the interface is easy to use, supports real-time coordination, and demonstrates strong usability with high acceptance ratios.

Significance. If the usability claims are substantiated with rigorous evaluation, this could represent a meaningful advance in applying AR for real-time disaster response by balancing detailed visualization with low cognitive load. The integration of 3DGS with WIM and POIs addresses key challenges in situational awareness for responders, potentially improving decision-making in dynamic outdoor environments.

major comments (1)
  1. [Abstract] Abstract: The abstract inconsistently describes the evaluation as both 'preliminary' and 'thorough,' yet provides no details on participant numbers, expertise (e.g., actual responders vs. proxies), tasks performed, quantitative metrics (such as SUS scores, error rates, or NASA-TLX), statistical analyses, or comparison baselines. This absence makes the central claim of 'strong usability' and 'high acceptance ratios' rest on unquantified qualitative feedback, which is insufficient to support the performance assertions.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the constructive comment on the abstract. We agree that the wording is inconsistent and will revise it for clarity and accuracy while preserving the preliminary scope of the evaluation.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The abstract inconsistently describes the evaluation as both 'preliminary' and 'thorough,' yet provides no details on participant numbers, expertise (e.g., actual responders vs. proxies), tasks performed, quantitative metrics (such as SUS scores, error rates, or NASA-TLX), statistical analyses, or comparison baselines. This absence makes the central claim of 'strong usability' and 'high acceptance ratios' rest on unquantified qualitative feedback, which is insufficient to support the performance assertions.

    Authors: We acknowledge the inconsistency, which resulted from an editing oversight when combining sentences. The evaluation is preliminary in nature, as stated in the body of the paper (Section 5), and we will revise the abstract to remove 'thorough' and consistently describe it as preliminary. We will add a brief summary of the evaluation setup, including the number of participants, their background (proxies with relevant domain knowledge), the tasks performed (simulated disaster coordination scenarios), and the qualitative feedback plus basic acceptance ratings collected. As this is an early-stage user study focused on interface design and feasibility rather than a comprehensive empirical validation, we did not collect or report standardized quantitative instruments such as SUS or NASA-TLX scores, nor statistical comparisons. These details are elaborated in the Evaluation section; the abstract will now better signpost them without overstating the results. We believe the revised abstract will adequately support the claims for a preliminary study. revision: yes

Circularity Check

0 steps flagged

No derivation chain present; claims rest on system architecture and external user feedback with no self-referential reductions.

full rationale

The manuscript describes an AR interface architecture combining 3D Gaussian Splatting reconstructions with WIM navigation and POI filtering, plus an update-streaming backend. Usability claims are grounded in a described preliminary user evaluation whose feedback is reported qualitatively. No equations, fitted parameters, uniqueness theorems, or ansatzes appear anywhere in the text. No step reduces by construction to its own inputs, and no self-citation is used to justify a central premise. The noted inconsistency between 'preliminary' and 'thorough' evaluation labels is a rhetorical issue, not a circularity in any derivation.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claims rest on the validity of a preliminary user study and the assumption that 3DGS can deliver real-time outdoor reconstructions without explicit error bounds or failure modes discussed.

axioms (1)
  • domain assumption Preliminary user feedback is representative of real disaster responders under operational conditions
    All usability and acceptance claims depend on this unverified transfer from test participants to field use.

pith-pipeline@v0.9.0 · 5443 in / 1131 out tokens · 38579 ms · 2026-05-15T19:36:44.850423+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 1 internal anchor

  1. [1]

    Triffid: Autonomous robotic aid for increasing first responders efficiency,

    J. Cani, P. Koletsis, K. Foteinos, I. Kefaloukos, L. Argyriou, M. Falelakis, I. Del Pino, A. Santamaria-Navarro, M. Čech, O. Severa et al., “Triffid: Autonomous robotic aid for increasing first responders efficiency,” in 2025 6th International Conference in Electronic Engineering & Information Technology (EEITE). IEEE, 2025, pp. 1–9

  2. [2]

    Theoretical underpinnings of situation awareness: A critical review,

    M. R. Endsley, D. J. Garland et al., “Theoretical underpinnings of situation awareness: A critical review,” Situation Awareness Analysis and Measurement, vol. 1, no. 1, pp. 3–21, 2000

  3. [3]

    The use of decision support in search and rescue: A systematic literature review,

    W. Nasar, R. Da Silva Torres, O. E. Gundersen, and A. T. Karlsen, “The use of decision support in search and rescue: A systematic literature review,” ISPRS International Journal of Geo-Information, vol. 12, no. 5,

  4. [4]

    [Online]. Available: https://www.mdpi.com/2220-9964/12/5/182

  5. [5]

    Analysis of common operational picture and situational awareness during multiple emergency response scenarios

    K. Steen-Tveit and J. Radianti, “Analysis of common operational picture and situational awareness during multiple emergency response scenarios,” in ISCRAM, 2019

  6. [6]

    Safe-ar: Reducing risk while augmenting reality,

    R. R. Lutz, “Safe-ar: Reducing risk while augmenting reality,” in 2018 IEEE 29th International Symposium on Software Reliability Engineering (ISSRE), 2018, pp. 70–75

  7. [7]

    Rescuear: Augmented reality supported collaboration for uav driven emergency response systems,

    A. Agrawal and J. Cleland-Huang, “Rescuear: Augmented reality supported collaboration for uav driven emergency response systems,” arXiv preprint arXiv:2110.00180, 2021

  8. [8]

    Combining 2d and 3d visualization with visual analytics in the environmental domain,

    M. Vuckovic, J. Schmidt, T. Ortner, and D. Cornel, “Combining 2d and 3d visualization with visual analytics in the environmental domain,” Information, vol. 13, no. 1, 2022. [Online]. Available: https://www.mdpi.com/2078-2489/13/1/7

  9. [9]

    Emergency response using hololens for building evacuation,

    S. Sharma, S. T. Bodempudi, D. Scribner, J. Grynovicki, and P. Grazaitis, “Emergency response using hololens for building evacuation,” in International Conference on Human-Computer Interaction. Springer, 2019, pp. 299–311

  10. [10]

    Virtual and augmented reality in the disaster management technology: a literature review of the past 11 years,

    S. Khanal, U. S. Medasetti, M. Mashal, B. Savage, and R. Khadka, “Virtual and augmented reality in the disaster management technology: a literature review of the past 11 years,” Frontiers in Virtual Reality, vol. 3, p. 843195, 2022

  11. [11]

    Application of augmented reality, mobile devices, and sensors for a combat entity quantitative assessment supporting decisions and situational awareness development,

    M. Chmielewski, K. Sapiejewski, and M. Sobolewski, “Application of augmented reality, mobile devices, and sensors for a combat entity quantitative assessment supporting decisions and situational awareness development,” Applied Sciences, vol. 9, no. 21, p. 4577, 2019

  12. [12]

    Integrating virtual reality, augmented reality, mixed reality, extended reality, and simulation-based systems into fire and rescue service training: Current practices and future directions,

    D. Hancko, A. Majlingova, and D. Kačíková, “Integrating virtual reality, augmented reality, mixed reality, extended reality, and simulation-based systems into fire and rescue service training: Current practices and future directions,” Fire, vol. 8, no. 6, 2025. [Online]. Available: https://www.mdpi.com/2571-6255/8/6/228

  13. [13]

    Employing virtual reality to support decision making in emergency management,

    G. E. Beroggi, L. Waisel, and W. A. Wallace, “Employing virtual reality to support decision making in emergency management,” Safety Science, vol. 20, no. 1, pp. 79–88, 1995, the International Emergency Management and Engineering Society. [Online]. Available: https://www.sciencedirect.com/science/article/pii/092575359400068E

  14. [14]

    Enhancing the explanatory power of usability heuristics,

    J. Nielsen, “Enhancing the explanatory power of usability heuristics,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ser. CHI ’94. New York, NY, USA: Association for Computing Machinery, 1994, pp. 152–158. [Online]. Available: https://doi.org/10.1145/191666.191729

  15. [15]

    Reducing the cognitive load of decision-makers in emergency management through augmented reality,

    M. Mirbabaie and J. Fromm, “Reducing the cognitive load of decision-makers in emergency management through augmented reality,” 2019

  16. [16]

    3d gaussian splatting for real-time radiance field rendering

    B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3d gaussian splatting for real-time radiance field rendering,” ACM Trans. Graph., vol. 42, no. 4, pp. 139–1, 2023

  17. [17]

    Nerf: Representing scenes as neural radiance fields for view synthesis,

    B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “Nerf: Representing scenes as neural radiance fields for view synthesis,” Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021

  18. [18]

    A survey of 3d reconstruction: The evolution from multi-view geometry to nerf and 3dgs,

    S. Liu, M. Yang, T. Xing, and R. Yang, “A survey of 3d reconstruction: The evolution from multi-view geometry to nerf and 3dgs,” Sensors, vol. 25, no. 18, p. 5748, 2025

  19. [19]

    Analyzing 3d gaussian splatting and neural radiance fields: A comparative study on complex scenes and sparse views,

    C. Blanchard, L. Gupta, and S. Nanisetty, “Analyzing 3d gaussian splatting and neural radiance fields: A comparative study on complex scenes and sparse views,” cs.toronto.edu, 2023

  20. [20]

    Compact 3d gaussian representation for radiance field,

    J. C. Lee, D. Rho, X. Sun, J. H. Ko, and E. Park, “Compact 3d gaussian representation for radiance field,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, pp. 21719–21728

  21. [21]

    Self-supervised visual learning in the low-data regime: a comparative evaluation,

    S. Konstantakos, J. Cani, I. Mademlis, D. I. Chalkiadaki, Y. M. Asano, E. Gavves, and G. T. Papadopoulos, “Self-supervised visual learning in the low-data regime: a comparative evaluation,” Neurocomputing, vol. 620, p. 129199, 2025

  22. [22]

    Advances in diffusion models for image data augmentation: A review of methods, models, evaluation metrics and future research directions,

    P. Alimisis, I. Mademlis, P. Radoglou-Grammatikis, P. Sarigiannidis, and G. T. Papadopoulos, “Advances in diffusion models for image data augmentation: A review of methods, models, evaluation metrics and future research directions,” Artificial Intelligence Review, vol. 58, no. 4, p. 112, 2025

  23. [23]

    Illicit object detection in x-ray imaging using deep learning techniques: A comparative evaluation,

    J. Cani, C. Diou, S. Evangelatos, V. Argyriou, P. Radoglou-Grammatikis, P. Sarigiannidis, I. Varlamis, and G. T. Papadopoulos, “Illicit object detection in x-ray imaging using deep learning techniques: A comparative evaluation,” IEEE Access, 2026

  24. [24]

    Virtual and augmented reality technologies for emergency management in the built environments: A state-of-the-art review,

    Y. Zhu and N. Li, “Virtual and augmented reality technologies for emergency management in the built environments: A state-of-the-art review,” Journal of Safety Science and Resilience, vol. 2, no. 1, pp. 1–10, 2021

  25. [25]

    Comparing the effectiveness of fire extinguisher virtual reality and video training,

    R. Lovreglio, X. Duan, A. Rahouti, R. Phipps, and D. Nilsson, “Comparing the effectiveness of fire extinguisher virtual reality and video training,” Virtual Reality, vol. 25, no. 1, pp. 133–145, 2021

  26. [26]

    A novel earthquake education system based on virtual reality,

    X. Gong, Y. Liu, Y. Jiao, B. Wang, J. Zhou, and H. Yu, “A novel earthquake education system based on virtual reality,” IEICE Transactions on Information and Systems, vol. E98.D, no. 12, pp. 2242–2249, 2015

  27. [27]

    Hybrid 3d rendering of large map data for crisis management,

    D. Tully, A. El Rhalibi, C. Carter, and S. Sudirman, “Hybrid 3d rendering of large map data for crisis management,” ISPRS International Journal of Geo-Information, vol. 4, no. 3, pp. 1033–1054, 2015

  28. [28]

    Challenges of using drones and virtual/augmented reality for disaster risk management,

    D. Velev, P. Zlateva, L. Steshina, and I. Petukhov, “Challenges of using drones and virtual/augmented reality for disaster risk management,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 42, pp. 437–440, 2019

  29. [29]

    Large-scale photorealistic outdoor 3d scene reconstruction from uav imagery using gaussian splatting techniques,

    C. Maikos, G. Angelidis, and G. T. Papadopoulos, “Large-scale photorealistic outdoor 3d scene reconstruction from uav imagery using gaussian splatting techniques,” 2026. [Online]. Available: https://arxiv.org/abs/2602.20342

  30. [30]

    Integrating cognitive load theory and concepts of human–computer interaction,

    N. Hollender, C. Hofmann, M. Deneke, and B. Schmitz, “Integrating cognitive load theory and concepts of human–computer interaction,” Computers in Human Behavior, vol. 26, no. 6, pp. 1278–1288, 2010, Online Interactivity: Role of Technology in Behavior Change. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0747563210001718

  31. [31]

    Enhancing emergency response: The critical role of interface design in mining emergency robots,

    R. Bakzadeh, K. M. Joao, V. Androulakis, H. Khaniani, S. Shao, M. Hassanalian, and P. Roghanchi, “Enhancing emergency response: The critical role of interface design in mining emergency robots,” Robotics, vol. 14, no. 11, 2025. [Online]. Available: https://www.mdpi.com/2218-6581/14/11/148

  32. [32]

    Virtual and augmented reality for disaster risk reduction,

    M. Migliorini, L. Licata, and D. Strumendo, “Virtual and augmented reality for disaster risk reduction,” in 1st Croatian Conference on Earthquake Engineering, 2021, p. 8

  33. [33]

    Virtual reality on a wim: interactive worlds in miniature,

    R. Stoakley, M. J. Conway, and R. Pausch, “Virtual reality on a wim: interactive worlds in miniature,” in Proceedings of the SIGCHI conference on Human factors in computing systems, 1995, pp. 265–272

  34. [34]

    Navigation and locomotion in virtual worlds via flight into hand-held miniatures,

    R. Pausch, T. Burnette, D. Brockway, and M. E. Weiblen, “Navigation and locomotion in virtual worlds via flight into hand-held miniatures,” in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995, pp. 399–400

  35. [35]

    Number of people required for usability evaluation: the 10±2 rule,

    W. Hwang and G. Salvendy, “Number of people required for usability evaluation: the 10±2 rule,” Commun. ACM, vol. 53, no. 5, pp. 130–133, May 2010. [Online]. Available: https://doi.org/10.1145/1735223.1735255

  36. [36]

    Balancing performance and comfort in virtual reality: A study of fps, latency, and batch values,

    A. Geris, B. Cukurbasi, M. Kilinc, and O. Teke, “Balancing performance and comfort in virtual reality: A study of fps, latency, and batch values,” Software: Practice and Experience, vol. 54, no. 12, pp. 2336–2348, 2024

  37. [37]

    Exploring the effects of image persistence in low frame rate virtual environments,

    D. J. Zielinski, H. M. Rao, M. A. Sommer, and R. Kopper, “Exploring the effects of image persistence in low frame rate virtual environments,” in 2015 IEEE Virtual Reality (VR), 2015, pp. 19–26

  38. [38]

    Multi-layer gaussian splatting for immersive anatomy visualization,

    C. Kleinbeck, H. Schieber, K. Engel, R. Gutjahr, and D. Roth, “Multi-layer gaussian splatting for immersive anatomy visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 2353–2363, 2025

  39. [39]

    Unity gaussian splatting,

    A. Pranckevičius, “Unity gaussian splatting,” https://github.com/aras-p/UnityGaussianSplatting, 2024

  40. [40]

    Mixed reality toolkit 3 (mrtk3),

    Microsoft, “Mixed reality toolkit 3 (mrtk3),” https://github.com/MixedRealityToolkit/MixedRealityToolkit-Unity, 2023, version 3.0, accessed: December 2025

  41. [41]

    Passthrough over link,

    Meta Platforms, Inc., “Passthrough over link,” https://developers.meta.com/horizon/documentation/native/android/mobile-passthrough-over-link/, accessed: December 2025

  42. [42]

    Survey on hand gesture recognition from visual input,

    M. Linardakis, I. Varlamis, and G. T. Papadopoulos, “Survey on hand gesture recognition from visual input,” IEEE Access, 2025

  43. [43]

    Visual Hand Gesture Recognition with Deep Learning: A Comprehensive Review of Methods, Datasets, Challenges and Future Research Directions

    K. Foteinos, M. Linardakis, P. Radoglou-Grammatikis, V. Argyriou, P. Sarigiannidis, I. Varlamis, and G. T. Papadopoulos, “Visual hand gesture recognition with deep learning: A comprehensive review of methods, datasets, challenges and future research directions,” arXiv preprint arXiv:2507.04465, 2025

  44. [44]

    Distributed maze exploration using multiple agents and optimal goal assignment,

    M. Linardakis, I. Varlamis, and G. T. Papadopoulos, “Distributed maze exploration using multiple agents and optimal goal assignment,” IEEE Access, vol. 12, pp. 101407–101418, 2024

  45. [45]

    Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning,

    G. T. Papadopoulos, M. Antona, and C. Stephanidis, “Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning,” IEEE Access, vol. 9, pp. 73890–73909, 2021

  46. [46]

    User profile-driven large-scale multi-agent learning from demonstration in federated human-robot collaborative environments,

    G. T. Papadopoulos, A. Leonidis, M. Antona, and C. Stephanidis, “User profile-driven large-scale multi-agent learning from demonstration in federated human-robot collaborative environments,” in International Conference on Human-Computer Interaction. Springer, 2022, pp. 548–563

  47. [47]

    Tornado: Foundation models for robots that handle small, soft and deformable objects,

    M. Moutousi, A. El Saer, N. Nikolaou, A. Sanfeliu, A. Garrell, L. Bláha, M. Čech, E. K. Markakis, I. Kefaloukos, M. Lagomarsino et al., “Tornado: Foundation models for robots that handle small, soft and deformable objects,” in 2025 6th International Conference in Electronic Engineering & Information Technology (EEITE). IEEE, 2025, pp. 1–13

  48. [48]

    The invisible arms race: digital trends in illicit goods trafficking and ai-enabled responses,

    I. Mademlis, M. Mancuso, C. Paternoster, S. Evangelatos, E. Finlay, J. Hughes, P. Radoglou-Grammatikis, P. Sarigiannidis, G. Stavropoulos, K. Votis et al., “The invisible arms race: digital trends in illicit goods trafficking and ai-enabled responses,” IEEE Transactions on Technology and Society, vol. 6, no. 2, pp. 181–199, 2024

  49. [49]

    Multimodal explainable artificial intelligence: A comprehensive review of methodological advances and future research directions,

    N. Rodis, C. Sardianos, P. Radoglou-Grammatikis, P. Sarigiannidis, I. Varlamis, and G. T. Papadopoulos, “Multimodal explainable artificial intelligence: A comprehensive review of methodological advances and future research directions,” IEEE Access, vol. 12, pp. 159794–159820, 2024

  50. [50]

    Exploring energy landscapes for minimal counterfactual explanations: Applications in cybersecurity and beyond,

    S. Evangelatos, E. Veroni, V. Efthymiou, C. Nikolopoulos, G. T. Papadopoulos, and P. Sarigiannidis, “Exploring energy landscapes for minimal counterfactual explanations: Applications in cybersecurity and beyond,” IEEE Transactions on Artificial Intelligence, 2025