Feature attribution explanations for spiking neural networks
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.AI
Years: 2026
Verdicts: UNVERDICTED
Representative citing papers: 1
Native Explainability for Bayesian Confidence Propagation Neural Networks: A Framework for Trusted Brain-Like AI
BCPNNs are inherently interpretable models; this work supplies a taxonomy, sixteen explanation primitives, and configuration artifacts that make their decisions auditable without post-hoc tools.