Deep Arguing
Lean Theorem · Recognition: 2 theorem links
Pith reviewed 2026-05-12 04:08 UTC · model grok-4.3
The pith
Deep neural networks construct argumentation graphs in which data points support their predicted label and attack alternatives.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By having the network output an argumentation graph over the input examples, with edges encoding support for the correct class and attacks on wrong classes, and then applying differentiable semantics to compute the final prediction, the model learns representations and reasoning steps together. The resulting graph serves as a faithful case-based explanation because the support and attack relations directly determine the output through the semantics.
What carries the argument
The argumentation graph over data points, with support and attack edges learned from features and processed by differentiable argumentation semantics that compute the overall label assignment.
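As a concrete illustration of how such differentiable semantics can operate, here is a minimal sketch of an iterative gradual semantics over a weighted bipolar argumentation graph. The update rule, function names, and toy graph are illustrative assumptions, not the paper's exact semantics; the point is only that every step is smooth, so the final strengths are differentiable in the base scores and edge weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradual_semantics(base, support, attack, iters=50):
    """Iteratively compute argument strengths on a weighted bipolar graph.

    base:    (n,) base scores in (0, 1), one per argument (data point).
    support: (n, n) non-negative weights; support[i, j] = how much i supports j.
    attack:  (n, n) non-negative weights; attack[i, j] = how much i attacks j.
    Every operation is smooth, so gradients can flow through the iteration.
    """
    s = base.copy()
    for _ in range(iters):
        # Net incoming influence: support minus attack, weighted by the
        # current strength of the supporting/attacking arguments.
        influence = support.T @ s - attack.T @ s
        # Shift each argument's base score (in logit space) by its net
        # influence and squash back into (0, 1).
        s = sigmoid(np.log(base / (1 - base)) + influence)
    return s

# Toy graph: argument 0 supports argument 2, argument 1 attacks it.
base = np.array([0.8, 0.6, 0.5])
support = np.zeros((3, 3)); support[0, 2] = 1.0
attack = np.zeros((3, 3)); attack[1, 2] = 1.0
strengths = gradual_semantics(base, support, attack)
```

Arguments with no incoming edges keep their base score; argument 2 ends slightly above 0.5 because the incoming support (0.8) outweighs the attack (0.6). In the paper's setting these strengths would be aggregated per class to produce the label.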
If this is right
- The model reaches accuracy levels competitive with ordinary deep networks on both tabular and imaging classification tasks.
- Every prediction is accompanied by an explicit graph showing which training cases support or attack the assigned label.
- Constraints on the graph structure during training simultaneously raise predictive performance and the quality of the explanations.
- The same end-to-end pipeline applies without modification to different data modalities.
Where Pith is reading between the lines
- Inspecting the learned attack relations could surface systematic biases by revealing which groups of examples consistently undermine certain predictions.
- The approach could be combined with existing attribution methods to cross-check whether the argumentative explanations align with gradient or perturbation-based importance scores.
- Extending the same structure to regression or structured prediction tasks might yield case-based explanations for continuous outputs.
Load-bearing premise
That the support and attack relations the network learns actually mirror its internal decision process, rather than forming a separate structure that need not match how the features influence the output.
What would settle it
A controlled test in which a data point identified as strongly supportive in the argumentation graph is removed or its features altered, yet the model's prediction and confidence remain unchanged.
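Such a deletion test could be sketched as follows. The `predict_proba` and `top_supporter` interfaces are hypothetical stand-ins, not the paper's API; the toy model below exists only so the sketch runs end to end.

```python
import numpy as np

def deletion_faithfulness_test(predict_proba, top_supporter, X_train, x, tol=0.01):
    """Remove the training case the argumentation graph marks as the
    strongest supporter of x's prediction, then re-predict.

    If the graph is faithful, confidence in the predicted label should
    drop by more than `tol`; if it barely moves, the "supporting" case
    was not load-bearing for the prediction.
    """
    probs_before = predict_proba(X_train, x)
    label = int(np.argmax(probs_before))
    idx = top_supporter(X_train, x)
    X_ablated = np.delete(X_train, idx, axis=0)
    probs_after = predict_proba(X_ablated, x)
    drop = float(probs_before[label] - probs_after[label])
    return {"label": label, "confidence_drop": drop, "faithful": drop > tol}

# Toy stand-in model: confidence for class 0 grows with the case-base size,
# so removing any case lowers it — a model that passes the test by design.
def predict_proba(X_train, x):
    p0 = min(0.9, 0.5 + 0.1 * len(X_train))
    return np.array([p0, 1.0 - p0])

def top_supporter(X_train, x):
    return 0  # pretend the graph names case 0 as the strongest supporter

result = deletion_faithfulness_test(predict_proba, top_supporter,
                                    np.ones((4, 2)), np.ones(2))
```

A graph that fails this test (prediction and confidence unchanged after deleting its strongest supporter) would be evidence against the faithfulness claim.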
Original abstract
Deep learning has become the dominant approach for creating high capacity, scalable models across diverse data modalities. However, because these models rely on a large number of learned parameters, tightly couple feature extraction with task objectives, and often lack explicit reasoning mechanisms, it is difficult for humans to understand how they arrive at their predictions. Understanding what representations emerge and why they arise from the training data remains an open challenge. We introduce Deep Arguing, a novel neurosymbolic approach that integrates deep learning with argumentation construction and reasoning for interpretable classification with different data modalities. In our approach deep neural networks construct an argumentation structure wherein data points support their assigned label and attack different ones. Using differentiable argumentation semantics for reasoning, the model is trained end-to-end to jointly learn feature representation and argumentative interactions. This results in argumentation structures providing faithful case-based explanations for predictions. Structure constraints over the argumentation graph guide learning, improving both interpretability and predictive performance. Experiments with tabular and imaging datasets show that Deep Arguing achieves performance competitive with standard baselines whilst offering interpretable argumentative reasoning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Deep Arguing, a neurosymbolic architecture in which a deep neural network constructs an argumentation graph over data points (arguments support their predicted label and attack alternatives). Differentiable argumentation semantics are used to reason over this graph, enabling end-to-end training that jointly optimizes feature representations and argumentative interactions. The resulting structures are claimed to deliver faithful case-based explanations while structure constraints improve both interpretability and predictive performance. Experiments on tabular and imaging datasets are reported to achieve performance competitive with standard baselines.
Significance. If the faithfulness claim holds and the imposed graph constraints demonstrably improve both accuracy and interpretability without hidden trade-offs, the work would provide a concrete bridge between high-capacity neural models and symbolic reasoning, addressing a central challenge in explainable AI. The use of differentiable semantics for joint learning of representations and interactions is a technically interesting direction that could generalize across modalities.
major comments (2)
- [Abstract / Experiments] Abstract and Experiments section: the central claim that the constructed argumentation structures provide 'faithful case-based explanations' is not supported by any quantitative faithfulness metric (e.g., agreement between the semantics-derived label and the network's internal activations, or ablation showing that removing the graph alters predictions in the predicted manner). Training merely aligns the auxiliary structure to the network output; without an independent verification test, it remains possible that the graph is an imposed constraint whose output is post-hoc aligned rather than a faithful reflection of the model's reasoning.
- [Experiments] Experiments section: performance is described only as 'competitive with standard baselines' with no tables, numerical results, error bars, specific baselines, or statistical tests. This prevents assessment of whether the structure constraints deliver the claimed performance improvement or merely preserve accuracy while adding constraints, directly undermining the dual claim of improved interpretability and predictive performance.
minor comments (1)
- [Abstract] The abstract states that 'structure constraints over the argumentation graph guide learning' but does not specify the exact form of these constraints or how they are enforced during back-propagation.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight important areas for strengthening the presentation of faithfulness claims and experimental details. We address each major comment below and will revise the manuscript accordingly.
Point-by-point responses
-
Referee: [Abstract / Experiments] Abstract and Experiments section: the central claim that the constructed argumentation structures provide 'faithful case-based explanations' is not supported by any quantitative faithfulness metric (e.g., agreement between the semantics-derived label and the network's internal activations, or ablation showing that removing the graph alters predictions in the predicted manner). Training merely aligns the auxiliary structure to the network output; without an independent verification test, it remains possible that the graph is an imposed constraint whose output is post-hoc aligned rather than a faithful reflection of the model's reasoning.
Authors: We thank the referee for this observation. In Deep Arguing, the argumentation graph is not an auxiliary or post-hoc construct: the neural network produces the graph, and the final classification is computed directly via differentiable argumentation semantics over the support and attack relations. The explanations are therefore faithful by construction, as the predicted label is a direct function of the learned argumentative interactions rather than an independent alignment step. The joint end-to-end training under structure constraints further ensures that the graph reflects the model's reasoning. To provide quantitative support as requested, we will add an ablation study (performance with vs. without the argumentation module) and a faithfulness metric (e.g., agreement rate between semantics-derived labels and direct network outputs) to the Experiments section. revision: yes
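The agreement-rate metric the authors promise could be computed as simply as the following sketch (the label arrays here are illustrative, not the authors' evaluation data):

```python
import numpy as np

def semantics_agreement_rate(semantics_labels, network_labels):
    """Fraction of inputs where the label derived from the argumentation
    semantics matches the network's direct prediction.

    Both arguments are equal-length integer label sequences. An agreement
    rate near 1.0 is the kind of quantitative faithfulness evidence the
    referee asks for.
    """
    semantics_labels = np.asarray(semantics_labels)
    network_labels = np.asarray(network_labels)
    return float(np.mean(semantics_labels == network_labels))

# Toy example: the semantics and the network disagree on one of five inputs.
rate = semantics_agreement_rate([0, 1, 1, 2, 0], [0, 1, 2, 2, 0])
```

Note that if the semantics output literally is the prediction, as the rebuttal argues, this rate is 1.0 by construction; the informative comparison is against a separately read-out network head or an ablated variant.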
-
Referee: [Experiments] Experiments section: performance is described only as 'competitive with standard baselines' with no tables, numerical results, error bars, specific baselines, or statistical tests. This prevents assessment of whether the structure constraints deliver the claimed performance improvement or merely preserve accuracy while adding constraints, directly undermining the dual claim of improved interpretability and predictive performance.
Authors: We acknowledge that the current version does not present experimental results with sufficient detail. Although the manuscript includes experiments on tabular and imaging datasets, we agree that the lack of explicit tables, numerical values, error bars, named baselines, and statistical tests limits evaluation of the claimed benefits. In the revised manuscript we will expand the Experiments section with comprehensive tables reporting accuracies (with standard deviations), comparisons against specific baselines (e.g., standard DNNs and other neurosymbolic methods), error bars across multiple runs, and statistical significance tests. This will allow direct assessment of whether the structure constraints improve or maintain predictive performance while enhancing interpretability. revision: yes
Circularity Check
No significant circularity; derivation self-contained via explicit neurosymbolic design
Full rationale
The paper defines a method in which a neural network explicitly constructs an argumentation graph (data points as arguments) and applies differentiable semantics to produce the label prediction, with end-to-end training aligning the two. The claim that the resulting structures provide 'faithful case-based explanations' follows directly from this construction rather than from any hidden reduction or self-referential fit. No equations, self-citations, or uniqueness theorems are invoked in the abstract or described text that would make a central result equivalent to its inputs by definition. The approach is presented as an imposed auxiliary structure whose outputs are aligned by training, which is a standard design choice rather than circularity. External benchmarks (competitive performance on tabular/imaging data) are referenced without reducing to internal parameters.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Differentiable versions of standard argumentation semantics exist and preserve the intended support/attack semantics during gradient-based training.
invented entities (1)
- Argumentation structure constructed by the DNN (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  "deep neural networks construct an argumentation structure wherein data points support their assigned label and attack different ones. Using differentiable argumentation semantics..."
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · reality_from_one_distinction (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  "Structure constraints over the argumentation graph guide learning, improving both interpretability and predictive performance."
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Optuna: A next-generation hyperparameter optimization framework
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 2623–2631, New York, NY, USA, 2019. Association for Computing Machinery
work page 2019
-
[2]
Gradual Semantics Accounting for Varied-Strength Attacks
Leila Amgoud and Dragan Doder. Gradual Semantics Accounting for Varied-Strength Attacks. In 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2019), pages 1270–1278, Montréal, Canada, May 2019. IFAAMAS: International Foundation for Autonomous Agents and Multiagent Systems and SIGAI: ACM’s Special Interest Group on Artificia...
work page 2019
-
[3]
Hamed Ayoobi, Nico Potyka, and Francesca Toni. Protoargnet: Interpretable image classification with super-prototypes and argumentation [technical report], 2023
work page 2023
-
[4]
Sparx: Sparse argumentative explanations for neural networks
Hamed Ayoobi, Nico Potyka, and Francesca Toni. Sparx: Sparse argumentative explanations for neural networks. In ECAI 2023, pages 149–156. IOS Press, 2023
work page 2023
-
[5]
Logic tensor networks
Samy Badreddine, Artur d’Avila Garcez, Luciano Serafini, and Michael Spranger. Logic tensor networks. Artificial Intelligence, 303:103649, February 2022
work page 2022
-
[6]
Pietro Baroni, Antonio Rago, and Francesca Toni. How many properties do we need for gradual argumentation? In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI’18/IAAI’18/EA...
work page 2018
- [7]
- [8]
-
[9]
Data-empowered argumentation for dialectically explainable predictions
Oana Cocarascu, Andria Stylianou, Kristijonas Čyras, and Francesca Toni. Data-empowered argumentation for dialectically explainable predictions. In ECAI 2020, pages 2449–2456. IOS Press, 2020
work page 2020
-
[10]
Explanatory predictions with artificial neural networks and argumentation
Oana Cocarascu, Kristijonas Čyras, and Francesca Toni. Explanatory predictions with artificial neural networks and argumentation. In Proceedings of the 2nd Workshop on Explainable Artificial Intelligence (XAI 2018), May 2018
work page 2018
-
[11]
Neuro-symbolic learning of answer set programs from raw data
Daniel Cunnington, Mark Law, Jorge Lobo, and Alessandra Russo. Neuro-symbolic learning of answer set programs from raw data. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI ’23, 2023
work page 2023
-
[12]
Deep symbolic learning: Discovering symbols and rules from perceptions
Alessandro Daniele, Tommaso Campari, Sagar Malhotra, and Luciano Serafini. Deep symbolic learning: Discovering symbols and rules from perceptions. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-2023, pages 3597–3605. International Joint Conferences on Artificial Intelligence Organization, August 2023
work page 2023
-
[13]
Artur S. d’Avila Garcez, Dov M. Gabbay, and Luis C. Lamb. Value-based argumentation frameworks as neural-symbolic learning systems. Journal of Logic and Computation, 15(6):1041–1058, 2005
work page 2005
-
[14]
Artur S. d’Avila Garcez, Dov M. Gabbay, and Luis C. Lamb. A neural cognitive model of argumentation with application to legal inference and decision making. Journal of Applied Logic, 12(2):109–127, June 2014
work page 2014
-
[15]
Object-centric case-based reasoning via argumentation
Gabriel de Olim Gaul, Adam Gould, Avinash Kori, and Francesca Toni. Object-centric case-based reasoning via argumentation. In Timotheus Kampik, Antonio Rago, Kristijonas Cyras, and Oana Cocarascu, editors, Proceedings of the 3rd International Workshop on Argumentation for eXplainable AI (ArgXAI 2025) co-located with the 28th European Conference on Artific...
work page 2025
-
[16]
Phan Minh Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357, 1995
work page 1995
-
[17]
Glioma grading clinical and mutation features, 2022
Erdal Tasci and Kevin Camphausen. Glioma grading clinical and mutation features, 2022
work page 2022
-
[18]
On interpretability of artificial neural networks: A survey, 2020
Fenglei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang. On interpretability of artificial neural networks: A survey, 2020
work page 2020
-
[19]
Argumentative large language models for explainable and contestable decision-making, 2024
Gabriel Freedman, Adam Dejl, Deniz Gorur, Xiang Yin, Antonio Rago, and Francesca Toni. Argumentative large language models for explainable and contestable decision-making, 2024
work page 2024
-
[20]
Artur d’Avila Garcez and Luís C. Lamb. Neurosymbolic AI: the 3rd wave. Artificial Intelligence Review, 56(11):12387–12406, March 2023
work page 2023
-
[21]
Bryce Goodman and Seth Flaxman. European union regulations on algorithmic decision making and a “right to explanation”. AI Magazine, 38(3):50–57, September 2017
work page 2017
-
[22]
Preference-Based Abstract Argumentation for Case-Based Reasoning
Adam Gould, Guilherme Paulino-Passos, Seema Dadhania, Matthew Williams, and Francesca Toni. Preference-Based Abstract Argumentation for Case-Based Reasoning. In KR, pages 394–404, August 2024
work page 2024
-
[23]
Neuro-argumentative learning with case-based reasoning
Adam Gould and Francesca Toni. Neuro-argumentative learning with case-based reasoning. In Leilani H. Gilpin, Eleonora Giunchiglia, Pascal Hitzler, and Emile van Krieken, editors, Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, volume 284 of Proceedings of Machine Learning Research, pages 1090–1106. PMLR, 08–10 Sep 2025
work page 2025
-
[24]
Deep residual learning for image recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016
work page 2016
-
[25]
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017
work page 2017
-
[26]
Learning multiple layers of features from tiny images
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical Report TR-2009, University of Toronto, Computer Science Department, 2009
work page 2009
- [27]
-
[28]
The MNIST database of handwritten digits
Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/
-
[29]
Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, Himabindu Lakkaraju, and Haoyi Xiong. $\mathcal{M}^4$: A unified XAI benchmark for faithfulness evaluation of feature attribution methods across metrics, modalities and models. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023
work page 2023
-
[30]
Least squares quantization in PCM
Stuart Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982
work page 1982
-
[31]
Decoupled weight decay regularization
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019
work page 2019
-
[32]
J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press, Oakland, CA, USA, 1967
work page 1967
-
[33]
Neural probabilistic logic programming in DeepProbLog
Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Neural probabilistic logic programming in DeepProbLog. Artificial Intelligence, 298:103504, 2021
work page 2021
-
[34]
Not all neuro-symbolic concepts are created equal: Analysis and mitigation of reasoning shortcuts
Emanuele Marconato, Stefano Teso, Antonio Vergari, and Andrea Passerini. Not all neuro-symbolic concepts are created equal: Analysis and mitigation of reasoning shortcuts. In Thirty-seventh Conference on Neural Information Processing Systems, 2023
work page 2023
- [35]
-
[36]
Frank Nielsen and Ke Sun. Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy, 18(12):442, December 2016
work page 2016
-
[37]
PyTorch: an imperative style, high-performance deep learning library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: an imperative style, high-performan... Curran Associates Inc., Red Hook, NY, USA, 2019
work page 2019
-
[38]
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011
work page 2011
-
[39]
Deep differentiable logic gate networks
Felix Petersen, Christian Borgelt, Hilde Kuehne, and Oliver Deussen. Deep differentiable logic gate networks. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022
work page 2022
-
[40]
Continuous dynamical systems for weighted bipolar argumentation
Nico Potyka. Continuous dynamical systems for weighted bipolar argumentation. In Michael Thielscher, Francesca Toni, and Frank Wolter, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, KR 2018, Tempe, Arizona, 30 October - 2 November 2018, pages 148–157. AAAI Press, 2018
work page 2018
-
[41]
Extending modular semantics for bipolar weighted argumentation
Nico Potyka. Extending modular semantics for bipolar weighted argumentation. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, pages 1722–1730, Richland, SC, 2019. International Foundation for Autonomous Agents and Multiagent Systems
work page 2019
-
[42]
Nico Potyka. Interpreting neural networks as quantitative argumentation frameworks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7):6463–6470, May 2021
work page 2021
-
[43]
A roadmap for neuro-argumentative learning
Maurizio Proietti and Francesca Toni. A roadmap for neuro-argumentative learning. In NeSy, pages 1–8, 2023
work page 2023
-
[44]
S. Moro and P. Rita. Bank marketing, 2014
work page 2014
-
[45]
The graph neural network model
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009
work page 2009
-
[46]
Analyzing differentiable fuzzy logic operators
Emile van Krieken, Erman Acar, and Frank van Harmelen. Analyzing differentiable fuzzy logic operators. Artificial Intelligence, 302:103602, January 2022
work page 2022
-
[47]
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018
work page 2018
-
[48]
Wenguan Wang, Yi Yang, and Fei Wu. Towards data- and knowledge-driven AI: A survey on neuro-symbolic computing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(2):878–899, February 2025
work page 2025
-
[49]
Transparent classification with multilayer logical perceptrons and random binarization, 2019
Zhuo Wang, Wei Zhang, Ning Liu, and Jianyong Wang. Transparent classification with multilayer logical perceptrons and random binarization, 2019
work page 2019
-
[50]
Scalable rule-based representation learning for interpretable classification
Zhuo Wang, Wei Zhang, Ning Liu, and Jianyong Wang. Scalable rule-based representation learning for interpretable classification. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021
work page 2021
-
[51]
Shuhei Watanabe. Tree-structured parzen estimator: Understanding its algorithm components and their roles for better empirical performance, 2023
work page 2023
- [52]
-
[53]
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann, Sebastian Pölsterl, and Christian Wachinger. Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6):5921–5929, March 2024
work page 2024
-
[54]
Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017
work page 2017
-
[55]
A semantic loss function for deep learning with symbolic knowledge
Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. A semantic loss function for deep learning with symbolic knowledge. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5502–5511. PMLR, 10–15 Jul 2018
work page 2018
-
[56]
A survey on neural network interpretability
Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence, 5(5):726–742, 2021
work page 2021
-
[57]
DAGs with no tears: Continuous optimization for structure learning
Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric Xing. DAGs with no tears: Continuous optimization for structure learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018
work page 2018
-
[58]
A survey on graph structure learning: Progress and opportunities, 2021
Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, and Shu Wu. A survey on graph structure learning: Progress and opportunities, 2021
work page 2021
-
[59]
Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. Argumentative XAI: A survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-2021. International Joint Conferences on Artificial Intelligence Organization, August 2021
work page 2021
-
[60]
Abstract argumentation for case-based reasoning
Kristijonas Čyras, Ken Satoh, and Francesca Toni. Abstract argumentation for case-based reasoning. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning. AAAI Press, 2016
work page 2016
-
[61]
The output layer for the edge weight function had 64 neurons
Then the base score and edge weight functions are made up of a four-layer MLP, with the first hidden layer having 64 neurons, the second hidden layer 48 neurons, and the third hidden layer 32 neurons. The output layer for the edge weight function had 64 neurons. The ResNet was pre-trained on the CIFAR-10 data using a linear layer at the end that was removed...