Interpreting Neural Networks as Quantitative Argumentation Frameworks

Authors: Nico Potyka

AAAI 2021, pp. 6463-6470 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks. This connection creates a bridge between research in Formal Argumentation and Machine Learning. We generalize the semantics of feed-forward neural networks to acyclic graphs and study the resulting computational and semantical properties in argumentation graphs.
Researcher Affiliation | Academia | Nico Potyka, University of Stuttgart, Universitätsstraße 32, 70569 Stuttgart, Germany, nico.potyka@ipvs.uni-stuttgart.de
Pseudocode | No | The paper describes mathematical definitions and procedures for MLP-based semantics and continuous MLP-based semantics, but it does not present them in a structured pseudocode or algorithm block. (An illustrative sketch of such an iterative update is given below the table.)
Open Source Code | Yes | The example can be found in the Java library Attractor (Potyka 2018b) in the folder examples/divergence. The Java library Attractor (Potyka 2018b) contains an implementation of the Runge-Kutta method RK4 for this purpose (https://sourceforge.net/projects/attractorproject/). (A generic RK4 sketch follows below the table.)
Open Datasets | No | The paper is theoretical and illustrates concepts with examples rather than conducting experiments on conventional datasets with training/testing splits. Therefore, there is no mention of a publicly available dataset for training.
Dataset Splits | No | The paper does not use conventional datasets with training, validation, and test splits. Its analysis is primarily theoretical, with illustrative examples rather than empirical evaluation requiring such splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the computations or generate the illustrative examples.
Software Dependencies | No | The paper mentions the Java library Attractor but does not specify a version number for it or any other software dependencies.
Experiment Setup | No | The paper is primarily theoretical and uses illustrative examples to demonstrate concepts (e.g., convergence properties). It does not describe an experimental setup with hyperparameters or training configurations for a machine learning model.
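
The "Pseudocode" row notes that the paper gives mathematical definitions of the MLP-based semantics rather than an algorithm block. The following Java sketch illustrates the kind of iterative strength update such a definition suggests, assuming a logistic activation and base scores converted to a bias via the logit function; class, method, and parameter names (QbafIteration, iterate, baseScores, attackers, supporters, eps, maxSteps) are hypothetical and are not taken from the paper or the Attractor library.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of an iterative "MLP-style" strength update for a
 * quantitative bipolar argumentation framework. The exact update rule used
 * in the paper may differ; consult the paper for the authoritative definitions.
 */
public class QbafIteration {

    /** Logistic activation, as used by MLPs with sigmoid units. */
    static double logistic(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    /** Inverse of the logistic function (logit); turns a base score in (0,1) into a bias. */
    static double logit(double p) {
        return Math.log(p / (1.0 - p));
    }

    /**
     * Iterates the strength update until the maximum change between steps
     * falls below eps or maxSteps is reached. baseScores maps each argument
     * to a value in (0,1); attackers/supporters map each argument to its parents.
     */
    static Map<String, Double> iterate(Map<String, Double> baseScores,
                                       Map<String, List<String>> attackers,
                                       Map<String, List<String>> supporters,
                                       double eps, int maxSteps) {
        Map<String, Double> strength = new HashMap<>(baseScores);
        for (int step = 0; step < maxSteps; step++) {
            Map<String, Double> next = new HashMap<>();
            double maxDelta = 0.0;
            for (String a : baseScores.keySet()) {
                // Base score acts as a bias; supporters add to and attackers
                // subtract from the pre-activation of argument a.
                double z = logit(baseScores.get(a));
                for (String b : supporters.getOrDefault(a, List.of())) z += strength.get(b);
                for (String b : attackers.getOrDefault(a, List.of()))  z -= strength.get(b);
                double s = logistic(z);
                maxDelta = Math.max(maxDelta, Math.abs(s - strength.get(a)));
                next.put(a, s);
            }
            strength = next;
            if (maxDelta < eps) break; // converged (may not happen in cyclic graphs)
        }
        return strength;
    }
}

For acyclic graphs an update of this kind stabilizes and corresponds to a forward pass of the associated network; for cyclic graphs the iteration may fail to converge, which is the behaviour illustrated by the examples/divergence folder mentioned in the "Open Source Code" row.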
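
The "Open Source Code" row points out that Attractor uses the classical Runge-Kutta method RK4 to compute the continuous MLP-based semantics. The sketch below is a generic, self-contained RK4 step for a system ds/dt = f(s); it does not use Attractor's API, and the names Rk4Sketch, rk4Step, add, and scale are illustrative.

import java.util.function.UnaryOperator;

/**
 * Generic fourth-order Runge-Kutta (RK4) step for a system ds/dt = f(s),
 * e.g. a continuous strength update where each component drifts toward an
 * aggregated target value. Illustrative only; not the Attractor API.
 */
public class Rk4Sketch {

    /** One RK4 step of size h applied to the state vector s. */
    static double[] rk4Step(UnaryOperator<double[]> f, double[] s, double h) {
        double[] k1 = f.apply(s);
        double[] k2 = f.apply(add(s, scale(k1, h / 2)));
        double[] k3 = f.apply(add(s, scale(k2, h / 2)));
        double[] k4 = f.apply(add(s, scale(k3, h)));
        double[] next = new double[s.length];
        for (int i = 0; i < s.length; i++) {
            // Weighted average of the four slope estimates.
            next[i] = s[i] + (h / 6.0) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
        }
        return next;
    }

    /** Component-wise vector addition. */
    static double[] add(double[] a, double[] b) {
        double[] r = new double[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] + b[i];
        return r;
    }

    /** Component-wise scalar multiplication. */
    static double[] scale(double[] a, double c) {
        double[] r = new double[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] * c;
        return r;
    }
}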