Deep Learning for Abstract Argumentation Semantics

Authors: Dennis Craandijk, Floris Bex

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate that the AGNN can almost perfectly predict the acceptability under different semantics and scales well for larger argumentation frameworks.
Researcher Affiliation | Collaboration | 1 National Police Lab AI, Netherlands Police; 2 Information and Computing Sciences, Utrecht University; 3 Institute for Law, Technology and Society, Tilburg University. {d.f.w.craandijk, f.j.bex}@uu.nl
Pseudocode | No | The paper describes the AGNN model's operation and message-passing steps in text, but it does not include a dedicated pseudocode block or algorithm section.
Open Source Code | Yes | We publish our code at https://github.com/DennisCraandijk/DL-Abstract-Argumentation.
Open Datasets | No | We generate a variety of challenging argumentation frameworks by sampling from the following AF generators from the International Competition on Computational Models of Argumentation [Gaggl et al., 2020]: AFBenchGen2, AFGen Benchmark Generator, Grounded Generator, Scc Generator, Stable Generator.
Dataset Splits | Yes | We generate a test and validation dataset of size 1000 with AFs containing |A| = 25 arguments, and a training dataset of a million AFs where the number of arguments per AF is sampled randomly between 5 ≤ |A| ≤ 25 (to accelerate the learning). A sampling sketch follows the table.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | Ground-truth labels are determined based on extensions obtained with the sound and complete µ-toksia solver [Niskanen and Järvisalo, 2019]. No specific version is given for µ-toksia, nor for the deep learning framework (e.g., the PyTorch version) or components such as AdamW and the LSTM. A labelling sketch follows the table.
Experiment Setup | Yes | The dimensions of the embedding and all hidden neural layers are d = 128. The model is run for T = 32 message-passing steps. We train our model in batches containing 50 graphs (approximately 750 nodes) using the AdamW optimiser [Loshchilov and Hutter, 2019] with a cosine cyclical learning rate [Smith, 2017] between 2e-4 and 1e-7, ℓ2 regularisation of 1e-9, and clip the gradients by global norm with a 0.5 clipping ratio [Pascanu et al., 2013]. A configuration sketch follows the table.
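
The dataset-splits row translates into a simple generation recipe. The sketch below is our own illustration, not code from the paper's repository; the split names and the uniform sampling call are assumptions, grounded only in the sizes and the 5 ≤ |A| ≤ 25 range quoted above.

```python
import random

# Split sizes as stated in the paper: 1,000 AFs each for validation and
# test (all with |A| = 25) and one million training AFs whose number of
# arguments is sampled uniformly between 5 and 25 to accelerate learning.
SPLIT_SIZES = {"train": 1_000_000, "val": 1_000, "test": 1_000}

def sample_num_arguments(split: str) -> int:
    """Number of arguments |A| for one generated AF in the given split."""
    return random.randint(5, 25) if split == "train" else 25

if __name__ == "__main__":
    for split, size in SPLIT_SIZES.items():
        print(split, size, "example |A| =", sample_num_arguments(split))
```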
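For the software-dependencies row: once a sound and complete solver such as µ-toksia has enumerated the extensions under a semantics, per-argument acceptability labels follow from the standard credulous/sceptical acceptance definitions. This is a minimal sketch with hypothetical helper names; the authors' actual labelling code lives in the linked repository.

```python
from typing import Dict, Iterable, List, Set

def acceptance_labels(arguments: Iterable[str],
                      extensions: List[Set[str]]) -> Dict[str, Dict[str, bool]]:
    """Derive per-argument acceptability labels from solver extensions.

    An argument is credulously accepted if it appears in at least one
    extension and sceptically accepted if it appears in every extension
    (by convention, nothing is accepted when no extension exists).
    """
    labels = {}
    for a in arguments:
        in_some = any(a in ext for ext in extensions)
        in_all = bool(extensions) and all(a in ext for ext in extensions)
        labels[a] = {"credulous": in_some, "sceptical": in_all}
    return labels

# Example: two extensions over arguments {a, b, c}. Argument a is both
# credulously and sceptically accepted; b and c only credulously.
print(acceptance_labels(["a", "b", "c"], [{"a", "b"}, {"a", "c"}]))
```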
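The experiment-setup row maps onto standard PyTorch components. The sketch below is an assumption-laden reconstruction rather than the authors' code: the placeholder model, the loss function, and the `T_0` restart period are our own choices, and the paper does not say which cosine-cyclical learning-rate implementation it used (CosineAnnealingWarmRestarts is one way to approximate it).

```python
import torch
from torch import nn

# Hyperparameters quoted in the row above. MESSAGE_PASSING_STEPS is listed
# only for reference; the placeholder model below does no message passing.
HIDDEN_DIM = 128             # embedding / hidden layer size d
MESSAGE_PASSING_STEPS = 32   # T
MAX_LR, MIN_LR = 2e-4, 1e-7
WEIGHT_DECAY = 1e-9          # ℓ2 regularisation
CLIP_NORM = 0.5              # global-norm clipping ratio

model = nn.Linear(HIDDEN_DIM, 1)  # placeholder standing in for the AGNN

optimizer = torch.optim.AdamW(model.parameters(), lr=MAX_LR,
                              weight_decay=WEIGHT_DECAY)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=2000, eta_min=MIN_LR)  # T_0 is a hypothetical choice

def training_step(batch: torch.Tensor, labels: torch.Tensor,
                  loss_fn=nn.BCEWithLogitsLoss()) -> float:
    """One optimisation step on a batch of ~50 graphs (~750 nodes)."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    # Clip gradients by global norm with the 0.5 clipping ratio.
    nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
    optimizer.step()
    scheduler.step()
    return loss.item()
```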