Enforcement Heuristics for Argumentation with Deep Reinforcement Learning

Authors: Dennis Craandijk, Floris Bex

AAAI 2022, pp. 5573-5581 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that our Graph Neural Network (GNN) architecture EGNN can learn a near optimal enforcement heuristic for all common argument-fixed enforcement problems, including problems for which no other (symbolic) solvers exist. We demonstrate that EGNN outperforms other GNN baselines and on enforcement problems with high computational complexity performs better than state-of-the-art symbolic solvers with respect to efficiency. ... Section discusses the experimental setup (data, training parameters), and Section discusses the results.
Researcher Affiliation | Collaboration | Dennis Craandijk (1,2) and Floris Bex (2,3); 1: National Police-lab AI, Netherlands Police; 2: Department of Information and Computing Sciences, Utrecht University; 3: Tilburg Institute for Law, Technology and Society, Tilburg University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | We publish our code at https://github.com/DennisCraandijk/DL-Abstract-Argumentation.
Open Datasets | Yes | We sample AFs uniformly from all AF families implemented in the following generators from ICCMA (Gaggl et al. 2020): AFBenchGen2, AFGen Benchmark Generator, Grounded Generator, Scc Generator, Stable Generator. ... We generate training instances with |A| from (3, 4, 5, ..., 9) and 1000 validation instances containing |A| = 10 arguments to train the network.
Dataset Splits | Yes | We generate training instances with |A| from (3, 4, 5, ..., 9) and 1000 validation instances containing |A| = 10 arguments to train the network. (A Python sketch of this split appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for experiments.
Software Dependencies | No | The paper mentions external tools like the 'µ-toksia solver (Niskanen and Järvisalo 2020a)', 'Pakota', and 'Maadoita', but does not provide specific version numbers for these or other software dependencies like deep learning frameworks (e.g., Python, PyTorch versions).
Experiment Setup | No | The 'Experimental Setup' section describes the data generation and models used but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
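
To make the data-generation setup quoted in the Open Datasets and Dataset Splits rows concrete, the following Python sketch illustrates the reported instance-size split: training AFs with |A| drawn from {3, ..., 9} and 1000 validation AFs with |A| = 10. This is a reading aid under stated assumptions, not the authors' code: sample_af is a hypothetical placeholder for a call to one of the ICCMA generators listed above, and only the argument counts and validation-set size come from the paper.

import random

# Training instances use |A| in {3, 4, ..., 9}; validation uses |A| = 10,
# with 1000 validation instances, as quoted from the paper above.
TRAIN_ARGUMENT_SIZES = list(range(3, 10))
VALIDATION_ARGUMENT_SIZE = 10
NUM_VALIDATION_INSTANCES = 1000


def sample_af(num_arguments):
    """Hypothetical placeholder: sample one argumentation framework with
    `num_arguments` arguments from a uniformly chosen ICCMA generator family."""
    raise NotImplementedError


def build_splits(num_train_instances):
    # Each training AF gets a uniformly sampled size from the training range.
    train = [sample_af(random.choice(TRAIN_ARGUMENT_SIZES))
             for _ in range(num_train_instances)]
    # The validation set uses a fixed, slightly larger size than any training AF,
    # which is what makes it a (mild) size-generalisation check.
    validation = [sample_af(VALIDATION_ARGUMENT_SIZE)
                  for _ in range(NUM_VALIDATION_INSTANCES)]
    return train, validation

The number of training instances is not fixed in the quoted text, so build_splits leaves it as a parameter rather than guessing a value.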