Instantiations and Computational Aspects of Non-Flat Assumption-based Argumentation

Authors: Tuomo Lehtonen, Anna Rapberger, Francesca Toni, Markus Ulbricht, Johannes P. Wallner

IJCAI 2024

Reproducibility variables, with the result and the supporting excerpt or LLM response for each:
Research Type: Experimental
  "An empirical evaluation shows that the former outperforms the latter on many instances, reflecting the lower complexity of BAF reasoning. This result is in contrast to flat ABA, where direct approaches dominate instantiation-based solvers."

Researcher Affiliation: Academia
  "1 University of Helsinki, Department of Computer Science; 2 Imperial College London, Department of Computing; 3 Leipzig University, ScaDS.AI Dresden/Leipzig; 4 Graz University of Technology, Institute of Software Technology"

Pseudocode: Yes
  "Listing 1: Program Πgen arg ... Algorithm 1: Credulous acceptance, complete semantics"

Open Source Code: Yes
  "We present an evaluation of the algorithms proposed in Sections 4 and 5, named ABABAF and ASPFORABA, respectively." The code is available at https://bitbucket.org/lehtonen/ababaf (ABABAF) and https://bitbucket.org/coreo-group/aspforaba (ASPFORABA).

Open Datasets: Yes
  "Lacking a standard benchmark library for non-flat ABA, we generated two benchmark sets adapted from flat ABA benchmarks [Järvisalo et al., 2023]."
Dataset Splits: No
  The paper describes how benchmark instances were generated, but does not specify a training, validation, or test split for any dataset: the benchmarks are used to evaluate the solvers, not to train or validate a machine learning model.
Hardware Specification: Yes
  "We used 2.50 GHz Intel Xeon Gold 6248 machines under a per-instance time limit of 600 seconds and memory limit of 32 GB."

Software Dependencies: Yes
  "We used CLINGO (version 5.5.1) [Gebser et al., 2016] for ASPFORABA and for generating the arguments in ABABAF. We used PYSAT (version 0.1.7) [Ignatiev et al., 2018] with GLUCOSE (version 1.0.3) [Audemard and Simon, 2009; Eén and Sörensson, 2003] as the SAT solver in ABABAF."

Experiment Setup: Yes
  "We used 2.50 GHz Intel Xeon Gold 6248 machines under a per-instance time limit of 600 seconds and memory limit of 32 GB. Lacking a standard benchmark library for non-flat ABA, we generated two benchmark sets adapted from flat ABA benchmarks [Järvisalo et al., 2023]. Set 1 has the following parameters: number of atoms in {80, 120, 160, 200}, ratio of atoms that are assumptions in {0.2, 0.4}, ratio of assumptions occurring as rule heads in {0.2, 0.5}, and both the number of rules deriving any given atom and the rule size (number of atoms in the body of a rule) selected at random from the interval [1, n] for n ∈ {1, 2, 5}. We call the maximum rules per atom mr and the maximum rule size ms; instances with ms = 1 are additive. For benchmark set 2, we limited mr and ms to {2, 5} and generated instances a certain distance from atomic. For this, a slack parameter specifies how many atoms in each rule body can be non-assumptions. Here slack is 0, 1 or 2, the first resulting in atomic ABAFs. We generated 5 instances for each combination of parameters for both benchmark sets."
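The generation scheme quoted above can be sketched in Python. Everything below (the function name, parameter handling, and exact sampling order) is an illustrative assumption for this review, not the authors' actual generator, which lives in the ABABAF repository:

```python
import random

def generate_instance(n_atoms, asm_ratio, head_ratio, mr, ms, slack=None, seed=0):
    """Hedged sketch of the paper's benchmark generation.

    mr:    maximum number of rules deriving any given atom (drawn from [1, mr])
    ms:    maximum rule size, i.e. body length (drawn from [1, ms]);
           ms = 1 yields the "additive" instances
    slack: set-2 only; at most this many body atoms may be non-assumptions,
           so slack = 0 yields atomic ABAFs
    """
    rng = random.Random(seed)
    atoms = [f"x{i}" for i in range(n_atoms)]
    assumptions = atoms[: int(asm_ratio * n_atoms)]
    non_assumptions = atoms[len(assumptions):]
    # only a head_ratio fraction of assumptions may occur as rule heads
    asm_heads = rng.sample(assumptions, int(head_ratio * len(assumptions)))
    rules = []
    for head in non_assumptions + asm_heads:
        for _ in range(rng.randint(1, mr)):   # number of rules deriving this atom
            size = rng.randint(1, ms)         # rule size (body length)
            if slack is None:                 # set 1: bodies drawn freely
                body = rng.sample(atoms, size)
            else:                             # set 2: bounded distance from atomic
                k = min(rng.randint(0, slack), size, len(non_assumptions))
                body = rng.sample(non_assumptions, k) + \
                       rng.sample(assumptions, size - k)
            rules.append((head, body))
    return assumptions, rules
```

For example, `generate_instance(80, 0.2, 0.2, mr=2, ms=2)` follows one set-1 parameter combination, while additionally passing `slack=0` mimics the atomic instances of set 2.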