Skeptical Reasoning with Preferred Semantics in Abstract Argumentation without Computing Preferred Extensions

Authors: Matthias Thimm, Federico Cerutti, Mauro Vallati

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present the results of an experimental evaluation that shows that this new approach significantly outperforms the state of the art. We compare the runtime performance of our new approaches with the best solvers from ICCMA19 [Bistarelli et al., 2020] on the benchmark data of ICCMA19 and ICCMA17 [Gaggl et al., 2020], and on an additional set of particularly hard problems. The goal of our experimental evaluation is to show that our algorithms solve the decision problem of skeptical acceptance wrt. preferred semantics (also denoted as DS-PR in the ICCMA competitions) and the computation of the ideal extension (SE-ID) faster than the state-of-the-art algorithms.
Researcher Affiliation | Academia | 1) Institute for Web Science and Technologies, University of Koblenz-Landau, Germany; 2) Department of Information Engineering, University of Brescia, Italy; 3) School of Computing and Engineering, University of Huddersfield, United Kingdom
Pseudocode | Yes (a brute-force reference check for DS-PR is sketched after the table) | Algorithm 1: Cegartix algorithm for skeptical acceptance wrt. preferred semantics [Dvořák et al., 2014]; Algorithm 2: CDAS algorithm for skeptical acceptance wrt. preferred semantics; Algorithm 3: CDIS algorithm for computing the ideal extension
Open Source Code | Yes | The resulting system has been called Fudge. The source code is available at http://taas.tweetyproject.org.
Open Datasets | Yes | 1. For DS-PR we used the benchmark data instances A2–A5 from ICCMA17 (350 instances) and the whole benchmark data set from ICCMA19 (326 instances), both with the prescribed query arguments. 2. For SE-ID we used the benchmark data instances D1–D5 from ICCMA17 (350 instances) and the whole benchmark data set from ICCMA19 (326 instances). 3. For both DS-PR and SE-ID, we additionally generated 252 random graphs (WS-hard) of the Watts-Strogatz graph model [Watts and Strogatz, 1998] using the AFBenchGen2 suite [Cerutti et al., 2016].
Dataset Splits | No | The paper evaluates performance on benchmark data instances, not on train/validation/test splits of a single dataset, as it concerns a decision problem and a computation task rather than a machine learning task involving explicit data partitioning for model training.
Hardware Specification | Yes | The experiments were conducted on a dedicated server with an Intel Xeon CPU (2.9 GHz) and 128 GB RAM running Ubuntu 20.04.1.
Software Dependencies | Yes (a standard CNF encoding of admissible sets is sketched after the table) | We implemented our algorithms CDAS (Algorithm 2) and CDIS (Algorithm 3) in C++ using standard data structures and used Glucose 4.1 in its non-parallel version [Audemard and Simon, 2018] for all SAT calls.
Experiment Setup | Yes | We set a 600s CPU-time cutoff, the same as used for the ICCMA competitions. We implemented our algorithms CDAS (Algorithm 2) and CDIS (Algorithm 3) in C++ using standard data structures and used Glucose 4.1 in its non-parallel version [Audemard and Simon, 2018] for all SAT calls. These graphs have a number of arguments between 300 and 600 and were generated using parameter values between 10 and 40 for -WS_baseDegree, between 0.2 and 0.6 for -WS_beta, and between 0.2 and 0.6 for -BA_WS_probCycles; see [Cerutti et al., 2016] for a detailed description of these parameters.
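
To make the DS-PR task referenced in the Pseudocode row concrete, the sketch below is a minimal brute-force check of skeptical acceptance wrt. preferred semantics. It is not the paper's Cegartix or CDAS algorithm (both are SAT-based); it simply enumerates all admissible sets of a tiny framework and tests whether the query argument belongs to every subset-maximal one. The AF struct and the example framework are illustrative assumptions, useful only as a reference oracle on very small instances.

```cpp
// Brute-force reference check for DS-PR (skeptical acceptance wrt. preferred
// semantics). NOT the paper's CDAS/Cegartix algorithm; only a naive oracle
// for very small frameworks (n < 32), e.g. to sanity-check a solver's answers.
#include <cstdint>
#include <iostream>
#include <vector>

// attacks[a] lists the arguments attacked by a; n is the number of arguments.
struct AF {
    int n;
    std::vector<std::vector<int>> attacks;
};

// True if the set encoded in mask is conflict-free and defends all its members.
static bool isAdmissible(const AF& af, uint32_t mask) {
    for (int a = 0; a < af.n; ++a) {
        if (!(mask & (1u << a))) continue;
        // conflict-freeness: no member of the set attacks another member
        for (int b : af.attacks[a])
            if (mask & (1u << b)) return false;
        // defense: every attacker of a must be attacked by some member
        for (int b = 0; b < af.n; ++b) {
            bool bAttacksA = false;
            for (int t : af.attacks[b]) if (t == a) bAttacksA = true;
            if (!bAttacksA) continue;
            bool defended = false;
            for (int c = 0; c < af.n && !defended; ++c)
                if (mask & (1u << c))
                    for (int t : af.attacks[c]) if (t == b) defended = true;
            if (!defended) return false;
        }
    }
    return true;
}

// Skeptical acceptance wrt. preferred semantics: the query argument must be
// contained in every subset-maximal admissible set.
static bool skepticallyAcceptedPR(const AF& af, int query) {
    std::vector<uint32_t> admissible;
    for (uint32_t mask = 0; mask < (1u << af.n); ++mask)
        if (isAdmissible(af, mask)) admissible.push_back(mask);
    for (uint32_t e : admissible) {
        bool maximal = true;
        for (uint32_t f : admissible)
            if (f != e && (e & f) == e) { maximal = false; break; }
        if (maximal && !(e & (1u << query))) return false;
    }
    return true;
}

int main() {
    // Tiny made-up example: arguments a=0, b=1, c=2 with a->b, b->a, b->c.
    AF af{3, {{1}, {0, 2}, {}}};
    // Preferred extensions are {a,c} and {b}, so c is not skeptically accepted.
    std::cout << (skepticallyAcceptedPR(af, 2) ? "YES" : "NO") << "\n";  // NO
}
```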
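The Software Dependencies row notes that all SAT calls go through Glucose 4.1. As a hedged illustration of what such calls operate on, the sketch below emits the textbook CNF encoding whose models are exactly the admissible sets of a framework (conflict-freeness plus defense clauses), written in DIMACS format so that any SAT solver, including Glucose, can consume it. This is a standard encoding, not necessarily the exact clause set Fudge generates; the toy framework is made up.

```cpp
// Standard CNF encoding whose models are exactly the admissible sets of an AF.
// Output is DIMACS and can be piped into any SAT solver (e.g. Glucose).
// Textbook encoding for illustration, not necessarily what Fudge generates.
#include <iostream>
#include <vector>

int main() {
    // Made-up toy framework: argument i corresponds to variable i+1.
    int n = 3;
    std::vector<std::vector<int>> attacks = {{1}, {0, 2}, {}};  // a->b, b->a, b->c

    // attackersOf[b] = all arguments attacking b
    std::vector<std::vector<int>> attackersOf(n);
    for (int a = 0; a < n; ++a)
        for (int b : attacks[a]) attackersOf[b].push_back(a);

    std::vector<std::vector<int>> clauses;
    // Conflict-freeness: for every attack (a,b) add (-x_a v -x_b).
    for (int a = 0; a < n; ++a)
        for (int b : attacks[a]) clauses.push_back({-(a + 1), -(b + 1)});
    // Defense: if x_a is in the set, every attacker b of a must itself be
    // attacked by some member: (-x_a v x_c1 v ... v x_ck) for all c_i attacking b.
    for (int a = 0; a < n; ++a)
        for (int b : attackersOf[a]) {
            std::vector<int> cl = {-(a + 1)};
            for (int c : attackersOf[b]) cl.push_back(c + 1);
            clauses.push_back(cl);
        }

    // DIMACS header followed by one zero-terminated clause per line.
    std::cout << "p cnf " << n << " " << clauses.size() << "\n";
    for (const auto& cl : clauses) {
        for (int lit : cl) std::cout << lit << " ";
        std::cout << "0\n";
    }
}
```

In an incremental setting one would not re-emit DIMACS for each query; acceptance of a particular argument is typically tested by solving under an assumption (or a unit clause) on that argument's variable, which matches the kind of repeated solver calls the SAT-based algorithms in the paper rely on.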