Exploiting Parallelism for Hard Problems in Abstract Argumentation

Authors: Federico Cerutti, Ilias Tachmazidis, Mauro Vallati, Sotirios Batsakis, Massimiliano Giacomin, Grigoris Antoniou

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental Empirical Analysis | The solvers have been run on a cluster with computing nodes equipped with 2.4 GHz Dual Core AMD Opteron™ processors, 8 GB of RAM, and a Linux operating system. As in the International Planning Competition (IPC) (Jiménez et al. 2012), a cutoff of 900 seconds was imposed to compute the preferred extensions for each AF. No limit was imposed on RAM usage, but a run fails if it saturates the available memory. Moreover, we adopted the IPC speed score, also borrowed from the planning community, which is defined as follows. For each AF, each system gets a score of 1/(1 + log10(T/T*)), where T is its execution time and T* is the best execution time among the compared systems, or a score of 0 if it fails. Runtimes below 0.01 sec get by default the maximal score of 1. In our experimental analysis, the IPC score is normalised to 100. For each solver we recorded the overall result: success (if it finds each preferred extension), crashed, timed out, or ran out of memory. As shown in (Cerutti, Giacomin, and Vallati 2014a), most of the state-of-the-art approaches for enumerating preferred extensions can hardly solve large (w.r.t. the number of arguments) frameworks. In this work, we focus on extremely large AFs; to our knowledge, the largest that have ever been used for testing solvers. We randomly generated a set of 200 AFs, varying the number of SCCs between 90 and 210 (80 SCCs is the upper bound for the experiments in (Cerutti, Giacomin, and Vallati 2014a)), the number of arguments between 2,700 and 8,400, and considering different uniformly distributed probabilities of attacks, either between arguments or between different SCCs, leading to AFs with between approximately 100 thousand and 2 million attacks. AFs were generated using AFBenchGen (Cerutti, Giacomin, and Vallati 2014b).
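To make the scoring rule concrete, here is a minimal Python sketch of the per-instance IPC speed score as defined above. The function name, the input format (solver name mapped to runtime in seconds, with None for a crashed, timed-out, or out-of-memory run), the solver names in the example, and the per-instance normalisation to 100 are illustrative assumptions, not details taken from the paper.

```python
import math

def ipc_speed_scores(runtimes, normalise_to=100.0):
    """Per-AF IPC speed score (a sketch, not the authors' code).

    runtimes: dict mapping solver name -> runtime in seconds,
              or None if the solver failed on this AF.
    Returns a dict mapping solver name -> score in [0, normalise_to].
    """
    solved = {s: t for s, t in runtimes.items() if t is not None}
    if not solved:
        return {s: 0.0 for s in runtimes}   # every solver failed on this AF
    t_best = min(solved.values())           # best runtime T* among the compared systems
    scores = {}
    for solver, t in runtimes.items():
        if t is None:
            scores[solver] = 0.0            # failed runs score 0
        elif t < 0.01:
            scores[solver] = normalise_to   # runtimes below 0.01 s get the maximal score
        else:
            # 1 / (1 + log10(T / T*)), scaled to the normalised maximum;
            # clamping T* at 0.01 s is an assumption to keep the ratio meaningful
            scores[solver] = normalise_to / (1.0 + math.log10(t / max(t_best, 0.01)))
    return scores

# Example: the fastest solver gets the maximal score, the failed run gets 0.
print(ipc_speed_scores({"solver-a": 0.8, "solver-b": 1.7, "solver-c": None}))
```

Summing these per-instance scores over the 200 AFs (and scaling so a solver that is fastest everywhere reaches 100) matches the normalisation described in the quoted passage.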
Researcher Affiliation | Academia | Federico Cerutti, Dept. of Computing Science, University of Aberdeen, UK; Ilias Tachmazidis, Mauro Vallati, Sotirios Batsakis, School of Computing and Engineering, University of Huddersfield, UK; Massimiliano Giacomin, Dept. of Information Engineering, University of Brescia, Italy; Grigoris Antoniou, School of Computing and Engineering, University of Huddersfield, UK
Pseudocode | Yes | The paper includes the following algorithm listings:

Algorithm 1 Computing preferred labellings of an AF
P-PREF(Γ)
  Input: Γ = ⟨A, R⟩
  Output: Ep ∈ 2^L(Γ)
  return P-SCC-REC(Γ, A)

Algorithm 2 Computing preferred labellings of an AF in C
P-SCC-REC(Γ, C)
  Input: Γ = ⟨A, R⟩, C ⊆ A
  Output: Ep ∈ 2^L(Γ)
  (Lab, U) := GROUNDED(Γ, C)
  Ep := {Lab}
  Γ := Γ↓U
  L := (L1 := {S^1_1, ..., S^1_k}, ..., Ln := {S^n_1, ..., S^n_h}) = SCCS-LIST(Γ)
  M := {..., (S_i, B_i), ...} = GREEDY(L, C)
  for l ∈ {1, ..., n} do
    E_l := {E_{S^l_1} := (), ..., E_{S^l_k} := ()}
    ...

Algorithm 3 Greedy computation of base cases
GREEDY(L, C)
  Input: L = (L1, ..., Ln := {S^n_1, ..., S^n_h}), C ⊆ A
  Output: M = {..., (S_i, B_i), ...}
  M := ∅
  for S ∈ ⋃_{i=1}^{n} Li do in parallel
    B := B-PR(Γ↓S, S ∩ C)
    M := M ∪ {(S, B)}
  end for
  return M
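The parallel step in Algorithm 3 is the key point: each SCC's base case is independent of the others, so they can be solved concurrently. Below is a minimal Python sketch of that pattern using a process pool, assuming a simple (arguments, attacks) representation of a framework. The restrict helper (standing in for the restriction Γ↓S) and the solve_preferred argument (standing in for the B-PR oracle, which in the paper is SAT-based) are hypothetical names, not the authors' implementation.

```python
from concurrent.futures import ProcessPoolExecutor

def restrict(framework, nodes):
    """Hypothetical helper: the sub-framework induced by `nodes` (Γ↓S).

    framework: (arguments, attacks), with arguments a set of names and
    attacks a set of (attacker, target) pairs.
    """
    args, attacks = framework
    sub_args = args & nodes
    sub_attacks = {(a, b) for (a, b) in attacks if a in sub_args and b in sub_args}
    return (sub_args, sub_attacks)

def greedy_base_cases(framework, sccs, context, solve_preferred):
    """Parallel analogue of GREEDY(L, C): solve each SCC independently.

    sccs: the SCCs from all layers L1..Ln, each a set of arguments.
    context: the restriction set C.
    solve_preferred: stand-in for B-PR; must be a top-level (picklable)
    function so the process pool can ship it to worker processes.
    """
    sccs = list(sccs)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(
            solve_preferred,
            [restrict(framework, s) for s in sccs],  # Γ↓S for each SCC S
            [s & context for s in sccs],             # S ∩ C for each SCC S
        ))
    # M = {..., (Si, Bi), ...}: each SCC paired with its base-case labellings
    return list(zip(sccs, results))
```

Since no SCC's base case depends on another's, this map is embarrassingly parallel; merging results layer by layer happens afterwards in P-SCC-REC (Algorithm 2).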
Open Source Code | No | The paper cites a technical report (Cerutti et al. 2014b) at http://arxiv.org/abs/1411.2800, but it does not explicitly state that the source code for the methodology described in this paper is openly available, nor does it provide a link to a code repository.
Open Datasets | No | The paper states "We randomly generated a set of 200 AFs... AFs were generated using AFBenchGen (Cerutti, Giacomin, and Vallati 2014b)." This describes a data generation tool, not a publicly available dataset with a specific access link, DOI, or repository.
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, or a splitting methodology) for training, validation, or test sets. It states that 200 AFs were randomly generated, but it does not describe how they were partitioned for experimental evaluation.
Hardware Specification | Yes | The solvers have been run on a cluster with computing nodes equipped with 2.4 GHz Dual Core AMD Opteron™ processors, 8 GB of RAM, and a Linux operating system.
Software Dependencies | No | The paper mentions "Linux operating system" and that R-PREF exploits a "SAT solver as a NP-oracle", but it does not provide specific version numbers for any key software components, libraries, or solvers used in the experiments.
Experiment Setup | Yes | The paper specifies a cutoff time for the experiments: "a cutoff of 900 seconds was imposed to compute the preferred extensions for each AF."