Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Justifying Argument Acceptance with Collective Attacks: Discussions and Disputes

Authors: Giovanni Buraglio, Wolfgang Dvořák, Matthias König, Markus Ulbricht

IJCAI 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Theoretical In this paper, we studied discussion games with a special emphasis on the existence of collective attacks. We advocated for the use of SETAFs as an abstract argumentation formalism to represent conflicts arising from a given knowledge base. We introduced discussion games for SETAFs, generalizing their AF counterparts by characterizing credulous acceptance for preferred semantics (as a base case for many other semantics). Due to the increased expressive power of SETAFs, they yield more concise justifications in the presence of collective attacks. In particular, we investigated how the discussion game's length relates to the corresponding admissible set. Further, we introduced the notion of concise preferred discussions, which is genuine to SETAFs and has no counterpart in AFs. We showed that this notion can be employed to reduce the size of discussions even further. To round off our investigation, we presented SETAF dispute trees in Section 5 and compared them to SETAF discussion games. Applying these notions to ABA demonstrated the improvements our proposal achieves.
Researcher Affiliation Academia Giovanni Buraglio1, Wolfgang Dvořák1, Matthias König1 and Markus Ulbricht2 1Institute of Logic and Computation, TU Wien 2ScaDS.AI Dresden/Leipzig, Leipzig University EMAIL, EMAIL
Pseudocode No The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code No The paper does not provide any statement or link about open-sourcing code for the described methodology.
Open Datasets No The paper uses illustrative examples (e.g., Example 1.1, Example 3.1) rather than empirical evaluation on datasets; accordingly, it provides no information about dataset availability.
Dataset Splits No The paper does not describe empirical experiments that would involve dataset splits for training, validation, or testing.
Hardware Specification No The paper does not provide any specific hardware details used for running experiments, as its contribution is theoretical.
Software Dependencies No The paper does not mention any specific software dependencies with version numbers.
Experiment Setup No The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.
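The paper summarized above concerns SETAFs, i.e., argumentation frameworks in which sets of arguments may jointly (collectively) attack a single argument, and credulous acceptance under preferred semantics. Since an argument is credulously accepted under preferred semantics exactly when it belongs to some admissible set, a brute-force subset check suffices on small instances. The sketch below is an illustrative aside, not the authors' discussion-game procedure; all names (`attacks`, `admissible`, `credulously_accepted`) and the three-argument example framework are hypothetical.

```python
from itertools import chain, combinations

def powerset(args):
    """All subsets of a collection of arguments."""
    s = list(args)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def attacks(S, x, setaf):
    """S attacks x if some collective attack (T, x) has T fully inside S."""
    return any(set(T) <= set(S) and y == x for T, y in setaf)

def conflict_free(S, setaf):
    """No member of S is attacked by S itself."""
    return not any(attacks(S, a, setaf) for a in S)

def defends(S, a, setaf):
    """S defends a if, for every collective attack (T, a),
    S counter-attacks at least one member of T."""
    return all(any(attacks(S, t, setaf) for t in T)
               for T, y in setaf if y == a)

def admissible(S, setaf):
    """Admissible: conflict-free and self-defending."""
    return conflict_free(S, setaf) and all(defends(S, a, setaf) for a in S)

def credulously_accepted(a, args, setaf):
    """Brute force: a is credulously accepted under preferred semantics
    iff it lies in some admissible set."""
    return any(a in S and admissible(S, setaf) for S in powerset(args))

# Hypothetical example: b is attacked only collectively by {a, c},
# so the single counter-attack ({b}, c) already defends b.
args = {"a", "b", "c"}
setaf = [(("a", "c"), "b"), (("b",), "c")]
print(credulously_accepted("b", args, setaf))  # True
```

Note how the collective attack changes the defense condition: to defend `b` against `({a, c}, b)`, it suffices to attack either `a` or `c`, which is what makes SETAF justifications more concise than their AF counterparts.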