Explaining Arguments’ Strength: Unveiling the Role of Attacks and Supports

Authors: Xiang Yin, Nico Potyka, Francesca Toni

IJCAI 2024

Reproducibility Assessment (Variable — Result, followed by the supporting LLM response)
Research Type — Experimental
"We propose a probabilistic algorithm to efficiently approximate RAEs. Finally, we show the application value of RAEs in fraud detection and large language models case studies. ... We conducted experiments with randomly generated QBAFs of increasing size. Figure 4 shows, for cyclic QBAFs (see arxiv.org/abs/2404.14304 for acyclic QBAFs), how the absolute difference (y-axis) between estimates at every 10th iteration evolves with an increasing number of samples (x-axis), pointing to convergence within a few hundred iterations."
Researcher Affiliation — Academia
"¹Department of Computing, Imperial College London, UK; ²School of Computer Science and Informatics, Cardiff University, UK. {xy620, ft}@imperial.ac.uk, potykan@cardiff.ac.uk"
Pseudocode — Yes
"Algorithm 1: An Approximation Algorithm for RAEs"
Open Source Code — No
The paper does not provide any repository links or explicit statements about the release of source code for the described methodology. Mentions of arxiv.org/abs/2404.14304 refer to proofs, concrete values, and additional experiments, not source code.
Open Datasets — Yes
"We take the QBAF from [Chi et al., 2021], shown in Figure 5, where argument 1 ("It is a fraud case") is the topic argument, and arguments 2–48 represent evidence for or against this case."
Dataset Splits — No
The paper does not specify training/validation/test splits: the experiments apply an algorithm to pre-defined or randomly generated QBAF structures rather than training predictive models.
Hardware Specification — No
"We give hardware specifications and additional experiments for runtime, acyclic and differently-sized QBAFs in arxiv.org/abs/2404.14304."
Software Dependencies — Yes
"We obtained the following arguments and confidence values (which we use as base scores). β (0.6): Learning a foreign language requires cognitive maturity, which children lack. Hence, it's difficult for them to excel. γ (0.9): Studies show that young children possess higher neuroplasticity, making language learning more effective. δ (0.7): Children immersed in a foreign language environment from an early age have better language acquisition. We used the QE semantics (σ_QE) to compute the strength of arguments and visualize the QBAF and strengths in Figure 7. ... we generate a non-tree QBAF by ChatGPT (GPT-3.5) [OpenAI, 2022], for the claim "It is easy for children to learn a foreign language well" (topic argument), prompted to create arguments satisfying the following requirements."
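For context on the QE semantics named in this excerpt, the example below sketches how the quoted confidence values could be combined into a topic strength. It assumes the quadratic energy (QE) gradual semantics of Potyka (2018); the topic's base score (0.5) and the attack/support topology (β attacks the claim, γ and δ support it) are illustrative assumptions inferred from the argument texts, not stated in this excerpt.

```python
# Sketch of the Quadratic Energy (QE) semantics on a tiny QBAF.
# Assumed formulation (Potyka 2018): an argument a with base score b(a)
# and energy E_a = sum of supporters' strengths - sum of attackers'
# strengths gets strength
#   s(a) = b(a) - b(a)*h(-E_a) + (1 - b(a))*h(E_a),
# where h(x) = max(x, 0)^2 / (1 + max(x, 0)^2).

def h(x: float) -> float:
    x = max(x, 0.0)
    return x * x / (1.0 + x * x)

def qe_strength(base: float, supporters: list, attackers: list) -> float:
    energy = sum(supporters) - sum(attackers)
    return base - base * h(-energy) + (1.0 - base) * h(energy)

# Leaf arguments keep their base scores (no attackers or supporters).
beta, gamma, delta = 0.6, 0.9, 0.7  # confidence values quoted above
topic_base = 0.5                    # assumed; not given in the excerpt

# Assumed topology: beta attacks the topic; gamma and delta support it.
topic_strength = qe_strength(topic_base, [gamma, delta], [beta])
print(round(topic_strength, 4))  # → 0.75
```

With energy 0.9 + 0.7 − 0.6 = 1.0, the topic's strength rises from its base 0.5 to 0.75, matching the intuition that the supporters outweigh the single attacker.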
Experiment Setup — Yes
"Thus, we apply the approximate Algorithm 1, setting the sample size N to 1000. ... We set the base score for each argument to 0.5 in line with [Chi et al., 2021]."