Probabilistic Belief Contraction Using Argumentation

Authors: Kinzang Chhogyal, Abhaya Nayak, Zhiqiang Zhuang, Abdul Sattar

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we first classify different belief states by their stability, and then exploit the quantitative nature of probabilities and combine it with the basic ideas of argumentation theory to determine the mixture proportions. We, therefore, propose a novel approach to probabilistic belief contraction using argumentation. (...) Section 2 introduces probabilistic belief states and Section 3 briefly reviews probabilistic belief contraction and its problems. In Section 4, we give an overview of argumentation theory and our argumentation framework for determining the mixture proportion is presented in Section 5 followed by the discussion and conclusion in Section 6."
Researcher Affiliation | Academia | Griffith University, Brisbane, Australia; Macquarie University, Sydney, Australia
Pseudocode | No | The paper presents its theoretical concepts and calculations in mathematical notation and natural language but provides no pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no statement about releasing open-source code and provides no link to a code repository.
Open Datasets | No | The paper is theoretical and does not use or mention any datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and conducts no empirical experiments, so no training, validation, or test splits are involved.
Hardware Specification | No | The paper is theoretical and describes no experimental setup or hardware used to run experiments.
Software Dependencies | No | The paper is theoretical and specifies no software dependencies or versions for implementation or experimentation.
Experiment Setup | No | The paper describes a theoretical framework and gives no details of an experimental setup, hyperparameters, or training configuration.
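For context, the probabilistic belief contraction the assessment quotes is standardly modelled as a mixture: contracting a belief state p (a probability distribution over possible worlds) by a sentence a mixes p with p conditioned on not-a, and the paper's contribution is an argumentation-based way of choosing the mixture proportion. The toy sketch below is illustrative only: the world space, the distribution, the `condition`/`contract` helpers, and the fixed epsilon are assumptions for demonstration, not the authors' construction.

```python
# Toy sketch of mixture-based probabilistic belief contraction.
# A belief state is a probability distribution over possible worlds;
# contracting by sentence `a` mixes p with p conditioned on not-a:
#     p_contracted = eps * p + (1 - eps) * (p given not-a)
# The paper's argumentation framework concerns how to choose eps;
# here eps = 0.5 is just a placeholder.

def condition(p, worlds):
    """Condition distribution p on the event given by `worlds`."""
    z = sum(p[w] for w in worlds)
    if z == 0:
        raise ValueError("cannot condition on a zero-probability event")
    return {w: (p[w] / z if w in worlds else 0.0) for w in p}

def contract(p, a_worlds, eps=0.5):
    """Contract p by the sentence true exactly at the worlds in a_worlds."""
    not_a = set(p) - set(a_worlds)
    p_not_a = condition(p, not_a)
    return {w: eps * p[w] + (1 - eps) * p_not_a[w] for w in p}

# Two worlds over one proposition a: w1 satisfies a, w2 satisfies not-a.
p = {"w1": 0.8, "w2": 0.2}       # the agent leans strongly towards a
p_minus = contract(p, {"w1"})    # weaken commitment to a
```

With eps = 0.5 the contracted state gives a probability 0.4 instead of 0.8, and the result is still a proper distribution; varying eps shifts how much of the original belief in a survives contraction.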