Updating Probability Intervals with Uncertain Inputs

Authors: Karim Tabia

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We place ourselves in the framework of Jeffrey's rule of conditioning and propose extensions of this conditioning for the interval-based setting. More precisely, we first extend Jeffrey's rule to credal sets, then propose extensions of Jeffrey's rule to three common conditioning rules for probability intervals (robust, Dempster and geometric conditionings). This section answers two fundamental questions: i) does updating a credal set with another credal set result in a credal set, and ii) can this update be done by manipulating only the vertices of the credal sets, as in the case of conditioning with hard evidence? The following are the first main results of this paper (proofs are provided as supplementary material): Proposition 1. (A sketch of the classical form of Jeffrey's rule is given after this table.)
Researcher Affiliation | Academia | Karim Tabia, CRIL, Université d'Artois & CNRS, France, tabia@cril.fr
Pseudocode | No | The paper describes mathematical propositions and definitions but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements about releasing open-source code or links to code repositories for the described methodology.
Open Datasets | No | The paper uses illustrative examples (e.g., 'Table 1: Example of probability intervals encoding initial information (a) and new uncertain inputs (b).') and does not mention or provide access information for any publicly available or open datasets.
Dataset Splits | No | The paper is theoretical and does not conduct experiments involving data splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not discuss the hardware used for any experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper describes theoretical concepts and computational examples, and therefore does not include details on experimental setup such as hyperparameters or training settings.
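Note on the framework quoted in the Research Type row: the sketch below is a minimal Python illustration of classical Jeffrey's rule of conditioning on a point probability distribution, P'(w) = sum_i q_i * P(w | B_i). It is provided only as background; it does not reproduce the paper's interval-based extensions (robust, Dempster and geometric conditionings), and the function and variable names are illustrative, not taken from the paper.

# Minimal sketch (not from the paper): classical Jeffrey's rule of conditioning
# on a single point probability distribution over a finite set of worlds.

def jeffrey_update(prior, partition, new_marginals):
    """Return the updated distribution P'(w) = sum_i q_i * P(w | B_i).

    prior         : dict mapping each world w to P(w)
    partition     : list of sets B_i of worlds (a partition of the domain)
    new_marginals : list of q_i, the revised probabilities of the events B_i
    """
    posterior = {w: 0.0 for w in prior}
    for block, q in zip(partition, new_marginals):
        mass = sum(prior[w] for w in block)      # P(B_i)
        if mass == 0:
            continue                             # Jeffrey's rule requires P(B_i) > 0
        for w in block:
            posterior[w] += q * prior[w] / mass  # q_i * P(w | B_i)
    return posterior


if __name__ == "__main__":
    # Toy example: three worlds, uncertain input q over the partition {{a}, {b, c}}
    prior = {"a": 0.5, "b": 0.3, "c": 0.2}
    partition = [{"a"}, {"b", "c"}]
    new_marginals = [0.2, 0.8]                   # revised beliefs about the partition
    print(jeffrey_update(prior, partition, new_marginals))
    # -> {'a': 0.2, 'b': 0.48, 'c': 0.32}

A credal-set or interval-based variant would apply an update of this kind across a set of candidate distributions (for instance, the vertices of the credal set), which is exactly the question the paper's Proposition 1 addresses; that extension is not sketched here.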