Submodel Decomposition Bounds for Influence Diagrams

Authors: Junkyu Lee, Radu Marinescu, Rina Dechter

AAAI 2021, pp. 12147-12157

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the upper bounds from the proposed algorithms ST-GDD(i) and ST-WMBMM(i) with state-of-the-art methods in three experiments: (1) synthetic IDs with perfect recall, comparing against the state-of-the-art decomposition bounds JGDID(i) (Lee, Ihler, and Dechter 2018) and WMBE-ID(i) (Lee et al. 2019); (2) upper bounds on LIMIDs, comparing against the error bounds presented by Mauá (2016); and (3) a case study evaluating the upper bounds on large-scale problems adopted from an online-planning domain.
Researcher Affiliation | Collaboration | Junkyu Lee (1,2), Radu Marinescu (2), Rina Dechter (1); 1: University of California, Irvine; 2: IBM Research
Pseudocode | Yes | Algorithm 1: Hierarchical Message Passing over TST
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | "BN instances are existing Bayesian networks used in the UAI-2006 probabilistic inference competitions which we converted to IDs."
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce any data partitioning for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or other machine specifications) used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper mentions controlling complexity with an i-bound parameter but does not provide specific experimental setup details, such as concrete hyperparameter values, training configurations, or other system-level settings used in the experiments (a generic illustration of what an i-bound controls follows this table).
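
For context on the one setup detail the paper does report: the i-bound is the standard complexity knob in (weighted) mini-bucket schemes such as WMBE-ID(i) and ST-WMBMM(i), capping how many variables any single cluster of functions may jointly cover. The sketch below is a minimal, generic illustration of that idea (a greedy first-fit mini-bucket partition); it is not the authors' implementation, and the function name and data layout are our own assumptions.

```python
from typing import FrozenSet, List

def minibucket_partition(scopes: List[FrozenSet[str]], i_bound: int) -> List[List[FrozenSet[str]]]:
    """Greedy first-fit partition of a bucket's function scopes into
    mini-buckets whose combined scope has at most i_bound variables.

    Illustrative only: this mirrors the classic mini-bucket scheme that
    the i-bound parameter controls; the paper's ST-WMBMM(i) is more
    involved (weighted message passing over a submodel tree).
    """
    minibuckets: List[List[FrozenSet[str]]] = []  # functions grouped per mini-bucket
    combined: List[FrozenSet[str]] = []           # combined scope of each mini-bucket
    for scope in sorted(scopes, key=len, reverse=True):  # place large scopes first
        placed = False
        for k, cs in enumerate(combined):
            if len(cs | scope) <= i_bound:        # fits without exceeding the i-bound
                minibuckets[k].append(scope)
                combined[k] = cs | scope
                placed = True
                break
        if not placed:                            # open a new mini-bucket
            minibuckets.append([scope])
            combined.append(scope)
    return minibuckets

# Example: with i_bound = 3 these four scopes split into two mini-buckets,
# trading bound tightness for bounded clique size (and hence bounded memory).
bucket = [frozenset("AB"), frozenset("BC"), frozenset("CD"), frozenset("ABD")]
print(minibucket_partition(bucket, i_bound=3))
```

Raising the i-bound merges more functions into each cluster, tightening the bound at exponential cost in time and memory; this is the trade-off the paper's ST-GDD(i) and ST-WMBMM(i) expose but whose concrete settings the reproducibility check found unreported.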