Abductive Reasoning in Logical Credal Networks

Authors: Radu Marinescu, Junkyu Lee, Debarun Bhattacharjya, Fabio Cozman, Alexander Gray

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | An extensive empirical evaluation demonstrates the effectiveness of our algorithms on both random LCN instances as well as LCNs derived from more realistic use-cases. In this section, we empirically evaluate the proposed exact and approximate schemes for MAP and MMAP inference in LCNs. |
| Researcher Affiliation | Collaboration | Radu Marinescu (IBM Research, Ireland); Junkyu Lee (IBM Research, USA); Debarun Bhattacharjya (IBM Research, USA); Fabio Cozman (Universidade de São Paulo, Brazil); Alexander Gray (Centaur AI Institute, USA) |
| Pseudocode | Yes | Algorithm 1: Depth-First Search for MAP and Marginal MAP Inference in LCNs; Algorithm 2: Limited Discrepancy Search for MAP and Marginal MAP Inference in LCNs; Algorithm 3: Simulated Annealing for MAP and Marginal MAP Inference in LCNs; Algorithm 4: Approximate MAP and Marginal MAP Inference in LCNs |
| Open Source Code | Yes | The open-source implementation of LCNs is available at: https://github.com/IBM/LCN |
| Open Datasets | Yes | We experimented with a set of more realistic LCNs which were first introduced in [18]. These LCNs were derived from real-world Bayesian networks [23]. Reference [23]: Anthony Constantinou, Yang Liu, Kiattikun Chobtham, Zhigao Guo, and Neville Kitson. The Bayesys Data and Bayesian Network Repository. Technical report, Bayesian Artificial Intelligence Research Lab, Queen Mary University of London, London, UK, 2020. |
| Dataset Splits | No | The paper does not describe explicit training, validation, or test splits for the LCN instances used in its experiments. It describes how LCN instances were generated or derived and then evaluated for inference, rather than partitioning a single dataset into these subsets. |
| Hardware Specification | Yes | We ran all experiments on a 3.0GHz Intel Core processor with 128GB of RAM. |
| Software Dependencies | Yes | All competing algorithms were implemented in Python 3.10 and used the ipopt 3.14 solver [22] with default settings to handle the non-linear constraint programs. |
| Experiment Setup | Yes | The maximum discrepancy value used by algorithms LDS and ALDS was set to δ = 3, while algorithms SA and ASA used up to 30 flips over a single iteration. |
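For readers unfamiliar with the δ = 3 setting above: limited discrepancy search bounds how many times the search may deviate from a value-ordering heuristic's preferred choice along any root-to-leaf path. The sketch below is a generic, illustrative LDS over binary assignments, not the authors' LCN implementation; `score` and `preferred` are hypothetical stand-ins for an LCN evaluation oracle and a heuristic value ordering.

```python
def lds(n_vars, score, preferred, max_disc=3):
    """Return (best_score, best_assignment) among complete binary
    assignments reachable with at most `max_disc` deviations from the
    heuristic's preferred values (delta = 3 mirrors the paper's setting)."""
    best = (float("-inf"), None)

    def recurse(depth, assignment, disc):
        nonlocal best
        if depth == n_vars:
            s = score(assignment)
            if s > best[0]:
                best = (s, tuple(assignment))
            return
        # Preferred value costs 0 discrepancies; the alternative costs 1.
        for value in (preferred[depth], 1 - preferred[depth]):
            extra = 0 if value == preferred[depth] else 1
            if disc + extra > max_disc:
                continue  # prune paths exceeding the discrepancy budget
            assignment.append(value)
            recurse(depth + 1, assignment, disc + extra)
            assignment.pop()

    recurse(0, [], 0)
    return best
```

With a toy score that counts ones and a heuristic preferring all zeros, the budget caps how many ones the search can ever place, so the best reachable score equals `max_disc`.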