Approximate Inference in Logical Credal Networks

Authors: Radu Marinescu, Haifeng Qian, Alexander Gray, Debarun Bhattacharjya, Francisco Barahona, Tian Gao, Ryan Riegel

IJCAI 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several classes of LCNs demonstrate clearly that ARIEL yields high quality solutions compared with exact inference and scales to much larger problems than previously considered. |
| Researcher Affiliation | Industry | Radu Marinescu¹, Haifeng Qian², Alexander Gray¹, Debarun Bhattacharjya¹, Francisco Barahona¹, Tian Gao¹ and Ryan Riegel¹. ¹IBM Research, ²AWS AI Labs. radu.marinescu@ie.ibm.com |
| Pseudocode | Yes | Algorithm 1: Approximate Inference for LCNs (ARIEL) |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | Table 3 displays the results obtained on LCNs derived from real-world Bayesian networks with binary variables [Constantinou et al., 2020]. ... Specifically, we consider a binary molecular classification task using imprecise expert knowledge as well as molecular fingerprinting data [Fernández de Gortari et al., 2017]. |
| Dataset Splits | No | The paper does not specify training, validation, or test dataset splits. It mentions generating random LCNs and using real-world LCNs, but gives no split details. |
| Hardware Specification | Yes | We ran all experiments on a 2.2GHz Intel Core processor with 32GB of RAM. |
| Software Dependencies | Yes | The competing algorithms were implemented in Python 3.8 and used the ipopt 3.12 solver [Wächter and Biegler, 2006] with default settings to handle the non-linear constraint programs. |
| Experiment Setup | No | The paper mentions a maximum of 10 iterations and a 10^-6 convergence threshold, but does not provide other specific hyperparameters or detailed system-level training settings. |
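The only setup details the paper reports are a cap of 10 iterations and a 10^-6 convergence threshold. A minimal sketch of such a stopping criterion is shown below; the function name and the interval-update representation are illustrative assumptions, not ARIEL's actual implementation.

```python
def iterate_until_converged(update, bounds, max_iters=10, tol=1e-6):
    """Repeatedly apply `update` to a list of numeric bounds until the
    largest per-entry change drops below `tol`, or `max_iters` is reached.
    Mirrors the paper's reported settings: max_iters=10, tol=1e-6.
    (Hypothetical sketch; ARIEL's real update step is not shown here.)"""
    for _ in range(max_iters):
        new_bounds = update(bounds)
        # Largest absolute change across all entries this iteration.
        delta = max(abs(a - b) for a, b in zip(new_bounds, bounds))
        bounds = new_bounds
        if delta < tol:
            break  # converged within tolerance
    return bounds
```

For a rapidly contracting update the loop exits well before the 10-iteration cap; otherwise the cap bounds the total work, which matches the paper's emphasis on scaling to larger problems.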