Encoding Probabilistic Graphical Models into Stochastic Boolean Satisfiability
Authors: Cheng-Han Hsieh, Jie-Hong R. Jiang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that, by using our encoding, SSAT-based solving can complement existing PGM tools, especially in answering complex queries. |
| Researcher Affiliation | Academia | Cheng-Han Hsieh¹ and Jie-Hong R. Jiang¹,²; ¹Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan; ²Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan |
| Pseudocode | No | The paper describes algorithms verbally and with mathematical formulas, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: "The proposed encoding algorithms were implemented in the python language." However, no link or explicit statement of public release for the authors' own code is provided. Links are given only for third-party solvers and tools used in the evaluation. |
| Open Datasets | Yes | The experimented benchmarks include the DQMR and Grid networks collected from [Sang et al., 2005] (available at https://www.cs.rochester.edu/users/faculty/kautz/Cachet/) and the UAI-06 networks from the UAI-2006 Probabilistic Inference Challenge. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, and test dataset splits (e.g., percentages, counts, or explicit mention of standard splits for their experimental setup). |
| Hardware Specification | Yes | The experiments were conducted on a Linux machine with Intel Xeon CPU E5-2630 at 2.30 GHz and 204 GB RAM. |
| Software Dependencies | No | The paper mentions the "python language" and "The two-level logic minimizer espresso" but does not provide specific version numbers for Python or any of its libraries/packages, nor a version for espresso. |
| Experiment Setup | Yes | The timeout (TO) limit was set to 3600 seconds and the memory-out (MO) limit to 16 GB. For MAP computation, in each benchmark Bayesian network 10 variables were given values randomly as the evidence, and 5, 10, and 20 XY variables, defined in Section 2.2, were chosen randomly. For SDP computation ... the decision variable was randomly chosen, 10 variables were given values at random as the evidence, and the threshold was set to 0.2. The numbers of unobserved variables were set to 5, 10, 20, and 30 to generate SDP instances. (A hedged sketch of this instance-generation setup follows the table.) |
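
Since the authors' code is not publicly released, the following is only a minimal sketch of how the randomized MAP and SDP query instances described above could be generated. The variable names, the dict-based query format, and the helper functions are assumptions for illustration, not the paper's actual implementation.

```python
import random

def make_map_instance(variables, n_evidence=10, n_map=10, seed=0):
    """Sketch of a MAP instance: assign random Boolean values to n_evidence
    variables as evidence and pick n_map distinct variables as the MAP (XY)
    variables to be maximized over (per the setup quoted above)."""
    rng = random.Random(seed)
    chosen = rng.sample(variables, n_evidence + n_map)
    evidence = {v: rng.choice([0, 1]) for v in chosen[:n_evidence]}
    return {"evidence": evidence, "map_vars": chosen[n_evidence:]}

def make_sdp_instance(variables, n_evidence=10, n_unobserved=10,
                      threshold=0.2, seed=0):
    """Sketch of an SDP instance: one randomly chosen decision variable,
    n_evidence random evidence assignments, n_unobserved unobserved
    variables, and a fixed decision threshold (0.2 in the paper's setup)."""
    rng = random.Random(seed)
    chosen = rng.sample(variables, 1 + n_evidence + n_unobserved)
    decision = chosen[0]
    evidence = {v: rng.choice([0, 1]) for v in chosen[1:1 + n_evidence]}
    return {"decision": decision, "evidence": evidence,
            "unobserved": chosen[1 + n_evidence:], "threshold": threshold}

if __name__ == "__main__":
    # Hypothetical variable names standing in for a benchmark network.
    net_vars = [f"x{i}" for i in range(100)]
    for k in (5, 10, 20):            # MAP variable counts from the paper
        print(make_map_instance(net_vars, n_map=k, seed=k))
    for k in (5, 10, 20, 30):        # unobserved-variable counts for SDP
        print(make_sdp_instance(net_vars, n_unobserved=k, seed=k))
```

The sketch only reproduces the sampling choices stated in the experiment-setup row; feeding the resulting query specs into the SSAT encoding and solvers would require the authors' unreleased pipeline.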