Open-Universe Weighted Model Counting
Authors: Vaishak Belle
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Preliminary Evaluations: To answer these questions, we chose 5 problems that emphasize an OU environment. The particulars of the problems are discussed below, and the empirical behavior is presented in Figure 1. Experiments were run on OS X, using a 1.3 GHz Intel i5 processor with 4GB RAM. |
| Researcher Affiliation | Academia | Vaishak Belle University of Edinburgh vaishak@ed.ac.uk |
| Pseudocode | Yes | Algorithm 1 provides pseudocode for Pr in terms of WMC (see the sketch after this table). |
| Open Source Code | No | The paper refers to using existing WMC software such as C2D, WFOMC, and ProbLog, but does not state that the authors' own implementation is open source or provide a link to it. |
| Open Datasets | Yes | Example 18: We now consider a parameterized version of Pearl's (1988) Alarm Bayesian network. ... A more glaring difference is observed when we consider the Grades problem from (Heckerman, Meek, and Koller 2004). |
| Dataset Splits | No | The paper mentions specific problems and datasets but does not provide details on training, validation, or test dataset splits (percentages, sample counts, or explicit splitting methodology). |
| Hardware Specification | Yes | Experiments were run on OS X, using a 1.3 GHz Intel i5 processor with 4GB RAM. |
| Software Dependencies | No | The paper mentions using the 'C2D WMC solver (Darwiche 2004)', the 'WFOMC lifted WMC solver (Van den Broeck 2013)', and the 'ProbLog probabilistic programming language (Fierens et al. 2011)', but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | No | The paper describes conceptual setups for problems (e.g., 'leave the neighbor predicate open', 'using the default settings' for BLOG) but does not provide specific experimental setup details such as hyperparameter values (learning rates, batch sizes, epochs, optimizers) or detailed training configurations for the proposed method. |
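The Pseudocode row refers to computing probabilities via weighted model counting, i.e. Pr(q | e) = WMC(Δ ∧ q ∧ e) / WMC(Δ ∧ e). The following is a minimal sketch of that reduction by brute-force model enumeration, not the paper's Algorithm 1 or any of the cited solvers (C2D, WFOMC, ProbLog); the theory, variable names, and weights are illustrative assumptions.

```python
from itertools import product

# Illustrative propositional variables (not from the paper).
VARS = ["burglary", "earthquake", "alarm"]

# Literal weights: (weight if true, weight if false). Values are made up.
WEIGHTS = {
    "burglary": (0.1, 0.9),
    "earthquake": (0.2, 0.8),
    "alarm": (1.0, 1.0),  # alarm is fully determined by the theory
}

def theory(m):
    """Δ: the alarm holds iff a burglary or an earthquake occurred."""
    return m["alarm"] == (m["burglary"] or m["earthquake"])

def wmc(constraint):
    """Sum the weights of all models of Δ that also satisfy `constraint`."""
    total = 0.0
    for values in product([True, False], repeat=len(VARS)):
        m = dict(zip(VARS, values))
        if theory(m) and constraint(m):
            w = 1.0
            for v in VARS:
                w *= WEIGHTS[v][0] if m[v] else WEIGHTS[v][1]
            total += w
    return total

# Pr(burglary | alarm) = WMC(Δ ∧ burglary ∧ alarm) / WMC(Δ ∧ alarm)
evidence = lambda m: m["alarm"]
query = lambda m: m["burglary"] and m["alarm"]
print(wmc(query) / wmc(evidence))  # ≈ 0.357 with the weights above
```

Enumerating all 2^n assignments is exponential; dedicated WMC solvers such as those cited in the Software Dependencies row avoid this via knowledge compilation or lifted inference, which is the setting the paper's open-universe extension targets.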