Collapsed Variational Inference for Sum-Product Networks
Authors: Han Zhao, Tameem Adel, Geoff Gordon, Brandon Amos
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on a set of 20 benchmark data sets to compare the performance of the proposed collapsed variational inference method with maximum likelihood estimation (Gens & Domingos, 2012). Table 2 shows the average joint log-likelihood scores of different parameter learning algorithms on 20 data sets. |
| Researcher Affiliation | Academia | School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA; Machine Learning Lab, University of Amsterdam, Amsterdam, the Netherlands; Radboud University |
| Pseudocode | Yes | Algorithm 1 CVB-SPN. Input: initial β, prior hyperparameter, training instances {x_d}_{d=1}^D. Output: locally optimal β*. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is available or will be released. |
| Open Datasets | Yes | The 20 real-world data sets used in the experiments have been widely used (Rooshenas & Lowd, 2014) to assess the modeling performance of SPNs. |
| Dataset Splits | Yes | Table 1 (Statistics of data sets and models) reports Train/Valid/Test split sizes for each data set. For both MLE-SPN and CVB-SPN we use a held-out validation set to pick the best solution during the optimization process. |
| Hardware Specification | Yes | All experiments are run on a server with Intel Xeon CPU E5 2.00GHz. |
| Software Dependencies | No | The paper mentions LearnSPN (Gens & Domingos, 2013) as a tool used, but does not provide specific version numbers for any software libraries, frameworks, or dependencies used in the experiments. |
| Experiment Setup | Yes | We fix the projection margin to 0.01, i.e., w = max{w, 0.01} to avoid numerical issues. We implement both methods with backtracking line search to automatically adjust the learning rate at each iteration. In all experiments, the maximum number of iterations is fixed to 50 for both methods. |
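
The optimization loop quoted in the Experiment Setup row (projection margin of 0.01, backtracking line search to set the step size, at most 50 iterations) is easy to sketch in isolation. The snippet below is a minimal, hypothetical Python illustration, not the authors' implementation: the function names (`project`, `backtracking_step`, `optimize`) are made up for this sketch, and the objective `f` and gradient `grad` stand in for the SPN negative log-likelihood and its gradient, which the paper does not release as code.

```python
import numpy as np

MARGIN = 0.01    # projection margin from the paper: w = max{w, 0.01}
MAX_ITERS = 50   # both MLE-SPN and CVB-SPN cap iterations at 50

def project(w, margin=MARGIN):
    """Clip weights at the margin to avoid numerical issues."""
    return np.maximum(w, margin)

def backtracking_step(f, grad, w, lr0=1.0, shrink=0.5, c=1e-4):
    """Shrink the learning rate until the projected update satisfies an
    Armijo-style sufficient-decrease condition (one common way to realize
    the backtracking line search the paper mentions)."""
    g = grad(w)
    lr = lr0
    while True:
        w_new = project(w - lr * g)
        if f(w_new) <= f(w) - c * np.dot(g, w - w_new) or lr < 1e-10:
            return w_new
        lr *= shrink

def optimize(f, grad, w0, max_iters=MAX_ITERS):
    """Projected gradient descent with backtracking line search."""
    w = project(np.asarray(w0, dtype=float))
    for _ in range(max_iters):
        w = backtracking_step(f, grad, w)
    return w

if __name__ == "__main__":
    # Toy check on a quadratic whose unconstrained minimum (-1) lies below
    # the margin; the projected solution should sit at the 0.01 boundary.
    f = lambda w: np.sum((w + 1.0) ** 2)
    g = lambda w: 2.0 * (w + 1.0)
    print(optimize(f, g, [1.0, 2.0]))  # -> [0.01, 0.01]
```

The projection step is what keeps every weight strictly positive, which matters for SPNs because sum-node weights must stay in the feasible region for the log-likelihood to remain well defined.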