Learning General Latent-Variable Graphical Models with Predictive Belief Propagation
Authors: Borui Wang, Geoffrey Gordon
AAAI 2020, pp. 6118–6126
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate its performance on both synthetic and real datasets, and show that it learns different types of latent graphical models efficiently and achieves superior inference performance. |
| Researcher Affiliation | Academia | Borui Wang Computer Science Department Stanford University wbr@cs.stanford.edu Geoffrey Gordon Machine Learning Department Carnegie Mellon University ggordon@cs.cmu.edu |
| Pseudocode | Yes | See Appendix A.3 for a pseudocode summary of our algorithm and see Appendix A.5 for the proof of consistency of our algorithm. |
| Open Source Code | No | The paper does not provide a direct link to its source code or explicitly state that the code for their method is open-source or publicly available. |
| Open Datasets | Yes | Pen-Based Recognition of Handwritten Digits dataset in the UCI machine learning repository (Asuncion and Newman 2007). |
| Dataset Splits | Yes | Here we use 7000 samples as our training set, 494 samples as our validation set, and the other 3498 samples as our testing set. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions using 'ridge regression' but does not specify the version numbers of any software libraries or dependencies used (e.g., Python, PyTorch, scikit-learn versions). |
| Experiment Setup | Yes | In our experiment, we use Gaussian radial basis function kernel embeddings with bandwidth parameter σ = 10 as our feature vectors, and use ridge regression (Friedman, Hastie, and Tibshirani 2009) with regularization parameter λ = 0.1 for S1A and S1B. |
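The experiment-setup row quotes two concrete hyperparameters: Gaussian RBF kernel embeddings with bandwidth σ = 10 and ridge regression with regularization λ = 0.1. As a minimal sketch of what that combination looks like (this is an illustration of the quoted feature map and regressor only, not the paper's predictive belief propagation algorithm; the data, centers, and target here are made up):

```python
import numpy as np

def rbf_features(X, centers, sigma=10.0):
    """Gaussian RBF kernel embedding: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def ridge_fit(Phi, Y, lam=0.1):
    """Closed-form ridge regression: W = (Phi^T Phi + lam I)^{-1} Phi^T Y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ Y)

# Toy data standing in for the paper's training samples (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
centers = X[:50]                       # kernel centers chosen from the data
Y = np.sin(X[:, :1])                   # arbitrary smooth target

Phi = rbf_features(X, centers, sigma=10.0)   # sigma = 10, as in the paper
W = ridge_fit(Phi, Y, lam=0.1)               # lambda = 0.1, as in the paper
pred = Phi @ W
```

In the paper these two pieces are used inside its two-stage regression steps (S1A and S1B); the sketch above only shows the kernel embedding and ridge solve in isolation.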