Variational Inference and Model Selection with Generalized Evidence Bounds
Authors: Liqun Chen, Chenyang Tao, Ruiyi Zhang, Ricardo Henao, Lawrence Carin (Duke University)
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evidence is provided to validate our claims. To compare the performance of our new bound and its predecessors, we empirically evaluate the sharpness of these bounds on a toy distribution, and benchmark them on a series of VI tasks. |
| Researcher Affiliation | Academia | Affiliation: Electrical & Computer Engineering, Duke University, Durham, NC 27708, USA. Correspondence to: Chenyang Tao <chenyang.tao@duke.edu>, Liqun Chen <liqun.chen@duke.edu>. |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | Details of the experimental setup are in the SM, and source code is available (upon publication) from https://www.github.com/LiqunChen0606/glbo. Code is only promised upon publication, not provided with the paper. |
| Open Datasets | Yes | on the MNIST dataset. We further evaluate GLBO on the more complex CelebA face dataset (Liu et al., 2015). We use ten datasets from the UCI Machine Learning Repository (Lichman, 2013)... |
| Dataset Splits | No | We use a random 90%/10% split for training and testing, and use test root mean squared error (RMSE) and log-likelihood (LL) for evaluation. This only mentions train/test, not validation. Other sections don't specify splits either. |
| Hardware Specification | No | The paper does not provide specific details regarding the CPU, GPU, or other hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions several models and implementations but does not specify the version numbers of any software dependencies or libraries used for the experiments. |
| Experiment Setup | Yes | The encoders and decoders are implemented with L ∈ {1, 2} neural network layers and leveraging K ∈ {5, 50} posterior samples. We choose the VR-Max estimator for RVB and set GLBO to CLBO(x; T, K) with T = 200. For CLBO and Rényi we fixed T = 2. |
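Since the paper contains no pseudocode (see the Pseudocode row above), the following is a minimal sketch, for orientation only, of the K-sample importance-weighted bound and the VR-max estimator referenced in the experiment setup row. It is not the authors' GLBO/CLBO implementation; the PyTorch interface and the tensor names `log_p_joint` and `log_q` are assumptions made for illustration.

```python
import math
import torch

def multi_sample_bound(log_p_joint, log_q, use_vr_max=False):
    """Sketch of a K-sample evidence bound (not the authors' GLBO/CLBO code).

    log_p_joint, log_q: tensors of shape [K, batch] holding log p(x, z_k)
    and log q(z_k | x) for K posterior samples z_k ~ q(z | x).
    use_vr_max=False gives the standard importance-weighted (IWAE-style) bound;
    use_vr_max=True keeps only the largest importance weight per data point,
    i.e. the VR-max estimator mentioned for the RVB baseline.
    """
    log_w = log_p_joint - log_q                      # log importance weights, [K, batch]
    if use_vr_max:
        return log_w.max(dim=0).values.mean()        # VR-max: max over the K samples
    K = log_w.shape[0]
    # log (1/K) * sum_k w_k per data point, averaged over the batch
    return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
```

With K = 5 or K = 50 samples drawn from the encoder, as in the setup row, the same log-weight tensor feeds either estimator, which is how such multi-sample bounds are typically compared.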