Understanding and Accelerating Particle-Based Variational Inference
Authors: Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show improved convergence from the acceleration framework and enhanced sample accuracy from the bandwidth-selection method. |
| Researcher Affiliation | Academia | (1) Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua-Fuzhou Inst. for Data Tech., THBI Lab, Tsinghua University, Beijing, 100084, China; (2) Dept. of Elec. & Comp. Engineering, Duke University, NC, USA |
| Pseudocode | Yes | Algorithm 1: The acceleration framework with Wasserstein Accelerated Gradient (WAG) and Wasserstein Nesterov's method (WNes). (An illustrative sketch of this style of accelerated particle update follows the table.) |
| Open Source Code | Yes | Detailed experimental settings and parameters are provided in Appendix E, and codes are available at https://github.com/chang-ml-thu/AWGF. |
| Open Datasets | Yes | We test all methods on BNNs for a fixed number of iterations, following the settings of Liu & Wang (2016), and present results in Table 1. We observe that the WAG and WNes acceleration methods outperform WGD and PO for all four ParVIs on the Kin8nm dataset (one of the UCI datasets (Asuncion & Newman, 2007)). We follow the same settings as Ding et al. (2014), including the ICML dataset (https://cse.buffalo.edu/~changyou/code/SGNHT.zip) and the Expanded-Natural parameterization (Patterson & Teh, 2013). The particle size is fixed at 20. Inference results are evaluated by the conventional hold-out perplexity (the lower the better). |
| Dataset Splits | No | The paper mentions evaluating results by 'test accuracy' and 'log-likelihood' and refers to datasets such as Kin8nm (UCI) and the ICML dataset, which typically have standard splits. It also mentions 'hold-out perplexity'. However, it does not explicitly state the percentages, counts, or specific methodology for the training, validation, and test splits used in its own experiments. |
| Hardware Specification | No | The paper does not explicitly provide details about the specific hardware (e.g., CPU, GPU models, memory, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) that are needed to replicate the experiments. |
| Experiment Setup | Yes | Detailed experimental settings and parameters are provided in Appendix E. We follow the same settings as Liu & Wang (2016) and Chen et al. (2018a). The particle size is fixed at 20. |
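
For orientation before diving into the released code, the sketch below shows the flavor of particle-update scheme that Algorithm 1 generalizes: standard SVGD (Liu & Wang, 2016) wrapped in a generic Nesterov-style momentum loop. This is a minimal illustration, not the authors' WAG/WNes algorithm: the 2-D Gaussian target, the RBF kernel with the common median-heuristic bandwidth (the paper proposes its own bandwidth-selection method), the step size, and the (t-1)/(t+2) momentum schedule are all assumptions made here. The authoritative implementation is at https://github.com/chang-ml-thu/AWGF.

```python
# Minimal sketch: SVGD particle updates wrapped in a generic Nesterov-style
# momentum loop, in the spirit of the WNes acceleration described in the paper.
# The target, kernel, bandwidth rule, and momentum schedule are illustrative
# assumptions; this is NOT the authors' Algorithm 1.
import numpy as np

def median_bandwidth(x):
    """Common median heuristic for the RBF bandwidth (an assumption here;
    the paper proposes its own bandwidth-selection method)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.median(sq_dists) / np.log(x.shape[0] + 1.0) + 1e-8

def svgd_direction(x, grad_logp, bandwidth):
    """Standard SVGD update direction (Liu & Wang, 2016) for particles x of
    shape (n, d): kernel-smoothed score plus a repulsive term."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]                          # (n, n, d): x_i - x_j
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * bandwidth))   # RBF kernel matrix
    drift = k @ grad_logp(x)                                      # attraction toward high density
    repulsion = np.sum(k[:, :, None] * diff, axis=1) / bandwidth  # keeps particles spread out
    return (drift + repulsion) / n

def nesterov_svgd(x0, grad_logp, step=0.1, iters=500):
    """Nesterov-style momentum applied to the SVGD flow. The (t-1)/(t+2)
    schedule is the textbook choice, not necessarily the paper's."""
    x, x_prev = x0.copy(), x0.copy()
    for t in range(1, iters + 1):
        y = x + (t - 1.0) / (t + 2.0) * (x - x_prev)  # look-ahead particles
        h = median_bandwidth(y)
        x_prev, x = x, y + step * svgd_direction(y, grad_logp, h)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target_mean = np.array([1.0, -1.0])
    grad_logp = lambda x: -(x - target_mean)      # score of N(target_mean, I)
    # 20 particles, matching the particle size fixed in the paper's experiments.
    particles = nesterov_svgd(rng.normal(size=(20, 2)), grad_logp)
    print("particle mean:", particles.mean(axis=0))  # should approach target_mean
```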