Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Towards Understanding the Dynamics of Gaussian-Stein Variational Gradient Descent
Authors: Tianle Liu, Promit Ghosal, Krishnakumar Balasubramanian, Natesh Pillai
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct simulations to compare Gaussian SVGD dynamics with different kernels and the performance of the algorithms mentioned in the previous section. We consider three settings: Bayesian logistic regression, and Gaussian and Gaussian mixture targets. Here we present the results for Bayesian logistic regression, as it involves a non-Gaussian but unimodal target and is one of the typical setups in which GVI is preferred in practice. |
| Researcher Affiliation | Academia | Tianle Liu, Department of Statistics, Harvard University, Cambridge, MA 02138; Promit Ghosal, Department of Mathematics, Massachusetts Institute of Technology, Waltham, MA 02453; Krishnakumar Balasubramanian, Department of Statistics, University of California, Davis, Davis, CA 95616; Natesh S. Pillai, Department of Statistics, Harvard University, Cambridge, MA 02138 |
| Pseudocode | Yes | Algorithm 1 Density-based Gaussian SVGD. ... Algorithm 2 Particle-based Gaussian SVGD. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository. |
| Open Datasets | No | The paper describes generating data for Bayesian logistic regression and using Gaussian/Gaussian mixture targets, e.g., "Xi i.i.d. N(0, Id)" and "target is Gaussian N(µ, Σ) where µ ~ Unif([0, 1]^10)". However, it does not provide concrete access information (a link or a citation to a public dataset) for any publicly available dataset used in the experiments. |
| Dataset Splits | No | The paper does not provide specific details about training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software components with their version numbers. |
| Experiment Setup | Yes | The largest safe step sizes are 0.02, 0.1, 2, 0.8, 0.02, 0.2, 4, 4. ... In Figure 1a the same step size 0.01 is specified for all algorithms while for Figure 1b we choose the largest safe step size for each algorithm. |
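The Pseudocode row above refers to the paper's density-based and particle-based Gaussian SVGD algorithms, which are not reproduced here. For orientation, the following is a minimal sketch of a generic particle-based SVGD update with an RBF kernel on a standard Gaussian target; it is not the paper's Gaussian-SVGD variant, and the bandwidth `h`, step size `eps`, and function names are illustrative choices, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, h):
    """Pairwise RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2h))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * h))

def svgd_step(X, grad_logp, h=1.0, eps=0.1):
    """One generic SVGD update over particles X (shape (n, d)).

    phi(x_i) = (1/n) * sum_j [ K[j,i] * grad_logp(x_j) + grad_{x_j} K[j,i] ]
    with grad_{x_j} K[j,i] = K[j,i] * (x_i - x_j) / h for the RBF kernel.
    """
    n = X.shape[0]
    K = rbf_kernel(X, h)
    attract = K @ grad_logp(X)  # driving term: kernel-weighted scores
    # Repulsive term keeps particles spread out (K is symmetric):
    repel = (np.sum(K, axis=1)[:, None] * X - K @ X) / h
    return X + eps * (attract + repel) / n

# Example: standard Gaussian target N(0, I), so grad log p(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(3.0, 1.0, size=(100, 2))  # particles start far from the mode
for _ in range(500):
    X = svgd_step(X, lambda Z: -Z, h=1.0, eps=0.1)
```

After the loop, the particle cloud should have drifted from its initial mean near 3 toward the target mode at the origin.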