Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stein Variational Gradient Descent as Gradient Flow
Author: Qiang Liu
NeurIPS 2017 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper develops the first theoretical analysis on SVGD. We establish that the empirical measures of the SVGD samples weakly converge to the target distribution, and show that the asymptotic behavior of SVGD is characterized by a nonlinear Fokker-Planck equation known as Vlasov equation in physics. We develop a geometric perspective that views SVGD as a gradient flow of the KL divergence functional under a new metric structure on the space of distributions induced by Stein operator. |
| Researcher Affiliation | Academia | Qiang Liu Department of Computer Science Dartmouth College Hanover, NH 03755 EMAIL |
| Pseudocode | Yes | Algorithm 1 Stein Variational Gradient Descent [1] |
| Open Source Code | No | The paper is theoretical and does not mention releasing source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets, so no dataset availability information for training is provided. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments with datasets, so no validation split information is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments, so no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments, so no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe experiments, so no experimental setup details like hyperparameters or training settings are provided. |
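The table notes that the paper includes pseudocode (Algorithm 1, Stein Variational Gradient Descent). For readers unfamiliar with that update rule, here is a minimal NumPy sketch of one SVGD iteration with an RBF kernel; the function names and the fixed bandwidth `h` are our own illustrative choices, not from the paper, which also uses a median-distance bandwidth heuristic in its original algorithm.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / h) and its gradient
    grad_K[i, j] = d k(x_i, x_j) / d x_i, for particles X of shape (n, d)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / h)
    grad_K = (-2.0 / h) * (X[:, None, :] - X[None, :, :]) * K[:, :, None]
    return K, grad_K

def svgd_step(X, grad_log_p, stepsize=0.1, h=1.0):
    """One SVGD update: x_i <- x_i + eps * phi(x_i), where
    phi(x) = (1/n) * sum_j [ k(x_j, x) grad log p(x_j) + grad_{x_j} k(x_j, x) ].
    The first term drives particles toward high-density regions; the second
    is a repulsive term that keeps them spread out."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    # K @ grad_log_p(X): kernel-weighted score; grad_K.sum(axis=0): repulsion.
    phi = (K @ grad_log_p(X) + grad_K.sum(axis=0)) / n
    return X + stepsize * phi
```

For example, with `grad_log_p = lambda X: -X` (a standard normal target), repeated calls to `svgd_step` move an arbitrary initial particle set toward a sample-like approximation of N(0, I).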