Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Nonlinear Stein Variational Gradient Descent for Learning Diversified Mixture Models

Authors: Dilin Wang, Qiang Liu

ICML 2019 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results show that our method provides an effective mechanism for diversity-promoting learning, achieving substantial improvement over existing methods."
Researcher Affiliation | Academia | "Department of Computer Science, UT Austin. Correspondence to: Dilin Wang <EMAIL>, Qiang Liu <EMAIL>."
Pseudocode | Yes | "Algorithm 1 Nonlinear SVGD for Learning Mixture Models"
Open Source Code | Yes | "Our code is available at https://github.com/dilinwang820/nonlinear_svgd."
Open Datasets | Yes | "We evaluate our approach on the MNIST dataset (LeCun et al., 1998), which consists of 70,000 handwritten digits of 28-by-28 pixel size."
Dataset Splits | Yes | "We take 50% of the normal data for training and the rest for testing."
Hardware Specification | No | The paper acknowledges "Google Cloud for their support" but provides no specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper does not provide version numbers for any software components or libraries used (e.g., "Adam with a constant learning rate" is mentioned, but no framework or language versions such as PyTorch 1.x or Python 3.x are specified).
Experiment Setup | Yes | "In our experiments, all methods are optimized using Adagrad with a constant learning rate of 0.05. For each model, we train 50,000 iterations with a mini-batch size of 256. We clip the logarithm values of the variance σ_i to [-3, 3] to avoid singularities."
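For context on the "Pseudocode" row: Algorithm 1 in the paper is a nonlinear extension of Stein Variational Gradient Descent. The paper's exact algorithm is not reproduced here, but the standard SVGD particle update it builds on can be sketched as follows. This is a minimal NumPy sketch with an RBF kernel of fixed bandwidth; the function names and the toy standard-normal target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(x, h=1.0):
    # x: particle matrix of shape (n, d); fixed bandwidth h is an assumption
    diff = x[:, None, :] - x[None, :, :]          # diff[j, i] = x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)               # pairwise squared distances
    k = np.exp(-sq / (2 * h ** 2))                # k[j, i] = k(x_j, x_i)
    grad_k = -diff / h ** 2 * k[:, :, None]       # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd_step(x, grad_logp, step=0.1, h=1.0):
    # One SVGD update:
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad_logp(x_j) + grad_{x_j} k(x_j, x_i) ]
    n = x.shape[0]
    k, grad_k = rbf_kernel(x, h)
    phi = (k.T @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi
```

The first term drives particles toward high-density regions; the kernel-gradient term acts as a repulsive force that keeps particles spread out, which is the mechanism the paper extends for diversity-promoting mixture learning.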
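The quoted experiment setup (Adagrad, learning rate 0.05, log-variance clipping) can be sketched as a minimal NumPy fragment. This is an illustrative hand-rolled Adagrad update and clipping step under the hyperparameters quoted above; the parameter values in `log_sigma` and the helper name are hypothetical, not from the paper.

```python
import numpy as np

def adagrad_update(param, grad, accum, lr=0.05, eps=1e-8):
    # Adagrad: per-coordinate learning rate scaled by the root of
    # the accumulated squared gradients (lr=0.05 as quoted above)
    accum += grad ** 2
    param -= lr * grad / (np.sqrt(accum) + eps)
    return param, accum

# Hypothetical mixture-component log-variances, clipped to [-3, 3]
# to avoid singular (near-zero-variance) components, as quoted above
log_sigma = np.array([-5.0, 0.5, 4.0])
log_sigma = np.clip(log_sigma, -3.0, 3.0)
```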