Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics

Authors: Hanlin Yu, Marcelo Hartmann, Bernardo Williams, Arto Klami

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the two proposed metrics against other commonly-used SGRLD algorithms applicable for arbitrary network structures explained in Section 2. We refer to these methods by their metrics, using Identity for Welling & Teh (2011), RMSprop for the pSGLD method of Li et al. (2016) and Wenzel for Wenzel et al. (2020). The gist of the results is that one of the non-diagonal metrics is the best in terms of log-probability in all cases, the horseshoe prior is clearly better, and the non-diagonal metrics help more when the posterior is challenging. Even though the numerical differences are somewhat small, the standard deviations (computed over 10 runs) are even smaller and the differences between samplers are reliable.
Researcher Affiliation | Academia | Hanlin Yu, Marcelo Hartmann, Bernardo Williams, Arto Klami — Department of Computer Science, University of Helsinki, Finland
Pseudocode | No | The paper describes the algorithms and update rules using mathematical equations (e.g., Equation 3 for the SGRLD update rule) and prose, but does not include any explicitly labeled 'Algorithm' or 'Pseudocode' blocks with structured steps.
Open Source Code | Yes | The code that can be used to reproduce all experiments can be found at https://github.com/ksnxr/SSGRLDNDM.
Open Datasets | Yes | We use fully connected neural networks of size 784-N-N-10 on the MNIST dataset (LeCun et al., 2010), where we use N ∈ {400, 800, 1200}, with setup inspired by Li et al. (2016) and Korattikara et al. (2015). We use the CIFAR10 data (Krizhevsky & Hinton, 2009) and Google ResNet-20 as implemented by Fortuin et al. (2021).
Dataset Splits | No | The paper mentions using a "separate validation set to tune the hyperparameters" and evaluating on "test data", implying standard train/validation/test splits for MNIST and CIFAR10. However, it does not provide specific percentages, sample counts, or explicit citations for the exact splits used, so it does not meet the strict criterion for specific dataset split information.
Hardware Specification | Yes | For MNIST experiments, the code was run on a single Intel Xeon Gold 6230 CPU @ 2.10GHz core. For CIFAR10 experiments, the code was run on a single NVIDIA Tesla V100-SXM2-32GB GPU with 10 Intel Xeon Gold 6230 CPU @ 2.10GHz cores.
Software Dependencies | No | Concerning neural network experiments, our implementations for all methods are built on top of Fortuin et al. (2021). The paper refers to a library and a TensorFlow component but does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | For RMSprop, Wenzel and Shampoo we use λ = 0.99 and ϵ = 1e-8, matching the choices of Fortuin et al. (2022), whereas for Monge we use λ = 0.9 based on good performance in preliminary experiments. For all methods we select a constant learning rate based on performance (log-probability) on a separate validation set, and for Monge we additionally select α2 that controls the metric within the same process. ... We run the samplers for a total of 400 epochs using a batch size of 100. The first 1000 steps are treated as burn-in, and the actual samples used for evaluation are collected after that with a thinning interval of 100 steps. Additional details, like the learning rates for each case, are provided in Appendix A.4 and A.5.
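For context on the baseline samplers named in the Research Type row, the following is a minimal sketch of an SGLD step with an RMSprop-style diagonal preconditioner, in the spirit of the pSGLD method of Li et al. (2016). It is illustrative only: the function name, defaults, and the omission of the metric's curvature correction term are assumptions of this sketch, not the paper's implementation (the paper's contribution is precisely non-diagonal metrics, which this sketch does not cover).

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, lr=1e-3, lam=0.99, eps=1e-8,
               rng=np.random):
    """One preconditioned SGLD step (sketch, after Li et al., 2016).

    theta: current parameter vector.
    grad_log_post: stochastic gradient of the log-posterior at theta.
    v: running RMSprop second-moment estimate (same shape as theta).
    """
    g = grad_log_post(theta)
    v = lam * v + (1 - lam) * g**2           # RMSprop moment update
    G = 1.0 / (np.sqrt(v) + eps)             # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(2 * lr * G)
    theta = theta + lr * G * g + noise       # drift + injected Gaussian noise
    return theta, v
```

The λ and ϵ defaults mirror the values quoted in the Experiment Setup row for the RMSprop-style methods.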
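The sample-collection schedule quoted in the Experiment Setup row (1000 burn-in steps, then a thinning interval of 100 steps) can be sketched as the loop below; `step_fn`, standing for one sampler update, is a placeholder assumption.

```python
def run_sampler(step_fn, theta0, n_steps, burn_in=1000, thinning=100):
    """Discard the first `burn_in` states, then keep every `thinning`-th one."""
    samples = []
    theta = theta0
    for t in range(1, n_steps + 1):
        theta = step_fn(theta)
        if t > burn_in and (t - burn_in) % thinning == 0:
            samples.append(theta)
    return samples
```

For example, with a toy `step_fn` that increments a counter, 1300 steps yield the states at steps 1100, 1200, and 1300.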