Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Optimization Guarantees for Square-Root Natural-Gradient Variational Inference

Authors: Navish Kumar, Thomas Möllenhoff, Mohammad Emtiyaz Khan, Aurelien Lucchi

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate the effectiveness of natural gradient methods and highlight their advantages over algorithms that use Euclidean or Wasserstein geometries. ... We present empirical results showcasing the fast convergence of NGD, attributed to its Newton-like update. These results are illustrated in Figure 2 and Figure 3."
Researcher Affiliation | Academia | Navish Kumar (University of Basel, Department of Mathematics and Computer Science, Basel, Switzerland); Thomas Möllenhoff (RIKEN Center for AI Project, Tokyo, Japan); Mohammad Emtiyaz Khan (RIKEN Center for AI Project, Tokyo, Japan); Aurelien Lucchi (University of Basel, Department of Mathematics and Computer Science, Basel, Switzerland)
Pseudocode | Yes | "Algorithm 1: Square-Root Variational Newton (SR-VN)"
Open Source Code | No | The paper does not contain any explicit statement about releasing the code, nor does it provide a link to a code repository for the methodology described in this work. The URL provided in the paper points to datasets, not code.
Open Datasets | Yes | "Datasets. We consider eight different LIBSVM datasets (Chang & Lin, 2011), consisting of five small and three large-scale datasets. The description of these datasets is provided in Table 2 of Appendix F. ... Available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/"
Dataset Splits | Yes | "Here, we show results for two small-scale datasets (see Figure 2), namely Diabetes-scale (n = 768, d = 8, ntrain = 614) and Mushrooms (n = 8,124, d = 112, ntrain = 6,499). For large-scale datasets (see Figure 3), we show the MNIST (n = 70,000, d = 784, ntrain = 60,000), Covtype-scale (n = 581,012, d = 54, ntrain = 500,000), and Phishing (n = 11,055, d = 68, ntrain = 8,844) datasets. ... Table 2: Dataset Statistics and Model Hyperparameters (includes n, d, ntrain)"
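The quoted statistics imply a fixed train/test partition for each dataset. As a sanity check, the test-set sizes can be derived directly from the reported n and ntrain values; the snippet below is an illustrative sketch using the numbers quoted above (the "Mushrooms" training size is read as 6,499, an assumption based on the reported total of 8,124).

```python
# Sketch: derive the train/test partition sizes implied by the dataset
# statistics quoted above. The Mushrooms ntrain of 6,499 is an assumption
# inferred from the reported figures, not an exact quote from the paper.
datasets = {
    "Diabetes-scale": {"n": 768, "d": 8, "n_train": 614},
    "Mushrooms": {"n": 8124, "d": 112, "n_train": 6499},
    "MNIST": {"n": 70_000, "d": 784, "n_train": 60_000},
    "Covtype-scale": {"n": 581_012, "d": 54, "n_train": 500_000},
    "Phishing": {"n": 11_055, "d": 68, "n_train": 8_844},
}

def split_sizes(stats):
    """Return (train, test) sizes implied by the reported statistics."""
    return stats["n_train"], stats["n"] - stats["n_train"]

for name, stats in datasets.items():
    n_tr, n_te = split_sizes(stats)
    print(f"{name}: train={n_tr}, test={n_te} ({n_tr / stats['n']:.0%} train)")
```

This makes the implied held-out set sizes explicit, e.g. 154 test points for Diabetes-scale and 10,000 for MNIST.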
Hardware Specification | Yes | "All experiments are performed on NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies | No | The paper mentions using "modern automatic-differentiation methods" and the LIBSVM datasets, but does not specify any software packages with version numbers that would be required to reproduce the experiments.
Experiment Setup | Yes | "For all experiments, we first use grid search to tune model hyper-parameters, where the search is performed in a specific range of values. The resultant values were then fixed during our experiments. The statistics of the datasets and the model hyper-parameters used are given in Table 2."
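The quoted protocol (grid-search a hyper-parameter over a range, then fix the best value for all subsequent runs) can be sketched as follows. The model, grid, and synthetic data here are illustrative assumptions; the paper's actual model hyper-parameters and ranges are given in its Table 2.

```python
# Minimal sketch of the tuning protocol quoted above: evaluate each
# candidate hyper-parameter on a held-out split, then fix the best one.
# Ridge regression and the grid below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=200)

X_tr, y_tr = X[:150], y[:150]        # training split
X_val, y_val = X[150:], y[150:]      # validation split used for tuning

def fit_ridge(X, y, lam):
    """Closed-form ridge regression weights for regularizer lam."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

grid = [1e-4, 1e-2, 1.0, 100.0]      # hypothetical search range
val_err = {lam: np.mean((X_val @ fit_ridge(X_tr, y_tr, lam) - y_val) ** 2)
           for lam in grid}
best_lam = min(val_err, key=val_err.get)  # fixed for all subsequent runs
print("best lambda:", best_lam)
```

The key design point mirrored here is that tuning happens once, up front, and the selected values are then frozen for every reported experiment.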