Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Multilevel Monte Carlo Variational Inference

Authors: Masahiro Fujisawa, Issei Sato

JMLR 2021

Reproducibility Variables
Research Type: Experimental
LLM Response: "Finally, we confirm that our method achieves faster convergence and reduces the variance of the gradient estimator compared with other methods through experimental comparisons with baseline methods using several benchmark datasets." ... "In this section, we carried out experiments to analyze the optimization and prediction performance of our method using three models, which have become benchmark experiments in the context of variance reduction: hierarchical linear regression (HLR), Bayesian logistic regression (BLR), and Bayesian neural network (BNN) regression (Miller et al., 2017; Buchholz et al., 2018), and we compared the results with those of existing methods."
Researcher Affiliation: Academia
LLM Response: Masahiro Fujisawa (EMAIL), Graduate School of Frontier Sciences, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8561, Japan; Center for Advanced Intelligence Project, RIKEN, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. Issei Sato (EMAIL), Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
Pseudocode: Yes
LLM Response: Algorithm 1, "Multilevel Monte Carlo Variational Inference"; Algorithm 2, "Multilevel Monte Carlo Variational Inference in Unbounded Case of T(ϵ; λ)".
Open Source Code: No
LLM Response: The paper states that the framework can be easily implemented in modern inference libraries such as PyTorch, and it uses external tools (the R package randtoolbox and optuna), but it does not provide access to source code for the specific methodology described in the paper itself.
Open Datasets: Yes
LLM Response: "We applied BLR to the breast cancer dataset in the UCI Machine Learning Repository [1]." ... "We applied BNN regression to the wine-quality-red dataset, which is included in the wine-quality dataset in the UCI Machine Learning Repository [2]." ... "We also applied this model to the Fashion-MNIST dataset for a multilabel classification task."
[1] https://archive.ics.uci.edu/ml/datasets/Breast+Cancer
[2] https://archive.ics.uci.edu/ml/datasets/Wine+Quality
Dataset Splits: Yes
LLM Response: "In each experiment, the dataset was randomly divided into training data and test data at the ratio of 8:2. Each experiment was repeated 10 times."
Hardware Specification: No
LLM Response: No specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) are provided in the paper.
Software Dependencies: No
LLM Response: "We note that our framework can be easily implemented in modern inference libraries such as Pytorch (Paszke et al., 2019)." ... "As described by Buchholz et al. (2018), the RQMC samples were generated using the R package randtoolbox." ... "The initial learning rate and the hyperparameters of the learning rate scheduler (i.e., β and r) were optimized through the Tree-structured Parzen Estimator (TPE) sampler in optuna (Akiba et al., 2019)..." Although software names are mentioned, no version numbers are given for PyTorch, randtoolbox, or optuna.
Experiment Setup: Yes
LLM Response: "For the optimization of variational free energy, we used Adam for the MC- and RQMC-based methods and the SGD optimizer with the learning rate scheduler η for our method. Furthermore, we adopted a step-based decay function as the learning rate scheduler. The initial learning rate and the hyperparameters of the learning rate scheduler (i.e., β and r) were optimized through the Tree-structured Parzen Estimator (TPE) sampler in optuna (Akiba et al., 2019) in 50 trials, where β and r are the decay and drop-rate parameters, respectively. The selected parameters are summarized in Appendix G (Table 3)."
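The 8:2 split protocol quoted under Dataset Splits can be sketched in a few lines of Python. The function name, seed handling, and dataset here are illustrative assumptions, not the authors' code.

```python
import random

def train_test_split_80_20(data, seed):
    """Randomly shuffle a dataset and split it 8:2 into train/test,
    matching the paper's stated protocol (10 repetitions)."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    cut = int(0.8 * len(data))
    train = [data[i] for i in indices[:cut]]
    test = [data[i] for i in indices[cut:]]
    return train, test

# Repeat the experiment 10 times, each with a fresh random split.
splits = [train_test_split_80_20(list(range(100)), seed=s) for s in range(10)]
```

Each repetition reshuffles before splitting, so the 10 runs see different train/test partitions of the same data.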
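The step-based decay schedule described in the Experiment Setup quote (drop the learning rate by a factor β every r steps) can be sketched as follows. The function name and the concrete values of the initial learning rate, β, and r are illustrative assumptions; the paper's tuned values are listed in its Appendix G (Table 3).

```python
def step_decay_lr(initial_lr, beta, r, step):
    """Step-based decay: multiply the learning rate by beta once
    every r steps (beta = decay, r = drop rate)."""
    return initial_lr * (beta ** (step // r))

# Illustrative values only: lr stays at 0.1 for the first 100 steps,
# halves to 0.05 at step 100, and halves again to 0.025 at step 200.
lrs = [step_decay_lr(0.1, 0.5, 100, t) for t in (0, 99, 100, 250)]
```

In a tuning setup like the paper's, the initial learning rate together with β and r would be the search space handed to optuna's TPE sampler over 50 trials.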