Decentralized Langevin Dynamics for Bayesian Learning

Authors: Anjaly Parayil, He Bai, Jemin George, Prudhvi Gurram

NeurIPS 2020

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental
The performance of the proposed algorithm is evaluated on a wide variety of machine learning tasks. The empirical results show that the performance of individual agents with locally available data is on par with the centralized setting, with considerable improvement in the convergence rate. The paper applies the proposed algorithm to perform decentralized Bayesian learning for Gaussian mixture modeling, logistic regression, and classification, and empirically compares it to centralized ULA (C-ULA).

Researcher Affiliation: Collaboration
Anjaly Parayil (1), He Bai (2), Jemin George (1), and Prudhvi Gurram (1, 3). (1) CCDC Army Research Laboratory, Adelphi, MD 20783, USA; (2) Oklahoma State University, Stillwater, OK 74078, USA; (3) Booz Allen Hamilton, McLean, VA 22102, USA.

Pseudocode: Yes
The paper provides Algorithm 1, Decentralized ULA (D-ULA).

Open Source Code: No
The paper does not provide explicit statements or links to open-source code for the described methodology.

Open Datasets: Yes
The paper compares D-ULA and C-ULA for Bayesian inference of logistic regression models using the a9a dataset, stated to be available at the UCI machine learning repository (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/a9a); the dataset contains 32561 observations and 123 parameters. For classification, it uses the MNIST data set, containing 60000 grayscale images of 10 digits (0-9) for training and 10000 images for testing, and the SVHN data set (http://ufldl.stanford.edu/housenumbers/), which is similar to MNIST but with color images of 10 digits (0-9).

Dataset Splits: No
The paper describes training and testing splits (e.g., "random 80% of data for training and the remaining 20% for testing" for a9a, and 60000 grayscale training images with 10000 test images for MNIST) but does not explicitly mention a separate validation split.

Hardware Specification: No
The paper mentions running experiments on a "network of five agents" but does not provide specific hardware details such as CPU/GPU models, memory specifications, or accelerator types.

Software Dependencies: No
The paper does not list specific software dependencies with version numbers (libraries, frameworks, or language runtimes) used in the experiments.

Experiment Setup: No
Additional details of all the experiments, including step sizes and numbers of epochs, are provided in the supplementary material (see S6); the main paper does not explicitly provide these details.
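To make the pseudocode finding concrete, below is a minimal sketch of a decentralized-ULA-style iteration. It is not the paper's Algorithm 1: the function name, the fixed step sizes `alpha` and `beta`, and the mixing matrix `W` are all illustrative assumptions (the paper's step-size schedules are in its supplementary material). The sketch only shows the general pattern such methods share: each agent combines a consensus pull toward its neighbors, a gradient step on its local log-posterior, and injected Gaussian noise.

```python
import numpy as np

def decentralized_ula_step(theta, grad_log_posteriors, W, alpha, beta, rng):
    """One illustrative D-ULA-style iteration over all agents.

    theta:               (n_agents, dim) current iterate of each agent
    grad_log_posteriors: per-agent gradient functions of the local log-posterior
    W:                   (n_agents, n_agents) symmetric nonnegative mixing weights
    alpha, beta:         gradient and consensus step sizes (hypothetical constants)
    """
    n_agents, dim = theta.shape
    new_theta = np.empty_like(theta)
    for i in range(n_agents):
        # Consensus term: pulls agent i toward its neighbors' iterates.
        consensus = sum(W[i, j] * (theta[i] - theta[j]) for j in range(n_agents))
        # Local Langevin drift computed from agent i's own data only.
        drift = alpha * grad_log_posteriors[i](theta[i])
        # Injected Gaussian noise with the standard ULA scaling sqrt(2 * alpha).
        noise = np.sqrt(2.0 * alpha) * rng.standard_normal(dim)
        new_theta[i] = theta[i] - beta * consensus + drift + noise
    return new_theta

# Toy usage: five agents sampling a standard Gaussian (grad log p(x) = -x),
# mirroring the paper's five-agent network size but nothing else.
rng = np.random.default_rng(0)
n_agents, dim = 5, 2
grads = [lambda th: -th] * n_agents
W = (np.ones((n_agents, n_agents)) - np.eye(n_agents)) / (n_agents - 1)
theta = rng.standard_normal((n_agents, dim))
for _ in range(500):
    theta = decentralized_ula_step(theta, grads, W, alpha=0.05, beta=0.1, rng=rng)
```

The update is deliberately the simplest possible instance; the paper's analysis additionally covers decaying step-size sequences and the conditions on `W` needed for consensus.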