Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates
Authors: Adil Salim, Dmitry Kovalev, Peter Richtárik
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the efficiency of our sampling technique through numerical simulations on a Bayesian learning task. We illustrate the promise of our approach numerically by performing experiments with SPLA. In our simulations, we represent the functional F = H + EU as a function of CPU time while running the algorithms. |
| Researcher Affiliation | Academia | King Abdullah University of Science and Technology, Thuwal, Saudi Arabia. Also affiliated with Moscow Institute of Physics and Technology, Dolgoprudny, Russia. |
| Pseudocode | Yes | SPLA (see Algorithm 1 in Section 4). Algorithm 1 Stochastic Proximal Langevin Algorithm (SPLA) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | Four real-life graphs from the dataset [25] are considered: the Facebook graph (4,039 nodes and 88,234 edges, extracted from the Facebook social network), the Youtube graph (1,134,890 nodes and 2,987,624 edges, extracted from the social network included in the Youtube website), the Amazon graph (334,863 nodes representing products, linked by 925,872 edges) and the DBLP graph (a co-authorship network of 317,080 nodes and 1,049,866 edges). [25] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014. |
| Dataset Splits | No | The paper mentions using specific graphs for experiments but does not provide details on training, validation, or test dataset splits (e.g., percentages, sample counts, or methodology for splitting). |
| Hardware Specification | Yes | The plots in Figure 2 provide simulations of the algorithms on our machine (using one thread of a 2,800 MHz CPU and 256GB RAM). |
| Software Dependencies | No | The paper does not mention specific software libraries or tools with version numbers used for the experiments. |
| Experiment Setup | Yes | In Figure 1, we take γ = 10 and do 10^5 iterations of both algorithms. The batch parameter n is equal to 400. The parameters λ and σ are chosen such that the log likelihood term and the Total Variation regularization term have the same weight. |
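Since no source code was released, the flavor of the method can be conveyed by a minimal sketch. SPLA combines an unadjusted Langevin step on the smooth potential H with proximal steps on the nonsmooth potentials. The sketch below is an assumption-laden illustration, not the authors' implementation: `grad_H`, the list of proximal operators, and the `prox_l1` example potential are all hypothetical stand-ins.

```python
import numpy as np

def prox_l1(x, t):
    # Proximal operator of t * ||x||_1 (soft-thresholding); used here as a
    # stand-in for the prox of one nonsmooth potential U_i.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def spla_step(x, grad_H, proxes, gamma, rng):
    # One sketch iteration targeting exp(-(H + sum of nonsmooth potentials)):
    # a Langevin step on the smooth part H (gradient step plus Gaussian noise),
    # followed by proximal steps handling the nonsmooth potentials.
    x = x - gamma * grad_H(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
    for prox in proxes:
        x = prox(x, gamma)
    return x
```

For instance, with `grad_H = lambda z: z` (a standard Gaussian smooth part) and `proxes = [prox_l1]`, iterating `spla_step` produces samples approximately distributed according to exp(-(||x||^2/2 + ||x||_1)), with bias controlled by the step size γ.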