Constrained Langevin Algorithms with L-mixing External Random Variables
Authors: Yuping Zheng, Andrew Lamperski
Venue: NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in 1-Wasserstein distance for non-convex losses with L-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems. (A hedged restatement of this bound appears below the table.) |
| Researcher Affiliation | Academia | Yuping Zheng, Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities, Minneapolis, MN 55455; Andrew Lamperski, Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities, Minneapolis, MN 55455 |
| Pseudocode | No | The paper defines the constrained Langevin algorithm with mathematical update equations (e.g., equation (1), an update built from $x_k - \eta \nabla_x f(x_k, z_k) + \sqrt{2\eta}\, w_k$ with Gaussian noise $w_k$) and auxiliary processes, but it does not contain structured pseudocode or algorithm blocks clearly labeled as 'Algorithm' or 'Pseudocode'. (An illustrative sketch of such an update appears below the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There are no explicit statements about code release, repository links, or mentions of code in supplementary materials. |
| Open Datasets | No | The paper does not discuss or use any datasets, as it is a purely theoretical work. Therefore, no information about publicly available or open datasets is provided. |
| Dataset Splits | No | The paper does not describe any experimental setup involving dataset splits for training, validation, or testing, as it is a purely theoretical paper. |
| Hardware Specification | No | The paper does not provide specific hardware details, as it is a purely theoretical work and does not involve running experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, as it is a purely theoretical work and does not describe any experimental implementation. |
| Experiment Setup | No | The paper does not provide specific experimental setup details, such as concrete hyperparameter values or training configurations, as it is a theoretical paper that does not describe empirical experiments. |
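
Below is a hedged restatement of the deviation bound quoted in the Research Type row. It assumes the standard reading of such results, namely that the law of the iterate after $T$ steps is compared to a constrained Gibbs target on the polyhedron $\mathcal{K}$; the constant $C$, the inverse temperature $\beta$, and the averaged loss $\bar f$ are notational assumptions made here for illustration, not copied from the paper.

```latex
% Hedged restatement of the quoted result (notation assumed, not taken from the paper):
% after T iterations of the constrained Langevin algorithm with L-mixing data z_k,
% the 1-Wasserstein deviation from a constrained Gibbs measure pi is bounded as
\[
  W_1\big(\mathcal{L}(x_T),\, \pi\big) \;\le\; C\,\frac{\log T}{\sqrt{T}},
  \qquad
  \pi(dx) \;\propto\; e^{-\beta \bar f(x)}\,\mathbf{1}_{\mathcal{K}}(x)\,dx,
\]
% which is the O(T^{-1/2} log T) rate quoted in the abstract.
```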
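The Pseudocode row notes that the paper specifies the constrained Langevin algorithm only through equations. The sketch below is a minimal illustrative implementation of a generic projected Langevin update with streaming data, not the authors' exact construction: the box projection, the quadratic loss, and the AR(1) data stream standing in for an L-mixing sequence are all assumptions made for illustration.

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto a box {lower <= x <= upper}; a simple stand-in
    for projection onto a general polyhedral constraint set."""
    return np.clip(x, lower, upper)

def grad_f(x, z):
    """Illustrative stochastic gradient of a loss f(x, z); a quadratic placeholder."""
    return x - z

def constrained_langevin(T=10_000, eta=1e-3, dim=2, rho=0.9, seed=0):
    """Run T projected Langevin steps of the form
        x_{k+1} = Pi_K( x_k - eta * grad_x f(x_k, z_k) + sqrt(2 * eta) * w_k ),
    where z_k is a dependent (AR(1)) data stream standing in for an L-mixing
    sequence and w_k is standard Gaussian noise."""
    rng = np.random.default_rng(seed)
    lower, upper = -np.ones(dim), np.ones(dim)   # assumed box constraints
    x = np.zeros(dim)
    z = np.zeros(dim)
    for _ in range(T):
        z = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(dim)  # mixing data
        w = rng.standard_normal(dim)                                  # Langevin noise
        x = project_box(x - eta * grad_f(x, z) + np.sqrt(2 * eta) * w, lower, upper)
    return x

if __name__ == "__main__":
    print(constrained_langevin())
```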