Efficient constrained sampling via the mirror-Langevin algorithm
Authors: Kwangjun Ahn, Sinho Chewi
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also corroborate our theoretical findings with numerical experiments. We also perform a numerical experiment to compare the practical performance of MLA with PLA. ... we plot the error ∥θ̂_k − θ*∥₂ in Figure 2, averaged over 10 trials. |
| Researcher Affiliation | Academia | Kwangjun Ahn Department of EECS Massachusetts Institute of Technology Cambridge, MA 02139 kjahn@mit.edu Sinho Chewi Department of Mathematics Massachusetts Institute of Technology Cambridge, MA 02139 schewi@mit.edu |
| Pseudocode | Yes | The mirror-Langevin algorithm (MLA): X_{k+1/2} := arg min_{x∈Q} { η⟨∇V(X_k), x⟩ + D_φ(x, X_k) } (MLA:1); X_{k+1} := ∇φ*(W_η), where dW_t = √2 [∇²φ(∇φ*(W_t))]^{1/2} dB_t, W_0 = ∇φ(X_{k+1/2}) (MLA:2). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | No | The paper uses a synthetically generated dataset: 'we generate 1000 i.i.d. pairs (Xi, Yi) where Xi is sampled uniformly from the ℓ1 ball and Yi is generated from Xi according to (5.1) with θ = θ*.' |
| Dataset Splits | No | The paper describes generating synthetic data for numerical experiments but does not provide specific details on training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | We generate 30 samples using both MLA and PLA (both with step size η = 0.005). At each iteration, we average the samples to obtain an estimate θ̂_k for the posterior mean, and we plot the error ∥θ̂_k − θ*∥₂ in Figure 2, averaged over 10 trials. We implement MLA:2 by performing 10 inner iterations of an Euler-Maruyama discretization. |
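The two-step structure quoted above (a mirror-descent half-step, then an Euler-Maruyama discretization of the mirror diffusion) can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: it uses the entropic mirror map φ(x) = Σ xᵢ log xᵢ on the positive orthant with a simple Gaussian potential V(x) = ∥x∥²/2, whereas the paper's experiment uses the ℓ1 ball; the step size η = 0.005 and the 10 inner iterations match the quoted setup.

```python
import numpy as np

# Entropic mirror map on the positive orthant (an illustrative choice,
# not the mirror map from the paper's l1-ball experiment):
#   grad(phi)(x) = 1 + log x,  grad(phi*)(w) = exp(w - 1),
#   Hess(phi)(x) = diag(1/x),  so [Hess(phi)(x)]^{1/2} = diag(1/sqrt(x)).
rng = np.random.default_rng(0)

def grad_phi(x):      return 1.0 + np.log(x)
def grad_phi_star(w): return np.exp(w - 1.0)
def sqrt_hess_phi(x): return 1.0 / np.sqrt(x)  # diagonal entries

def mla_step(x, grad_V, eta=0.005, inner=10):
    # (MLA:1) mirror-descent half-step:
    #   grad(phi)(X_{k+1/2}) = grad(phi)(X_k) - eta * grad(V)(X_k)
    x_half = grad_phi_star(grad_phi(x) - eta * grad_V(x))
    # (MLA:2) Euler-Maruyama discretization of
    #   dW_t = sqrt(2) [Hess(phi)(grad(phi*)(W_t))]^{1/2} dB_t,
    #   W_0 = grad(phi)(X_{k+1/2}),  run for total time eta in `inner` steps
    w = grad_phi(x_half)
    dt = eta / inner
    for _ in range(inner):
        xi = rng.standard_normal(x.shape)
        w = w + np.sqrt(2.0 * dt) * sqrt_hess_phi(grad_phi_star(w)) * xi
        w = np.clip(w, -30.0, 30.0)  # numerical safeguard, not part of MLA
    return grad_phi_star(w)

# Toy target: V(x) = ||x||^2 / 2 restricted to the positive orthant.
grad_V = lambda x: x
x = np.ones(2)
for _ in range(100):
    x = mla_step(x, grad_V)
```

Because each iterate is produced by ∇φ*, the samples remain strictly inside the constraint set by construction, which is the point of the mirror-map formulation.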