Conditionally Strongly Log-Concave Generative Models
Authors: Florentin Guth, Etienne Lempereur, Joan Bruna, Stéphane Mallat
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results are shown for physical fields such as the φ⁴ model and weak-lensing convergence maps at higher resolution than in previous works. |
| Researcher Affiliation | Academia | 1 Département d'informatique, École Normale Supérieure, Paris, France; 2 Courant Institute of Mathematical Sciences and Center for Data Science, New York University, USA; 3 Collège de France, Paris, France, and Flatiron Institute, New York, USA. |
| Pseudocode | Yes | Algorithm 1 Score matching for exponential families with CSLC distributions. Algorithm 2 MALA sampling from CSLC distributions. |
| Open Source Code | Yes | The code to reproduce our numerical experiments is available at https://github.com/Elempereur/WCRG. |
| Open Datasets | Yes | We used down-sampled versions of the simulated convergence maps from the Columbia Lensing Group (http://columbialensing.org/; Zorrilla Matilla et al., 2016; Gupta et al., 2018) as training data. |
| Dataset Splits | No | The paper mentions generating images of specific sizes and using them as training data, but it does not provide explicit train/validation/test splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide explicit hardware specifications (e.g., GPU/CPU models or memory) for running its experiments. |
| Software Dependencies | No | The paper mentions PyTorch and PyWavelets but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The MALA step sizes δ_j are adjusted to obtain an optimal acceptance rate of 0.57. Depending on the scale j, the stationary distribution is reached in T_j ≈ 20–400 iterations from a white-noise initialization. A qualitative stopping criterion was used, based on how well the histograms and power spectra matched. |
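For context on the experiment-setup row, the sampler described there (MALA with a step size tuned toward a ~0.57 acceptance rate, starting from white noise) can be sketched as follows. This is a minimal generic illustration, not the paper's Algorithm 2: the function names, the fixed step size, and the Gaussian target in the usage example are all illustrative assumptions.

```python
import numpy as np


def mala_sample(log_prob, score, x0, step, n_iter, rng):
    """Run one Metropolis-adjusted Langevin (MALA) chain.

    Returns the final state and the empirical acceptance rate.
    In the paper, step sizes are tuned per scale so the acceptance
    rate is near the 0.57 optimum; here `step` is fixed by hand.
    """
    x = x0.copy()
    accepted = 0
    for _ in range(n_iter):
        # Langevin proposal: drift along the score plus Gaussian noise.
        noise = rng.standard_normal(x.shape)
        prop = x + step * score(x) + np.sqrt(2.0 * step) * noise
        # Log densities of the asymmetric proposal kernel q(. | .).
        fwd = -np.sum((prop - x - step * score(x)) ** 2) / (4.0 * step)
        bwd = -np.sum((x - prop - step * score(prop)) ** 2) / (4.0 * step)
        # Metropolis-Hastings correction.
        log_alpha = log_prob(prop) - log_prob(x) + bwd - fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
            accepted += 1
    return x, accepted / n_iter


if __name__ == "__main__":
    # Illustrative target: a 1D Gaussian N(2, 1); white-noise init at 0.
    rng = np.random.default_rng(0)
    log_prob = lambda x: -0.5 * np.sum((x - 2.0) ** 2)
    score = lambda x: -(x - 2.0)
    x_final, acc_rate = mala_sample(log_prob, score, np.zeros(1), 0.5, 200, rng)
    print(x_final, acc_rate)
```

In practice one would monitor the acceptance rate and adjust `step` (larger steps lower the rate, smaller steps raise it) until it sits near the target value, then assess convergence with summary statistics such as histograms and power spectra, as the row above describes.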