Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Infinite-Dimensional Diffusion Models
Authors: Jakiw Pidstrigach, Youssef Marzouk, Sebastian Reich, Sven Wang
JMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate these results both theoretically and empirically, by applying the algorithms to data distributions on manifolds and to distributions arising in Bayesian inverse problems or simulation-based inference." (Abstract) and "In this section, we illustrate our results through numerical experiments. We sample functions defined on [0, 1], and we discretize this spatial domain into a uniform grid with D = 256 evenly spaced points." (Section 7. Numerical Illustrations) |
| Researcher Affiliation | Academia | "Jakiw Pidstrigach [...] Institut für Mathematik Universität Potsdam", "Youssef Marzouk [...] Massachusetts Institute of Technology", "Sebastian Reich [...] Institut für Mathematik Universität Potsdam", "Sven Wang [...] Massachusetts Institute of Technology" |
| Pseudocode | Yes | "Algorithm 1 Training" and "Algorithm 2 Sampling" are explicitly provided in Section 4. |
| Open Source Code | No | The paper mentions implementation details like "Our experiments were implemented in JAX, and we used the U-Net architecture from Song et al. (2021) for the neural network." (Appendix A), but does not provide an explicit statement about releasing its own source code or a link to a repository. |
| Open Datasets | No | The paper uses simulated data generated for the numerical experiments, stating "We sample functions defined on [0, 1]..." and "We draw N = 50 000 samples from π^α_data". There is no concrete access information (link, DOI, or repository) provided for these generated datasets, nor does it refer to established public datasets. |
| Dataset Splits | No | The paper describes generating training samples (e.g., "We draw N = 50 000 samples from π^α_data" in Section 7.2), but it does not specify how these samples are split into training, validation, or test sets for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. It only mentions the software framework used: "Our experiments were implemented in JAX..." (Appendix A). |
| Software Dependencies | No | The paper states "Our experiments were implemented in JAX..." (Appendix A), but it does not provide specific version numbers for JAX or any other software libraries, frameworks, or dependencies used in the experiments. |
| Experiment Setup | Yes | The paper provides several experimental setup details in Appendix A, including: "We used the time-change function β(t) as in Song et al. (2021), i.e., β(t) = 0.001 + t(20 - 0.001).", "We discretized the unit interval [0, 1] into M = 1000 evenly spaced points for training and generation.", and "We added εreg Id onto the covariance matrices for numerical stability, where εreg = 0.0001." |
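The setup parameters quoted above (the linear time-change function β(t), the M = 1000 time discretization, and the εreg covariance regularization) can be sketched as plain code. This is a minimal illustration of the quoted values only, not the authors' implementation; the covariance matrix here is a hypothetical placeholder, and D = 256 is taken from the spatial grid quoted in the Research Type row.

```python
import numpy as np

# Time-change function from Appendix A (following Song et al., 2021):
# beta(t) = 0.001 + t * (20 - 0.001) on t in [0, 1].
BETA_MIN, BETA_MAX = 0.001, 20.0

def beta(t):
    """Linear noise schedule beta(t) for t in [0, 1]."""
    return BETA_MIN + t * (BETA_MAX - BETA_MIN)

# "We discretized the unit interval [0, 1] into M = 1000 evenly spaced
# points for training and generation." (Appendix A)
M = 1000
t_grid = np.linspace(0.0, 1.0, M)
beta_grid = beta(t_grid)

# "We added eps_reg * Id onto the covariance matrices for numerical
# stability, where eps_reg = 0.0001." (Appendix A)
# The covariance matrix below is a placeholder for illustration; D = 256
# matches the paper's spatial grid.
D = 256
eps_reg = 1e-4
cov = np.zeros((D, D))          # hypothetical covariance matrix
cov_reg = cov + eps_reg * np.eye(D)
```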