On Calibrating Diffusion Probabilistic Models
Authors: Tianyu Pang, Cheng Lu, Chao Du, Min Lin, Shuicheng Yan, Zhijie Deng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on multiple datasets to empirically validate our proposal. |
| Researcher Affiliation | Collaboration | 1Sea AI Lab, Singapore 2Department of Computer Science, Tsinghua University 3Qing Yuan Research Institute, Shanghai Jiao Tong University |
| Pseudocode | No | No pseudocode or algorithm blocks are provided in the main text or appendices. |
| Open Source Code | Yes | Our code is available at https://github.com/thudzj/Calibrated-DPMs. |
| Open Datasets | Yes | We evaluate our calibration tricks on the CIFAR-10 [25] and CelebA 64×64 [27] datasets. |
| Dataset Splits | No | The paper refers to using "training data" but does not provide specific train/validation/test splits (percentages, counts, or citations to predefined splits). |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments (e.g., GPU/CPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper mentions software like PyTorch and TensorFlow checkpoints, and the Adam optimizer, but does not specify their version numbers or other software dependencies with explicit versions. |
| Experiment Setup | Yes | In accordance with the recommendation, we set the end time of DPM-Solver to 10⁻³ when the number of sampling steps is less than 15, and to 10⁻⁴ otherwise. Additional details can be found in Lu et al. [29]. |
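The end-time rule quoted in the Experiment Setup row can be sketched as a small helper. This is a minimal illustration of the stated setup, not code from the authors' repository; the function name is hypothetical.

```python
def dpm_solver_end_time(num_sampling_steps: int) -> float:
    """Pick DPM-Solver's end time per the paper's reported setup.

    Hypothetical helper: the paper sets the end time to 1e-3 when
    fewer than 15 sampling steps are used, and to 1e-4 otherwise.
    """
    return 1e-3 if num_sampling_steps < 15 else 1e-4


# Example: few-step sampling uses the larger end time.
print(dpm_solver_end_time(10))  # 0.001
print(dpm_solver_end_time(20))  # 0.0001
```

The threshold of 15 steps and the two end-time values are taken directly from the table above; any further sampler configuration would follow Lu et al. [29].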