Rethinking Conditional Diffusion Sampling with Progressive Guidance
Authors: Anh-Dung Dinh, Daochang Liu, Chang Xu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "6 Experiments", "Table 1: ProG helps to achieve better IS/FID/sFID in general.", "Extensive experiments are conducted on CIFAR10, ImageNet (64x64, 128x128, 256x256)." |
| Researcher Affiliation | Academia | "Anh-Dung Dinh, School of Computer Science, The University of Sydney, dinhanhdung1996@gmail.com", "Daochang Liu, School of Computer Science, The University of Sydney, daochang.liu@sydney.edu.au", "Chang Xu, School of Computer Science, The University of Sydney, c.xu@sydney.edu.au" |
| Pseudocode | No | The paper does not include a dedicated section or figure explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Source code is available at: https://github.com/dungdinhanh/prog-guided-diffusion. |
| Open Datasets | Yes | "Extensive experiments are conducted on CIFAR10, ImageNet (64x64, 128x128, 256x256)." |
| Dataset Splits | No | The paper uses ImageNet and CIFAR10 but does not explicitly specify the train/validation/test splits (e.g., percentages or sample counts) used for its own experiments. |
| Hardware Specification | No | The paper mentions that 'The AI training platform supporting this work was provided by High-Flyer AI (Hangzhou High-Flyer AI Fundamental Research Co., Ltd.)' but does not specify any particular hardware components like GPU or CPU models. |
| Software Dependencies | No | The paper references various models and frameworks (e.g., ADM, IDDPM, CLIP) but does not list software dependencies with version numbers (e.g., Python or library versions) needed for reproducibility. |
| Experiment Setup | Yes | "Setup. Extensive experiments are conducted on CIFAR10, ImageNet (64x64, 128x128, 256x256). We denote Progressive Guidance (ProG) as our proposed method, which is first evaluated on ADM [11] and IDDPM [3] to verify our claims on improving the performance of the vanilla guidance method.", "Table 5: γ sensitivity comparison.", "When increasing the guidance scale, our proposed method mostly has a slower degeneration rate in FID and Recall than the vanilla guidance." |