ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories
Authors: Zijian Zhang, Zhou Zhao, Jun Yu, Qi Tian
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive qualitative and quantitative experiments on image synthesis demonstrate the feasibility and effectiveness of ShiftDDPMs. In this section, we conduct several conditional image synthesis experiments with ShiftDDPMs. We further evaluate ShiftDDPMs on CIFAR-10 (Krizhevsky and Hinton 2009). Table 1 shows Inception Score, FID, and negative log-likelihood for these models. We conduct text-to-image (text2img) experiments on the CUB dataset (Wah et al. 2011). |
| Researcher Affiliation | Collaboration | Zijian Zhang (1), Zhou Zhao (1)*, Jun Yu (2), Qi Tian (3). (1) Department of Computer Science and Technology, Zhejiang University; (2) School of Computer Science and Technology, Hangzhou Dianzi University; (3) Huawei Cloud & AI |
| Pseudocode | Yes | Algorithm 1: Training; Algorithm 2: Sampling |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available (e.g., on GitHub, a supplementary website, or an appendix). |
| Open Datasets | Yes | We conduct some experiments, shown in Figure 1. Concretely, we train an unconditional DDPM and a conditional one on MNIST (LeCun et al. 1998), respectively. We further evaluate ShiftDDPMs on CIFAR-10 (Krizhevsky and Hinton 2009). We conduct attribute-to-image experiments (Yan et al. 2016) on the LFW dataset (Huang et al. 2008). We conduct image inpainting experiments using the Irregular Mask Dataset (Liu et al. 2018) with three image datasets: CelebA-HQ (Liu et al. 2015), LSUN-church (Yu et al. 2015), and Places2 (Zhou et al. 2017). We conduct text-to-image (text2img) experiments on the CUB dataset (Wah et al. 2011). |
| Dataset Splits | No | The paper mentions training and testing on datasets but does not explicitly specify the proportions or sizes for training, validation, and test splits (e.g., "80/10/10 split" or specific sample counts for each split), nor does it reference a specific, standard validation split for the datasets used. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments, such as specific GPU or CPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed to reproduce the experiments. |
| Experiment Setup | No | The paper states, "Full implementation details of all experiments can be found in Appendix B." While details exist, they are deferred to an appendix and are not present in the main text of the paper. |
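For context on the Pseudocode row: the paper's Algorithms 1 and 2 train and sample a DDPM whose forward trajectory is shifted by a condition-dependent term. The sketch below illustrates that core idea only; the exact shift schedule `k_t`, the condition encoder, and the variable names are assumptions, not the paper's implementation.

```python
import math

def shifted_forward(x0, shift, alpha_bar_t, k_t, eps):
    # Closed-form sample of a shifted forward diffusion step (a sketch):
    #   x_t = sqrt(abar_t) * x0 + k_t * shift + sqrt(1 - abar_t) * eps
    # `shift` stands in for a condition embedding; `k_t` is a hypothetical
    # per-step shift coefficient. With k_t = 0 this reduces to plain DDPM.
    a = math.sqrt(alpha_bar_t)
    b = math.sqrt(1.0 - alpha_bar_t)
    return [a * x + k_t * s + b * e for x, s, e in zip(x0, shift, eps)]

# Sanity check: with no noise (eps = 0) and alpha_bar_t = 1, the sample
# is just x0 plus the scaled shift.
xt = shifted_forward([0.5, -0.2], [1.0, 1.0],
                     alpha_bar_t=1.0, k_t=0.1, eps=[0.0, 0.0])
```

During training, a network would be fit to predict `eps` from `xt`, `t`, and the condition; sampling (Algorithm 2) would then reverse the shifted trajectory step by step.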