UDPM: Upsampling Diffusion Probabilistic Models
Authors: Shady Abu-Hussein, Raja Giryes
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We formalize the Markovian diffusion processes of UDPM and demonstrate its generation capabilities on the popular FFHQ, AFHQv2, and CIFAR10 datasets. ... In this section, we present the evaluation of UDPM under multiple scenarios. We tested our method on CIFAR10, FFHQ, and AFHQv2 datasets. |
| Researcher Affiliation | Academia | Shady Abu-Hussein, Department of Electrical Engineering, Tel Aviv University (shady.abh@gmail.com); Raja Giryes, Department of Electrical Engineering, Tel Aviv University (raja@tauex.tau.ac.il) |
| Pseudocode | Yes | Algorithm 1 UDPM training algorithm; Algorithm 2 UDPM sampling algorithm |
| Open Source Code | Yes | Our code is available online: https://github.com/shadyabh/UDPM/ |
| Open Datasets | Yes | We tested our method on CIFAR10 [22], FFHQ [19], and AFHQv2 [6] datasets. |
| Dataset Splits | No | The paper does not explicitly provide percentages or sample counts for train/validation/test splits for the datasets used. While it mentions picking the model with the best FID score, implying a validation step, the split details are not specified. |
| Hardware Specification | Yes | We train all models using a single NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper mentions optimizer (ADAM) and network architectures (UNet, VGG) but does not provide specific version numbers for software libraries or dependencies (e.g., PyTorch version, TensorFlow version). |
| Experiment Setup | Yes | We set L = 3 and fix γ = 2 for all datasets. We also use a uniform box filter of size 2 × 2 as the downsampling kernel w... In our tests we set {α_l}_{l=1}^{L} = {0.5, 0.2, 10^{-3}} and {σ_l}_{l=0}^{L} = {0.1, 0.2, 0.3} for all datasets. ... train the network for 600K training steps using ADAM [21] with learning rate and batch size set to 10^{-4} and 64, respectively. ... We also set λ_fid = (1, 1, 0), λ_per = (4, 4, 0), and λ_adv = (0.2, 0.5, 1). ... The specific implementation details are presented in Table 3. (Tables 3, 4, 5 provide detailed hyperparameters). |
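The hyperparameters quoted in the Experiment Setup row can be collected into a minimal configuration sketch. This is a hypothetical illustration, not the authors' actual code from the UDPM repository; all key and function names (`udpm_config`, `steps_per_epoch`, etc.) are our own labels.

```python
# Hypothetical UDPM training configuration, assembled from the
# hyperparameters quoted in the paper; key names are illustrative only.
udpm_config = {
    "num_levels": 3,               # L = 3 diffusion/upsampling levels
    "gamma": 2,                    # downsampling factor per level
    "downsampling_kernel": "2x2 uniform box filter",
    "alpha": [0.5, 0.2, 1e-3],     # {alpha_l} per level
    "sigma": [0.1, 0.2, 0.3],      # {sigma_l} noise levels
    "training_steps": 600_000,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 64,
    "lambda_fid": (1, 1, 0),       # fidelity loss weights per level
    "lambda_per": (4, 4, 0),       # perceptual (VGG) loss weights per level
    "lambda_adv": (0.2, 0.5, 1),   # adversarial loss weights per level
}


def steps_per_epoch(dataset_size: int, batch_size: int) -> int:
    """Optimizer steps needed to see the whole dataset once."""
    return -(-dataset_size // batch_size)  # ceiling division

# e.g., CIFAR10's 50,000 training images at the quoted batch size of 64:
steps = steps_per_epoch(50_000, udpm_config["batch_size"])
```

A config like this makes it easy to check derived quantities, e.g. that 600K steps at batch size 64 correspond to roughly 767 passes over CIFAR10's training set.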