Score-based Generative Models with Lévy Processes

Authors: Eun Bi Yoon, Keehun Park, Sungwoong Kim, Sungbin Lim

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show that LIM allows for faster and more diverse sampling while maintaining high fidelity compared to existing diffusion models across various image datasets such as CIFAR10, CelebA, and the imbalanced dataset CIFAR10-LT.
Researcher Affiliation | Collaboration | Eunbi Yoon (1), Keehun Park (2), Sungwoong Kim (3), Sungbin Lim (1, 4, 5); (1) Department of Statistics, Korea University; (2) Artificial Intelligence Graduate School, UNIST; (3) Department of Artificial Intelligence, Korea University; (4) LG AI Research; (5) SNU-LG AI Research Center
Pseudocode | No | The paper contains theoretical derivations and descriptions of methods, but no clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not contain any explicit statement about releasing code or a link to a code repository for the described methodology.
Open Datasets | Yes | Our experimental results show that LIM allows for faster and more diverse sampling while maintaining high fidelity compared to existing diffusion models across various image datasets such as CIFAR10, CelebA, and the imbalanced dataset CIFAR10-LT. We also conducted performance comparisons between the diffusion model [39] and LIM using the ADM architecture [9] on ImageNet (64×64). For comparing performance on high-resolution datasets, we chose the DDPM architecture [15] and trained LIM and the diffusion model [39] on the CelebA-HQ (256×256), LSUN-Church (256×256), and LSUN-Bedroom (256×256) datasets. (A sketch of the CIFAR10-LT construction appears after the table.)
Dataset Splits | No | The paper mentions using datasets for training and testing but does not provide specific details on validation dataset splits (e.g., percentages, counts, or explicit references to standard validation sets for reproducibility).
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running its experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | Yes | We employ continuous time steps throughout our experiments and set the NFE (number of function evaluations) to a fixed value of 500 in all cases. Given that Lévy processes exhibit a higher occurrence of extreme values than Gaussian noise, we opt for the smooth L1 loss instead of the L2 loss to ensure stable training. We use the modified VP-SDE to fit the Lévy process and a quadratic timestep schedule during reverse sampling. We use the U-Net architecture as in DDPM [15] for training and apply the loss (11) to train s_t(x_t; θ). (A minimal training-step sketch appears after the table.)
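
All datasets named in the Open Datasets row are publicly available; CIFAR10 in particular ships with torchvision, and CIFAR10-LT is typically derived from it by subsampling. As a small illustration, the following hypothetical sketch builds a long-tailed CIFAR10-LT split from torchvision's CIFAR10 using the exponential class-size decay common in the long-tailed recognition literature; the imbalance ratio and helper name are assumptions, not values taken from the paper.

```python
import numpy as np
from torchvision import datasets

def make_cifar10_lt(root="./data", imbalance_ratio=100, seed=0):
    """Subsample CIFAR10 so per-class counts decay exponentially.

    Hypothetical helper: the imbalance_ratio of 100 is an assumption,
    not a value stated in the paper.
    """
    base = datasets.CIFAR10(root=root, train=True, download=True)
    targets = np.array(base.targets)
    num_classes = 10
    max_per_class = len(targets) // num_classes  # 5000 for CIFAR10
    rng = np.random.default_rng(seed)

    keep_indices = []
    for c in range(num_classes):
        # Class c keeps max_per_class * (1/ratio)^(c / (num_classes - 1)) samples.
        n_keep = int(max_per_class * (1.0 / imbalance_ratio) ** (c / (num_classes - 1)))
        class_idx = np.where(targets == c)[0]
        keep_indices.extend(rng.choice(class_idx, n_keep, replace=False))

    return base.data[keep_indices], targets[keep_indices]
```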
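
Since the Experiment Setup row is essentially a training configuration, a compact illustration is possible. The following is a minimal, hypothetical PyTorch sketch (not the authors' code, which is not released) of a training step that pairs heavy-tailed noise with the smooth L1 loss, plus a quadratic timestep schedule for reverse sampling. The regression target shown (the raw noise) and all function names are assumptions; the paper's actual objective is its loss (11).

```python
import torch
import torch.nn.functional as F

def training_step(score_net, x0, sample_levy_noise, marginal_coeffs):
    """One illustrative denoising step with smooth L1 (Huber) loss.

    score_net:         callable (x_t, t) -> network prediction
    x0:                clean images, shape (B, C, H, W)
    sample_levy_noise: callable returning heavy-tailed noise shaped like x0
    marginal_coeffs:   callable t -> (mean_coeff, scale) of the forward SDE
    """
    batch = x0.shape[0]
    # Continuous time steps in (0, 1], as stated in the setup.
    t = torch.rand(batch, device=x0.device).clamp(min=1e-5)
    noise = sample_levy_noise(x0.shape).to(x0.device)
    mean_coeff, scale = marginal_coeffs(t)
    x_t = mean_coeff.view(-1, 1, 1, 1) * x0 + scale.view(-1, 1, 1, 1) * noise
    pred = score_net(x_t, t)
    # Smooth L1 is less sensitive than L2 to the extreme values that
    # heavy-tailed (alpha-stable) noise produces; the target used here
    # is a simplification of the paper's loss (11).
    return F.smooth_l1_loss(pred, noise)

def quadratic_timesteps(nfe=500, t_min=1e-3, t_max=1.0):
    """Quadratically spaced reverse-sampling times, dense near t_min."""
    u = torch.linspace(1.0, 0.0, nfe + 1)
    return t_min + (t_max - t_min) * u ** 2
```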