Elucidating the design space of classifier-guided diffusion generation

Authors: Jiajun Ma, Tianyang Hu, Wenjia Wang, Jiacheng Sun

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on ImageNet validate our proposed method, showing that state-of-the-art (SOTA) diffusion models (DDPM, EDM, DiT) can be further improved (up to 20%) using off-the-shelf classifiers with barely any extra computational cost.
Researcher Affiliation | Collaboration | Jiajun Ma1,2, Tianyang Hu3, Wenjia Wang1,2, Jiacheng Sun3; 1Hong Kong University of Science and Technology, 2Hong Kong University of Science and Technology (Guangzhou), 3Huawei Noah's Ark Lab
Pseudocode | Yes | The guided sampling algorithm is outlined in Algorithm 1 and the hyper-parameter settings can be found in Appendix C.3.
Open Source Code | Yes | The code is available at https://github.com/AlexMaOLS/EluCD/tree/main.
Open Datasets | Yes | Extensive experiments on ImageNet validate our proposed method... All models generate 50,000 ImageNet 128x128 samples with 250 DDPM steps.
Dataset Splits | No | The paper mentions generating samples for "evaluation" (e.g., "We generate 50,000 ImageNet 128x128 samples for evaluation."), which implies a testing phase, but it does not provide specific training/validation splits or percentages for the datasets used.
Hardware Specification | Yes | Sampling time is recorded as GPU hours on an NVIDIA V100.
Software Dependencies | No | The paper mentions "PyTorch ResNet checkpoints" and "PyTorch ResNet-50 and ResNet-101", but does not provide specific version numbers for PyTorch or other software dependencies.
Experiment Setup | No | The paper states that "the hyper-parameter settings can be found in Appendix C.3" but does not provide specific hyperparameter values or training configurations in the main text.
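For context on what the paper's guided sampling builds on: standard classifier guidance adds a scaled gradient of the classifier's log-probability to the unconditional score. The sketch below is a generic 1-D illustration of that well-known update, not the authors' Algorithm 1; the function names and the toy Gaussian densities are assumptions chosen for illustration.

```python
def guided_score(x, uncond_score, classifier_logprob_grad, scale):
    """Classifier-guided score: the unconditional score plus a scaled
    gradient of the classifier's log-probability for the target class."""
    return uncond_score(x) + scale * classifier_logprob_grad(x)

# Toy 1-D example (hypothetical densities, not from the paper):
# unconditional density N(0, 1); a "classifier" preferring x near 2
# via log p(y|x) = -(x - 2)^2 / 2 + const.
uncond = lambda x: -x            # d/dx log N(x; 0, 1)
cls_grad = lambda x: -(x - 2.0)  # d/dx log p(y|x)

# With scale = 1 the guided score is -x - (x - 2) = -2(x - 1),
# i.e. the score of a Gaussian whose mode has shifted to x = 1.
print(guided_score(1.0, uncond, cls_grad, scale=1.0))  # 0.0 at the guided mode
```

A larger `scale` pulls samples further toward the classifier's preferred region, which is the guidance-strength trade-off the paper's design-space analysis concerns.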