Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback

Authors: TaeHo Yoon, Kibeom Myoung, Keon Lee, Jaewoong Cho, Albert No, Ernest Ryu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that censoring can be accomplished with extreme human feedback efficiency and that labels generated with a mere few minutes of human feedback are sufficient. We conduct experiments in multiple setups demonstrating how minimal human feedback enables the removal of target concepts.
Researcher Affiliation | Collaboration | 1 Department of Mathematical Sciences, Seoul National University; 2 Interdisciplinary Program in Artificial Intelligence, Seoul National University; 3 Department of Electronic and Electrical Engineering, Hongik University; 4 KRAFTON
Pseudocode | Yes | Algorithm 1 (Reward model ensemble) and Algorithm 2 (Imitation learning of reward model). (A hedged sketch of ensemble reward guidance is given after this table.)
Open Source Code | Yes | Code available at: https://github.com/tetrzim/diffusion-human-feedback.
Open Datasets | Yes | MNIST [11], LSUN [47], ImageNet [10], ImageNet-1k [10]
Dataset Splits | No | The paper does not explicitly state train/validation/test splits (percentages or sample counts) for the datasets used in its experiments.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud instance specifications used for running its experiments.
Software Dependencies | No | The paper mentions software components such as the ResNet18 architecture and torchvision.models DEFAULT weights, but does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | We train the diffusion model for 100,000 iterations using the AdamW [27] optimizer with β1 = 0.9 and β2 = 0.999, learning rate 10^-4, EMA rate 0.9999, and batch size 256. We use 1,000 DDPM steps. (A hedged configuration sketch based on these values follows the table.)
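
The pseudocode row refers to Algorithm 1 (Reward model ensemble) and Algorithm 2 (Imitation learning of reward model). As a rough illustration of how an ensemble of lightweight reward models trained on a few minutes of binary feedback could steer DDPM sampling, the sketch below applies a classifier-guidance-style correction using the averaged reward. The network architecture, the averaging rule, and the exact guidance form are assumptions for illustration, not the authors' released implementation.

```python
# Hedged sketch: reward-guided DDPM sampling with an ensemble of reward models.
# All names and architectures here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class TinyRewardNet(nn.Module):
    """Small CNN scoring an image in (0, 1); a stand-in for a reward model
    trained on a few minutes of binary (malign/benign) human feedback."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)


def ensemble_reward(models, x):
    """Average the predicted reward over the ensemble (assumed aggregation rule)."""
    return torch.stack([m(x) for m in models], dim=0).mean(dim=0)


@torch.no_grad()
def censored_ddpm_step(eps_model, reward_models, x_t, t, betas, guidance_scale=1.0):
    """One reverse DDPM step whose mean is shifted by the gradient of the log
    ensemble reward (classifier-guidance-style censoring; form is an assumption)."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = torch.cumprod(1.0 - betas, dim=0)[t]
    # Standard DDPM posterior mean computed from the predicted noise.
    eps = eps_model(x_t, t)
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    # Reward guidance: gradient of log r(x_t) w.r.t. x_t, with grad re-enabled.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_r = torch.log(ensemble_reward(reward_models, x_in) + 1e-8).sum()
        grad = torch.autograd.grad(log_r, x_in)[0]
    mean = mean + guidance_scale * beta_t * grad
    if t == 0:
        return mean
    return mean + torch.sqrt(beta_t) * torch.randn_like(x_t)


if __name__ == "__main__":
    # Toy usage: a zero "denoiser" stands in for the trained diffusion model.
    betas = torch.linspace(1e-4, 0.02, 1000)
    rewards = [TinyRewardNet() for _ in range(5)]      # ensemble of 5 reward models
    eps_model = lambda x, t: torch.zeros_like(x)       # placeholder noise predictor
    x = torch.randn(4, 1, 28, 28)                      # MNIST-shaped noise
    for t in reversed(range(1000)):
        x = censored_ddpm_step(eps_model, rewards, x, t, betas)
```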
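
The experiment-setup row quotes concrete hyperparameters (AdamW with β1 = 0.9, β2 = 0.999, learning rate 10^-4, EMA rate 0.9999, batch size 256, 100,000 iterations, 1,000 DDPM steps). A minimal configuration sketch reflecting those values, assuming a standard PyTorch training setup, is given below; the diffusion model object is left abstract and the linear beta schedule is an assumption not stated in the quoted text.

```python
# Hedged sketch of the reported training configuration (not the authors' script).
import copy
import torch


def build_training_state(model: torch.nn.Module):
    """AdamW with the reported betas and learning rate, plus a frozen EMA copy."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    ema_model = copy.deepcopy(model)
    for p in ema_model.parameters():
        p.requires_grad_(False)
    return optimizer, ema_model


@torch.no_grad()
def ema_update(model, ema_model, decay=0.9999):
    """Exponential moving average of parameters with the reported rate 0.9999."""
    for p, p_ema in zip(model.parameters(), ema_model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)


NUM_ITERATIONS = 100_000   # reported training iterations
BATCH_SIZE = 256           # reported batch size
NUM_DDPM_STEPS = 1_000     # reported number of DDPM steps
# The linear beta schedule below is an assumption; the table row does not state it.
betas = torch.linspace(1e-4, 0.02, NUM_DDPM_STEPS)
```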