AdjointDEIS: Efficient Gradients for Diffusion Models

Authors: Zander W. Blasingame, Chen Liu

NeurIPS 2024

Reproducibility variables, each with the assessed result and the supporting excerpt(s) extracted by the LLM:

Research Type: Experimental
Evidence: "Lastly, we demonstrate the effectiveness of AdjointDEIS for guided generation with an adversarial attack in the form of the face morphing problem." (Section 5, Experiments: "To illustrate the efficacy of our technique, we examine an application of guided generation in the form of the face morphing attack.")
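
For context on the face morphing attack: the guidance signal in such attacks is typically an identity loss computed with a pretrained face-recognition embedding (the paper mentions ArcFace among its tools). The sketch below shows a plausible form of that loss, not the paper's exact objective; the `embed` callable is a hypothetical stand-in for a pretrained embedding network.

```python
import torch
import torch.nn.functional as F

def identity_loss(embed, x_morph, x_a, x_b):
    """Plausible guidance loss for a face morphing attack (assumed form):
    pull the morph's face-recognition embedding toward BOTH bona fide
    identities a and b. The paper's exact loss may differ."""
    v_m, v_a, v_b = embed(x_morph), embed(x_a), embed(x_b)
    # Cosine *distance* to each identity; minimizing matches both at once.
    return (1 - F.cosine_similarity(v_m, v_a, dim=-1)).mean() \
         + (1 - F.cosine_similarity(v_m, v_b, dim=-1)).mean()
```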

Researcher Affiliation: Academia
Evidence: "Zander W. Blasingame, Clarkson University, blasinzw@clarkson.edu; Chen Liu, Clarkson University, cliu@clarkson.edu"

Pseudocode: Yes
Evidence: "Algorithm 1 AdjointDEIS-2M." / "Algorithm 2 DiM Framework."
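
The paper's Algorithm 1 (AdjointDEIS-2M) solves the continuous adjoint equations of the diffusion ODE with a second-order exponential-integrator scheme. The sketch below illustrates only the underlying adjoint idea, using a plain Euler discretization and autograd vector-Jacobian products; the function names and the simplified scheme are assumptions, not the paper's algorithm.

```python
import torch

def adjoint_gradients(f, x_T, t_grid, loss_fn):
    """Minimal continuous-adjoint sketch for an ODE dx/dt = f(x, t).

    Assumptions: plain Euler steps (NOT the exponential-integrator
    discretization of AdjointDEIS-2M); t_grid runs from t=T down to t=0.
    Returns dL/dx_T for L = loss_fn(x_0).
    """
    # Forward pass: integrate the sampling ODE, caching every state.
    xs = [x_T.detach()]
    with torch.no_grad():
        for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
            xs.append(xs[-1] + (t1 - t0) * f(xs[-1], t0))

    # Terminal adjoint state: a = dL/dx_0 at the end of sampling.
    x0 = xs[-1].requires_grad_(True)
    a = torch.autograd.grad(loss_fn(x0), x0)[0]

    # Backward pass: da/dt = -(df/dx)^T a, discretized as the exact
    # adjoint of the Euler steps above via vector-Jacobian products.
    steps = list(zip(t_grid[:-1], t_grid[1:]))
    for (t0, t1), x in zip(reversed(steps), reversed(xs[:-1])):
        x = x.detach().requires_grad_(True)
        vjp = torch.autograd.grad(f(x, t0), x, grad_outputs=a)[0]
        a = a + (t1 - t0) * vjp

    return a  # dL/dx_T: the gradient used to optimize the initial noise
```

AdjointDEIS additionally derives adjoint equations for gradients with respect to the conditioning information and model parameters, and shows that for diffusion SDEs the continuous adjoint equations simplify to an ODE; none of that is reproduced in this sketch.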

Open Source Code: No
Evidence: "Our code will be released at https://github.com/zblasingame/AdjointDEIS." / "Our code for AdjointDEIS will be available here at https://github.com/zblasingame/AdjointDEIS."

Open Datasets: Yes
Evidence: "We run our experiments on SYN-MAD 2022 [51] morphed pairs that are constructed from the Face Research Lab London dataset [52]; more details in Appendix H.4." / "The SYN-MAD 2022 dataset used in this paper can be found at https://github.com/marcohuber/SYN-MAD-2022."

Dataset Splits: No
Evidence: The paper describes the datasets used and how a subset was selected (489 bona fide image pairs), but does not provide explicit train/validation/test splits (e.g., percentages or counts) for its experiments.

Hardware Specification: Yes
Evidence: "All of the main experiments were done on a single NVIDIA Tesla V100 32GB GPU. On average, the guided generation experiments for our approach took between 6-8 hours for the whole dataset of face morphs with a batch size of 8. Some additional follow-up work for the camera-ready version used an NVIDIA H100 Tensor Core 80GB GPU with a batch size of 16."

Software Dependencies: No
Evidence: The paper mentions various models and tools used (e.g., the DDIM solver, ArcFace, dlib) but does not provide specific version numbers for these software components.

Experiment Setup: Yes
Evidence: "In our experiments, we used a learning rate of 0.01, N = 20 sampling steps, M = 20 steps for AdjointDEIS, and 50 optimization steps for gradient descent."
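
Putting the reported hyperparameters together, the outer guided-generation loop might look like the following sketch. Only the constants come from the paper; the schedule, the `optimize_noise` name, and the reuse of the `adjoint_gradients` and `identity_loss` sketches above are assumptions.

```python
import torch

# Constants reported in the paper's experiment setup.
LR = 0.01        # learning rate
N = 20           # sampling steps
M = 20           # AdjointDEIS steps (here tied to the same grid; the paper
                 # may use a separate grid for the adjoint solve)
OPT_STEPS = 50   # gradient-descent iterations

def optimize_noise(x_T, f, loss_fn):
    """Hypothetical guided-generation loop: gradient descent on the
    initial noise x_T using adjoint gradients of the guidance loss."""
    t_grid = torch.linspace(1.0, 0.0, N + 1)  # assumed schedule, t=T down to t=0
    for _ in range(OPT_STEPS):
        grad = adjoint_gradients(f, x_T, t_grid, loss_fn)
        x_T = x_T - LR * grad  # plain gradient descent, as stated in the paper
    return x_T
```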