Mirror Diffusion Models for Constrained and Watermarked Generation
Authors: Guan-Horng Liu, Tianrong Chen, Evangelos Theodorou, Molei Tao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Experiment We evaluate the performance of Mirror Diffusion Model (MDM) on common constrained generation problems, such as ℓ2-ball and simplex constrained sets with dimensions d ranging from 2 to 20. ... We compare MDM against standard unconstrained diffusion models, such as DDPM [2], and their constrained counterparts, such as Reflected Diffusion [18], using the same time-embedded fully-connected network and 1000 sampling time steps. Evaluation metrics include Sliced Wasserstein distance [70] and constraint violation. |
| Researcher Affiliation | Academia | Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Molei Tao Georgia Institute of Technology, USA {ghliu, tianrong.chen, evangelos.theodorou, mtao}@gatech.edu |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper states it adopted ADM and EDM as backbones and implemented Reflected Diffusion, providing links to these third-party or baseline codebases. It does not explicitly state that the source code for their proposed MDM methodology is made openly available or provide a direct link to it. |
| Open Datasets | Yes | We first test out MDM on FFHQ [72] and AFHQv2 [73] on unconditional 64 × 64 image generation. ... For d = 2, we consider the Gaussian Mixture Model (with variance 0.05) and the Spiral shown respectively in Figures 1 and 3. For d = {6, 8, 20}, we place d isotropic Gaussians, each with variance 0.05, at the corner of each dimension, and reject samples outside the constrained sets. ... Simplices constrained sets: We consider Dirichlet distributions [48], Dir(α), with various concentration parameters α detailed in Table 7. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits (e.g., specific percentages or sample counts) needed to reproduce the experiment. |
| Hardware Specification | Yes | All experiments are conducted on two TITAN RTXs and one RTX 2080. |
| Software Dependencies | No | The paper states 'All methods are implemented in PyTorch [76]' and mentions using 'geomloss' and 'ot' packages, but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We compare MDM against standard unconstrained diffusion models, such as DDPM [2], and their constrained counterparts, such as Reflected Diffusion [18], using the same time-embedded fully-connected network and 1000 sampling time steps. ... For constrained generation, all methods are trained with AdamW [77] and an exponential moving average with a decay rate of 0.99. As is standard practice, we decay the learning rate by a factor of 0.99 every 1000 steps. ... For constrained generation, all networks take (y, t) as inputs and follow out = out_mod(norm(y_mod(y) + t_mod(timestep_embedding(t)))), where timestep_embedding() is the standard sinusoidal embedding. ... All Linear layers have 128 hidden dimensions. We use group normalization [79] for all norm layers. |
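
The experiment-setup row describes the score network's dataflow as `out = out_mod(norm(y_mod(y) + t_mod(timestep_embedding(t))))` with 128-dimensional `Linear` layers and group normalization. A minimal PyTorch sketch of that structure is given below; the layer counts inside `y_mod`, `t_mod`, and `out_mod`, the activation choice, and the group count are assumptions, since only the overall composition and hidden size are quoted from the paper.

```python
import math
import torch
import torch.nn as nn

def timestep_embedding(t, dim=128):
    """Standard sinusoidal timestep embedding (as in DDPM)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

class ConstrainedScoreNet(nn.Module):
    """Hypothetical reconstruction of the time-embedded fully-connected
    network described in the setup; only the dataflow
    out_mod(norm(y_mod(y) + t_mod(embed(t)))) and the 128-dim hidden
    size come from the paper."""
    def __init__(self, data_dim, hidden=128):
        super().__init__()
        self.y_mod = nn.Sequential(nn.Linear(data_dim, hidden), nn.SiLU(),
                                   nn.Linear(hidden, hidden))
        self.t_mod = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(),
                                   nn.Linear(hidden, hidden))
        self.norm = nn.GroupNorm(8, hidden)  # group normalization, group count assumed
        self.out_mod = nn.Sequential(nn.SiLU(), nn.Linear(hidden, data_dim))

    def forward(self, y, t):
        h = self.y_mod(y) + self.t_mod(timestep_embedding(t))
        return self.out_mod(self.norm(h))
```

A forward pass with a batch of 2-D samples and integer timesteps, e.g. `ConstrainedScoreNet(2)(torch.randn(4, 2), torch.randint(0, 1000, (4,)))`, returns a tensor of the same shape as the input batch.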