Variational Annealing of GANs: A Langevin Perspective
Authors: Chenyang Tao, Shuyang Dai, Liqun Chen, Ke Bai, Junya Chen, Chang Liu, Ruiyi Zhang, Georgiy Bobashev, Lawrence Carin
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We consider a wide range of synthetic and real-world tasks to validate our models experimentally, and benchmark them against competing baselines. (Section 5, Experiments); We provide solid empirical evidence to validate our claims. (Section 6, Conclusion) |
| Researcher Affiliation | Collaboration | (1) Electrical & Computer Engineering, Duke University, Durham, NC, USA; (2) School of Mathematical Sciences, Fudan University, Shanghai, China; (3) Computer Science & Technology, Tsinghua University, Beijing, China; (4) RTI International, Research Triangle Park, NC, USA |
| Pseudocode | No | The paper describes its algorithmic components, such as the variational annealing reformulation in equation (9) and the DAE estimator in equation (10), but it presents no formal pseudocode blocks or explicitly labeled algorithm sections (a hedged sketch of the DAE-driven update appears below the table). |
| Open Source Code | Yes | Code for our experiments are available from https://github.com/sdai654416/LGAN. (Section 5, Experiments) |
| Open Datasets | Yes | the CIFAR-10 dataset (Section 5.1); small-scale CIFAR-10 (32×32) and large-scale CelebA (128×128) datasets (Section 5.1); kidney distribution (Section 5.2); OpenAI Gym tasks (Section 5.3) |
| Dataset Splits | No | The paper mentions using CIFAR-10 and other datasets but does not explicitly state the train/validation/test splits (e.g., percentages or counts) or reference standard splits for reproducibility. It discusses training iterations and test negative log likelihood but lacks details on how the data was partitioned into these subsets. |
| Hardware Specification | Yes | All experiments are implemented with TensorFlow and run on a single NVIDIA TITAN-X GPU. (Section 5, Experiments) |
| Software Dependencies | No | All experiments are implemented with TensorFlow... (Section 5). TensorFlow is named, but no version number is given, and no other software dependencies or versions are listed. |
| Experiment Setup | Yes | We vary the regularization parameter λ (Section 5.1); we trained our model with negative annealing (Section 5.1); consider three different dynamic annealing schemes: positive monotonic annealing (PMA), negative monotonic annealing (NMA) and oscillatory annealing (OA) (Section 5.1); We assign a Gaussian prior w ∼ N(0, α⁻¹) for the regression weights w, and a Gamma prior α ∼ Γ(1, 10⁻²) for the precision parameter α (Section 5.3). Illustrative schedule sketches follow the table. |
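
Although the paper gives no pseudocode, its Langevin perspective pairs a sampler with a denoising-autoencoder (DAE) score estimate (its equation (10)). The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes a trained reconstruction function `dae` and relies on the Alain–Bengio result that (r(x) − x)/σ² approximates ∇ log p(x) for a DAE trained at small corruption level σ; the function names and step-size convention here are our own.

```python
import numpy as np

def langevin_step(x, dae, sigma, step_size, rng):
    """One Langevin update driven by a DAE-based score estimate."""
    # (dae(x) - x) / sigma^2 approximates grad log p(x) for a DAE
    # trained at corruption level sigma (Alain & Bengio, 2014).
    score = (dae(x) - x) / sigma**2
    noise = rng.standard_normal(size=x.shape)
    # Standard (unadjusted) Langevin proposal.
    return x + 0.5 * step_size * score + np.sqrt(step_size) * noise
```

A toy stand-in such as `dae = lambda x: 0.99 * x` (roughly the optimal DAE for a unit Gaussian at σ = 0.1) lets the snippet run end to end; in practice `dae` would be a trained network.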
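Section 5.1 names three dynamic annealing schemes (PMA, NMA, OA) for the regularization parameter λ without giving closed forms. The sketch below shows one plausible parameterization (linear ramps and a sinusoid); these exact forms are our assumption for illustration only.

```python
import numpy as np

# Hypothetical schedules for lambda over training iterations t in [0, T];
# the paper names the schemes but does not specify these functional forms.

def pma(t, T, lam_max):
    """Positive monotonic annealing: ramp lambda up over training."""
    return lam_max * t / T

def nma(t, T, lam_max):
    """Negative monotonic annealing: ramp lambda down over training."""
    return lam_max * (1.0 - t / T)

def oa(t, period, lam_max):
    """Oscillatory annealing: cycle lambda periodically."""
    return 0.5 * lam_max * (1.0 + np.sin(2.0 * np.pi * t / period))
```

Any monotone ramp or periodic cycle of λ fits the paper's qualitative description of the three schemes.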