Multi-marginal Wasserstein GAN
Authors: Jiezhang Cao, Langyuan Mo, Yifan Zhang, Kui Jia, Chunhua Shen, Mingkui Tan
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on toy and real-world datasets demonstrate the effectiveness of MWGAN. |
| Researcher Affiliation | Academia | Jiezhang Cao, Langyuan Mo, Yifan Zhang, Kui Jia, Chunhua Shen, Mingkui Tan; South China University of Technology, Peng Cheng Laboratory, The University of Adelaide; {secaojiezhang, selymo, sezyifan}@mail.scut.edu.cn, {mingkuitan, kuijia}@scut.edu.cn, chunhua.shen@adelaide.edu.au |
| Pseudocode | Yes | Algorithm 1 Multi-marginal WGAN. |
| Open Source Code | Yes | The source code of our method is available: https://github.com/caojiezhang/MWGAN. |
| Open Datasets | Yes | Datasets. We conduct experiments on three datasets. Note that all images are resized to 128×128. ... (ii) CelebA [33] contains 202,599 face images, where each image has 40 binary attributes. ... (iii) Style painting [51]. (A hedged preprocessing sketch follows the table.) |
| Dataset Splits | No | The paper does not explicitly provide the training, validation, and test splits for the main datasets (Toy, CelebA, Style painting) used to train the GAN model. It only mentions a 90% training and 10% testing split for an auxiliary classifier trained on CelebA. |
| Hardware Specification | Yes | All experiments are conducted based on PyTorch, with an NVIDIA TITAN X GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We use Adam [29] with β1=0.5 and β2=0.999 and set the learning rate to 0.0001. We train the model for 100k iterations with batch size 16. We set α=10, τ=10, and L_f to be the number of target domains in Loss (7). (A hedged configuration sketch follows the table.) |
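
For readers checking the Experiment Setup row, below is a minimal sketch of the reported optimizer and training constants in PyTorch. It is not the authors' implementation (their released code is at the GitHub link above): the generator and discriminator are placeholder modules, and the MWGAN objective from Loss (7) is omitted.

```python
# Minimal sketch of the reported training configuration (PyTorch), assuming
# placeholder networks; the actual MWGAN architectures and Loss (7) live in
# the authors' repository.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))      # placeholder network
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))  # placeholder network

# Adam with beta1 = 0.5, beta2 = 0.999 and learning rate 0.0001, as reported
g_optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))

num_iterations = 100_000  # "100k iterations"
batch_size = 16           # reported batch size
alpha, tau = 10.0, 10.0   # α and τ from the setup row; the L_f term is omitted
                          # because its exact form depends on Loss (7)
```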
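
The Open Datasets row states that all images are resized to 128×128. The sketch below shows one way to reproduce that preprocessing with torchvision's built-in CelebA loader; the root path, loader choice, and batch size here are assumptions, since the paper does not publish its data-loading code.

```python
# Hedged sketch of the 128x128 preprocessing mentioned in the dataset row,
# using torchvision's CelebA dataset (an assumed substitute for the authors'
# own loading pipeline).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),  # "all images are resized to 128×128"
    transforms.ToTensor(),
])

# CelebA contains 202,599 face images, each annotated with 40 binary attributes
celeba = datasets.CelebA(root="data/celeba", split="train",
                         target_type="attr", transform=transform, download=True)
loader = torch.utils.data.DataLoader(celeba, batch_size=16, shuffle=True)

images, attrs = next(iter(loader))
print(images.shape)  # torch.Size([16, 3, 128, 128])
print(attrs.shape)   # torch.Size([16, 40])
```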