Adversarial Learning of Semantic Relevance in Text to Image Synthesis
Authors: Miriam Cha, Youngjune L. Gwon, H. T. Kung
AAAI 2019, pp. 3272–3279
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach using the Oxford-102 flower dataset, adopting the inception score and multi-scale structural similarity index (MS-SSIM) metrics to assess discriminability and diversity of the generated images. The empirical results indicate greater diversity in the generated images, especially when we gradually select more negative training examples closer to a positive example in the semantic space. |
| Researcher Affiliation | Academia | Miriam Cha, Youngjune L. Gwon, H. T. Kung John A. Paulson School of Engineering and Applied Sciences Harvard University, Cambridge, MA 02138 |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (e.g., clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | The paper cites an external GitHub repository (https://github.com/reedscot/icml2016) as the GAN-INT-CLS base architecture on which the authors build their GAN models, but this refers to third-party code, not to the authors' own implementation of the method described in this paper. There is no statement or link indicating that the authors' code is open-source. |
| Open Datasets | Yes | We evaluate our models using Oxford-102 flower dataset (Nilsback and Zisserman 2008). |
| Dataset Splits | Yes | Following Reed et al. (2016b), we split the dataset into 82 training-validation and 20 test classes, and resize all images to 64 × 64 × 3. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It discusses software and training parameters but not the hardware. |
| Software Dependencies | No | The paper mentions using the ADAM optimizer and a character-level Conv Net with RNN, but does not provide specific version numbers for these software components or any other libraries or frameworks. For instance, it mentions "ADAM optimizer (Kingma and Ba 2014)" and "char-CNN-RNN" but without version numbers. |
| Experiment Setup | Yes | We perform mini-batch stochastic gradient ascent with a batch size N = 64 for 600 epochs. We use the ADAM optimizer (Kingma and Ba 2014) with a momentum of 0.5 and a learning rate of 0.0002 as suggested by Radford et al. (2015). We use M = 1000 outer samples. We increase β from 0.6 to 1 by 0.1 every 100 epochs and keep the maximum β once it is reached. |
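
The hyperparameters quoted in the Experiment Setup row map directly onto a small training configuration. Below is a minimal sketch, assuming a PyTorch-style setup; the `generator` and `discriminator` modules are hypothetical placeholders (the paper builds on the GAN-INT-CLS architecture), while the optimizer settings, batch size, epoch count, outer-sample count M, and the β schedule follow the values reported above.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's GAN-INT-CLS-based networks;
# the real generator/discriminator architectures are more involved.
generator = nn.Linear(100, 64 * 64 * 3)
discriminator = nn.Linear(64 * 64 * 3, 1)

# ADAM with momentum 0.5 and learning rate 0.0002, as reported
# (following Radford et al. 2015).
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

BATCH_SIZE = 64    # mini-batch size N
NUM_EPOCHS = 600   # total training epochs
NUM_OUTER = 1000   # number of outer samples M

def beta_schedule(epoch: int) -> float:
    """beta rises from 0.6 by 0.1 every 100 epochs and is capped at 1.0."""
    return min(1.0, 0.6 + 0.1 * (epoch // 100))

if __name__ == "__main__":
    # Sanity-check the schedule at the epoch boundaries.
    for epoch in (0, 99, 100, 399, 400, 599):
        print(f"epoch {epoch:3d}: beta = {beta_schedule(epoch):.1f}")
```

Under this schedule β reaches its maximum of 1.0 at epoch 400 and stays there for the remaining epochs, matching the "keep the maximum β once it is reached" behavior described in the paper.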