Mining GOLD Samples for Conditional GANs
Authors: Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our experimental results demonstrate that the proposed methods outperform corresponding baselines for all three applications on different image datasets." and "In this section, we demonstrate the effectiveness of the GOLD estimator for three applications: example re-weighting, rejection sampling, and active learning. We conduct experiments on one synthetic point dataset and six image datasets: MNIST [25], FMNIST [54], SVHN [36], CIFAR-10 [23], STL-10 [11], and LSUN [55]." |
| Researcher Affiliation | Collaboration | Sangwoo Mo (KAIST, swmo@kaist.ac.kr); Chiheon Kim (Kakao Brain, chiheon.kim@kakaobrain.com); Sungwoong Kim (Kakao Brain, swkim@kakaobrain.com); Minsu Cho (POSTECH, mscho@postech.ac.kr); Jinwoo Shin (KAIST, AItrics, jinwoos@kaist.ac.kr) |
| Pseudocode | No | The paper does not contain any explicit pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on one synthetic point dataset and six image datasets: MNIST [25], FMNIST [54], SVHN [36], CIFAR-10 [23], STL-10 [11], and LSUN [55]. |
| Dataset Splits | Yes | "We use training data to train ACGAN and test data to evaluate the fitting capacity, except LSUN that we use validation data for both training and evaluation due to the class imbalance of the training data." and "We train the model for 100 epochs, and choose the model with the best fitting capacity on the validation set (of size 100), to compute the GOLD estimator for the query acquisition." |
| Hardware Specification | No | The paper mentions 'GPU support from Brain Cloud team at Kakao Brain' but does not specify any exact GPU models, CPU models, or other detailed hardware specifications used for the experiments. |
| Software Dependencies | No | The paper refers to various software components and models (e.g., InfoGAN, ACGAN, LeNet) but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We set the balancing factor to λc = 0.1 in most of our experiments but lower the value when training cGANs on small datasets. For all experiments on example re-weighting and rejection sampling, we choose the default value λc = 0.1. For experiments on active learning, we choose λc = 0.01 and λc = 0 for synthetic/MNIST and FMNIST/SVHN datasets, respectively. We train the model for 20 and 200 epochs for 1-channel and 3-channel images, respectively. We use the baseline loss (1) for the first half of epochs and the re-weighting scheme for the next half of epochs. We simply choose β = 1 for the discriminator loss and β = 0 for the generator loss. |
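
The "Experiment Setup" row above fixes three hyperparameters: the balancing factor λc on the auxiliary classification loss, the re-weighting exponent β (β = 1 for the discriminator, β = 0 for the generator), and a schedule that switches from the baseline loss to the re-weighting scheme at the halfway point of training. The sketch below illustrates how these could fit together in a PyTorch-style ACGAN update. It is a minimal sketch under assumptions: the BCE-based adversarial loss, the function names (`acgan_d_loss`, `gold_weights`), and the mean-one weight normalization are ours, and the per-sample GOLD scores are assumed to come from the paper's GOLD estimator, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

# Hyperparameters quoted from the paper's setup.
LAMBDA_C = 0.1  # balancing factor on the auxiliary classification loss
BETA_D = 1.0    # re-weighting exponent for the discriminator loss
BETA_G = 0.0    # re-weighting exponent for the generator loss (no re-weighting)


def gold_weights(gold_scores, beta):
    """Per-sample weights from GOLD scores (assumed precomputed elsewhere).

    beta = 0 makes every weight 1, matching the paper's choice for the
    generator loss. Normalizing to mean 1 (our assumption) keeps the
    weighted loss on the same scale as the unweighted one.
    """
    w = torch.exp(beta * gold_scores)
    return w / w.mean()


def acgan_d_loss(d_real, d_fake, cls_real, cls_fake, y_real, y_fake,
                 weights=None, lambda_c=LAMBDA_C):
    """ACGAN-style discriminator loss: adversarial term plus a
    lambda_c-weighted auxiliary classification term.

    `weights` optionally re-weights the per-sample loss on generated
    examples; passing None reproduces the baseline loss used for the
    first half of training.
    """
    real_adv = F.binary_cross_entropy_with_logits(
        d_real, torch.ones_like(d_real))
    fake_adv = F.binary_cross_entropy_with_logits(
        d_fake, torch.zeros_like(d_fake), reduction="none")
    if weights is not None:          # second half: GOLD re-weighting
        fake_adv = fake_adv * weights
    adv = real_adv + fake_adv.mean()
    aux = F.cross_entropy(cls_real, y_real) + F.cross_entropy(cls_fake, y_fake)
    return adv + lambda_c * aux


# Schedule from the quoted setup: baseline loss for the first half of
# epochs, re-weighting scheme for the second half (hypothetical loop).
#
# for epoch in range(num_epochs):
#     use_reweighting = epoch >= num_epochs // 2
#     w = gold_weights(gold_scores, BETA_D) if use_reweighting else None
#     d_loss = acgan_d_loss(d_real, d_fake, cls_real, cls_fake,
#                           y_real, y_fake, weights=w)
```

With β = 0 on the generator side, `gold_weights` returns uniform weights, so the generator objective is effectively unchanged; only the discriminator's loss on generated samples is re-weighted, consistent with the quoted choice of β = 1 for the discriminator and β = 0 for the generator.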