Generative-Discriminative Complementary Learning

Authors: Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich

AAAI 2020, pp. 6526-6533

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In addition to the extensive empirical studies, we also theoretically show that our model can retrieve the true conditional distribution from the complementarily-labeled data. ... Empirically, we conduct comprehensive experiments on benchmark datasets, including MNIST, CIFAR10, CIFAR100, and VGG Face, demonstrating that our model gives accurate classification prediction and generates high-quality images.
Researcher Affiliation | Academia | ¹Department of Biomedical Informatics, University of Pittsburgh, {yanwuxu, mig73, juc91, kayhan}@pitt.edu; ²UBTECH Sydney AI Centre, School of Computer Science, tongliang.liu@sydney.edu.au; ³Department of Philosophy, Carnegie Mellon University, kunz1@cmu.edu
Pseudocode | No | The paper describes the proposed method mathematically and textually but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is at https://github.com/xuyanwu/Complementary-GAN.
Open Datasets | Yes | After introducing the implementation details, we evaluate our methods on three datasets, including MNIST (LeCun and Cortes 2010), CIFAR10, CIFAR100 (Krizhevsky, Nair, and Hinton), and VGGFACE2 (Cao et al. 2018).
Dataset Splits | Yes | MNIST... 60K training images and 10K testing images... CIFAR10 dataset... 60K training samples and 10K test samples... CIFAR100 dataset contains 100 classes, each class has 500 images on average, and there are 10,000 testing images of 100 classes in total. VGGFACE2... We selected 80% of the data as the training set S and the remaining 20% as the testing set. (See the split sketch after the table.)
Hardware Specification | Yes | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
Software Dependencies | No | The paper mentions using PyTorch and the Adam optimizer but does not specify version numbers for these or other key software components, so the software environment is not fully reproducible.
Experiment Setup | Yes | We trained our CCGAN model... using Adam (Kingma and Ba 2014) with learning rate 2e-4, β1 = 0.0, β2 = 0.999 for both the D and G networks, where we train 2 steps of D and 1 step of G in each iteration, for 10,000 iterations in total. ... We adopted data augmentation for all datasets except MNIST, where we first resized all images to 32×32 resolution, employed random cropping to 28×28, and then applied zero-padding to return the image to 32×32 resolution. (A hedged PyTorch sketch of this setup follows the table.)
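
The Experiment Setup row pins down enough of the optimization recipe to sketch it in PyTorch. The sketch below is a hedged illustration, not the authors' code: the tiny MLPs and the vanilla GAN loss are stand-ins for the paper's CCGAN architectures and complementary-label objective, which the quoted text does not specify; only the Adam settings, the 2:1 D/G update schedule, the 10,000-iteration budget, and the resize/crop/pad augmentation follow the quote.

```python
# Hedged sketch of the reported training configuration. The MLPs, the BCE GAN
# loss, and the batch size (64) are assumptions; the optimizer settings, the
# update schedule, and the augmentation pipeline follow the quoted setup.
import torch
from torch import nn, optim
from torchvision import transforms

# Augmentation for all datasets except MNIST: resize to 32x32,
# random-crop to 28x28, then zero-pad back to 32x32.
augment = transforms.Compose([
    transforms.Resize(32),
    transforms.RandomCrop(28),
    transforms.Pad(2, fill=0),  # 28 + 2*2 = 32
    transforms.ToTensor(),
])

z_dim, img_dim = 100, 3 * 32 * 32
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

# Adam with lr = 2e-4, beta1 = 0.0, beta2 = 0.999 for both D and G.
opt_D = optim.Adam(D.parameters(), lr=2e-4, betas=(0.0, 0.999))
opt_G = optim.Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.999))
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Placeholder for a batch of augmented training images.
    return torch.rand(n, img_dim)

for step in range(10_000):  # 10,000 iterations in total
    for _ in range(2):      # two D steps per iteration ...
        real = real_batch()
        fake = G(torch.randn(64, z_dim)).detach()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    fake = G(torch.randn(64, z_dim))  # ... then one G step
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The 2:1 schedule is implemented literally here (two full discriminator updates per generator update); data loading and the complementary-label loss are left as placeholders since the quoted text does not report them.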
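
For the Dataset Splits row, MNIST and CIFAR use their standard train/test partitions, so only the VGGFACE2 80%/20% split calls for code. A minimal sketch, assuming a simple random split: the quote does not say whether the split is per-identity or what seed was used, and `vggface2` stands in for any torch `Dataset`.

```python
# Hedged sketch of the quoted 80%/20% VGGFACE2 split. The fixed seed is an
# assumption for repeatability, not something the paper reports.
import torch
from torch.utils.data import random_split

def split_80_20(dataset, seed=0):
    n_train = int(0.8 * len(dataset))
    gen = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, len(dataset) - n_train], generator=gen)

# Usage (vggface2 is a stand-in for any torch Dataset):
# train_set, test_set = split_80_20(vggface2)
```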