Learning from Label Proportions with Generative Adversarial Networks

Authors: Jiabin Liu, Bo Wang, Zhiquan Qi, Yingjie Tian, Yong Shi

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Several experiments on benchmark datasets demonstrate vivid advantages of the proposed approach.
Researcher Affiliation | Collaboration | Jiabin Liu (Samsung Research China Beijing, Beijing 100028, China; liujiabin008@126.com); Bo Wang (University of International Business and Economics, Beijing 100029, China; wangbo@uibe.edu.cn); Zhiquan Qi, Yingjie Tian, Yong Shi (University of Chinese Academy of Sciences, Beijing 100190, China; qizhiquan@foxmail.com, {tyj,yshi}@ucas.ac.cn)
Pseudocode | Yes | Algorithm 1: LLP-GAN Training Algorithm (a sketch of the training loop follows this table)
Open Source Code | Yes | Code is available at https://github.com/liujiabin008/LLP-GAN.
Open Datasets | Yes | Four benchmark datasets (MNIST, SVHN, CIFAR-10, and CIFAR-100) are investigated in the experiments.
Dataset Splits | Yes | The training data is equally divided into five minibatches of 10,000 images each, and the test data contains exactly 1,000 images per category.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | To match the settings of previous work, the bag size is fixed at 16, 32, 64, and 128 (see the bag-construction sketch after the algorithm sketch below).
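
The paper's Algorithm 1 alternates discriminator and generator updates: the discriminator outputs K+1-way logits (K real classes plus a fake class) and is trained with an adversarial term plus a cross-entropy term that matches each bag's predicted label proportions to its given proportions. Below is a minimal PyTorch sketch of the discriminator-side loss only; `netD`, the `eps` constant, and the unweighted sum of the two terms are illustrative assumptions, not the authors' exact implementation, and the generator objective from the paper is omitted.

```python
# Minimal sketch of the discriminator update in an LLP-GAN-style loop.
# `netD` is a hypothetical network emitting K+1 logits per image:
# K real classes plus one extra "fake" class at index K.
import torch
import torch.nn.functional as F

def discriminator_loss(netD, x_real, x_fake, bag_proportions, K, eps=1e-8):
    """x_real: images from one bag, shape (B, ...).
    bag_proportions: the bag's given label proportions, shape (K,).
    """
    logits_real = netD(x_real)                      # (B, K+1)
    logits_fake = netD(x_fake)                      # (B, K+1)

    # Adversarial term: push real samples away from the fake class,
    # and generated samples toward it.
    p_fake_on_real = F.softmax(logits_real, dim=1)[:, K]
    p_fake_on_fake = F.softmax(logits_fake, dim=1)[:, K]
    loss_adv = -(torch.log(1.0 - p_fake_on_real + eps).mean()
                 + torch.log(p_fake_on_fake + eps).mean())

    # Proportion term: class posterior over the K real classes,
    # renormalized and averaged over the bag, then compared to the
    # given proportions via cross-entropy.
    class_probs = F.softmax(logits_real, dim=1)[:, :K]
    class_probs = class_probs / class_probs.sum(dim=1, keepdim=True)
    est_proportions = class_probs.mean(dim=0)       # shape (K,)
    loss_prop = -(bag_proportions * torch.log(est_proportions + eps)).sum()

    return loss_adv + loss_prop
```

In each outer iteration this loss would be backpropagated through the discriminator, followed by a generator step on freshly sampled noise; consult Algorithm 1 in the paper and the released code for the authors' exact objectives.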
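For the "Dataset Splits" and "Experiment Setup" rows, the bags themselves must be constructed from the standard datasets, since MNIST, SVHN, and CIFAR ship with instance-level labels. Here is a small NumPy sketch of one plausible construction; `make_bags` and its arguments are hypothetical names, not taken from the authors' code.

```python
# Hypothetical helper that groups a labeled dataset into fixed-size bags
# and records each bag's label proportions, mirroring the paper's setup
# (bag sizes 16, 32, 64, 128).
import numpy as np

def make_bags(images, labels, bag_size, num_classes, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))
    bags, proportions = [], []
    # Step through the shuffled indices, keeping only full bags.
    for start in range(0, len(order) - bag_size + 1, bag_size):
        idx = order[start:start + bag_size]
        bags.append(images[idx])
        counts = np.bincount(labels[idx], minlength=num_classes)
        proportions.append(counts / bag_size)  # per-class fraction in the bag
    return bags, proportions
```

With CIFAR-10's 50,000 training images and a bag size of 64, this yields 781 full bags; the instance-level labels are used only to compute the proportions and are then discarded for training, which is the defining constraint of the LLP setting.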