DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier
Authors: Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, Venkatesh Babu Radhakrishnan
AAAI 2020, pp. 3130-3137
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that data from a related domain can be leveraged to achieve state-of-the-art performance for the tasks of Data-free Knowledge Distillation and Incremental Learning on benchmark datasets. |
| Researcher Affiliation | Academia | Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, R. Venkatesh Babu Department of Computational and Data Sciences Indian Institute of Science, Bangalore, India {sravantia, gauravnayak, anirban, venky}@iisc.ac.in |
| Pseudocode | No | The paper includes block diagrams (Figure 1) describing the architecture and framework, but no pseudocode or explicitly labeled algorithm blocks were found. |
| Open Source Code | No | The paper mentions using "the implementation of DCGAN from Singh (2019) as reference to implement the DeGAN" but does not explicitly state that the authors are releasing their own source code for the methodology described in this paper. |
| Open Datasets | Yes | We use the benchmark datasets, CIFAR-10 (Krizhevsky and Hinton 2009), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017) and CIFAR-100 to demonstrate state-of-the-art results for the task of Data-Free Knowledge Distillation. ... CIFAR-10 (Krizhevsky and Hinton 2009) is a 10-class labelled dataset... CIFAR-100 (Krizhevsky and Hinton 2009) consists of 100 labelled classes... SVHN (Netzer et al. 2011) is a publicly available colour dataset... Fashion MNIST (Xiao, Rasul, and Vollgraf 2017) is a grayscale image dataset... |
| Dataset Splits | Yes | A train-validation split of 80-20 is considered for this purpose. An early-stopping condition based on validation accuracy is set as the convergence criteria. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models or memory amounts) used for running the experiments are provided; the paper does not mention any type of hardware. |
| Software Dependencies | No | The paper states "We use PyTorch framework for all our implementations." but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The learning rate for training the GAN is set to 0.0002 and a fixed number of epochs are trained (200 in all cases) to ensure consistency. The two hyper-parameters in this case are λe and λd. |
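To illustrate how the two reported hyper-parameters λe and λd could enter a DeGAN-style generator objective, here is a minimal sketch. It assumes the common form of such objectives (an adversarial term plus a per-sample entropy term weighted by λe and a batch-diversity term weighted by λd); the function and variable names are hypothetical, not taken from the authors' code.

```python
import math

def entropy(p):
    # Shannon entropy (natural log) of a probability vector.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def generator_loss(adv_loss, batch_probs, lambda_e, lambda_d):
    """Hedged sketch of a weighted generator objective.

    adv_loss    -- standard adversarial loss value for the batch
    batch_probs -- classifier softmax outputs, one probability vector per sample
    lambda_e    -- weight on the per-sample entropy term (low entropy =
                   confident classifier predictions on generated samples)
    lambda_d    -- weight on the diversity term (high entropy of the mean
                   prediction = samples spread across classes)
    """
    # Average per-sample prediction entropy (to be minimized).
    entropy_term = sum(entropy(p) for p in batch_probs) / len(batch_probs)
    # Negative entropy of the batch-mean prediction (minimizing this
    # maximizes diversity across classes).
    mean_pred = [sum(col) / len(batch_probs) for col in zip(*batch_probs)]
    diversity_term = -entropy(mean_pred)
    return adv_loss + lambda_e * entropy_term + lambda_d * diversity_term
```

For example, a batch whose samples are each predicted with full confidence but in different classes yields a zero entropy term and a strongly negative diversity term, i.e. a low loss under this sketch.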