CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks

Authors: Sakshi Varshney, Vinay Kumar Verma, P. K. Srijith, Lawrence Carin, Piyush Rai

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on challenging and diverse datasets, we show that the feature-map-transformation approach outperforms state-of-the-art methods for continually-learned GANs, with substantially fewer parameters. (A hedged sketch of this adapter idea appears after the table.)
Researcher Affiliation | Academia | IIT Hyderabad, Duke University, IIT Kanpur, KAUST (Saudi Arabia); cs16resch01002@iith.ac.in, vv65@duke.edu, srijith@cse.iith.ac.in, larry.carin@kaust.edu.sa, piyush@cse.iitk.ac.in
Pseudocode | No | The paper describes the proposed method in detail with mathematical formulations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is publicly available at https://github.com/sakshivarshney/CAM-GAN.
Open Datasets | Yes | For continual data generation, we consider 7 datasets from perceptually distant domains: CelebA (T0) [50], Flowers (T1) [51], Cathedrals (T2) [52], Cat (T3) [53], Brain-MRI images (T4) [54], Chest X-ray (T5) [55], and Anime faces (T6). ... We also experiment on four task sequences comprising four ImageNet categories [56]: (i) fish, (ii) bird, (iii) snake, and (iv) dog.
Dataset Splits | No | The paper mentions training models on tasks sequentially but does not provide specific details on how the datasets were split into training, validation, and test sets, including percentages, sample counts, or explicit partitioning methodologies.
Hardware Specification | No | The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | No | The paper states that 'Architecture and evaluation details are discussed in the Supplemental Material' but does not include concrete hyperparameter values, training configurations, or other system-level settings in the main text of the paper.
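
To make the "feature-map-transformation" claim above concrete, the sketch below illustrates in PyTorch the general idea of a continual adaptation module: a small trainable residual adapter that transforms the feature maps of a frozen, pre-trained base generator block for each new task. This is a minimal sketch under stated assumptions; the class names, grouped-convolution structure, kernel sizes, and group count are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class FeatureMapAdapter(nn.Module):
    """Small trainable module that transforms the feature maps produced by
    a frozen base-generator layer. Grouped convolutions keep the per-task
    parameter count low; the residual connection lets the adapter start
    close to the identity. All hyperparameters here are assumptions."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0, "grouped conv needs channels % groups == 0"
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=groups),
            nn.Conv2d(channels, channels, kernel_size=1),  # mix across groups
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Adapted features = base features + learned task-specific correction.
        return h + self.transform(h)


class AdaptedBlock(nn.Module):
    """Wraps one frozen base-generator block with a per-task adapter.
    Only the adapter's parameters are trained on a new task, so the base
    GAN (and hence earlier tasks) is left untouched."""

    def __init__(self, base_block: nn.Module, channels: int):
        super().__init__()
        self.base_block = base_block
        for p in self.base_block.parameters():
            p.requires_grad = False  # base GAN weights stay fixed
        self.adapter = FeatureMapAdapter(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.base_block(x))


if __name__ == "__main__":
    base = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # stand-in base layer
    block = AdaptedBlock(base, channels=64)
    out = block(torch.randn(2, 64, 16, 16))
    print(out.shape)  # torch.Size([2, 64, 16, 16])
```

Under these assumptions, training on a new task amounts to optimizing only the adapter parameters with the usual GAN losses, which is consistent with the report's note that the approach needs substantially fewer parameters per task than retraining or duplicating the full generator.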