Target-Aware Generative Augmentations for Single-Shot Adaptation

Authors: Kowshik Thopalli, Rakshith Subramanyam, Pavan K. Turaga, Jayaraman J. Thiagarajan

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using experiments on a variety of benchmarks, distribution shifts and image corruptions, we find that SiSTA produces significantly improved generalization over existing baselines in face attribute detection and multi-class object recognition. Furthermore, SiSTA performs competitively to models obtained by training on larger target datasets. We perform an extensive evaluation of SiSTA using a suite of classification tasks with multiple benchmark datasets, different StyleGAN architectures and, more importantly, a variety of challenging distribution shifts.
Researcher Affiliation | Collaboration | 1Lawrence Livermore National Laboratory, Livermore, CA, USA; 2Arizona State University, Tempe, AZ, USA. Correspondence to: Kowshik Thopalli <thopalli1@llnl.gov>.
Pseudocode | Yes | Algorithm 1: SiSTA-G
Open Source Code | Yes | Our codes can be accessed at https://github.com/Rakshith-2905/SiSTA.
Open Datasets | Yes | For our empirical study, we consider the following four datasets: (i) CelebA-HQ (Karras et al., 2017)...; (ii) AFHQ (Choi et al., 2020)...; (iii) CIFAR-10 (Krizhevsky et al., 2009)...; and (iv) DomainNet (Peng et al., 2019)... ImageNet (Russakovsky et al., 2015).
Dataset Splits | No | The paper mentions 'standard train-test splits' for CIFAR-10 and evaluating on a 'held-out test set', but does not explicitly provide details about a validation dataset split, its size, or how it was used to reproduce the experiment.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types with speeds, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions models (e.g., StyleGAN-v2, ResNet-50) and optimizers (Adam, SGD) but does not list specific software dependencies with their version numbers, such as programming language versions or library versions (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | (a) Source model: To obtain the source model Fs, we fine-tune an ImageNet pre-trained ResNet-50 (He et al., 2016) with labeled source data, using a learning rate of 1e-4, the Adam optimizer, and 30 training epochs; (b) StyleGAN fine-tuning: We fine-tune Gs for 300 iterations (M in Algorithm 1) using one target image, with the learning rate set to 2e-3 and the Adam optimizer with β = 0.99...; (c) Synthetic data curation: The size T of the synthetic target dataset Dt was set to 1000 images in all experiments...; (d) Choice of pruning ratio: For all experiments, we set p = 20% for the prune-rewind strategy and p = 50% for the prune-zero strategy...; (e) SFDA training: For the NRC algorithm, we set both the neighborhood and expanded neighborhood sizes to 5. Finally, we adapt Fs using SGD with momentum 0.9 and learning rate 1e-3.
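The two pruning strategies named in (d) can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes prune-zero zeroes out the fraction p of smallest-magnitude weights, and prune-rewind restores that same fraction to their source-generator values; the function names are ours, and in the paper these operations are applied to StyleGAN generator parameters rather than raw arrays.

```python
import numpy as np

def prune_zero(weights, p=0.5):
    """Zero out the fraction p of entries with smallest magnitude.

    Illustrative sketch of the 'prune-zero' strategy (p = 50% in the paper).
    """
    w = weights.copy().reshape(-1)
    k = int(p * w.size)
    if k > 0:
        # Indices of the k smallest-magnitude entries.
        idx = np.argsort(np.abs(w))[:k]
        w[idx] = 0.0
    return w.reshape(weights.shape)

def prune_rewind(target_weights, source_weights, p=0.2):
    """Rewind the fraction p of smallest-magnitude fine-tuned weights
    back to their source-generator values.

    Illustrative sketch of the 'prune-rewind' strategy (p = 20% in the paper).
    """
    w = target_weights.copy().reshape(-1)
    src = source_weights.reshape(-1)
    k = int(p * w.size)
    if k > 0:
        idx = np.argsort(np.abs(w))[:k]
        w[idx] = src[idx]
    return w.reshape(target_weights.shape)
```

For example, with `p=0.5` on a 4-element array, the two smallest-magnitude entries are zeroed (prune-zero) or replaced by the corresponding source values (prune-rewind), while the larger fine-tuned weights are kept untouched.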