Universal Adaptive Data Augmentation

Authors: Xiaogang Xu, Hengshuang Zhao

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments with various models are conducted on CIFAR-10, CIFAR-100, ImageNet, Tiny-ImageNet, Cityscapes, and VOC07+12 to demonstrate the significant performance improvements brought by UADA.
Researcher Affiliation | Collaboration | Xiaogang Xu (Zhejiang Lab), Hengshuang Zhao (The University of Hong Kong)
Pseudocode | Yes | Algorithm 1: the training algorithm with our Adaptive Adversarial Data Augmentation (… means the step can be done in parallel).
Open Source Code | No | No explicit statement or link providing access to the open-source code for the described methodology was found.
Open Datasets | Yes | For the classification task, the datasets include CIFAR-10/CIFAR-100 [Krizhevsky et al., 2009] and ImageNet [Deng et al., 2009]/Tiny-ImageNet [Le and Yang, 2015]. For semantic segmentation and object detection, experiments are conducted on Cityscapes [Cordts et al., 2016] and VOC07+12 [Everingham et al., 2012].
Dataset Splits | Yes | The CIFAR-10/CIFAR-100 datasets each contain 60,000 images in total: 50,000 images form the training set and 10,000 images form the test set.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided.
Software Dependencies | No | The paper states "We use PyTorch [Paszke et al., 2017]" but does not provide a specific version number for PyTorch or any other software dependency.
Experiment Setup | Yes | δ is set as 1, and ϵ in Eq. 5 is also set as 1 in the classification experiments (except in the ablation study). [...] The batch size is 256, the learning rate is 0.1, the weight decay is 0.0001, and the momentum is 0.9. Moreover, we adopt a cosine learning rate schedule and train the network for 100 epochs.
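The quoted experiment setup is concrete enough to mock up. The paper only says "cosine learning rate", so the standard cosine-annealing formula below is an assumption; the hyperparameter values are the ones quoted in the table.

```python
import math

# Hyperparameters quoted from the paper's experiment setup.
BASE_LR = 0.1
EPOCHS = 100
BATCH_SIZE = 256      # listed for completeness; unused by the schedule itself
WEIGHT_DECAY = 1e-4
MOMENTUM = 0.9

def cosine_lr(epoch: int, base_lr: float = BASE_LR, total: int = EPOCHS) -> float:
    """Standard cosine-annealed learning rate (assumed form; the paper
    only states that a 'cosine learning rate' is used)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total))

print(round(cosine_lr(0), 4))    # base rate 0.1 at the start of training
print(round(cosine_lr(50), 4))   # half the base rate at the midpoint
print(round(cosine_lr(100), 6))  # decays to ~0 by the final epoch
```

In framework terms this corresponds to plain SGD with momentum plus cosine annealing over the full 100-epoch budget, with no warmup or restarts mentioned.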
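The Algorithm 1 entry in the table above describes training with adaptive adversarial data augmentation. A toy, pure-Python sketch of that idea follows: the augmentation parameter is updated by gradient ascent on the training loss (making samples harder) under a perturbation bound, and the model then takes an ordinary descent step on the augmented sample. This is a hypothetical illustration, not the paper's implementation; finite differences stand in for backpropagation, and the `bound` constant stands in for the paper's ϵ constraint.

```python
def loss(w, x, y):
    # Squared error of a one-parameter linear model.
    return (w * x - y) ** 2

def augment(x, mag):
    # A trivial stand-in "augmentation": an additive shift of magnitude `mag`.
    return x + mag

def train_step(w, x, y, mag, lr=0.05, adv_lr=0.01, bound=0.2, eps=1e-4):
    # 1) Adversarial update: push `mag` in the direction that INCREASES
    #    the loss, then clip it to a fixed perturbation bound (a stand-in
    #    for the paper's epsilon constraint).
    g_mag = (loss(w, augment(x, mag + eps), y)
             - loss(w, augment(x, mag - eps), y)) / (2 * eps)
    mag = max(-bound, min(bound, mag + adv_lr * g_mag))

    # 2) Model update: ordinary gradient DESCENT on the augmented sample.
    xa = augment(x, mag)
    g_w = (loss(w + eps, xa, y) - loss(w - eps, xa, y)) / (2 * eps)
    w = w - lr * g_w
    return w, mag

w, mag = 0.0, 0.0
for _ in range(200):
    w, mag = train_step(w, x=1.0, y=1.0, mag=mag)
# The model fits the adversarially augmented sample even though the
# augmentation fights it, and the perturbation stays within its bound.
```

The two updates alternate each step, which matches the high-level structure a training loop with adversarial augmentation typically has; the paper's actual algorithm, loss, and augmentation space differ.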