Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing

Authors: Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang

AAAI 2022, pp. 7957-7965

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct a collection of experiments to test Saliency Grafting against other baselines. First, we test the prediction performance on standard image classification datasets. Next, to confirm our claim that Saliency Grafting can safely boost the diversity of augmented data, we design and conduct experiments to assess the sample diversity of each augmentation method. We also conduct multiple stress tests to measure the enhancement in generalization capability. Finally, we perform an ablation study to investigate the contribution of each sub-component of Saliency Grafting.
Researcher Affiliation | Collaboration | Joonhyung Park (1), June Yong Yang (1), Jinwoo Shin (1), Sung Ju Hwang (1,2), Eunho Yang (1,2); (1) Korea Advanced Institute of Science and Technology (KAIST), (2) AITRICS
Pseudocode | No | No pseudocode or algorithm block found.
Open Source Code | No | No explicit statement or link regarding open-source code for the methodology described in this paper.
Open Datasets | Yes | CIFAR-100 dataset (Krizhevsky 2009), Tiny-ImageNet (Chrabaszcz, Loshchilov, and Hutter 2017), ImageNet (Russakovsky et al. 2015)
Dataset Splits | Yes | For the PyramidNet-200, we follow the experimental setting of Yun et al. (2019), which trains PyramidNet-200 for 300 epochs. ... For WRN28-10, the network is trained for 400 epochs following studies (Kim, Choo, and Song 2020; Verma et al. 2019). We train ResNet-18 (He et al. 2016b) for 600 epochs ... following one of the Tiny-ImageNet experimental settings in (Kim, Choo, and Song 2020). We follow the training protocol in Wong, Rice, and Kolter (2020).
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned for the experimental setup.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | For the PyramidNet-200, we follow the experimental setting of Yun et al. (2019), which trains PyramidNet-200 for 300 epochs. ... For WRN28-10, the network is trained for 400 epochs following studies (Kim, Choo, and Song 2020; Verma et al. 2019). We train ResNet-18 (He et al. 2016b) for 600 epochs. We train ResNet-50 for 100 epochs. We follow the training protocol in Wong, Rice, and Kolter (2020), which includes cyclic learning rate, regularization on batch normalization layers, and mixed-precision training. This protocol also gradually resizes images during training, beginning with larger batches of smaller images and moving on to smaller batches of larger images later (image-resizing policy).
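
The ResNet-50 training protocol quoted in the Experiment Setup row can be pictured with a minimal sketch. This is an illustration only: the paper lists no software dependencies, so the choice of PyTorch, the optimizer hyperparameters, and the step counts below are assumptions rather than the authors' settings; the batch-normalization regularization and the image-resizing policy mentioned in the quote are noted but not implemented here.

```python
# Minimal sketch of the quoted protocol (cyclic learning rate + mixed-precision training).
# Assumptions: PyTorch/torchvision, SGD hyperparameters, and steps_per_epoch are NOT from
# the paper; BN regularization and the image-resizing policy are omitted for brevity.
import torch
from torch import nn, optim
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(num_classes=1000).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

epochs = 100                    # quoted: "We train ResNet-50 for 100 epochs."
steps_per_epoch = 1000          # placeholder; depends on batch size and the resizing schedule
scheduler = optim.lr_scheduler.OneCycleLR(          # one-cycle (cyclic) learning rate
    optimizer, max_lr=0.4, total_steps=epochs * steps_per_epoch)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # mixed-precision training

def train_step(images, labels):
    """One optimization step with automatic mixed precision and per-step LR update."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
    return loss.item()
```

The scheduler is stepped once per batch, which is how a one-cycle/cyclic schedule is usually driven; an epoch-based schedule would also serve for a rough reproduction.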