GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps

Authors: Minsoo Kang, Suhyun Kim

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance on classification datasets. In this section, we evaluate the performance and efficiency of GuidedMixup. First, we compare the generalization performance of the proposed method against baselines by training classifiers on CIFAR-100 (Krizhevsky 2009), Tiny-ImageNet (Chrabaszcz, Loshchilov, and Hutter 2017), and ImageNet (Deng et al. 2009) datasets.
Researcher Affiliation | Academia | Minsoo Kang (1,2), Suhyun Kim (2)*; (1) Korea University, Republic of Korea; (2) Korea Institute of Science and Technology, Republic of Korea
Pseudocode | Yes | Algorithm 1: Greedy Pairing Algorithm
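The review only names Algorithm 1 ("Greedy Pairing Algorithm") without quoting its steps. As a rough illustration of what a saliency-guided greedy pairing could look like, the sketch below pairs images in a batch so that paired saliency maps overlap as little as possible; the overlap objective, the `greedy_pairing` helper, and all details are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def greedy_pairing(saliency_maps):
    """Hypothetical sketch: greedily pair batch items so that the
    saliency maps of each pair overlap as little as possible.
    (The objective and procedure are assumed, not taken from the paper.)"""
    n = len(saliency_maps)
    # Pairwise overlap score: inner product of L1-normalized saliency maps.
    flat = np.stack([s.ravel() / (s.sum() + 1e-8) for s in saliency_maps])
    overlap = flat @ flat.T
    np.fill_diagonal(overlap, np.inf)  # never pair an image with itself
    unpaired = set(range(n))
    pairs = []
    while len(unpaired) >= 2:
        i = min(unpaired)
        unpaired.remove(i)
        # Greedily pick the remaining partner with the least saliency overlap.
        j = min(unpaired, key=lambda k: overlap[i, k])
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs
```

A greedy scan like this runs in O(n²) over the batch, which keeps the pairing overhead small relative to a full optimal matching.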
Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described.
Open Datasets | Yes | We compare the generalization performance of the proposed method against baselines by training classifiers on CIFAR-100 (Krizhevsky 2009), Tiny-ImageNet (Chrabaszcz, Loshchilov, and Hutter 2017), and ImageNet (Deng et al. 2009) datasets. To validate a broader impact on generalization, we also measure performance on four Fine-Grained Visual Classification (FGVC) datasets: Caltech-UCSD Birds-200-2011 (CUB) (Wah et al. 2011), Stanford Cars (Cars) (Krause et al. 2013), FGVC-Aircraft (Aircraft) (Maji et al. 2013), and Caltech-101 (Caltech) (Fei-Fei, Fergus, and Perona 2006).
Dataset Splits | No | The paper references various datasets but does not explicitly provide the training/validation/test splits (e.g., percentages or counts) or cite predefined splits for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., exact GPU/CPU models, memory, or processor types) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed to replicate the experiments.
Experiment Setup | Yes | After the saliency detection, we employ Gaussian blur to obtain a saliency map that covers the salient region well. We use a kernel size of 7 and a Gaussian sigma of σ=3 in both our methods. Next, we train ResNet-50 on the ImageNet dataset for 100 epochs following the protocol of Kim et al. (2020) for evaluating both generalization and robustness.
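The quoted setup names the blur hyperparameters (kernel size 7, σ=3) but not an implementation. A minimal sketch of that smoothing step, assuming a separable Gaussian filter applied to a 2-D saliency map (the `blur_saliency` helper and edge-padding choice are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel1d(ksize=7, sigma=3.0):
    # 1-D Gaussian kernel with the paper's stated settings (ksize=7, sigma=3).
    x = np.arange(ksize) - (ksize - 1) / 2.0
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_saliency(saliency, ksize=7, sigma=3.0):
    """Separable Gaussian blur: convolve rows, then columns, so the
    smoothed map covers the salient region more broadly."""
    k = gaussian_kernel1d(ksize, sigma)
    pad = ksize // 2
    padded = np.pad(saliency, pad, mode="edge")  # edge padding is an assumption
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return out
```

Because a 2-D Gaussian is separable, two 1-D passes give the same result as a full 7x7 convolution at lower cost, and the normalized kernel preserves the total saliency mass away from the borders.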