How Does Mixup Help With Robustness and Generalization?
Authors: Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we provide theoretical analysis to demonstrate how using Mixup in training helps model robustness and generalization. ... As an illustration, we compare the robust test accuracy between a model trained with Mixup and a model trained with standard empirical risk minimization (ERM) under adversarial attacks generated by FGSM (Fig. 1a). ... In the following, we present numerical experiments to support the approximation in Eq. (3). ... We confirm this phenomenon in Fig. 3. ... Figures 5-8 show the results of experiments for generalization with various datasets that motivated us to mathematically study Mixup. |
| Researcher Affiliation | Academia | Linjun Zhang, Rutgers University, linjun.zhang@rutgers.edu; Zhun Deng, Harvard University, zhundeng@g.harvard.edu; Kenji Kawaguchi, Harvard University, kkawaguchi@fas.harvard.edu; Amirata Ghorbani, Stanford University, amiratag@stanford.edu; James Zou, Stanford University, jamesz@stanford.edu |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link for open-source code for the methodology described. |
| Open Datasets | Yes | We train two WideResNet-16-8 (Zagoruyko & Komodakis, 2016) architectures on the Street View House Numbers (SVHN) dataset (Netzer et al., 2011)... We use the two-moons dataset (Buitinck et al., 2013). ... We adopted the standard image datasets, CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), Fashion-MNIST (Xiao et al., 2017), and Kuzushiji-MNIST (Clanuwat et al., 2019). |
| Dataset Splits | No | The paper mentions training and testing, but it does not explicitly state percentages or sample counts for validation splits, nor does it refer to predefined validation splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | Yes | All experiments were implemented in PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | Stochastic gradient descent (SGD) was used to train the models with mini-batch size = 64, the momentum coefficient = 0.9, and the learning rate = 0.1. |
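The Software Dependencies and Experiment Setup rows together pin down a fairly standard PyTorch training loop: SGD with mini-batch size 64, momentum 0.9, and learning rate 0.1, applied to a model trained with Mixup. The sketch below illustrates such a Mixup training step; it is not the authors' code. The Beta parameter `alpha`, the helper names `mixup_batch` and `mixup_train_step`, and the absence of weight decay or a learning-rate schedule are assumptions; only the optimizer settings come from the quoted text.

```python
import torch
import torch.nn.functional as F
from torch.optim import SGD

def mixup_batch(x, y, alpha=1.0):
    # Draw a mixing weight lam ~ Beta(alpha, alpha) and blend each example
    # with a randomly permuted partner from the same mini-batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam

def mixup_train_step(model, optimizer, x, y, alpha=1.0):
    # The Mixup loss is the lam-weighted combination of the cross-entropy
    # losses against the two original label vectors.
    model.train()
    x_mix, y_a, y_b, lam = mixup_batch(x, y, alpha)
    logits = model(x_mix)
    loss = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Settings quoted in the table: SGD, mini-batch size 64, momentum 0.9, lr 0.1.
# `model` stands in for e.g. a WideResNet-16-8 and is not constructed here:
# optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)
```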
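The Research Type row quotes a comparison of robust test accuracy between a Mixup-trained model and an ERM-trained model under FGSM attacks (the paper's Fig. 1a). A minimal sketch of such an FGSM evaluation follows, under stated assumptions: the perturbation budget `eps`, the clamp to inputs in [0, 1], and the helper names are illustrative choices, not details reported in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Single-step FGSM: move each input by eps in the sign of the input gradient
    # of the cross-entropy loss, then clamp back to the assumed [0, 1] pixel range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, eps, device="cpu"):
    # Fraction of test examples still classified correctly after the FGSM perturbation;
    # comparing this number for Mixup- and ERM-trained models mirrors Fig. 1a.
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, eps)   # needs input gradients only
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```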