Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness
Authors: Francesco Pinto, Harry Yang, Ser Nam Lim, Philip Torr, Puneet Dokania
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide thorough analyses and experiments on vision datasets (ImageNet & CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation. |
| Researcher Affiliation | Collaboration | Francesco Pinto University of Oxford francesco.pinto@eng.ox.ac.uk Harry Yang Meta AI Ser-Nam Lim Meta AI Philip H.S. Torr University of Oxford Puneet K. Dokania University of Oxford & Five AI Ltd. puneet.dokania@five.ai |
| Pseudocode | Yes | Refer to Algorithm 1 for an overview of the RegMixup training procedure. |
| Open Source Code | Yes | Code available at: https://github.com/FrancescoPinto/RegMixup |
| Open Datasets | Yes | We train them on CIFAR-10 (C10) and CIFAR-100 (C100) datasets. We employ RN50 to perform experiments on ImageNet-1K [Deng et al., 2009] dataset. |
| Dataset Splits | Yes | We also cross-validate the hyperparameters on a 10% split of the test set, which is removed at test time. For further details about the code base and the hyperparameters, refer to Appendix B. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only mentions 'high compute requirements' for ImageNet-1K experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch Image Models' in the references but does not provide specific version numbers for any software dependencies. It only states that code is available on GitHub, which might contain such details, but they are not present in the paper's main text. |
| Experiment Setup | Yes | For further details about the code base and the hyperparameters, refer to Appendix B. |
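The table above notes that the paper gives pseudocode (Algorithm 1) for the RegMixup training procedure: the standard cross-entropy loss on the clean example plus a mixup cross-entropy term on an interpolated example, weighted by a coefficient. Below is a minimal stdlib-only sketch of that objective for a single pair of examples; the function and argument names (`regmixup_loss`, `model`, `eta`) are illustrative, not from the paper's code, and `alpha`/`eta` defaults are placeholders rather than the paper's tuned hyperparameters.

```python
import math
import random

def cross_entropy(logits, target):
    """Softmax cross-entropy for one example (list of logits, int label)."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target]

def regmixup_loss(model, x1, y1, x2, y2, alpha=10.0, eta=1.0):
    """Sketch of the RegMixup objective for one example pair.

    model: callable mapping an input vector (list of floats) to class logits.
    Returns clean cross-entropy plus eta times the mixup cross-entropy
    computed on an interpolated input, with lambda ~ Beta(alpha, alpha).
    """
    lam = random.betavariate(alpha, alpha)  # interpolation coefficient
    x_mix = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    clean = cross_entropy(model(x1), y1)
    # mixup loss: convex combination of cross-entropies against both labels
    mixed = (lam * cross_entropy(model(x_mix), y1)
             + (1 - lam) * cross_entropy(model(x_mix), y2))
    return clean + eta * mixed
```

In practice the interpolation partner is typically another example from the same minibatch; this sketch just takes it as an explicit second argument to keep the objective readable.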