Provable Benefit of Mixup for Finding Optimal Decision Boundaries
Authors: Junsoo Oh, Chulhee Yun
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes an experiments section: "In this section, we present several experimental results to support our findings." Its subsections cover experiments on the paper's setting Dκ (6.1), 2D classification on synthetic data (6.2), and classification on CIFAR-10 (6.3). |
| Researcher Affiliation | Academia | 1Kim Jaechul Graduate School of AI, KAIST. |
| Pseudocode | No | The paper describes theoretical analyses and experimental procedures in text but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | We also conduct experiments on the real-world data CIFAR10 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions "training samples" and training for a certain number of epochs but does not specify explicit train/validation/test splits with percentages or sample counts. |
| Hardware Specification | No | The paper does not specify any hardware components (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions software like "Adam (Kingma & Ba, 2014)", "SGD", and models like "VGG19" and "ResNet18" but does not provide specific version numbers for any libraries, frameworks, or solvers used. |
| Experiment Setup | Yes | We train for 1500 epochs using randomly sampled 500 training samples from each Dκ and full gradient descent with learning rate 1, and we choose α = 1 for the hyperparameter of Mixup and Mask Mixup. ... train for 1500 epochs using 500 samples of data points and Adam (Kingma & Ba, 2014) with full batch, learning rate 0.001, and default hyperparameters of β1 = 0.9, β2 = 0.999. ... train VGG19 (Simonyan & Zisserman, 2014) and ResNet18 (He et al., 2016) for 300 epochs on the training set with batch size 256 using SGD with weight decay 10⁻⁴, and we choose α = 1 for the hyperparameter of Mixup and CutMix. Also, we use a learning rate of 0.1 at the beginning and divide it by 10 after 100 and 150 epochs. |
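The CIFAR-10 setup quoted above (300 epochs, batch size 256, SGD with weight decay 10⁻⁴, step learning-rate schedule, Mixup with α = 1) can be sketched as follows. This is a minimal illustration of the reported hyperparameters, not the authors' code; the function and constant names are hypothetical, and the Beta sampling follows the standard Mixup formulation (with α = 1, the mixing coefficient is uniform on [0, 1]).

```python
import random

# Hyperparameters reported for the CIFAR-10 experiments (illustrative names).
EPOCHS = 300
BATCH_SIZE = 256
WEIGHT_DECAY = 1e-4
MIXUP_ALPHA = 1.0  # Beta(1, 1) is the uniform distribution on [0, 1]


def learning_rate(epoch: int) -> float:
    """Step schedule: start at 0.1, divide by 10 after epochs 100 and 150."""
    lr = 0.1
    if epoch >= 100:
        lr /= 10
    if epoch >= 150:
        lr /= 10
    return lr


def mixup_pair(x1, y1, x2, y2, alpha: float = MIXUP_ALPHA):
    """Mix two examples: draw lam ~ Beta(alpha, alpha) and interpolate
    both the inputs and the (scalar) labels with the same coefficient."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

With α = 1, `random.betavariate(1, 1)` reduces to a uniform draw, which matches the paper's choice of hyperparameter for both Mixup and CutMix.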