Kernel-convoluted Deep Neural Networks with Data Augmentation

Authors: Minjin Kim, Young-geun Kim, Dongha Kim, Yongdai Kim, Myunghee Cho Paik (pp. 8155-8162)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using CIFAR-10 and CIFAR-100 datasets, our experiments demonstrate that the KCM with the Mixup outperforms the Mixup method in terms of generalization and robustness to adversarial examples.
Researcher Affiliation | Academia | Minjin Kim^1, Young-geun Kim^1, Dongha Kim^1, Yongdai Kim^2, Myunghee Cho Paik^1 (^1 Department of Statistics, Seoul National University; ^2 School of Data Science, Seoul National University)
Pseudocode | Yes | Algorithm 1: Training Mixup with KCM (a hedged sketch of this training loop is given after the table).
Open Source Code | Yes | The source code for conducting our experiments of binary classification on the two-moon dataset and CIFAR-10 (cat vs. dog) and multi-class classification on CIFAR-10 is available at https://github.com/MJ1021/kcm-code.
Open Datasets | Yes | Using CIFAR-10 and CIFAR-100 datasets, our experiments demonstrate that the KCM with the Mixup outperforms the Mixup method in terms of generalization and robustness to adversarial examples. The CIFAR-10 dataset consists of 60000 RGB images in 10 classes, with 6000 images per class. The CIFAR-100 dataset is similar to CIFAR-10, except it has 100 classes containing 600 images each. Both datasets have 50000 training images and 10000 test images.
Dataset Splits | No | The paper mentions 50000 training images and 10000 test images for CIFAR-10/100, but it does not describe a separate validation split or how one was used. While such a split is common practice, it is not explicitly stated in the paper, which hinders exact reproduction.
Hardware Specification | No | The paper mentions 'ResNet-34', which is a model architecture, not hardware. No details about the CPU, GPU, or other computing resources used for the experiments are provided.
Software Dependencies | No | The paper states 'use the author's official code' and 'add code for the local averaging part'. While this implies software usage, it does not specify any software components with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | To make a direct comparison with the original Mixup using CIFAR-10/100, we adopt the experimental configuration in the Mixup paper (Zhang et al. 2018) and use the authors' official code. We use ResNet-34, one of the architectures from the official code. For every pixel, the maximum perturbation levels are 0.031 and 0.03 for CIFAR-10 and CIFAR-100, respectively. The number of iterations for I-FGSM is 10. The performance of the methods is measured by the median of test accuracies over the last 10 epochs. (A hedged sketch of this I-FGSM evaluation follows the training sketch below.)
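
To make "Algorithm 1: Training Mixup with KCM" concrete, here is a minimal PyTorch sketch of one training epoch. It assumes the KCM is realized by Monte Carlo local averaging of the network over Gaussian perturbations of the input; the kernel choice, the bandwidth `sigma`, the sample count `n_mc`, and all helper names are illustrative assumptions, not the authors' exact code (the paper's ResNet-34 is the CIFAR variant from the official Mixup repository, for which torchvision's `resnet34` is only a stand-in).

```python
# Hedged sketch of Mixup training with a kernel-convoluted model (KCM).
# sigma, n_mc, and the Gaussian kernel are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from torchvision.models import resnet34

def kcm_forward(model, x, sigma=0.1, n_mc=4):
    """Approximate the kernel-convoluted model E_u[f(x + u)] by local
    averaging over n_mc sampled perturbations (assumed Gaussian kernel)."""
    out = 0.0
    for _ in range(n_mc):
        out = out + model(x + sigma * torch.randn_like(x))
    return out / n_mc

def mixup_batch(x, y, alpha=1.0):
    """Standard Mixup (Zhang et al. 2018): convex-combine inputs and
    return both label sets plus the mixing weight lambda."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], y, y[idx], lam

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = resnet34(num_classes=10)  # stand-in for the paper's CIFAR ResNet-34
opt = torch.optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=1e-4)

for x, y in loader:  # one epoch
    x_mix, y_a, y_b, lam = mixup_batch(x, y)
    logits = kcm_forward(model, x_mix)
    # Mixup loss: lambda-weighted cross-entropy against both label sets.
    loss = (lam * F.cross_entropy(logits, y_a)
            + (1 - lam) * F.cross_entropy(logits, y_b))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point Algorithm 1 captures is that the averaging happens inside the forward pass (`kcm_forward`), so gradients flow through every perturbed copy; this is the "code for the local averaging part" the paper says it adds on top of the official Mixup code.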
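The robustness numbers in the setup above come from I-FGSM attacks with a per-pixel L-infinity budget of 0.031 (CIFAR-10) or 0.03 (CIFAR-100) and 10 iterations. Below is a minimal sketch under those stated constants; the step size `eps / n_iter` and the assumption that pixels live in [0, 1] are my additions, not details the paper gives here.

```python
# Hedged sketch of the I-FGSM evaluation: 10 signed-gradient steps,
# projected to the eps-ball around the clean input (step size assumed).
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.031, n_iter=10):
    """Iterative FGSM: repeated signed-gradient ascent on the loss,
    clipped to the eps-ball and to the valid pixel range [0, 1]."""
    model.eval()
    x_adv = x.clone().detach()
    step = eps / n_iter  # assumed step size
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels valid
    return x_adv.detach()
```

Adversarial test accuracy would then be the model's accuracy on `i_fgsm(model, x, y)` batches, with the reported figure taken as the median over the last 10 epochs, as stated in the setup row above.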