Towards Robust ResNet: A Small Step but a Giant Leap
Authors: Jingfeng Zhang, Bo Han, Laura Wynter, Bryan Kian Hsiang Low, Mohan Kankanhalli
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on real-world datasets corroborates our analytical findings that a small h can indeed improve both its training and generalization robustness. (A sketch of the h-scaled residual step appears after the table.) |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science, National University of Singapore; ²RIKEN Center for Advanced Intelligence Project; ³IBM Research, Singapore |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | In this section, we conduct experiments on the vision-based CIFAR-10 dataset [Krizhevsky and Hinton, 2009] and the text-based AG-NEWS dataset [Zhang et al., 2015]. |
| Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split (e.g., percentages or sample counts for a validation set). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions). |
| Experiment Setup | Yes | Unless specified otherwise, the default optimizer is SGD with 0.9 momentum. We train a ResNet using the CIFAR-10 dataset for 80 epochs with an initial learning rate (LR) of 0.1 that is divided by 10 at epochs 40 and 60. We train another ResNet using the AG-NEWS dataset with a fixed LR of 0.1 for 15 epochs. (A sketch of this schedule appears after the table.) |
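
The robustness claim in the Research Type row rests on viewing a ResNet block as one Euler step, x_{l+1} = x_l + h·F(x_l), and shrinking the step factor h. Below is a minimal PyTorch sketch of such an h-scaled residual block; the convolutional layout is illustrative only and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class SmallStepResidualBlock(nn.Module):
    """Residual block with a small step factor h: x_{l+1} = x_l + h * F(x_l).

    Setting h = 1.0 recovers a standard residual block; the paper argues
    a small h improves training and generalization robustness.
    The layer shapes here are illustrative, not the paper's exact network.
    """

    def __init__(self, channels: int, h: float = 0.1):
        super().__init__()
        self.h = h  # small step factor applied to the residual branch
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale the residual branch by h before adding the skip connection.
        return x + self.h * self.residual(x)
```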
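
The Experiment Setup row translates directly into an optimizer and learning-rate schedule. The following is a hedged PyTorch sketch of the stated CIFAR-10 schedule; the stand-in model and the omitted data pipeline are assumptions for illustration, not artifacts from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the paper's ResNet; any nn.Module works here.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Default optimizer per the paper: SGD with 0.9 momentum, initial LR 0.1.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# CIFAR-10 schedule: LR divided by 10 at epochs 40 and 60, 80 epochs total.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[40, 60], gamma=0.1
)

for epoch in range(80):
    ...  # one training pass over CIFAR-10 (data pipeline omitted)
    scheduler.step()

# The AG-NEWS run instead keeps a fixed LR of 0.1 for 15 epochs (no scheduler).
```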