Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability

Authors: Mingjie Li, Lingshen He, Zhouchen Lin

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our ResNet with IE-Skips can largely improve the robustness and the generalization ability under adversarial attacks when compared with the vanilla ResNet of the same parameter size. On the MNIST and CIFAR benchmarks, we conduct experiments to verify the adversarial robustness of IE-ResNets, which replace the original skip connections with our IE-Skips in ResNets.
Researcher Affiliation | Academia | 1Key Lab. of Machine Perception (MoE), School of EECS, Peking University, Beijing 100871.
Pseudocode | Yes | Algorithm 1: Forward Propagation of the i-th Residual Stage with Our IE-Skips.
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository for the described methodology.
Open Datasets | Yes | MNIST (LeCun & Cortes, 2010), CIFAR-10 (Krizhevsky et al., 2009), and CIFAR-100 (Krizhevsky et al., 2009).
Dataset Splits | No | The paper mentions 'training and validation accuracies' in Figure 3, implying a validation set, but does not provide specific details on dataset splits (percentages, sample counts, or explicit splitting methodology) for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper states 'We implement all the experiments with PyTorch (Paszke et al., 2019)' but does not specify the version of PyTorch or any other software dependencies.
Experiment Setup | Yes | In the following experiments, we set δ = 8/255 and α = 2/255 on CIFAR if we use the PGD method for adversarial training, while α = 1/255 for evaluation. In addition to these methods, we also run Adam for 50 iterations with learning rate 6×10^-4 and c = 10 for C&W adversarial evaluation. As for MNIST, we set δ = 0.15 for FGSM adversarial training or evaluation. We train PreAct-ResNets (He et al., 2016b) and IE-ResNets (inner step size γ = 0.05) of different depths on clean data for 180 epochs with initial learning rate 0.05, which decays by a factor of 5 at the 80th, 120th, and 160th epochs.