Defending Against Adversarial Attacks via Neural Dynamic System

Authors: Xiyuan Li, Xin Zou, Weiwei Liu

NeurIPS 2022

Reproducibility variable, assessed result, and supporting LLM response:

Research Type: Experimental
  "In this section, we conduct experiments on the CIFAR-10 [16] and MNIST [17] datasets to evaluate the robustness of ASODE under different adversarial attacks. We follow the standard training, validation, and test splits in our experiments. Moreover, we compare the robustness of ASODE with ODE-Net [12], TisODE-Net [10], and SODEF [11]."

Researcher Affiliation: Academia
  Xiyuan Li, School of Computer Science, Wuhan University (Lee_xiyuan@outlook.com); Xin Zou, School of Computer Science, Wuhan University (zouxin2021@gmail.com); Weiwei Liu, School of Computer Science, Wuhan University (liuweiwei863@gmail.com)

Pseudocode: Yes
  The pseudocode of the ASODE algorithm, which brings the process described above together, is illustrated in Appendix B.

Open Source Code: No
  "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]"

Open Datasets: Yes
  "In this section, we conduct experiments on the CIFAR-10 [16] and MNIST [17] datasets to evaluate the robustness of ASODE under different adversarial attacks."

Dataset Splits: Yes
  "We follow the standard training, validation, and test splits in our experiments."

Hardware Specification: No
  The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.

Software Dependencies: No
  The paper mentions PyTorch as a framework but does not specify version numbers for PyTorch or any other software dependency.

Experiment Setup: Yes
  "During the training of ASODE, we first train the neural ODE for 50 epochs, after which we fix hθ and train feθ for another 100 epochs. We set the parameters α1 = 0.1 and α2 = 0.05 when training ASODE. In the below, the best results are marked in bold. ... To facilitate fair comparison, we set T = 5 based on the original papers of the comparison models."
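The two-stage schedule quoted in the Experiment Setup row (first train the full model, then fix one component and continue training only the other) can be sketched with a toy scalar model. Everything here is a hypothetical stand-in: `h` and `f` play the roles of the fixed and further-trained components, and the least-squares objective is illustrative, not the paper's actual ASODE loss.

```python
# Toy two-stage training sketch (illustrative stand-in, not ASODE itself):
# stage 1 updates all parameters; stage 2 freezes "h" and updates only "f".

def sgd_step(params, grads, frozen, lr=0.1):
    """One gradient-descent update, skipping any parameter listed in `frozen`."""
    return {k: (v if k in frozen else v - lr * grads[k]) for k, v in params.items()}

def grads_of(params, x, y):
    # Toy model: y_hat = f * (h * x), with squared-error loss 0.5 * (y_hat - y)^2.
    err = params["f"] * (params["h"] * x) - y
    return {"h": err * params["f"] * x, "f": err * params["h"] * x}

params = {"h": 0.5, "f": 0.5}
x, y = 1.0, 2.0

# Stage 1: 50 epochs, both components trained (mirrors the 50-epoch phase).
for _ in range(50):
    params = sgd_step(params, grads_of(params, x, y), frozen=set())

h_after_stage1 = params["h"]

# Stage 2: 100 epochs with "h" fixed, only "f" updated (mirrors the 100-epoch phase).
for _ in range(100):
    params = sgd_step(params, grads_of(params, x, y), frozen={"h"})
```

In a PyTorch implementation the same effect is usually achieved by setting `requires_grad_(False)` on the frozen module's parameters and passing only the trainable parameters to the optimizer.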