Towards Robust Neural Networks via Close-loop Control

Authors: Zhuotong Chen, Qianxiao Li, Zheng Zhang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6 NUMERICAL EXPERIMENTS: We test our proposed CLC-NN framework under various input data perturbations. Here we briefly summarize our experimental settings, and we refer readers to Appendix B for the details. ... Result Summary: Table 1 and Table 2 show the results for both CIFAR-10 and CIFAR-100 datasets on some neural networks from both standard training and adversarial training respectively.
Researcher Affiliation | Collaboration | 1 Department of Electrical & Computer Engineering, University of California, Santa Barbara, CA 93106; 2 Department of Mathematics, National University of Singapore, Singapore; 3 Institute of High Performance Computing, A*STAR, Singapore
Pseudocode | Yes | Algorithm 1: CLC-NN with the Pontryagin's Maximum Principle. (A hedged code sketch of this control loop follows the table.)
Open Source Code | Yes | A PyTorch implementation can be found at: https://github.com/zhuotongchen/Towards-Robust-Neural-Networks-via-Close-loop-Control.git
Open Datasets | Yes | Result Summary: Table 1 and Table 2 show the results for both CIFAR-10 and CIFAR-100 datasets on some neural networks from both standard training and adversarial training respectively.
Dataset Splits | No | The paper uses the CIFAR-10 and CIFAR-100 datasets for its experiments but does not explicitly provide training, validation, or test split percentages or sample counts.
Hardware Specification | No | The paper does not describe the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions a PyTorch implementation and the Adam optimizer but does not provide version numbers for PyTorch or any other software library.
Experiment Setup | Yes | We use Adam (Kingma & Ba, 2014) to maximize the Hamiltonian function (9) and keep the same hyperparameters (learning rate, maximum iterations) for each model against all perturbations. ... For the numerical results presented in the paper, we choose the maximum iteration that gives the best performance from one of [5, 10, 20, 30, 50]. (A sketch of this iteration-budget selection follows the table.)
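
The Pseudocode row above refers to Algorithm 1, which applies the Pontryagin Maximum Principle at inference time: per-layer control variables are optimized so that the hidden states of a (possibly perturbed) input are pulled back toward a learned manifold of clean states. Below is a minimal PyTorch sketch of that idea, under stated assumptions rather than the authors' actual implementation: clc_control, layers, and embeddings are hypothetical names, the controls are taken to be additive, and plain gradient descent on the summed running loss stands in for the paper's explicit costate computation (the two yield the same control gradients via the adjoint relation). The repository linked in the Open Source Code row contains the real implementation.

    import torch


    def clc_control(layers, embeddings, x, n_iters=20, lr=0.01):
        # Illustrative sketch, not the authors' API: additive per-layer
        # controls u_t are tuned with Adam so that each controlled state
        # stays close to a learned embedding manifold E_t (the running
        # loss); descending the total running loss plays the role of
        # maximizing the Hamiltonian in Eq. (9) of the paper.
        with torch.no_grad():  # uncontrolled pass, only to get state shapes
            states = [x]
            for layer in layers:
                states.append(layer(states[-1]))
        controls = [torch.zeros_like(s, requires_grad=True) for s in states[:-1]]

        opt = torch.optim.Adam(controls, lr=lr)
        for _ in range(n_iters):
            opt.zero_grad()
            z, loss = x, 0.0
            for layer, emb, u in zip(layers, embeddings, controls):
                z = z + u                                 # apply the control
                loss = loss + ((emb(z) - z) ** 2).mean()  # running loss
                z = layer(z)
            loss.backward()  # control gradients via back-propagation
            opt.step()

        with torch.no_grad():  # final controlled forward pass
            z = x
            for layer, u in zip(layers, controls):
                z = layer(z + u)
        return z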
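The Experiment Setup row, in turn, pins down only two tuning choices: Adam hyperparameters held fixed per model across all perturbations, and a control-iteration budget picked for best performance from [5, 10, 20, 30, 50]. A sketch of that selection, reusing the hypothetical clc_control above (robust_accuracy, head, and loader are likewise illustrative names, not from the paper):

    import torch


    def robust_accuracy(layers, embeddings, head, loader, n_iters):
        # Accuracy of classifier `head` when every (possibly perturbed)
        # input batch is first corrected by the clc_control sketch above.
        correct, total = 0, 0
        for x, y in loader:
            z = clc_control(layers, embeddings, x, n_iters=n_iters)
            with torch.no_grad():
                pred = head(z).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total


    def pick_iteration_budget(layers, embeddings, head, loader):
        # Mirror the quoted setup: Adam hyperparameters stay fixed, and
        # the iteration budget is whichever candidate performs best.
        return max([5, 10, 20, 30, 50],
                   key=lambda k: robust_accuracy(layers, embeddings, head, loader, k))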