You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Authors: Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 to 1/4 of the GPU time of the projected gradient descent (PGD) algorithm. |
| Researcher Affiliation | Academia | Dinghuai Zhang and Tianyuan Zhang, Peking University ({zhangdinghuai, 1600012888}@pku.edu.cn); Yiping Lu, Stanford University (yplu@stanford.edu); Zhanxing Zhu, School of Mathematical Sciences and Center for Data Science, Peking University, and Beijing Institute of Big Data Research (zhanxing.zhu@pku.edu.cn); Bin Dong, Beijing International Center for Mathematical Research and Center for Data Science, Peking University, and Beijing Institute of Big Data Research (dongbin@math.pku.edu.cn) |
| Pseudocode | Yes | Algorithm 1 YOPO (You Only Propagate Once); a hedged sketch of its decoupled update appears after this table. |
| Open Source Code | Yes | Our codes are available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once |
| Open Datasets | Yes | To demonstrate the effectiveness of YOPO, we conduct experiments on MNIST and CIFAR10. |
| Dataset Splits | No | The paper mentions using MNIST and CIFAR10 but does not specify the training, validation, and test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions "GPU time" but does not specify the type or model of GPUs, CPUs, or any other hardware components used for experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | As a comparison, we test YOPO-3-5 and YOPO-5-3 with a step size of 2/255. |
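
To connect Algorithm 1 to the GPU-time claim above, the following is a minimal PyTorch sketch of the YOPO-m-n perturbation search, assuming the network is split into a `first_layer` module and a `rest_net` module such that `model(x) == rest_net(first_layer(x))`. The module names, the `eps = 8/255` budget, and the function signature are illustrative assumptions, not values taken from the paper or its released code; the `m=5, n=3` defaults and the `2/255` step size mirror the YOPO-5-3 setup quoted in the table. Only the perturbation search is shown; in the paper the network weight update is interleaved with these passes during training.

```python
import torch
import torch.nn.functional as F

def yopo_perturb(first_layer, rest_net, x, y, m=5, n=3,
                 eps=8 / 255, step=2 / 255):
    """Hedged sketch of the YOPO-m-n adversarial update (Algorithm 1).

    Assumes model(x) == rest_net(first_layer(x)); both module names
    are hypothetical, not taken from the released code.
    """
    # Random start inside the L-infinity eps-ball, as in PGD training.
    eta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(m):  # outer loop: m full forward/backward passes
        eta = eta.detach().requires_grad_(True)
        z = first_layer(x + eta)
        loss = F.cross_entropy(rest_net(z), y)
        # p = dLoss/dz is computed once here and frozen for the inner loop.
        p = torch.autograd.grad(loss, z)[0].detach()
        for _ in range(n):  # inner loop: back-propagates only through first_layer
            eta = eta.detach().requires_grad_(True)
            surrogate = (p * first_layer(x + eta)).sum()  # <p, f_0(x + eta)>
            grad_eta = torch.autograd.grad(surrogate, eta)[0]
            # Sign-gradient ascent step, projected back onto the eps-ball.
            eta = (eta + step * grad_eta.sign()).clamp(-eps, eps)
    return (x + eta.detach()).clamp(0.0, 1.0)  # keep pixels in a valid range
```

The speedup comes from the inner loop: each of the n perturbation updates back-propagates only through `first_layer` against the frozen co-state `p`, so one full forward/backward pass is amortized over n adversarial steps, which is where the reported 1/5 to 1/4 GPU-time figure originates.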