Accelerated Stochastic Gradient-free and Projection-free Methods

Authors: Feihu Huang, Lue Tao, Songcan Chen

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The extensive experimental results on black-box adversarial attack and robust black-box classification demonstrate the efficiency of our algorithms.
Researcher Affiliation | Academia | College of Computer Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; MIIT Key Laboratory of Pattern Analysis & Machine Intelligence.
Pseudocode | Yes | Algorithm 1 Acc-SZOFW Algorithm; Algorithm 2 Acc-SZOFW* Algorithm (a simplified zeroth-order Frank-Wolfe sketch is given after the table).
Open Source Code | Yes | Our implementation is based on PyTorch and the code to reproduce our results is publicly available at https://github.com/TLMichael/Acc-SZOFW.
Open Datasets | Yes | In the experiment, we use the pre-trained DNN models on MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky et al., 2009) datasets as the target black-box models... In the experiment, we use four public real datasets. These data are from the website https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Dataset Splits | No | The paper specifies training and testing splits for one task ("half of the samples as training data and the rest as testing data"), but does not explicitly mention a separate validation set or describe how one was used for model selection or hyperparameter tuning. For the other task it relies on "pre-trained DNN models", implying the authors did no training or validation of those models themselves. (A loading-and-splitting sketch is given after the table.)
Hardware Specification | Yes | All of our experiments are conducted on a server with an Intel Xeon 2.60GHz CPU and an NVIDIA Titan Xp GPU.
Software Dependencies | No | Our implementation is based on PyTorch and the code to reproduce our results is publicly available at https://github.com/TLMichael/Acc-SZOFW. While PyTorch is mentioned, no specific version number is provided for reproducibility.
Experiment Setup | Yes | In the SAP experiment, we choose ε = 0.3 for MNIST and ε = 0.1 for CIFAR10. In the UAP experiment, we choose ε = 0.3 for both the MNIST and CIFAR10 datasets. For fair comparison, we choose the mini-batch size b = 20 for all stochastic zeroth-order methods. We set σ = 10 and θ = 10. For fair comparison, we choose the mini-batch size b = 100 for all stochastic zeroth-order methods. (These values are collected in the configuration sketch after the table.)
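
The paper's Algorithm 1 (Acc-SZOFW) and Algorithm 2 (Acc-SZOFW*) combine variance-reduced zeroth-order gradient estimators with projection-free Frank-Wolfe updates. The snippet below is a minimal sketch of a plain zeroth-order Frank-Wolfe step over an ℓ∞-ball constraint, written in PyTorch since the released code is PyTorch-based; it omits the paper's acceleration and variance-reduction machinery, and the estimator form, default values, and function names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def zo_gradient_estimate(loss_fn, x, num_dirs=20, mu=1e-3):
    """Two-point random-direction gradient estimator (illustrative only).

    loss_fn maps a tensor x to a scalar loss and is treated as a black box;
    num_dirs and mu are hypothetical defaults, not values from the paper.
    """
    grad = torch.zeros_like(x)
    for _ in range(num_dirs):
        u = torch.randn_like(x)
        grad += (loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu) * u
    return grad / num_dirs

def frank_wolfe_step(x, grad, epsilon, gamma):
    """One projection-free update over the l-infinity ball {v : ||v||_inf <= epsilon}.

    The linear minimization oracle over this ball has the closed form
    v = -epsilon * sign(grad); gamma in (0, 1] is the step size.
    """
    v = -epsilon * torch.sign(grad)   # LMO: minimizes <grad, v> over the ball
    return x + gamma * (v - x)        # convex-combination (Frank-Wolfe) update
```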
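
For the robust black-box classification experiments, the datasets come from the LIBSVM collection and the paper reports using half of the samples for training and the rest for testing. A minimal loading-and-splitting sketch, assuming scikit-learn is installed and using "a9a" as a placeholder file name (the actual four datasets are specified in the paper and repository):

```python
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split

# "a9a" is a placeholder; the LIBSVM-format files used in the paper are
# available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
X, y = load_svmlight_file("a9a")

# Reproduce the reported split: half of the samples for training, half for testing.
# The random_state value is an arbitrary choice, not taken from the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
```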
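
For reference, the reported hyperparameters can be collected in one place. Only the numeric values below come from the paper; the dictionary layout, key names, and the grouping of σ, θ, and b = 100 under the classification experiment reflect our reading of the setup description.

```python
# Numeric values quoted from the paper; structure and names are illustrative.
ADVERSARIAL_ATTACK = {
    "SAP": {"epsilon": {"MNIST": 0.3, "CIFAR10": 0.1}},
    "UAP": {"epsilon": {"MNIST": 0.3, "CIFAR10": 0.3}},
    "batch_size": 20,   # mini-batch size b for all stochastic zeroth-order methods
}
ROBUST_CLASSIFICATION = {
    "sigma": 10,
    "theta": 10,
    "batch_size": 100,  # mini-batch size b for all stochastic zeroth-order methods
}
```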