On the Convergence of Prior-Guided Zeroth-Order Optimization Algorithms
Authors: Shuyu Cheng, Guoqiang Wu, Jun Zhu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks. |
| Researcher Affiliation | Collaboration | Shuyu Cheng, Guoqiang Wu, Jun Zhu Dept. of Comp. Sci. and Tech., BNRist Center, State Key Lab for Intell. Tech. & Sys., Institute for AI, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, 100084, China Pazhou Lab, Guangzhou, 510330, China |
| Pseudocode | Yes | Algorithm 1 Greedy descent framework |
| Open Source Code | Yes | Our code is available at https://github.com/csy530216/pg-zoo. |
| Open Datasets | Yes | conduct score-based black-box targeted adversarial attacks on 500 images from MNIST |
| Dataset Splits | Yes | We set d = 256 for all test functions and set q such that each iteration of these algorithms costs 11 queries to the directional derivative oracle. ... In this part we set d = 500. ... We conduct score-based black-box targeted adversarial attacks on 500 images from MNIST. The target model is a simple CNN with 2 convolutional layers and 2 fully-connected layers trained on MNIST. The attack uses C&W loss function. We generate 500 images for attack, where target class is randomly selected from other classes. We run for 500 iterations for each image and report the median query number until success. The query budget is 5000. |
| Hardware Specification | No | The acknowledgements section mentions 'NVIDIA NVAIL Program with GPU/DGX Acceleration'. However, this is a general statement about support received and does not specify the exact GPU models, CPU types, or other hardware configurations used to run the experiments described in the paper. |
| Software Dependencies | No | The paper mentions PyTorch once, in the context of an efficient implementation of orthogonalization ('torch.linalg.qr'). However, it does not provide version numbers for PyTorch or any other software dependency needed to replicate the experiments. |
| Experiment Setup | Yes | For f1 and f2, we set L̂ to ground truth value L; for f3, we search L̂ for best performance for each algorithm. We set d = 256 for all test functions and set q such that each iteration of these algorithms costs 11 queries to the directional derivative oracle. ... In this part we set d = 500. ... We conduct score-based black-box targeted adversarial attacks on 500 images from MNIST. The target model is a simple CNN with 2 convolutional layers and 2 fully-connected layers trained on MNIST. The attack uses C&W loss function. We generate 500 images for attack, where target class is randomly selected from other classes. We run for 500 iterations for each image and report the median query number until success. The query budget is 5000. |
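The table above notes that each iteration queries a directional derivative oracle along a set of directions, and that orthogonalization is implemented efficiently with a QR factorization (the paper mentions `torch.linalg.qr`). A minimal sketch of that idea, using NumPy's `np.linalg.qr` as a stand-in: the function names, the forward-difference form, and the `d/q` scaling below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def orthonormal_directions(prior, q, rng):
    """Build q orthonormal search directions whose span contains a
    prior direction: stack the prior with random Gaussian columns,
    then orthonormalize via QR (analogous to torch.linalg.qr)."""
    d = prior.shape[0]
    M = rng.standard_normal((d, q))
    M[:, 0] = prior            # first column carries the prior direction
    Q, _ = np.linalg.qr(M)     # columns of Q are orthonormal
    return Q

def zo_grad_estimate(f, x, Q, mu=1e-4):
    """Zeroth-order gradient estimate from forward finite differences
    along each column of Q. The d/q scaling is one common choice for
    subspace estimators (an assumption here, not from the paper)."""
    d, q = Q.shape
    fx = f(x)
    g = np.zeros(d)
    for i in range(q):
        u = Q[:, i]
        g += (f(x + mu * u) - fx) / mu * u
    return (d / q) * g
```

With q = d the columns form a full orthonormal basis, so on a smooth function the estimate recovers the true gradient up to O(mu) finite-difference error; with q < d, each iteration costs only q + 1 function queries, matching the "11 queries per iteration" budget described above.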