ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization
Authors: Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on ImageNet and empirically show that ZO-AdaMM converges much faster to a solution of high accuracy compared with 6 state-of-the-art ZO optimization methods. In this section, we demonstrate the effectiveness of ZO-AdaMM by experiments on generating black-box adversarial examples. Our experiments will be performed on Inception V3 [45] using ImageNet [46]. |
| Researcher Affiliation | Collaboration | 1University of Minnesota, USA 2MIT-IBM Watson AI Lab, IBM Research, USA 3Northeastern University, USA 4Princeton University, USA |
| Pseudocode | Yes | Algorithm 1 ZO-AdaMM |
| Open Source Code | Yes | Code to reproduce experiments is released at the link https://github.com/KaidiXu/ZO-AdaMM. |
| Open Datasets | Yes | Our experiments will be performed on Inception V3 [45] using ImageNet [46]. |
| Dataset Splits | No | The paper mentions selecting 100 random images for experiments, which serves as a test set, but it does not specify explicit training, validation, or testing splits (e.g., percentages or counts) for the dataset used to train the models or for the attack method itself. It refers to Appendix 4 for details on experiment setups, which is not provided in the given text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running its experiments. It mentions using Inception V3, a pre-trained model, but no details on the computational resources used for the adversarial attack experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies (e.g., libraries, frameworks) with version numbers. It mentions Inception V3, which implies a deep learning framework, but no version information. |
| Experiment Setup | No | The paper states, "We refer to Appendix 4 for details on experiment setups." However, Appendix 4 is not provided in the given text. While it mentions some high-level setup like "every method takes the same number of queries per iteration" and "every algorithm is initialized by zero perturbation," it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed system-level training settings in the main content. |
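For orientation, the method named in the Pseudocode row (Algorithm 1, ZO-AdaMM) combines a two-point zeroth-order gradient estimate with an AMSGrad-style adaptive momentum update. The following is a minimal sketch of that idea, not the authors' released implementation; the function name `zo_adamm`, the default hyperparameters, and the toy quadratic objective are all illustrative assumptions.

```python
import numpy as np

def zo_adamm(f, x0, lr=0.05, mu=1e-3, beta1=0.9, beta2=0.999,
             eps=1e-8, iters=500, seed=0):
    """Sketch of a ZO-AdaMM-style loop: a random-direction finite-difference
    gradient estimate fed into an AMSGrad-like adaptive update.
    Hyperparameter defaults here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    m = np.zeros(d)        # first moment (momentum)
    v = np.zeros(d)        # second moment
    v_hat = np.zeros(d)    # non-decreasing second moment (AMSGrad correction)
    for _ in range(iters):
        # Two-point zeroth-order gradient estimate along a random unit direction:
        # only function *queries* are used, never true gradients.
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g = d * (f(x + mu * u) - f(x)) / mu * u
        # Adaptive momentum update with the max-clipped second moment.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)
        x -= lr * m / (np.sqrt(v_hat) + eps)
    return x

# Usage: minimize a black-box quadratic using function queries only.
f = lambda x: float(np.sum((x - 1.0) ** 2))
x_final = zo_adamm(f, np.zeros(5))
```

The sketch omits the projection step and convergence safeguards of the full algorithm; in the paper's adversarial-attack setting, `f` would be the black-box attack loss queried against Inception V3.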