Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
Authors: Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh
ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide a convergence analysis of the new algorithm and conduct experiments on several models on MNIST, CIFAR-10 and ImageNet. We conduct comprehensive experiments on several datasets and models. |
| Researcher Affiliation | Collaboration | Minhao Cheng (1), Simranjit Singh (1), Patrick Chen (1), Pin-Yu Chen (2), Sijia Liu (2), Cho-Jui Hsieh (1); (1) Department of Computer Science, UCLA; (2) IBM Research |
| Pseudocode | Yes | Algorithm 1: Sign-OPT attack; Algorithm 2: SVM-OPT attack (a hedged sketch of the core sign-based gradient estimate in Algorithm 1 appears after this table) |
| Open Source Code | Yes | We provide our implementation publicly: https://github.com/cmhcbb/attackbox |
| Open Datasets | Yes | We evaluate the SIGN-OPT algorithm for attacking black-box models in a hard-label setting on three different standard datasets MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al.) and ImageNet-1000 (Deng et al., 2009) |
| Dataset Splits | No | The paper mentions sampling examples from a "validation set" for evaluation, but it does not provide specific details on the train/validation/test dataset splits (e.g., exact percentages, sample counts, or explicit references to predefined splits for reproducibility). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory specifications) used for running its experiments. It makes no mention of the computational environment in terms of hardware. |
| Software Dependencies | No | The paper mentions using "Foolbox" and "torchvision" but does not provide specific version numbers for these or any other software dependencies, which would be necessary for precise replication. |
| Experiment Setup | Yes | To optimize Algorithm 1, we estimate the step size η using the same line search procedure implemented in Cheng et al. (2019). Similar to Cheng et al. (2019), g(θ) in the last step of Algorithm 1 is approximated via binary search. The initial θ0 in Algorithm 1 is calculated by evaluating g(θ) on 100 random directions and taking the best one. After fine-tuning on a small set of examples, we found that Q = 200 provides a good balance between the two. Hence, we set the value of Q = 200 for all our experiments in this section. (A hedged sketch of the binary-search step appears after this table.) |
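
To make the Pseudocode row concrete, below is a minimal Python sketch of the sign-based gradient estimate at the core of Algorithm 1 (Sign-OPT attack), assuming an untargeted attack and a hard-label oracle `model_predict(x) -> label`. The function and parameter names (`sign_grad_estimate`, `eps`) are illustrative assumptions, not the authors' released code; the official implementation is at https://github.com/cmhcbb/attackbox.

```python
import numpy as np

def sign_grad_estimate(model_predict, x0, y0, theta, g_theta, Q=200, eps=1e-3):
    """Hedged sketch of the Sign-OPT gradient estimate.

    model_predict(x) -> hard label (the only access to the black-box model);
    x0, y0: original input and its true label;
    theta: current search direction;
    g_theta: current boundary distance g(theta) along theta.
    Names and defaults here are assumptions for illustration only.
    """
    grad = np.zeros_like(theta)
    for _ in range(Q):
        u = np.random.randn(*theta.shape)          # random Gaussian direction
        new_theta = theta + eps * u
        new_theta = new_theta / np.linalg.norm(new_theta)
        # One hard-label query decides sign(g(theta + eps*u) - g(theta)):
        # if x0 + g_theta * new_theta is still classified as y0, the boundary
        # along the perturbed direction is farther away (sign = +1), else -1.
        sign = 1.0 if model_predict(x0 + g_theta * new_theta) == y0 else -1.0
        grad += sign * u
    return grad / Q
```

Each of the Q perturbed directions costs a single hard-label query, which is why the paper's choice of Q = 200 directly bounds the per-iteration query budget.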
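The Experiment Setup row also states that g(θ), the distance from x0 to the decision boundary along direction θ, is approximated via binary search. A minimal sketch of that step follows, assuming an initial radius `init_lbd` already known to be adversarial along θ; the helper name and tolerance are assumptions, not the paper's exact settings.

```python
import numpy as np

def binary_search_g(model_predict, x0, y0, theta, init_lbd, tol=1e-5):
    """Hedged sketch of approximating g(theta) via binary search.

    init_lbd must be a radius for which x0 + init_lbd * theta is already
    adversarial (classified differently from y0). Returns an upper estimate
    of the boundary distance along the normalized direction theta.
    """
    theta = theta / np.linalg.norm(theta)
    lo, hi = 0.0, init_lbd
    # Shrink the bracket around the decision boundary until it is within tol.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if model_predict(x0 + mid * theta) != y0:   # mid is already adversarial
            hi = mid
        else:
            lo = mid
    return hi
```

The line-search step size η and the initialization over 100 random directions mentioned in the table reuse the same g(θ) evaluation, so this routine dominates the query cost outside the gradient estimate.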