Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
Authors: Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on the ImageNet and CIFAR-10 datasets demonstrate that our approach can consume only a small number of queries to achieve the low-magnitude distortion. |
| Researcher Affiliation | Academia | 1 School of Software, BNRist, Tsinghua University, Beijing, China 2 Department of Computer Science and Engineering, University at Buffalo, Buffalo NY, USA 3 Key Lab. of Machine Perception, School of Artificial Intelligence, Peking University, Beijing, China 4 Institute for Artificial Intelligence, Peking University, Beijing, China |
| Pseudocode | Yes | Algorithm 1 Tangent Attack |
| Open Source Code | Yes | The implementation source code is released online at https://github.com/machanic/TangentAttack. |
| Open Datasets | Yes | Datasets. TA and G-TA are evaluated on two datasets, namely CIFAR-10 and ImageNet, with image resolutions of 32×32×3 and 299×299×3, respectively. |
| Dataset Splits | Yes | We randomly select 1,000 correctly classified images from their validation sets for experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions the 'PyTorch framework' and 'NumPy' but does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | The initial batch size B0 is set to 100, which means the algorithm samples 100 probes for estimating a gradient at the first iteration. The threshold γ that controls the termination of the binary search is set to 1.0 on the CIFAR-10 dataset and 1,000 on the ImageNet dataset. The radius ratio r is set to 1.5 on the CIFAR-10 dataset and 1.1 on the ImageNet dataset. Besides, r is also set to 1.5 when attacking defense models. |
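The hyperparameters reported in the Experiment Setup row can be collected into a small per-dataset configuration sketch. This is an illustrative reconstruction, not the authors' released code: the key names (`init_batch_size`, `binary_search_threshold`, `radius_ratio`) and the `get_config` helper are our own naming, assumed purely for clarity.

```python
# Hypothetical configuration sketch of the paper's reported hyperparameters.
# Identifier names are ours, not from the authors' TangentAttack repository.
CONFIGS = {
    "cifar10": {
        "init_batch_size": 100,            # B0: probes for the first gradient estimate
        "binary_search_threshold": 1.0,    # gamma: terminates the binary search
        "radius_ratio": 1.5,               # r: hemisphere radius ratio
    },
    "imagenet": {
        "init_batch_size": 100,
        "binary_search_threshold": 1000.0,
        "radius_ratio": 1.1,
    },
}

def get_config(dataset: str, defense_model: bool = False) -> dict:
    """Return per-dataset settings; the paper sets r = 1.5 against defense models."""
    cfg = dict(CONFIGS[dataset])
    if defense_model:
        cfg["radius_ratio"] = 1.5
    return cfg
```

For example, `get_config("imagenet", defense_model=True)` would yield the ImageNet thresholds but with the defense-model radius ratio of 1.5.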