MALT Powers Up Adversarial Attacks
Authors: Odelia Melamed, Gilad Yehudai, Adi Shamir
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our attack wins over the current state-of-the-art AutoAttack on the standard benchmark datasets CIFAR-100 and ImageNet, and for a variety of robust models. |
| Researcher Affiliation | Academia | Weizmann Institute of Science, Israel (odelia.melamed@weizmann.ac.il); Center for Data Science, New York University (gy2219@nyu.edu); Weizmann Institute of Science, Israel (adi.shamir@weizmann.ac.il) |
| Pseudocode | Yes | Algorithm 1, "MALT attack algorithm" (a hedged sketch follows the table) |
| Open Source Code | No | In the final version we will link to a (non-anonymized) GitHub page with an API for our attack. |
| Open Datasets | Yes | on the standard benchmark datasets CIFAR-100 and ImageNet |
| Dataset Splits | No | The paper frequently mentions using 'test datasets' and standard benchmarks like CIFAR-100 and ImageNet, but it does not explicitly detail the training, validation, and test splits (e.g., percentages or counts) or explicitly refer to a 'validation set' as part of its own experimental setup. |
| Hardware Specification | Yes | All experiments were done using Tesla V-100 GPUs with 16 GB of memory: the CIFAR-100 attacks all ran on 2 such GPUs, and the ImageNet attacks on 3. |
| Software Dependencies | No | The paper mentions using the 'PyTorch library' and notes that 'newer versions of the Python libraries in use give slightly different results', but it does not specify exact version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | For MALT, we consider calculating the score for the c = 100 classes with the highest model's confidence and attacking the top a = 9 classes according to this score. All the hyperparameters of APGD and the other attacks used in AutoAttack are set to their default values. (A minimal driver sketch follows the table.) |
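
The Pseudocode row above refers to the paper's Algorithm 1 (MALT). As a hedged illustration only, the sketch below implements one plausible reading of the targeting step: score each candidate class by the logit gap to the current prediction, normalized by the norm of the gradient difference, i.e. a first-order ("almost linear") estimate of the perturbation needed to flip the prediction toward that class. The exact scoring formula in Algorithm 1 may differ; `malt_target_scores` and its signature are our assumptions, not the authors' API.

```python
import torch

def malt_target_scores(model, x, c=100):
    """Rank candidate target classes for a single input x.

    Hedged sketch, not the paper's Algorithm 1: assumes the score is the
    logit gap to each candidate class divided by the norm of the
    corresponding gradient difference, a locally linear estimate of the
    distance needed to flip the prediction. Smaller estimate = more
    promising target, so candidates are returned closest-first.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)       # (num_classes,)
    y = logits.argmax()                             # current prediction
    # Restrict to the c classes with the highest confidence, excluding y.
    top = logits.topk(min(c, logits.numel())).indices
    candidates = [j for j in top.tolist() if j != y.item()]

    scores = {}
    for j in candidates:
        gap = logits[y] - logits[j]                 # how far behind class j is
        grad = torch.autograd.grad(logits[j] - logits[y], x,
                                   retain_graph=True)[0]
        # Estimated perturbation size under a locally linear model.
        scores[j] = (gap / (grad.norm() + 1e-12)).item()
    return sorted(scores, key=scores.get)           # closest targets first
```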
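
Following the Experiment Setup row (c = 100 candidate classes, top a = 9 targets, APGD at default hyperparameters), a minimal driver could chain the ranking above into targeted attacks. Here `apgd_targeted` is a hypothetical placeholder for any targeted APGD implementation with default settings; the authors' actual code was not released at submission time, so this is a sketch of the control flow, not their method.

```python
def malt_attack(model, x, apgd_targeted, c=100, a=9):
    """Try targeted APGD against the a best-scoring target classes.

    `apgd_targeted` is a hypothetical callable (x, target) -> x_adv;
    `malt_target_scores` is the ranking sketch defined above. Stops at
    the first target that actually changes the model's prediction.
    """
    for target in malt_target_scores(model, x, c=c)[:a]:
        x_adv = apgd_targeted(x, target)
        if model(x_adv.unsqueeze(0)).argmax(dim=1).item() == target:
            return x_adv                            # first success wins
    return None                                     # no target succeeded
```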