Constructing Semantics-Aware Adversarial Examples with a Probabilistic Perspective

Authors: Andi Zhang, Mingtian Zhang, Damon Wischik

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6 experiments; Table 1: Success rate (%) of the methods on MNIST; Table 2: Success rate (%) of the methods on ImageNet.
Researcher Affiliation | Academia | Andi Zhang, Computer Laboratory, University of Cambridge, az381@cantab.ac.uk; Mingtian Zhang, Centre for Artificial Intelligence, University College London, m.zhang@cs.ucl.ac.uk; Damon Wischik, Computer Laboratory, University of Cambridge, djw1005@cam.ac.uk
Pseudocode | Yes | Algorithm 1: Sampling from p_adv by EBM; Algorithm 2: Sampling from p_adv by diffusion model; Algorithm 3: Rejection Sampling and Sample Refinement; Algorithm 4: Sampling from p_adv by diffusion model
Open Source Code | Yes | Code can be found at https://github.com/andiac/AdvPP.
Open Datasets | Yes | We use MNIST [25] and ImageNet [9] in this work. The MNIST dataset is available under the terms of the Creative Commons Attribution-ShareAlike 3.0 license.
Dataset Splits | No | The paper mentions using images from the MNIST test set and refers to adversarially trained models, but does not explicitly detail the train/validation splits used for its own experiments.
Hardware Specification | Yes | We conducted our experiments using multiple workstations, each equipped with an NVIDIA RTX 4090 GPU (24GB VRAM) and 64GB of system memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | Our method is evaluated across three hyperparameter configurations: c = 5, c = 10, and c = 20.
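
The Pseudocode row above lists Algorithm 3, "Rejection Sampling and Sample Refinement". For orientation only, the snippet below is a minimal, hypothetical sketch of a generic rejection-sampling loop for adversarial candidates: draw a sample from an approximate adversarial distribution p_adv and keep it only if the victim classifier assigns the desired label. The names rejection_sample, sample_fn, classifier, and target_label are placeholders and are not taken from the paper or its repository; the refinement step of Algorithm 3 is omitted.

import torch


def rejection_sample(sample_fn, classifier, target_label: int, max_tries: int = 100):
    """Hypothetical sketch (not the paper's implementation): draw candidates from an
    approximate adversarial distribution p_adv via sample_fn and return the first one
    that the victim classifier assigns to target_label."""
    with torch.no_grad():
        for _ in range(max_tries):
            x = sample_fn()                      # candidate (batch of size 1) drawn from p_adv
            pred = classifier(x).argmax(dim=-1)  # victim model's predicted label
            if int(pred.item()) == target_label:
                return x                         # accepted adversarial example
    return None                                  # sampling budget exhausted without acceptance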