Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform
Authors: Mitali Bafna, Jack Murtagh, Nikhil Vyas
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We give experimental results on the Jacobian-based Saliency Map Attack (JSMA) and the Carlini-Wagner (CW) $L_0$ attack on the MNIST and Fashion-MNIST datasets as well as the Adversarial Patch on the ImageNet dataset. |
| Researcher Affiliation | Academia | Mitali Bafna, School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, USA (mitalibafna@g.harvard.edu); Jack Murtagh, School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, USA (jmurtagh@g.harvard.edu); Nikhil Vyas, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA (nikhilv@mit.edu) |
| Pseudocode | Yes | Algorithm 1 Iterative Hard Thresholding (IHT) [BCDH10]. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | We tested both JSMA and CW on two datasets: the MNIST handwritten digits [LeC98] and the Fashion-MNIST [XRV17] dataset of clothing images. ... We took 700 random images from ImageNet and for classification we used a pretrained ResNet-50 network [HZRS15]. |
| Dataset Splits | No | The paper mentions 'training datasets' and reports 'test accuracy', but it does not specify explicit training/validation/test splits, percentages, or sample counts for these splits. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or cloud resources) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'cross-entropy loss' but does not provide specific version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For both datasets we used a neural network composed of a convolutional layer (32 kernels of 3x3), max pooling layer (2x2), convolutional layer (64 kernels of 3x3), max pooling layer (2x2), fully connected layer (128 neurons) with dropout (rate = .25) and an output softmax layer (10 neurons). We used the Adam optimizer with cross-entropy loss and ran it for 10 epochs over the training datasets. For each dataset, we trained our neural network only on images that were projected onto their top-k 2D-DCT coefficients. Here k is a parameter we tuned depending on the dataset (for MNIST k = 40 and for Fashion-MNIST k = 35). |
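The recovery routine the table points to (Algorithm 1, Iterative Hard Thresholding from [BCDH10]) can be sketched in Python. The fixed step size `mu` and the iteration count are my additions for numerical stability, not details taken from the paper:

```python
import numpy as np

def iht(y, A, k, iters=300):
    """Iterative Hard Thresholding (IHT) sketch, after [BCDH10]:
    recover a k-sparse x from measurements y ~ A @ x by alternating
    a gradient step with hard thresholding to the top-k entries."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2    # step size (my choice, for stability)
    for _ in range(iters):
        x = x + mu * (A.T @ (y - A @ x))    # gradient step on 0.5 * ||y - A x||^2
        small = np.argsort(np.abs(x))[:-k]  # indices of all but the k largest
        x[small] = 0.0                      # hard-threshold to a k-sparse vector
    return x
```

For well-conditioned measurement matrices (e.g. a Gaussian `A` with sufficiently many rows relative to `k`), this projected-gradient loop recovers the sparse signal to high accuracy.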
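The top-k 2D-DCT projection described in the experiment setup (k = 40 for MNIST, k = 35 for Fashion-MNIST) can be sketched with SciPy; the `ortho` normalization is my assumption about the transform's scaling, which the paper does not specify:

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_top_k_dct(img, k):
    """Keep only the k largest-magnitude 2D-DCT coefficients of an
    image and invert back to pixel space, as in the paper's
    preprocessing (normalization choice is an assumption)."""
    coeffs = dctn(img, norm="ortho")        # 2D type-II DCT
    flat = np.abs(coeffs).ravel()
    thresh = np.partition(flat, -k)[-k]     # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0   # zero everything below it
    return idctn(coeffs, norm="ortho")      # back to pixel space
```

Per the setup, e.g. `project_top_k_dct(x, 40)` would be applied to each MNIST image before both training and classification.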