Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Authors: Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments suggest that this classifier may have good robustness properties even for reasonable data set sizes. |
| Researcher Affiliation | Academia | 1University of California, San Diego 2University of Wisconsin-Madison. |
| Pseudocode | Yes | Algorithm 1 Robust_1NN(Sn, r, Δ, δ, x); Algorithm 2 Confident-Label(Sn, Δ, δ, x) |
| Open Source Code | Yes | Code available at: https://github.com/EricYizhenWang/robust_nn_icml |
| Open Datasets | Yes | We use three datasets: Halfmoon, MNIST 1v7, and Abalone, with differing data sizes relative to dimension. Halfmoon is a popular 2-dimensional synthetic data set for non-linear classification. The MNIST 1v7 data set is a subset of the 784-dimensional MNIST data. Finally, for the Abalone dataset (Lichman, 2013)... |
| Dataset Splits | Yes | We use a training set of size 2000 and a test set of size 1000 generated with standard deviation σ = 0.2. The MNIST 1v7 data set is a subset of the 784-dimensional MNIST data. For training, we use 1000 images each of Digit 1 and 7, and for test, 500 images of each digit. Finally, for the Abalone dataset (Lichman, 2013), our classification task is to distinguish whether an abalone is older than 12.5 years based on 7 physical measurements. For training, we use 500 and for test, 100 samples. In addition, a validation set with the same size as the test set is generated for each experiment for parameter tuning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions general methods such as a kernel classifier, a neural network, and the fast gradient sign method (FGSM), but does not specify any software names with version numbers. |
| Experiment Setup | Yes | For simplicity, we set Δ = 0.45, δ = 0.1 and tune r on the validation set. |
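The Halfmoon setup described above (training set of 2000, test set of 1000, noise σ = 0.2, evaluated with a nearest-neighbor classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the `make_halfmoon` generator is an assumed construction of the standard two-moons data, and `nn1_predict` is a plain 1-NN baseline rather than the paper's Robust_1NN algorithm.

```python
import numpy as np

def make_halfmoon(n, sigma=0.2, rng=None):
    # Assumed construction of the two-moons synthetic data set;
    # sigma=0.2 matches the standard deviation reported in the paper.
    rng = rng if rng is not None else np.random.default_rng(0)
    t = rng.uniform(0.0, np.pi, n)
    half = n // 2
    X = np.empty((n, 2))
    y = np.empty(n, dtype=int)
    # Upper moon and lower (shifted, flipped) moon.
    X[:half] = np.c_[np.cos(t[:half]), np.sin(t[:half])]
    X[half:] = np.c_[1.0 - np.cos(t[half:]), 0.5 - np.sin(t[half:])]
    y[:half], y[half:] = 0, 1
    X += rng.normal(scale=sigma, size=X.shape)  # Gaussian label noise in feature space
    return X, y

def nn1_predict(X_train, y_train, X_query):
    # Vanilla 1-nearest-neighbor prediction under Euclidean distance.
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

# Split sizes as reported: 2000 training points, 1000 test points.
X_tr, y_tr = make_halfmoon(2000, rng=np.random.default_rng(1))
X_te, y_te = make_halfmoon(1000, rng=np.random.default_rng(2))
accuracy = (nn1_predict(X_tr, y_tr, X_te) == y_te).mean()
```

A validation set of the same size as the test set would be generated the same way for tuning the robustness radius r.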