On Robustness to Adversarial Examples and Polynomial Optimization

Authors: Pranjal Awasthi, Abhratanu Dutta, Aravindan Vijayaraghavan

NeurIPS 2019

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental
  "We empirically demonstrate the effectiveness of these attacks on real data."

Researcher Affiliation: Academia
  Pranjal Awasthi, Department of Computer Science, Rutgers University (pranjal.awasthi@rutgers.edu); Abhratanu Dutta, Department of Computer Science, Northwestern University (abhratanudutta2020@u.northwestern.edu); Aravindan Vijayaraghavan, Department of Computer Science, Northwestern University (aravindv@northwestern.edu)

Pseudocode: Yes
  "Figure 1: The SDP-based algorithm for the degree-2 optimization problem. ... Figure 2: Convex program to find a PTF sgn(g(x)) ∈ F with zero robust empirical error. ... Figure 3: The SDP-based algorithm for Problem (2)."

Open Source Code: No
  No statement about open-sourcing code, and no link to a repository, is provided. The paper mentions future work on making the analysis practical.

Open Datasets: Yes
  "We use the MNIST data set."

Dataset Splits: No
  The paper does not give specific percentages or counts for training, validation, and test splits. It mentions dividing a test set into PGDpass and PGDfail for its experiments, but not the overall dataset splits needed for reproduction.

Hardware Specification: No
  "The SDP has d + k + 1 vector variables, and takes about 200s per instance on a standard desktop." The term "standard desktop" is too vague and lacks specific hardware details.

Software Dependencies: No
  The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, or a specific SDP solver).

Experiment Setup: Yes
  "Our 2-layer neural network has d = 784 input units, k = 1024 hidden units and 10 output units. ... As in [24] we first choose ε = 0.3 ... We also run the PGD attack on the network with ε = 0.01."
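The experiment-setup row describes a 2-layer ReLU network (784 inputs, 1024 hidden units, 10 outputs) attacked with an L∞ PGD attack at ε = 0.3 and ε = 0.01. A minimal NumPy sketch of such a setup follows; only the layer sizes and ε values come from the paper, while the random weights, step size, and iteration count are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-layer ReLU network with the sizes reported in the paper: 784 -> 1024 -> 10.
# Weights are random placeholders; the paper's network is trained on MNIST.
d, k, c = 784, 1024, 10
W1 = rng.normal(scale=0.05, size=(k, d))
W2 = rng.normal(scale=0.05, size=(c, k))

def logits(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def loss_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    p[y] -= 1.0                                  # dL/dz for cross-entropy
    h_mask = (W1 @ x > 0).astype(float)          # ReLU gate on hidden units
    return W1.T @ (h_mask * (W2.T @ p))

def pgd_attack(x0, y, eps=0.3, step=0.01, iters=40):
    """L-infinity PGD: ascend the loss, then project back into the eps-ball."""
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)   # random start in the ball
    for _ in range(iters):
        x = x + step * np.sign(loss_grad(x, y))      # signed-gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)           # project to the eps-ball
        x = np.clip(x, 0.0, 1.0)                     # keep a valid pixel range
    return x

x0 = rng.uniform(0, 1, size=d)         # stand-in for an MNIST image
x_adv = pgd_attack(x0, y=3, eps=0.3)
print(np.abs(x_adv - x0).max() <= 0.3 + 1e-9)   # True: perturbation stays in the ball
```

Running the same attack with `eps=0.01` matches the paper's second, smaller-perturbation PGD run; the ε-ball projection is what distinguishes PGD from an unconstrained gradient attack.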