Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees
Authors: Darshan Thaker, Paris Giampouras, René Vidal
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on digit and face classification demonstrate the effectiveness of the proposed approach. |
| Researcher Affiliation | Academia | Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD, USA. Correspondence to: Darshan Thaker <dbthaker@jhu.edu>, Paris Giampouras <parisg@jhu.edu>. |
| Pseudocode | Yes | In Algorithm 1 in the Appendix, we provide the details of the active set homotopy algorithm. |
| Open Source Code | No | The paper does not provide a direct link to the source code for the described methodology or state that it is being released. |
| Open Datasets | Yes | In this section, we present experiments on the Extended Yale B Face dataset and the MNIST dataset. |
| Dataset Splits | No | The paper mentions training networks on the MNIST and Yale B datasets but does not explicitly provide train/validation/test splits, only training parameters such as epochs, learning rate, and batch size. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'cvxpy package' with the 'SCS solver' and the 'Advertorch library' but does not specify their version numbers (see the sketches after this table). |
| Experiment Setup | Yes | The network on MNIST is trained using SGD for 50 epochs with learning rate 0.1, momentum 0.5, and batch size 128. The ℓ∞ PGD adversary (ϵ = 0.3) used a step size α = 0.01 and was run for 100 iterations (a hedged code sketch of this configuration follows the table). |
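The reported hyperparameters translate into a short configuration sketch. Below is a minimal Python sketch, assuming the Advertorch library cited above; the network architecture is a placeholder (the paper's architecture is not reproduced in this table), and only the quoted hyperparameters come from the report.

```python
# Hedged sketch of the reported MNIST setup. The architecture is a
# placeholder; only the hyperparameters below are taken from the report.
import torch.nn as nn
import torch.optim as optim
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier; the paper's actual architecture is not given here.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Reported training configuration: SGD for 50 epochs, lr 0.1, momentum 0.5.
# The reported batch size of 128 would be set on the MNIST DataLoader.
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.5)

# Reported attack: l_inf PGD with eps = 0.3, step size 0.01, 100 iterations.
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(),
    eps=0.3,
    nb_iter=100,
    eps_iter=0.01,
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)

# Usage (given a batch of images x and labels y):
#   x_adv = adversary.perturb(x, y)
```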
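The table also cites the cvxpy package with the SCS solver. The sketch below illustrates the kind of block-sparse (group-lasso style) program that toolchain solves; the dictionary `D`, the block partition, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged illustration of a block-sparse recovery program solved with
# cvxpy + SCS. All problem data here are synthetic placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, block_size, n_blocks = 50, 10, 8
D = rng.standard_normal((m, block_size * n_blocks))  # stacked dictionary
x = rng.standard_normal(m)                           # observed signal

c = cp.Variable(block_size * n_blocks)
blocks = [c[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
lam = 0.1  # illustrative regularization weight, not from the paper

# The sum of per-block l2 norms promotes solutions supported on few blocks.
objective = cp.Minimize(
    cp.sum_squares(D @ c - x)
    + lam * sum(cp.norm(b, 2) for b in blocks)
)
prob = cp.Problem(objective)
prob.solve(solver=cp.SCS)

print("recovered coefficients:", c.value)
```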