Adversarial Robustness of Supervised Sparse Coding

Authors: Jeremias Sulam, Ramchandran Muthukumar, Raman Arora

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (6 experiments) | "In this section, we illustrate the robustness certificate guarantees both in synthetic and real data, as well as the trade-offs between constants in our sample complexity result. ... We use the MNIST dataset for this analysis."
Researcher Affiliation | Academia | Jeremias Sulam, Johns Hopkins University, jsulam1@jhu.edu; Ramchandran Muthukumar, Johns Hopkins University, rmuthuk1@jhu.edu; Raman Arora, Johns Hopkins University, arora@cs.jhu.edu
Pseudocode | No | The paper does not contain structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Code to reproduce all of our experiments is made available at our github repository."
Open Datasets | Yes | "Image data: We now demonstrate that a positive encoder gap exists for natural images as well. In Figure 1b we similarly depict the value of τs(x), as a function of s, for an encoder computed on MNIST digits and CIFAR images (from a validation set) with learned dictionaries (further details in Section 6). ... We use the MNIST dataset for this analysis." (A hedged sketch of this Lasso encoder follows the table.)
Dataset Splits | Yes | "In Figure 1b we similarly depict the value of τs(x), as a function of s, for an encoder computed on MNIST digits and CIFAR images (from a validation set) with learned dictionaries (further details in Section 6). ... To address this, we split the data equally into a training set and a development set: the former is used to learn the dictionary, and the latter to provide a high probability bound on the event that τs(x) > τ*s. This is to ensure that the random samples of the encoder margin are i.i.d. for measure concentration." (See the development-set confidence-bound sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. It discusses experimental settings and datasets but omits hardware specifications.
Software Dependencies | No | The paper mentions using 'Adam [Kingma and Ba, 2014]' for stochastic gradient descent, but does not provide specific version numbers for any software components, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | "We train a model with 256 atoms by minimizing the following regularized empirical risk using stochastic gradient descent (employing Adam [Kingma and Ba, 2014]; the implementation details are deferred to Appendix D)... Recall that ϕD(x) depends on λ, and we train two different models with two values for this parameter (λ = 0.2 and λ = 0.3)." (See the training-loop sketch after the table.)
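
The encoder ϕD(x) quoted above is the Lasso encoder, ϕD(x) = argmin_z ½‖x − Dz‖² + λ‖z‖₁, evaluated with a learned dictionary D. The paper does not state which solver it uses, so the sketch below is only a minimal illustration using plain ISTA with step size 1/L (L the largest eigenvalue of DᵀD); the random dictionary and input are placeholders standing in for the learned 256-atom model and an MNIST image, and the iteration count is an assumption.

```python
import numpy as np

def lasso_encoder(x, D, lam, n_iter=200):
    """Approximate phi_D(x) = argmin_z 0.5*||x - D z||^2 + lam*||z||_1 via ISTA.

    ISTA is used here only as one plausible solver; the paper does not
    specify how the Lasso encoder is computed.
    """
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the quadratic term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)               # gradient of 0.5*||x - D z||^2
        z = z - grad / L                       # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return z

# Toy usage with a random dictionary (placeholder for the learned 256-atom model).
rng = np.random.default_rng(0)
D = rng.standard_normal((784, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x = rng.standard_normal(784)
z = lasso_encoder(x, D, lam=0.2)
print("support size:", np.count_nonzero(z))
```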
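The dataset-split row explains why the data are split in half: the dictionary is learned on the training half, so the encoder-gap values computed on the development half are i.i.d. and measure concentration applies. The sketch below covers only that second step. It takes a vector of development-set gap values τs(x), however they were computed, and returns a lower confidence bound on P(τs(X) > τ*s) from the empirical frequency minus a Hoeffding deviation term. The function name, the threshold, the confidence level δ, and the random placeholder gaps are illustrative assumptions, not the paper's procedure verbatim.

```python
import numpy as np

def gap_probability_lower_bound(dev_gaps, tau_star, delta=0.05):
    """Hoeffding-style lower confidence bound on P(tau_s(X) > tau_star).

    dev_gaps : encoder-gap values computed on a held-out development set
               (i.i.d. because the dictionary was learned on the other half).
    Returns a bound that holds with probability at least 1 - delta.
    """
    n = len(dev_gaps)
    empirical = np.mean(np.asarray(dev_gaps) > tau_star)   # empirical frequency
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))        # Hoeffding deviation term
    return max(empirical - slack, 0.0)

# Placeholder development-set gaps standing in for tau_s(x) on validation images.
rng = np.random.default_rng(1)
dev_gaps = rng.uniform(0.0, 0.1, size=5000)
print(gap_probability_lower_bound(dev_gaps, tau_star=0.02, delta=0.05))
```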
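The experiment-setup row says a 256-atom model is trained with Adam by minimizing a regularized empirical risk, with the encoder's λ set to 0.2 or 0.3; the exact loss and hyperparameters are deferred to the paper's Appendix D. Below is a hedged PyTorch sketch of one plausible instantiation: a dictionary D and linear classifier w trained end-to-end through an unrolled ISTA encoder with a hinge loss and a Frobenius-norm penalty on D. The unrolling depth, learning rate, regularizer weight, binary-label setup, and random data are all assumptions, not the paper's settings.

```python
import torch

def ista_encoder(x, D, lam, n_iter=30):
    """Differentiable unrolled ISTA approximation of the Lasso encoder phi_D(x)."""
    L = torch.linalg.matrix_norm(D, ord=2) ** 2  # spectral norm squared
    z = torch.zeros(x.shape[0], D.shape[1])
    for _ in range(n_iter):
        grad = (z @ D.T - x) @ D                 # gradient of 0.5*||x - z D^T||^2 w.r.t. z
        z = torch.nn.functional.softshrink(z - grad / L, lambd=(lam / L).item())
    return z

# Hypothetical setup: 784-dim inputs (e.g. MNIST), 256 atoms, binary labels in {-1, +1}.
torch.manual_seed(0)
D = torch.nn.Parameter(torch.randn(784, 256) / 28.0)
w = torch.nn.Parameter(torch.zeros(256))
opt = torch.optim.Adam([D, w], lr=1e-3)          # Adam, as in the paper; the lr is assumed
lam, mu = 0.2, 1e-4                              # lam from the excerpt; mu is an assumed regularizer weight

for step in range(100):                          # placeholder loop on random data
    x = torch.randn(64, 784)
    y = torch.sign(torch.randn(64))
    z = ista_encoder(x, D, lam)
    margin = y * (z @ w)                         # classifier margin y * w^T phi_D(x)
    loss = torch.clamp(1.0 - margin, min=0.0).mean() + mu * D.pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```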