Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
An Analysis of Robustness of Non-Lipschitz Networks
Authors: Maria-Florina Balcan, Avrim Blum, Dravyansh Sharma, Hongyang Zhang
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results provide new robustness guarantees for nearest-neighbor style algorithms, and also have application to contrastive learning, where we empirically demonstrate the ability of such algorithms to obtain high robust accuracy with low abstention rates. Our method significantly outperforms algorithms without the ability to abstain. Experimentally, we show that our proposed algorithm achieves certified adversarial robustness on representations learned by supervised and self-supervised contrastive learning. |
| Researcher Affiliation | Academia | Maria-Florina Balcan EMAIL Carnegie Mellon University 5000 Forbes Ave, Pittsburgh, PA 15213, USA Avrim Blum EMAIL Toyota Technological Institute at Chicago 6045 S Kenwood Ave, Chicago, IL 60637, USA Dravyansh Sharma EMAIL Carnegie Mellon University 5000 Forbes Ave, Pittsburgh, PA 15213, USA Hongyang Zhang EMAIL University of Waterloo 200 University Ave W, Waterloo, ON N2L 3G1, Canada |
| Pseudocode | Yes | Algorithm 1 ROBUSTCLASSIFIER(τ, σ)... Algorithm 2 Exponential Forecaster Algorithm (Balcan et al., 2018b)... Algorithm 3 Exact computation of attacks under threat model 2.1 against Algorithm 1... Algorithm 4 Robust classifier in the feature space with point-specific threshold τ^A_i of "don't know"... Algorithm 5 Approximate computation of attacks under threat model 2.1 against Algorithm 1 |
| Open Source Code | Yes | Code used in the experiments may be found at the following github link: https://github.com/dravyanshsharma/adversarial-contrastive |
| Open Datasets | Yes | Figure 5 shows the two-dimensional t-SNE visualization of 10,000 features by minimizing loss (1) on the CIFAR10 test data set. Table 1: Natural error Enat and robust error Eadv on the CIFAR-10 data set (Szegedy et al., 2015)... |
| Dataset Splits | Yes | Figure 5 shows the two-dimensional t-SNE visualization of 10,000 features by minimizing loss (1) on the CIFAR10 test data set. Table 1: Natural error Enat and robust error Eadv on the CIFAR-10 data set (Szegedy et al., 2015)... |
| Hardware Specification | Yes | All experiments are run on two GeForce RTX 2080 GPUs. |
| Software Dependencies | No | The paper mentions using the ResNet-18 architecture and the MoCo(v2) and SimCLR frameworks, but does not provide specific version numbers for any software dependencies such as programming languages or libraries. |
| Experiment Setup | Yes | We use the ResNet-18 architecture (He et al., 2016) for representation learning with a two-layer projection head of width 128. The dimension of the representations is 512. We set batch size 512, temperature T = 0.5, and initial learning rate 0.5 which is followed by cosine learning rate decay. We sequentially apply four simple augmentations: random cropping followed by resizing back to the original size, random flipping, random color distortions, and randomly converting image to grayscale with a probability of 0.2. In the linear evaluation protocol, we set batch size 512 and learning rate 1.0 to learn a linear classifier in the feature space by empirical risk minimization. |
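The core idea behind the paper's abstention-based robustness (the ROBUSTCLASSIFIER(τ, σ) pseudocode quoted above) can be illustrated with a minimal nearest-neighbor sketch: predict the nearest training point's label, but answer "don't know" when the nearest neighbor is farther than a threshold τ. This is a simplified illustration only, not the paper's actual Algorithm 1; the function name, signature, and threshold semantics here are assumptions.

```python
import math

def robust_classify(x, train_points, train_labels, tau):
    """Illustrative nearest-neighbor classifier with abstention.

    Returns the label of the nearest training point if it lies within
    distance tau of x; otherwise returns None ("don't know").
    """
    best_dist, best_label = float("inf"), None
    for p, y in zip(train_points, train_labels):
        d = math.dist(x, p)  # Euclidean distance
        if d < best_dist:
            best_dist, best_label = d, y
    return best_label if best_dist <= tau else None
```

For example, with training points `[(0, 0), (1, 1)]` labeled `[0, 1]`, a query near the origin is classified as `0`, while a far-away query yields `None` (abstain).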
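The experiment setup row mentions an initial learning rate of 0.5 followed by cosine learning rate decay. A common form of that schedule (assumed here; the paper does not spell out the exact formula) anneals the rate from its initial value down to zero over the training run:

```python
import math

def cosine_lr(step, total_steps, lr_init=0.5):
    """Cosine learning-rate decay: lr_init at step 0, annealed to 0
    at total_steps, following 0.5 * lr_init * (1 + cos(pi * t / T))."""
    return 0.5 * lr_init * (1.0 + math.cos(math.pi * step / total_steps))
```

With `lr_init=0.5`, this gives a rate of 0.5 at the start, 0.25 halfway through, and 0 at the end of training.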