Mixed Nash Equilibria in the Adversarial Examples Game

Authors: Laurent Meunier, Meyer Scetbon, Rafael B Pinot, Jamal Atif, Yann Chevaleyre

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our findings with experiments on simulated and real datasets, namely CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009).
Researcher Affiliation | Collaboration | Laurent Meunier* (Miles Team, LAMSADE, Université Paris-Dauphine, Paris, France; Facebook AI Research, Paris, France), Meyer Scetbon* (CREST, ENSAE, Paris, France), Rafael Pinot (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland), Jamal Atif (Miles Team, LAMSADE, Université Paris-Dauphine), Yann Chevaleyre (Miles Team, LAMSADE, Université Paris-Dauphine).
Pseudocode | Yes | Algorithm 1: Oracle-based Algorithm; Algorithm 2: Adversarial Training for Mixtures.
Open Source Code | No | The paper does not provide a link to open-source code or explicitly state that the code for the methodology is publicly available.
Open Datasets | Yes | We validate our findings with experiments on simulated and real datasets, namely CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). We sample 1000 training points from this distribution...
Dataset Splits | No | The paper mentions selecting models to avoid overfitting (Rice et al., 2020), but it does not specify a distinct 'validation' dataset split with percentages or counts. (An illustrative split is sketched after the table.)
Hardware Specification | Yes | We trained our models with a batch of size 1024 on 8 Nvidia V100 GPUs.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as the Python version, deep learning frameworks (e.g., PyTorch, TensorFlow) with their versions, or other libraries.
Experiment Setup | Yes | We trained from 1 to 4 ResNet18 (He et al., 2016) models on 200 epochs per model. The attack we used in the inner maximization of the training is an adapted (adaptive) version of PGD for mixtures of classifiers with 10 steps. We trained our models with a batch of size 1024 on 8 Nvidia V100 GPUs. (A hedged sketch of such a PGD-for-mixtures step follows the table.)
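Since the paper releases no code, the following minimal PyTorch-style sketch illustrates the kind of inner-maximization step quoted in the Experiment Setup row: a PGD attack adapted to a mixture of classifiers. It is a hedged reconstruction, not the authors' implementation; the L-infinity threat model, the `epsilon`/`alpha` values, the inputs-in-[0, 1] convention, and the aggregation of the mixture as a weighted average of per-model cross-entropy losses are all assumptions.

```python
# Hedged sketch (not the authors' code): a PGD-style attack against a mixture
# of classifiers, maximizing the mixture-averaged cross-entropy loss.
# Assumptions: PyTorch, L-infinity threat model, inputs in [0, 1].
import torch
import torch.nn.functional as F


def pgd_on_mixture(models, weights, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Run `steps` PGD iterations against the weighted mixture of `models`."""
    x_nat = x.detach()
    # Random start inside the epsilon-ball, clipped to the valid image range.
    x_adv = (x_nat + torch.empty_like(x_nat).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Expected loss under the mixture distribution over classifiers.
        loss = sum(w * F.cross_entropy(m(x_adv), y) for m, w in zip(models, weights))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = x_nat + (x_adv - x_nat).clamp(-epsilon, epsilon)  # project onto the L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv.detach()
```

A uniform mixture over two models would be attacked with `pgd_on_mixture([f1, f2], [0.5, 0.5], x, y)`; the models should be in eval mode so the attack does not perturb batch-norm statistics.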
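Similarly, because the Dataset Splits row notes that no validation split is reported, the sketch below shows one plausible way to load CIFAR-10 with torchvision and hold out a validation set. The 45,000/5,000 partition, the fixed seed, and the transforms are illustrative assumptions; only the batch size of 1024 comes from the table above.

```python
# Illustrative only: the paper does not report a validation split, so the
# 45,000 / 5,000 partition below is an assumption made for this sketch.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_full = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

# Hold out 5,000 images for model selection (e.g., to monitor robust overfitting).
train_set, val_set = random_split(
    train_full, [45_000, 5_000], generator=torch.Generator().manual_seed(0)
)

# Batch size 1024 matches the setup reported in the table above.
train_loader = DataLoader(train_set, batch_size=1024, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=1024, shuffle=False)
test_loader = DataLoader(test_set, batch_size=1024, shuffle=False)
```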