Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Mixed Nash for Robust Federated Learning

Authors: Wanyun Xie, Thomas Pethick, Ali Ramezani-Kebrya, Volkan Cevher

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results under challenging attacks show that Robust Tailor performs close to an upper bound with perfect knowledge of honest clients. ... Our empirical results demonstrate that Robust Tailor provides high resilience to training-time attacks while maintaining stable performance even under a challenging new mixed attack strategy. ... In this section, we evaluate the resilience of Robust Tailor against tailored attacks. ... For extensive experiments, we train the CNN model on Fashion-MNIST (FMNIST) (Xiao et al., 2017) and CIFAR10 (Krizhevsky & Hinton, 2009) datasets."
Researcher Affiliation | Academia | Wanyun Xie (EMAIL), Laboratory for Information and Inference Systems (LIONS), EPFL; Thomas Pethick (EMAIL), Laboratory for Information and Inference Systems (LIONS), EPFL; Ali Ramezani-Kebrya (EMAIL), Department of Informatics, University of Oslo and Visual Intelligence Centre, Integreat, Norwegian Centre for Knowledge-driven Machine Learning; Volkan Cevher (EMAIL), Laboratory for Information and Inference Systems (LIONS), EPFL
Pseudocode | Yes | Algorithm 1: Robust Tailor; Algorithm 2: Server's aggregation; Algorithm 3: Hypothetical process of aggregation; Algorithm 4: Exp3; Algorithm 5: Attack Tailor; Algorithm 6: Adversary's attack
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the authors have made their source code publicly available for the described methodology.
Open Datasets | Yes | "We train a CNN model on MNIST (Lecun et al., 1998) under independent and identically distributed (iid) setting. ... For extensive experiments, we train the CNN model on Fashion-MNIST (FMNIST) (Xiao et al., 2017) and CIFAR10 (Krizhevsky & Hinton, 2009) datasets."
Dataset Splits | Yes | "Both MNIST (Lecun et al., 1998) and FMNIST (Xiao et al., 2017) datasets contain 60000 training samples and 10000 test samples."
Hardware Specification | Yes | "All experiments have been run on a cluster with Xeon-Gold processors and V100 GPUs."
Software Dependencies | No | The paper mentions training a CNN model and uses standard datasets, but does not provide specific software versions for libraries like TensorFlow, PyTorch, or scikit-learn.
Experiment Setup | Yes | Table 1: Training hyper-parameters for MNIST, FMNIST, and CIFAR10. Learning Rate: 0.01 / 0.003 / 0.002; Batch Size: 50 / 50 / 80; Total Iterations: 15K / 10K / 10K; K: 10 / 10 / 10; λ1, λ2: 0.3 / 0.3 / 0.3.
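The hyper-parameters quoted from Table 1 can be written out as a per-dataset configuration mapping, which makes the row/column structure of the flattened table explicit. This is an illustrative sketch only: the name `train_config` and its key names are hypothetical, while the values are those reported in the paper's Table 1.

```python
# Per-dataset training hyper-parameters as reported in Table 1 of the paper.
# The dict and key names are illustrative, not taken from the paper's code.
train_config = {
    "MNIST":   {"learning_rate": 0.01,  "batch_size": 50,
                "total_iterations": 15_000, "K": 10,
                "lambda1": 0.3, "lambda2": 0.3},
    "FMNIST":  {"learning_rate": 0.003, "batch_size": 50,
                "total_iterations": 10_000, "K": 10,
                "lambda1": 0.3, "lambda2": 0.3},
    "CIFAR10": {"learning_rate": 0.002, "batch_size": 80,
                "total_iterations": 10_000, "K": 10,
                "lambda1": 0.3, "lambda2": 0.3},
}

# Example lookup: hyper-parameters used for the CIFAR10 experiments.
cifar10_lr = train_config["CIFAR10"]["learning_rate"]
```

Note that K and the λ1, λ2 weights are shared across all three datasets; only the learning rate, batch size, and iteration budget differ.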