Robust Federated Learning via Collaborative Machine Teaching

Authors: Yufei Han, Xiangliang Zhang (pp. 4075-4082)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experimental study on real benchmark data sets demonstrates the validity of our method." "Empirical Study: We involve the following baseline approaches in the study..."
Researcher Affiliation | Collaboration | Yufei Han, NortonLifeLock Research Group, Campus SophiaTech, Sophia Antipolis, 06410, France (yufei han@symantec.com); Xiangliang Zhang, Machine Intelligence and Knowledge Engineering Laboratory, King Abdullah University of Science and Technology, Thuwal, 23955, Kingdom of Saudi Arabia (xiangliang.zhang@kaust.edu.sa)
Pseudocode | Yes | Algorithm 1: Block-Coordinate Descent for CoMT
Open Source Code | No | No explicit statement about releasing open-source code for the described methodology, or a link to a code repository, was found.
Open Datasets | Yes | "4 large-scale real-world data sets with different application contexts are used to benchmark the involved algorithms (summarized in Table 1) (Chang and Lin 2011)."
Table 1: Summary of 4 real-world benchmark datasets.
Dataset | No. of Instances | No. of Features
IJCNN | 49,990 | 22
SUSY | 50,000 | 18
CPUSMALL | 8,192 | 12
ABALONE | 4,177 | 8
Dataset Splits | Yes | "For each real-world data set, we first randomly extract 60% of the whole data set as the training data. ... To tune the parameters λt, λα and λZ, we adopt 10% of the data as the validation set. The rest of the data instances are used to evaluate the performance of the learned model."
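The 60% / 10% / remainder split protocol quoted above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the use of a shuffled index permutation and a fixed seed are assumptions made for the example.

```python
import numpy as np

def split_indices(n_samples, train_frac=0.60, val_frac=0.10, seed=0):
    """Randomly partition sample indices into train / validation / test sets.

    Mirrors the quoted protocol: 60% training, 10% validation for tuning
    the regularization parameters, and the remaining ~30% for evaluation.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # random extraction
    n_train = int(train_frac * n_samples)
    n_val = int(val_frac * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]              # the rest of the instances
    return train, val, test

# Example using the IJCNN size from Table 1
train, val, test = split_indices(49_990)
```

The three index sets are disjoint and cover every instance, so the evaluation set never overlaps the data used for training or parameter tuning.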
Hardware Specification | No | "All the methods are implemented in Python 2.7 with Numpy and Scipy packages on a 5-core AWS EC2 public cloud server, with one core per teacher." This description lacks the specific CPU/GPU model, memory size, and exact AWS EC2 instance type needed for full reproducibility.
Software Dependencies | No | "All the methods are implemented in Python 2.7 with Numpy and Scipy packages." While the Python version is given, specific versions of the key NumPy and SciPy dependencies are not provided.
Experiment Setup | Yes | "To tune the parameters λt, λα and λZ, we adopt 10% of the data as the validation set. ... We set a fixed threshold th = 1e-4 empirically over αk of each teacher for all 4 benchmark databases. ... For each data set, we fix η = 0.10% and choose the most contaminated setting of feature corruption."
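The fixed threshold th = 1e-4 over each teacher's αk can be read as a sparsification step: weight entries whose magnitude falls below the threshold are treated as zero. A hedged NumPy sketch of that reading; the function name, the zeroing behaviour, and the example values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def threshold_alpha(alpha, th=1e-4):
    """Zero out teaching-weight entries whose magnitude is below th.

    Assumed interpretation of the fixed empirical threshold applied
    to each teacher's alpha_k in the experiment setup.
    """
    alpha = np.asarray(alpha, dtype=float)
    return np.where(np.abs(alpha) < th, 0.0, alpha)

# Hypothetical weight vector for one teacher
alpha_k = np.array([0.3, 5e-5, 0.0, 2e-4, -1e-6])
pruned = threshold_alpha(alpha_k)
```

Entries at or above 1e-4 in magnitude survive (0.3 and 2e-4 here), while the near-zero entries are suppressed, keeping each teacher's effective teaching set small.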