Agnostic Federated Learning

Authors: Mehryar Mohri, Gary Sivek, Ananda Theertha Suresh

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 5, we present a series of experiments comparing our solution with existing federated learning solutions. To study the benefits of our AFL algorithm, we carried out experiments with three datasets. In all the experiments, we compare the domain agnostic model with the model trained with U, the uniform distribution over the union of samples, and the models trained on individual domains. (The two objectives being compared are restated after the table.)
Researcher Affiliation | Collaboration | Mehryar Mohri (1, 2), Gary Sivek (1), Ananda Theertha Suresh (1). 1: Google Research, New York; 2: Courant Institute of Mathematical Sciences, New York, NY.
Pseudocode | Yes | Figure 3. Pseudocode of the STOCHASTIC-AFL algorithm. (A hedged sketch of this style of update appears after the table.)
Open Source Code | No | No explicit statement or link providing access to open-source code for the described methodology was found. The paper mentions using TensorFlow, but that is a third-party tool.
Open Datasets | Yes | The Adult dataset is a census dataset from the UCI Machine Learning Repository (Blake, 1998). The Fashion MNIST dataset (Xiao et al., 2017) is an MNIST-like dataset where images are classified into 10 categories of clothing. For conversation, we used the Cornell movie dataset, which contains movie dialogues (Danescu-Niculescu-Mizil & Lee, 2011). For documents, we used the Penn Tree Bank (PTB) dataset (Marcus et al., 1993).
Dataset Splits | No | The paper describes splitting datasets into 'domains' (e.g., 'split this dataset into two domains' or 'split this subset into three domains') but does not provide explicit train/validation/test split percentages, sample counts, or references to predefined standard splits for reproduction.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or processor types) used for running experiments were mentioned in the paper.
Software Dependencies | Yes | All algorithms were implemented in Tensorflow (Abadi et al., 2015).
Experiment Setup | Yes | We trained a logistic regression model with just the categorical features and the Adagrad optimizer. We then trained a classifier for the three classes using logistic regression and the Adam optimizer. We trained a two-layer LSTM model with a momentum optimizer. In all the experiments, we used PERDOMAIN stochastic gradients and set Λ = ∆p. (An illustrative sketch of the first setup follows the table.)
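
For the Research Type row: the comparison described there is between the agnostic minimax objective and training on U, the uniform mixture over the union of samples. A compact restatement of the two objectives, with L_k(h) the expected loss of hypothesis h on domain k and m_k the number of samples from that domain; the notation is paraphrased from the paper, so treat this as a sketch rather than an exact transcription.

```latex
% Agnostic federated objective (left) vs. the uniform-mixture baseline U (right).
% L_k(h): expected loss of hypothesis h on domain k;  m_k: samples from domain k;  m = \sum_k m_k.
\min_{h \in \mathcal{H}} \; \max_{\lambda \in \Lambda} \; \sum_{k=1}^{p} \lambda_k \, L_k(h)
\qquad \text{versus} \qquad
\min_{h \in \mathcal{H}} \; \sum_{k=1}^{p} \frac{m_k}{m} \, L_k(h)
```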
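For the Pseudocode row: the paper's Figure 3 gives pseudocode for STOCHASTIC-AFL, and the experiments use PERDOMAIN stochastic gradients (one minibatch gradient per domain per step). The sketch below is a minimal NumPy rendering of that style of minimax update, with a logistic model standing in for the hypothesis class; the function names, step sizes, batch size, and simplex projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def logistic_loss_and_grad(w, X, y):
    """Average logistic loss and its gradient for labels y in {0, 1}."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def stochastic_afl(domains, dim, steps=1000, lr_w=0.1, lr_lam=0.01, batch=32, seed=0):
    """Per-domain stochastic minimax update: descend in w on the lambda-weighted
    loss, ascend in lambda on the per-domain losses, then project lambda back
    onto the simplex."""
    rng = np.random.default_rng(seed)
    p = len(domains)
    w = np.zeros(dim)
    lam = np.ones(p) / p
    for _ in range(steps):
        losses = np.zeros(p)
        weighted_grad = np.zeros(dim)
        for k, (Xk, yk) in enumerate(domains):
            idx = rng.integers(0, len(yk), size=batch)
            losses[k], gk = logistic_loss_and_grad(w, Xk[idx], yk[idx])
            weighted_grad += lam[k] * gk
        w -= lr_w * weighted_grad                        # descent step in w
        lam = project_to_simplex(lam + lr_lam * losses)  # ascent step in lambda
    return w, lam
```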
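For the Experiment Setup row: the quoted text names model/optimizer pairs but not their hyperparameters. Below is a minimal TensorFlow sketch of only the first pairing (logistic regression over one-hot categorical features trained with Adagrad); the feature width and learning rate are placeholder assumptions, not values from the paper.

```python
import tensorflow as tf

# Placeholder: the Adult dataset's one-hot categorical feature width is not
# reproduced in the report; substitute the real encoding dimension.
NUM_CATEGORICAL_FEATURES = 100

# Logistic regression = a single sigmoid unit over the one-hot features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_CATEGORICAL_FEATURES,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Adagrad optimizer as stated in the paper; the learning rate is an assumption.
model.compile(
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

In the agnostic setting, a per-domain model like this would be trained inside a minimax loop of the kind sketched above, rather than with a single call to model.fit on pooled data.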