GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
Authors: Enmao Diao, Jie Ding, Vahid Tarokh
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental studies demonstrate that GAL can achieve performance close to centralized learning when all data, models, and objective functions are fully disclosed. |
| Researcher Affiliation | Academia | Enmao Diao, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27705, USA, enmao.diao@duke.edu; Jie Ding, School of Statistics, University of Minnesota-Twin Cities, Minneapolis, MN 55455, USA, dingj@umn.edu; Vahid Tarokh, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27705, USA, vahid.tarokh@duke.edu |
| Pseudocode | Yes | Algorithm 1 GAL: Gradient Assisted Learning (from the perspective of the service receiver, Alice) |
| Open Source Code | Yes | Our code is available here¹ and Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We provide source codes in the supplementary material. |
| Open Datasets | Yes | We demonstrate the performance of autonomous local models with UCI datasets downloadable from the scikit-learn package [24], including Diabetes [25], Boston Housing [26], Blob [24], Iris [27], Wine [28], Breast Cancer [29], and QSAR [30] datasets and Our code is available here¹. We use publicly available datasets. |
| Dataset Splits | No | For all the UCI datasets, we train on 80% of the available data and test on the remaining. |
| Hardware Specification | Yes | One Nvidia 1080TI is enough for one experiment run. |
| Software Dependencies | No | We demonstrate the performance of autonomous local models with UCI datasets downloadable from the scikit-learn package [24] |
| Experiment Setup | Yes | Details of learning hyper-parameters are included in Table 9 of the Appendix. We conducted four random experiments for all datasets with different seeds, and the standard errors are shown in the brackets of all tables. |
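
The Open Datasets, Dataset Splits, and Experiment Setup rows above describe the paper's experimental protocol: UCI datasets loaded through scikit-learn, an 80%/20% train/test split, and four runs with different random seeds reported as a mean with the standard error in brackets. The sketch below illustrates that protocol only; it is not the authors' released GAL code. It uses `load_diabetes` from scikit-learn as one of the named datasets, a plain `LinearRegression` as a stand-in for the paper's local models, and assumes seeds 0 through 3, which the paper does not specify.

```python
# Minimal sketch of the reported evaluation protocol (assumed details noted below);
# this is NOT the authors' GAL implementation.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# One of the UCI datasets named in the paper, loaded via scikit-learn.
X, y = load_diabetes(return_X_y=True)

scores = []
for seed in range(4):  # paper: four random experiments per dataset; exact seeds are an assumption
    # Paper: train on 80% of the available data, test on the remaining 20%.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = LinearRegression().fit(X_tr, y_tr)  # placeholder local model, not GAL itself
    scores.append(model.score(X_te, y_te))

scores = np.array(scores)
mean = scores.mean()
stderr = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error, shown in brackets in the paper's tables
print(f"R^2: {mean:.3f} ({stderr:.3f})")
```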