Collaborative Deep Learning in Fixed Topology Networks
Authors: Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of our algorithms in comparison with the baseline centralized SGD and the recently proposed federated averaging algorithm (that also enables data parallelism) based on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100. |
| Researcher Affiliation | Academia | 1Department of Mechanical Engineering, Iowa State University, {zhjiang, baditya, soumiks}@iastate.edu 2Department of Electrical and Computer Engineering, Iowa State University, chinmay@iastate.edu |
| Pseudocode | Yes | Algorithm 1: CDSGD (a minimal sketch of the CDSGD update appears after this table) |
| Open Source Code | No | The experiments are performed using Keras and TensorFlow [27, 28] and the codes will be made publicly available soon. |
| Open Datasets | Yes | Finally, we validate our algorithms performance on benchmark datasets, such as MNIST, CIFAR-10, and CIFAR-100. |
| Dataset Splits | No | The paper mentions "validation accuracy" (e.g., in Figure 1 caption) and discusses the "generalization gap", indicating the use of a validation set. However, it does not specify the actual split percentages or sizes for training, validation, or test sets, nor does it refer to a standard split by citation. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory specifications, or type of computing cluster used for the experiments. It only mentions using Keras and TensorFlow. |
| Software Dependencies | No | The paper mentions that "The experiments are performed using Keras and TensorFlow," but it does not specify the version numbers for either software dependency. |
| Experiment Setup | Yes | We use a deep convolutional neural network (CNN) model (with 2 convolutional layers with 32 filters each followed by a max pooling layer, then 2 more convolutional layers with 64 filters each followed by another max pooling layer and a dense layer with 512 units; ReLU activation is used in convolutional layers) to validate the proposed algorithm. We use a fully connected topology with 5 agents and a uniform agent interaction matrix unless mentioned otherwise. A mini-batch size of 128 and a fixed step size of 0.01 are used in these experiments. (A hedged Keras sketch of this architecture follows the table.) |
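
The "Algorithm 1: CDSGD" row refers to the paper's consensus-based distributed SGD update, in which each agent mixes its neighbors' parameters through the agent interaction matrix and then takes a local stochastic gradient step. The following is a minimal NumPy sketch of one synchronous round under the paper's fully connected, uniform-mixing setup; the function name `cdsgd_step`, the array shapes, and the stand-in gradients are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def cdsgd_step(params, grads, pi, lr):
    """One synchronous CDSGD round for all agents (illustrative sketch).

    params : (n_agents, dim) array; row j holds agent j's parameters x_j(k)
    grads  : (n_agents, dim) array; row j holds agent j's stochastic gradient
    pi     : (n_agents, n_agents) row-stochastic agent interaction matrix
    lr     : fixed step size (0.01 in the reported experiments)
    """
    # Consensus step (mix neighbors' parameters), then a local gradient step.
    return pi @ params - lr * grads

# Example: 5 agents with a uniform interaction matrix (fully connected topology).
n_agents, dim = 5, 10
pi = np.full((n_agents, n_agents), 1.0 / n_agents)
params = np.random.randn(n_agents, dim)
grads = np.random.randn(n_agents, dim)   # stand-ins for minibatch gradients
params = cdsgd_step(params, grads, pi, lr=0.01)
```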
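
The experiment-setup row describes the CNN only at the level of layer counts and widths. Below is a hedged Keras sketch consistent with that description; the 3x3 kernel sizes, padding, softmax output head, and CIFAR-10 input shape are assumptions not stated in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(32, 32, 3), num_classes=10):
    """CNN matching the described layout: 2x Conv(32) + pool, 2x Conv(64) + pool,
    Dense(512), with ReLU activation in the convolutional layers."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # assumed output head
    ])

model = build_cnn()
# Mini-batch size 128 and fixed step size 0.01 match the reported setup.
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])
```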