Learning Efficient Parameter Server Synchronization Policies for Distributed SGD

Authors: Rong Zhu, Sheng Yang, Andreas Pfadler, Zhengping Qian, Jingren Zhou

ICLR 2020

Reproducibility variables, results, and LLM responses:

Research Type: Experimental. We present extensive numerical results obtained from experiments performed in simulated cluster environments. In our experiments training time is reduced by 44% on average and learned policies generalize to multiple unseen circumstances.
Researcher Affiliation: Industry. Rong Zhu*, Sheng Yang, Andreas Pfadler, Zhengping Qian, Jingren Zhou (Alibaba Group).
Pseudocode: Yes. Algorithm 1: Unified Synchronization Policy Framework (a hedged interface sketch follows this table).
Open Source Code: No. The paper does not provide a direct link or explicit statement about the availability of the source code for its methodology.
Open Datasets: Yes. In each instance, we randomly sample 50% data from the MNIST dataset and run the standard SGD for training. (A data-sampling sketch follows this table.)
Dataset Splits: No. The paper mentions "88% validation accuracy" as a termination criterion but does not specify the size or split methodology for a validation dataset.
Hardware Specification: No. The paper states, "We implement RLP in a simulated cluster/PS environment." As experiments are conducted in a simulated environment, no specific physical hardware specifications are mentioned for running the experiments.
Software Dependencies: No. The paper mentions using a "standard off-the-shelf deep Q-learning algorithm" and "two-layer neural networks" but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup: Yes. The hyper-parameters for RLP are set as follows: historical size H = 10, replay pool size N = 50, mini-batch size |B| = 32, copy rate c = 5, discount factor γ = 0.8, exploration probability ϵ = 0.1 and learning rate to be 0.01. For the underlying DNN model, we set its batch size to 16 and learning rate to 0.01. (These values are collected into a configuration sketch after this table.)
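The Pseudocode row only names Algorithm 1 (Unified Synchronization Policy Framework) without reproducing it. For orientation, here is a minimal Python sketch of what a unified synchronization policy interface for a parameter server could look like, with BSP, ASP, and SSP written as special cases of one policy function. All class and function names are illustrative assumptions; this is not the paper's Algorithm 1.

```python
# Minimal sketch of a unified synchronization policy interface for a
# parameter server (PS). Names and structure are illustrative assumptions,
# not the paper's Algorithm 1, which the table only references by title.

def bsp_policy(worker_clock, slowest_clock):
    """Bulk synchronous parallel: no worker may run ahead of the slowest one."""
    return worker_clock - slowest_clock <= 0

def asp_policy(worker_clock, slowest_clock):
    """Fully asynchronous parallel: workers never block."""
    return True

def ssp_policy(worker_clock, slowest_clock, staleness_bound=3):
    """Stale synchronous parallel: allow a bounded lead over the slowest worker."""
    return worker_clock - slowest_clock <= staleness_bound

class ParameterServer:
    def __init__(self, params, policy, lr=0.01):
        self.params = list(params)   # flat list of model parameters
        self.policy = policy         # callable deciding whether a worker may proceed
        self.lr = lr
        self.clocks = {}             # worker_id -> completed local iterations

    def push(self, worker_id, gradient):
        """Apply a worker's gradient, then ask the policy whether that worker
        may immediately start its next iteration or must wait."""
        self.params = [p - self.lr * g for p, g in zip(self.params, gradient)]
        self.clocks[worker_id] = self.clocks.get(worker_id, 0) + 1
        slowest = min(self.clocks.values())
        may_proceed = self.policy(self.clocks[worker_id], slowest)
        return self.params, may_proceed

# Example: a two-parameter model served under the SSP rule.
ps = ParameterServer(params=[0.0, 0.0], policy=ssp_policy)
```

Under such an interface, a learned policy would simply replace the hand-written rules with a decision produced by the reinforcement-learning agent.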
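The Open Datasets row quotes per-instance sampling of 50% of MNIST followed by standard SGD training. Below is a hedged sketch of that preparation step; since the paper names no framework (per the Software Dependencies row), the use of PyTorch/torchvision here is purely an assumption.

```python
# Sketch of the per-instance data preparation described in the Open Datasets
# row: randomly sample 50% of MNIST, then train with standard SGD.
# PyTorch/torchvision is an assumption; the paper does not name a framework.
import numpy as np
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

mnist = datasets.MNIST("./data", train=True, download=True,
                       transform=transforms.ToTensor())
indices = np.random.permutation(len(mnist))[: len(mnist) // 2]   # 50% random sample
loader = DataLoader(Subset(mnist, indices.tolist()),
                    batch_size=16, shuffle=True)                  # batch size 16 as reported
```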
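The Software Dependencies and Experiment Setup rows together report a two-layer Q-network and a concrete set of RLP hyper-parameters. The sketch below collects those reported values into a configuration dict and shows a plain-NumPy two-layer Q-function with ε-greedy action selection; the key names, layer shapes, and the choice of NumPy are our assumptions, while the numeric values are the ones quoted above.

```python
import numpy as np

# RLP hyper-parameters reported under Experiment Setup, collected in one place.
# Key names are our own; numeric values are those quoted in the table above.
rlp_config = {
    "history_size_H": 10,       # length of the observation history fed to the agent
    "replay_pool_size_N": 50,   # experience replay capacity
    "minibatch_size": 32,       # |B|, size of each Q-learning update batch
    "copy_rate_c": 5,           # interval for copying weights to the target network
    "discount_gamma": 0.8,
    "exploration_epsilon": 0.1,
    "learning_rate": 0.01,
}
dnn_config = {"batch_size": 16, "learning_rate": 0.01}   # underlying DNN training job

# Two-layer Q-network forward pass with epsilon-greedy action selection,
# matching the "two-layer neural networks" and "off-the-shelf deep Q-learning"
# wording in the Software Dependencies row. Layer shapes are assumptions.
def q_values(state, w1, b1, w2, b2):
    hidden = np.maximum(0.0, state @ w1 + b1)   # ReLU hidden layer
    return hidden @ w2 + b2                     # one Q-value per candidate action

def epsilon_greedy(q, epsilon=rlp_config["exploration_epsilon"], rng=np.random):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if rng.random() < epsilon:
        return int(rng.randint(len(q)))
    return int(np.argmax(q))
```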