Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data
Authors: Yuki Takezawa, Han Bao, Kenta Niwa, Ryoma Sato, Makoto Yamada
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments, we demonstrate that Momentum Tracking is more robust to data heterogeneity than the existing decentralized learning methods with momentum and can consistently outperform these existing methods when the data distributions are heterogeneous. ... In this section, we present the results of an experimental evaluation of Momentum Tracking and demonstrate that Momentum Tracking is more robust to data heterogeneity than the existing decentralized learning methods with momentum. |
| Researcher Affiliation | Collaboration | Yuki Takezawa EMAIL Kyoto University, Okinawa Institute of Science and Technology; Kenta Niwa EMAIL NTT Communication Science Laboratories |
| Pseudocode | Yes | Appendix A (Pseudo-Codes): The pseudo-codes for Momentum Tracking, QG-DSGDm, and DecentLaM are given in the following, where Transmit_{i→j}(·) denotes that node i transmits parameters to node j and Receive_{i←j}(·) denotes that node i receives parameters from node j. Algorithm 1: Update rules of Momentum Tracking at node i. |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluated Momentum Tracking using three 10-class image classification tasks: Fashion MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR-10 (Krizhevsky, 2009). |
| Dataset Splits | Yes | Following the previous work (Niwa et al., 2020), we distributed the data to nodes such that each node was given data of randomly selected k classes. ... For each comparison method, we used 10% of the training data for validation and individually tuned the step size. |
| Hardware Specification | Yes | All comparison methods were implemented using PyTorch and run on eight GPUs (NVIDIA RTX 3090). |
| Software Dependencies | No | All comparison methods were implemented using PyTorch and run on eight GPUs (NVIDIA RTX 3090). The paper mentions 'PyTorch' but does not provide a specific version number. |
| Experiment Setup | Yes | Appendix E (Hyperparameter Settings): Tables 9, 10, 11, 12, and 13 list the hyperparameter settings for each dataset. We evaluated the performance of each comparison method for different step sizes and selected the step size that achieved the highest accuracy on the validation dataset. Table 9 (experimental settings for Fashion MNIST): step size ∈ {0.005, 0.001, 0.0005}; L2 penalty 0.001; batch size 100; data augmentation: random crop; total number of epochs 500. |
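The Dataset Splits row describes distributing data so that each node receives samples from only k randomly selected classes, with 10% of training data held out for validation. The sketch below illustrates one way such a heterogeneous split could be implemented; the function names are ours, and the choices of dividing a shared class evenly among the nodes that selected it and of taking the validation holdout per node are assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def heterogeneous_split(labels, num_nodes, k, num_classes=10, seed=0):
    """Give each node samples from only k randomly selected classes.

    Assumption (ours): when several nodes select the same class, that
    class's samples are divided evenly among them; classes selected by
    no node are left out.
    """
    rng = random.Random(seed)
    # Group sample indices by class label.
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    # Each node draws k distinct classes at random.
    owners = defaultdict(list)
    for node in range(num_nodes):
        for c in rng.sample(range(num_classes), k):
            owners[c].append(node)
    # Split each selected class evenly among the nodes that chose it,
    # so the resulting shards are disjoint.
    shards = [[] for _ in range(num_nodes)]
    for c, nodes in owners.items():
        idxs = by_class[c][:]
        rng.shuffle(idxs)
        per = len(idxs) // len(nodes)
        for j, node in enumerate(nodes):
            shards[node].extend(idxs[j * per:(j + 1) * per])
    return shards

def train_val_split(shard, val_frac=0.1):
    # Hold out 10% of a node's local data for validation (per-node
    # holdout is our assumption; the paper only states "10%").
    cut = int(len(shard) * (1 - val_frac))
    return shard[:cut], shard[cut:]
```

Smaller k makes the per-node label distributions more heterogeneous, which is the regime where the paper reports Momentum Tracking outperforming prior momentum methods.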
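The Pseudocode row quotes Algorithm 1 (update rules of Momentum Tracking), which adds momentum on top of gradient tracking. As background, here is a minimal NumPy sketch of plain decentralized gradient tracking on deterministic local gradients; this is the base technique, not the paper's Algorithm 1, and the function name, signature, and synchronous mixing-matrix formulation are our assumptions.

```python
import numpy as np

def gradient_tracking(grads, W, x0, lr=0.1, steps=100):
    """Plain decentralized gradient tracking (the technique Momentum
    Tracking builds on, without the momentum term).

    grads: list of per-node gradient functions g_i(x)
    W: doubly stochastic mixing matrix (num_nodes x num_nodes)
    x0: shared initial parameter vector
    """
    n = len(grads)
    x = np.tile(x0, (n, 1)).astype(float)           # per-node parameters
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                                    # trackers, y_i^0 = g_i^0
    for _ in range(steps):
        x = W @ (x - lr * y)                        # mix, descend along tracker
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g                       # track the average gradient
        g = g_new
    return x
```

Because each tracker y_i estimates the network-wide average gradient rather than the local one, the scheme is robust to heterogeneous local data, which is the property the paper's momentum variant is designed to preserve.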