Double Stochasticity Gazes Faster: Snap-Shot Decentralized Stochastic Gradient Tracking Methods
Authors: Hao Di, Haishan Ye, Xiangyu Chang, Guang Dai, Ivor Tsang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments validate SS-DSGT's superior performance over general communication network topologies and show that ASS-DSGT outperforms DSGT in practice on the specified mixing matrix W. |
| Researcher Affiliation | Collaboration | 1Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University, China. 2This work was completed during an internship at SGIT AI Lab, State Grid Corporation of China. 3SGIT AI Lab, State Grid Corporation of China. 4College of Computing and Data Science, NTU, Singapore. 5CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore. |
| Pseudocode | Yes | Algorithm 1 Snap-Shot Decentralized Stochastic Gradient Tracking |
| Open Source Code | No | The paper does not contain any statements about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The banknote dataset is sourced from the UCI Machine Learning Repository, while a9a and ijcnn1 are obtained from the LIBSVM website. (Footnotes provide URLs: https://archive.ics.uci.edu/dataset/267/banknote+authentication and https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html) |
| Dataset Splits | No | The paper states 'We utilize m = 20 agents, distributing the data randomly and equally among them,' but does not provide explicit training/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits). |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Table 1 of the paper lists the batch size and the regularization coefficient v as hyperparameters with specific values (e.g., batch size 30 for banknote, v = 10^-2). It also states that 'hyperparameters are fine-tuned for optimal performance'. |
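For context on the method family the table refers to: the paper's Algorithm 1 builds on decentralized stochastic gradient tracking (DSGT), in which each agent mixes iterates with its neighbors via a doubly stochastic matrix W while a tracker variable y follows the average gradient. The sketch below implements only the classic full-gradient DSGT baseline on a toy least-squares problem, not the paper's snap-shot variant; the ring topology, step size, and problem data are illustrative assumptions.

```python
import numpy as np

# Classic decentralized gradient tracking (DSGT-style) on a toy problem;
# this is the baseline scheme, not the paper's SS-DSGT algorithm.

rng = np.random.default_rng(0)
m, d = 4, 3                                  # agents, dimension
A = [rng.standard_normal((10, d)) for _ in range(m)]
b = [rng.standard_normal(10) for _ in range(m)]

def grad(i, x):
    # Local least-squares gradient of f_i(x) = 0.5 * ||A_i x - b_i||^2.
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix W for a ring of m agents (lazy weights).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i + 1) % m] = 0.25
    W[i, (i - 1) % m] = 0.25

eta = 0.01                                   # illustrative step size
x = np.zeros((m, d))                         # one row per agent
y = np.array([grad(i, x[i]) for i in range(m)])  # tracker init: y_i = grad_i(x_i)

for _ in range(500):
    x_new = W @ (x - eta * y)                # mix, then step along tracked direction
    g_old = np.array([grad(i, x[i]) for i in range(m)])
    g_new = np.array([grad(i, x_new[i]) for i in range(m)])
    y = W @ y + g_new - g_old                # gradient-tracking update
    x = x_new

# All agents should agree on the global least-squares minimizer.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.max(np.abs(x - x_star)))
```

The double stochasticity of W is what keeps the average of the trackers equal to the average local gradient at every iteration, which is the invariant the gradient-tracking step relies on.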