Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Authors: Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several benchmark datasets show the effectiveness of our algorithm and also confirm our theory. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA 2Tencent AI Lab, Shenzhen, China. |
| Pseudocode | Yes | Algorithm 1 CoDA; Algorithm 2 DSG |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We do experiments on 3 datasets: Cifar10, Cifar100 and ImageNet. |
| Dataset Splits | No | The paper specifies training and testing splits, but does not explicitly mention a separate validation set or the split percentages used. |
| Hardware Specification | No | The paper mentions 'a cluster of 4 computing nodes with each computer node having 4 GPUs', and states 'one machine corresponds to one GPU'. However, it does not specify the models of the GPUs (e.g., NVIDIA A100, Tesla V100) or any CPU details. |
| Software Dependencies | No | All algorithms are implemented by PyTorch (Paszke et al., 2019). The paper mentions PyTorch but does not specify a version number for it or any other software dependency. |
| Experiment Setup | Yes | For all algorithms, we set T_s = T_0 · 3^k, η_s = η_0 / 3^k. T_0 and η_0 are tuned for PPD-SG and set to the same for all other algorithms for fair comparison. T_0 is tuned in [2000, 5000, 10000], and η_0 is tuned in [0.1, 0.01, 0.001]. We fix the batch size for each GPU as 32. (A minimal sketch of this stagewise schedule follows the table.) |
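The geometric stagewise schedule quoted in the Experiment Setup row can be illustrated with a short PyTorch sketch. This is not the authors' code (no source code is publicly released); the model, loss, number of stages, and synthetic data below are placeholder assumptions, and only the schedule T_s = T_0 · 3^k, η_s = η_0 / 3^k, the tuning grids, and the per-GPU batch size of 32 come from the paper.

```python
# Hypothetical illustration of the stagewise schedule, NOT the authors' CoDA/DSG code.
# The real objective is a min-max AUC surrogate; a plain BCE loss is used here only
# as a stand-in so the schedule logic is runnable end to end.
import torch
import torch.nn.functional as F

T0 = 2000        # initial stage length; the paper tunes T0 in {2000, 5000, 10000}
eta0 = 0.1       # initial step size; the paper tunes eta0 in {0.1, 0.01, 0.001}
batch_size = 32  # per-GPU batch size, as stated in the paper
num_stages = 3   # assumed number of stages (not specified in the quoted text)

model = torch.nn.Linear(10, 1)                       # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=eta0)

for k in range(num_stages):
    T_k = T0 * 3 ** k        # T_s = T_0 * 3^k iterations in stage k
    eta_k = eta0 / 3 ** k    # eta_s = eta_0 / 3^k step size in stage k
    for group in optimizer.param_groups:
        group["lr"] = eta_k
    for _ in range(T_k):
        x = torch.randn(batch_size, 10)                    # stand-in mini-batch
        y = torch.randint(0, 2, (batch_size, 1)).float()   # stand-in binary labels
        loss = F.binary_cross_entropy_with_logits(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the paper's multi-GPU setting (4 nodes with 4 GPUs each, one process per GPU), the same schedule would be applied per worker, with the distributed averaging handled by CoDA's periodic communication rather than the per-iteration synchronization sketched here.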