Distributed Projection-Free Online Learning for Smooth and Convex Losses
Authors: Yibo Wang, Yuanyu Wan, Shimao Zhang, Lijun Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide experimental results on benchmark datasets to illustrate the empirical performance of our proposed method. |
| Researcher Affiliation | Academia | ¹National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; ²Peng Cheng Laboratory, Shenzhen 518055, China; ³School of Software Technology, Zhejiang University, Ningbo 315048, China; {wangyb, zhanglj}@lamda.nju.edu.cn, wanyy@zju.edu.cn, smzhang@smail.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Distributed Follow-the-Perturbed-Leader (D-FPL); Algorithm 2: Distributed Sampled Follow-the-Perturbed-Leader (D-SFPL); Algorithm 3: Distributed Online Smooth Projection-free Algorithm (D-OSPA). A minimal FTPL sketch follows the table. |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of open-source code for the described methodology or links to code repositories. |
| Open Datasets | No | The paper mentions using 'benchmark datasets' and a 'training example e_{t,i}', but does not give explicit details (e.g., dataset names with links or citations) establishing that those datasets are openly available. |
| Dataset Splits | No | The paper does not explicitly provide information about a validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'LIBSVM repository' in the context of datasets, but does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | In detail, we set τ = 10, and the parameters of each method are set according to their theoretical suggestions. For D-OCG, σ_{t,i} = 1/t and η = cT^{3/4}. For D-BOCG, K = T^{1/2}, L_ε = 20, and η = cT^{3/4}. For D-OSPA, we evaluate two versions: D-OSPA_sc for smooth and convex losses, with m = k = T^{1/3} and η = cT^{2/3}; and D-OSPA_c for general convex losses, with m = k = T^{1/2} and η = cT^{3/4}. The hyper-parameter c is selected from {2^{-3}, 2^{-2}, …, 2^6}. A parameter-grid sketch follows the table. |
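The pseudocode row above names three FTPL-style procedures. As a rough illustration of the projection-free idea they share, here is a minimal, hypothetical sketch of one distributed follow-the-perturbed-leader round in Python; the function names (`linear_oracle`, `d_ftpl_round`), the L2-ball feasible set, the gossip matrix, and the σ/η perturbation convention are all our assumptions, not the paper's exact algorithms.

```python
import numpy as np

def linear_oracle(v, radius=1.0):
    """Linear minimization over an L2 ball: argmin_{||x|| <= radius} <v, x>."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm == 0.0 else -radius * v / norm

def d_ftpl_round(grad_sums, gossip_matrix, eta, rng, radius=1.0):
    """One round of a (hypothetical) distributed FTPL step.

    grad_sums     : (n, d) array; node i's running sum of observed gradients.
    gossip_matrix : (n, n) doubly stochastic weight matrix of the network.
    eta           : perturbation scale (the paper tunes it as c times a
                    power of T; the scaling convention here is ours).
    Returns the per-node decisions and the gossiped gradient sums.
    """
    n, d = grad_sums.shape
    decisions = np.empty_like(grad_sums)
    for i in range(n):
        # Each node perturbs its cumulative gradient and calls the linear
        # oracle -- no projection is ever computed.
        sigma = rng.standard_normal(d)
        decisions[i] = linear_oracle(grad_sums[i] - sigma / eta, radius)
    # Consensus step: average cumulative gradients with neighbours.
    mixed_sums = gossip_matrix @ grad_sums
    return decisions, mixed_sums

# Toy usage on a 3-node network with random stand-in gradients.
rng = np.random.default_rng(0)
n, d = 3, 5
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])   # doubly stochastic
sums = np.zeros((n, d))
for t in range(1, 101):
    x, sums = d_ftpl_round(sums, W, eta=1.0, rng=rng)
    sums = sums + rng.standard_normal((n, d))  # observed loss gradients
```

The point of the sketch is that each decision costs a single linear-optimization call in place of a projection, which is what makes this family of methods cheap on structured feasible sets.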
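The experiment-setup row can also be read as a small configuration grid. The sketch below transcribes the quoted settings into a hypothetical Python helper; `suggested_params`, `C_GRID`, and the integer rounding are our own naming and conventions, and the η exponents follow the reconstructed text above.

```python
# c is searched over {2^-3, ..., 2^6}, as quoted in the setup row.
C_GRID = [2.0 ** p for p in range(-3, 7)]

def suggested_params(method: str, T: int, c: float) -> dict:
    """Parameter settings transcribed from the quoted experiment setup."""
    if method == "D-OCG":
        # sigma_{t,i} = 1/t is a per-round sequence; returned as a callable.
        return {"sigma": lambda t: 1.0 / t, "eta": c * T ** 0.75}
    if method == "D-BOCG":
        return {"K": int(round(T ** 0.5)), "L_eps": 20, "eta": c * T ** 0.75}
    if method == "D-OSPA-sc":  # smooth and convex losses
        m = int(round(T ** (1.0 / 3.0)))
        return {"m": m, "k": m, "eta": c * T ** (2.0 / 3.0)}
    if method == "D-OSPA-c":   # general convex losses
        m = int(round(T ** 0.5))
        return {"m": m, "k": m, "eta": c * T ** 0.75}
    raise ValueError(f"unknown method: {method}")

# Example: enumerate the D-OSPA-sc grid for an illustrative horizon T = 8000.
for c in C_GRID:
    cfg = suggested_params("D-OSPA-sc", T=8000, c=c)
```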