Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Distributed Projection-Free Online Learning for Smooth and Convex Losses
Authors: Yibo Wang, Yuanyu Wan, Shimao Zhang, Lijun Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide experimental results on benchmark datasets to illustrate the empirical performance of our proposed method. |
| Researcher Affiliation | Academia | 1 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; 2 Peng Cheng Laboratory, Shenzhen 518055, China; 3 School of Software Technology, Zhejiang University, Ningbo 315048, China |
| Pseudocode | Yes | Algorithm 1: Distributed Follow-the-Perturbed-Leader (D-FPL); Algorithm 2: Distributed Sampled Follow-the-Perturbed-Leader (D-SFPL); Algorithm 3: Distributed Online Smooth Projection-Free Algorithm (D-OSPA) |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of open-source code for the described methodology or links to code repositories. |
| Open Datasets | No | The paper mentions using 'benchmark datasets' and 'training example e_{t,i}' but does not provide specific details on training dataset splits (e.g., percentages, sample counts, or explicit references to standard splits). |
| Dataset Splits | No | The paper does not explicitly provide information about a validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'LIBSVM repository' in the context of datasets, but does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | In detail, we set τ = 10, and the parameters of each method are set according to their theoretical suggestions. For D-OCG, σ_{t,i} = 1/t and η = cT^{3/4}. For D-BOCG, K = T^{1/2}, L_ϵ = 20, and η = cT^{3/4}. For D-OSPA, we conduct two versions: D-OSPA_sc for smooth and convex losses with m = k = T^{1/3} and η = cT^{2/3}, and D-OSPA_c for general convex losses with m = k = T^{1/2} and η = cT^{3/4}. The hyper-parameter c is selected from {2^{-3}, 2^{-2}, …, 2^{6}}. |
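For concreteness, the parameter schedule quoted in the Experiment Setup row can be sketched as a small helper. This is a minimal sketch assuming the exponents as extracted above (e.g., η = c·T^{2/3} for the smooth-convex variant); the function name, the integer rounding, and the grid construction are illustrative assumptions, not part of the paper:

```python
def dospa_params(T, variant="sc"):
    """Hypothetical helper mirroring the D-OSPA settings in the table.

    variant="sc": smooth and convex losses  (m = k = T^{1/3}, eta = c*T^{2/3})
    variant="c":  general convex losses     (m = k = T^{1/2}, eta = c*T^{3/4})
    Returns (m, k, eta_exponent); rounding to integers is an assumption.
    """
    if variant == "sc":
        m = k = round(T ** (1 / 3))
        eta_exponent = 2 / 3
    else:
        m = k = round(T ** (1 / 2))
        eta_exponent = 3 / 4
    return m, k, eta_exponent

# Candidate grid for the scaling constant c: {2^-3, 2^-2, ..., 2^6},
# as reconstructed from the quoted setup text.
C_GRID = [2.0 ** p for p in range(-3, 7)]
```

Tuning then amounts to a grid search over `C_GRID` with each method's theoretical exponents held fixed.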