Towards Sharp Analysis for Distributed Learning with Random Features
Authors: Jian Li, Yong Liu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct several experiments on both simulated and real-world datasets, and the empirical results validate our theoretical findings. |
| Researcher Affiliation | Academia | ¹Institute of Information Engineering, Chinese Academy of Sciences; ²Gaoling School of Artificial Intelligence, Renmin University of China. lijian9026@iie.ac.cn, liuyonggsai@ruc.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper links to its full version on arXiv (https://arxiv.org/abs/1906.03155), but this is neither an explicit statement that source code for the methodology is available nor a direct link to a code repository. |
| Open Datasets | No | The paper mentions experiments on "simulated data and real-world data" but does not provide specific names, links, DOIs, or citations for publicly available datasets. |
| Dataset Splits | No | The paper discusses partitioning the training set D into subsets for distributed learning, but does not specify train/validation/test splits with percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper does not contain specific experimental setup details, such as concrete hyperparameter values or training configurations, in the main text. |