Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
Authors: Xiangru Lian, Yijun Huang, Yuncheng Li, Ji Liu
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes an empirical study validating the speedup properties; due to the space limit, it appears in the Supplemental Materials. |
| Researcher Affiliation | Academia | Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu, Department of Computer Science, University of Rochester. |
| Pseudocode | Yes | Algorithm 1 (ASYSG-CON) and Algorithm 2 (ASYSG-INCON); see the illustrative sketch after this table. |
| Open Source Code | No | The paper does not state that source code for the described methodology is openly available, nor does it link to a repository. |
| Open Datasets | No | The paper names no specific datasets and provides no links, DOIs, or formal citations for publicly available data used in its experiments. |
| Dataset Splits | No | No training/validation/test split details needed for reproduction appear in the main text; the empirical study is deferred to the supplemental materials. |
| Hardware Specification | No | The main text gives no details about the hardware used to run the experiments. |
| Software Dependencies | No | The main text specifies no ancillary software or version numbers needed to replicate the experiments. |
| Experiment Setup | No | No experimental setup details, such as hyperparameter values or training configurations, appear in the main text. |
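For reference, Algorithm 1 (ASYSG-CON) updates the iterate with a sum of M stochastic gradients, each evaluated at a stale iterate whose delay is bounded by a constant T; Algorithm 2 (ASYSG-INCON) further allows the coordinates a worker reads to come from different iterates. Below is a minimal simulation sketch of the ASYSG-CON update rule only; the toy objective, the uniform random delay model, and names such as `grad_sample` are illustrative assumptions, not the authors' implementation or experimental setup.

```python
import numpy as np

# Minimal simulation sketch of the ASYSG-CON update (Algorithm 1):
#   x_{k+1} = x_k - gamma * sum_{m=1}^{M} grad(x_{k - tau_{k,m}}; xi_{k,m}),
# where each delay tau_{k,m} is bounded by T. The objective and delay model
# here are illustrative assumptions, not the paper's experiments.

rng = np.random.default_rng(0)
d, M, T = 10, 4, 3          # dimension, gradients per step, max staleness
gamma, steps = 0.01, 200    # step size, number of iterations

def grad_sample(x):
    # Stochastic gradient of a toy nonconvex objective
    # f(x) = sum_i (x_i^2 - cos(x_i)), with additive Gaussian noise
    # standing in for the sampling noise xi.
    return 2 * x + np.sin(x) + 0.1 * rng.standard_normal(x.shape)

x = rng.standard_normal(d)
history = [x.copy()]  # past iterates that "workers" may read with staleness

for k in range(steps):
    # Each of the M workers contributes a gradient computed at a stale
    # iterate x_{k - tau}, with tau drawn uniformly from {0, ..., T}.
    g = sum(grad_sample(history[max(0, k - rng.integers(0, T + 1))])
            for _ in range(M))
    x = x - gamma * g
    history.append(x.copy())

print("final objective:", np.sum(x**2 - np.cos(x)))
```

The staleness buffer mirrors the paper's bounded-delay assumption; in a real deployment the delays arise from asynchronous workers reading and writing a shared parameter, not from a random draw.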