Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Asymptotic Behaviors of Projected Stochastic Approximation: A Jump Diffusion Perspective
Authors: Jiadong Liang, Yuze Han, Xiang Li, Zhihua Zhang
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we validate our theoretical results through comprehensive experiments. |
| Researcher Affiliation | Academia | Jiadong Liang, School of Mathematical Sciences, Peking University; Yuze Han, School of Mathematical Sciences, Peking University; Xiang Li, School of Mathematical Sciences, Peking University; Zhihua Zhang, School of Mathematical Sciences, Peking University |
| Pseudocode | Yes | It is clear this algorithm (2) mimics the behavior of Local SGD in FL settings (see Appendix A for the equivalence)… LPSA's Algorithm 1 in Appendix A |
| Open Source Code | No | The paper does not provide a direct link to open-source code or explicitly state that the source code for the methodology is available. |
| Open Datasets | Yes | The synthetic datasets are generated by following [24]. |
| Dataset Splits | No | The paper describes the generation of synthetic datasets and their use in experiments but does not explicitly specify training, validation, or test dataset splits. |
| Hardware Specification | No | Our experiments use synthetic datasets to validate the theoretical results. It is easy to reproduce the experiments on an average computer using only CPUs. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for reproducibility. |
| Experiment Setup | Yes | We focus on classification problems with cross-entropy loss, and ℓ2^2 regularization is imposed to ensure the strong convexity of the objective function… We set K = 100, d = 60 and C = 10… The value of α is set as {1, 0.8, 0.6} and the value of β is from {0, 0.2, 0.4, 0.6, 0.8}. For each repetition, we run 2000 steps of LPSA. |
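The quoted setup (cross-entropy loss with ℓ2^2 regularization on synthetic data, K = 100, d = 60, C = 10, 2000 steps) can be sketched as below. This is a minimal plain-SGD sketch, not the authors' LPSA: the data-generation recipe of [24] is not reproduced, and the regularization strength `lam`, the step-size schedule, and the reading of K as the sample count are all assumptions for illustration.

```python
import numpy as np

# Hedged sketch of the quoted experiment setup: SGD on l2^2-regularized
# multinomial logistic regression (cross-entropy loss) over synthetic data.
# NOT the paper's LPSA algorithm; lam and the role of K are assumed.

rng = np.random.default_rng(0)
K, d, C = 100, 60, 10          # dimensions quoted from the paper
lam = 0.1                      # regularization strength (assumed)
alpha = 1.0                    # step-size decay exponent, eta_k ~ k**(-alpha)

# Placeholder synthetic classification data ([24]'s recipe not reproduced).
X = rng.standard_normal((K, d))
y = rng.integers(0, C, size=K)

W = np.zeros((d, C))
for k in range(1, 2001):       # 2000 steps, as in the quoted setup
    i = rng.integers(K)        # sample one data point
    logits = X[i] @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()               # softmax probabilities
    p[y[i]] -= 1.0             # gradient of cross-entropy w.r.t. logits
    grad = np.outer(X[i], p) + lam * W   # add l2^2 penalty gradient
    W -= k ** (-alpha) * grad  # decaying step size
```

The ℓ2^2 term makes the objective λ-strongly convex, which is the property the paper's quoted setup relies on; the exponents α and β in the quote additionally govern LPSA's step-size and projection-frequency schedules, which this plain-SGD sketch does not model.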