On Acceleration with Noise-Corrupted Gradients
Authors: Michael Cohen, Jelena Diakonikolas, Lorenzo Orecchia
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we verify the predictions and insights from our analysis of AGD+ by performing numerical experiments comparing AGD+ to other accelerated and non-accelerated methods on noise-corrupted gradient oracles." (from 'Our Contributions' in the Introduction; see also Section 5, Numerical Experiments) |
| Researcher Affiliation | Academia | 1 Department of EECS, Massachusetts Institute of Technology, Cambridge, MA, USA; 2 Department of Computer Science, Boston University, Boston, MA, USA. |
| Pseudocode | Yes | The steps of AGD+ are defined as follows: $z_k = z_{k-1} - a_k \widetilde{\nabla} f(x_k)$, $v_k = \nabla \psi^*(z_k)$, $y_k = \frac{A_{k-1}}{A_k} y_{k-1} + \frac{a_k}{A_k} v_k$, with $D_\psi(v_k, x_k) \le \frac{L}{2} \|y_k - x_k\|^2$ (a hedged Python sketch of these steps appears after the table). |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Regression problems on the Epileptic Seizure Recognition Dataset (Andrzejak et al., 2001), obtained from the UCI Machine Learning Repository (Lichman, 2013). |
| Dataset Splits | No | The paper mentions using specific datasets but does not provide explicit details about the training, validation, or test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | "In all the experiments, we used standard Python libraries to solve the considered problems to high accuracy." (No version numbers are given for Python or the specific libraries.) |
| Experiment Setup | No | The paper states "In all the problems, we used $\psi(x) = \frac{L}{2}\|x\|^2$ as the regularizer." and "For constrained problems, we implemented projected gradient descent as the GD algorithm.", but it lacks specific setup details such as step sizes, noise magnitudes, iteration counts, or other solver settings (see the projected-gradient sketch after the table). |
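
To make the quoted AGD+ steps concrete, here is a minimal Python sketch under stated assumptions: it uses the Euclidean regularizer $\psi(x) = \frac{L}{2}\|x\|^2$ from the experiment-setup row (so $\nabla\psi^*(z) = z/L$), an additive-Gaussian model for the noisy gradient oracle, and the textbook accelerated schedule $a_k = (k+1)/2$ together with a standard extrapolation step for $x_k$; neither the schedule nor the extrapolation rule is reported in the paper, so both are assumptions, not the authors' exact method.

```python
import numpy as np

def agd_plus(grad_f, x0, L, n_iters, noise_std=0.0, seed=0):
    """Hedged sketch of AGD+ with a noise-corrupted gradient oracle.

    Assumes psi(x) = (L/2)||x||^2, so grad psi*(z) = z / L, and the
    schedule a_k = (k + 1) / 2 (an assumption, not a value from the paper).
    """
    rng = np.random.default_rng(seed)
    y = x0.copy()                  # output iterate y_k
    v = x0.copy()                  # mirror point v_k = grad psi*(z_k)
    z = L * x0                     # z_0 chosen so that grad psi*(z_0) = x_0
    A = 0.0                        # A_0 = 0; A_k = a_1 + ... + a_k
    for k in range(1, n_iters + 1):
        a = (k + 1) / 2.0
        A_prev, A = A, A + a
        # Extrapolation: combine the output iterate with the mirror point.
        x = (A_prev / A) * y + (a / A) * v
        # Noisy oracle: true gradient plus additive Gaussian noise.
        g = grad_f(x) + noise_std * rng.standard_normal(x.shape)
        # Mirror-descent step in the dual, then map back to the primal.
        z = z - a * g
        v = z / L
        # Primal averaging: y_k is the iterate whose objective converges.
        y = (A_prev / A) * y + (a / A) * v
    return y

# Example use on least squares f(x) = 0.5 * ||Mx - b||^2:
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
L = np.linalg.norm(M, 2) ** 2      # smoothness constant: largest eig of M^T M
x_hat = agd_plus(lambda x: M.T @ (M @ x - b), np.zeros(10), L,
                 n_iters=500, noise_std=0.1)
```

With noise_std=0.0 this reduces to a standard accelerated gradient method; increasing noise_std reproduces the noise-corrupted-oracle regime the experiments compare methods on.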
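
For the constrained experiments the paper says only that projected gradient descent served as the GD baseline. The following is a minimal sketch, assuming a Euclidean projection oracle and the textbook 1/L step size for an L-smooth objective; the paper reports neither.

```python
import numpy as np

def projected_gd(grad_f, project, x0, L, n_iters, noise_std=0.0, seed=0):
    """Projected gradient descent with a noise-corrupted gradient oracle.

    `project` is assumed to return the Euclidean projection onto the
    feasible set; the 1/L step size is the standard choice for an
    L-smooth objective, not a value reported in the paper.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_f(x) + noise_std * rng.standard_normal(x.shape)
        x = project(x - g / L)     # gradient step, then projection
    return x

# Example: projection onto the unit Euclidean ball by radial rescaling.
project_ball = lambda u: u / max(1.0, np.linalg.norm(u))
```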