Non-convex online learning via algorithmic equivalence
Authors: Udaya Ghai, Zhou Lu, Elad Hazan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper is primarily theoretical but includes a small empirical component: Figure 1 empirically verifies that reparameterized GD iterates and EG iterates stay close on a toy problem (a comparison in this spirit is sketched after the table). |
| Researcher Affiliation | Collaboration | Google AI Princeton; Princeton University |
| Pseudocode | Yes | Algorithm 1: Online Mirror Descent; Algorithm 2: Online Gradient Descent (minimal sketches of both appear after the table). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | No | The paper's examples (e.g., exponentiated gradient via quadratic reparameterization) and the toy problem used for empirical verification are synthetic; it does not use, or provide access information for, any publicly available datasets. |
| Dataset Splits | No | The paper is primarily theoretical and does not mention specific training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | No | The paper is theoretical and does not provide experimental setup details such as hyperparameter values, model initialization, or training schedules; the step size η appears only in the theoretical regret analysis, not as part of a training configuration. |
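
The two procedures named in the Pseudocode row are standard online-learning algorithms. Below is a minimal sketch of both, assuming a Euclidean projection for OGD and the entropic regularizer on the simplex (i.e., exponentiated gradient) as the mirror-descent instance; the function and variable names are our own, not the paper's.

```python
import numpy as np

def online_gradient_descent(grads, x0, eta, project=lambda x: x):
    """Online Gradient Descent: x_{t+1} = Proj(x_t - eta * g_t)."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for g in grads:                      # g is the gradient observed at round t
        x = project(x - eta * np.asarray(g))
        iterates.append(x.copy())
    return iterates

def exponentiated_gradient(grads, x0, eta):
    """Online Mirror Descent with the entropic regularizer on the simplex
    (exponentiated gradient): x_{t+1,i} proportional to x_{t,i} * exp(-eta * g_{t,i})."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for g in grads:
        w = x * np.exp(-eta * np.asarray(g))
        x = w / w.sum()                  # Bregman projection back onto the simplex
        iterates.append(x.copy())
    return iterates
```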
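
The paper's central example relates exponentiated gradient to gradient descent run on the quadratic reparameterization x = u ⊙ u. The following self-contained numerical check is in that spirit but is our own construction, not the authors' Figure 1 setup: it uses an unnormalized multiplicative update, a simple quadratic loss, and the eta/4 step-size scaling that makes the two updates agree to first order in eta.

```python
import numpy as np

# Toy comparison (our own construction): GD run on the quadratic
# reparameterization x = u*u tracks the unnormalized exponentiated-gradient
# iterates for small eta, since x * (1 - (eta/2)*g)**2 = x * exp(-eta*g) + O(eta**2).
rng = np.random.default_rng(0)
d, T, eta = 5, 200, 0.01
x_star = rng.dirichlet(np.ones(d))      # minimizer of the toy quadratic loss

def grad(x):
    """Gradient of the toy loss f(x) = 0.5 * ||x - x_star||^2."""
    return x - x_star

x_eg = np.full(d, 1.0 / d)              # multiplicative (EG-style) iterate
u = np.sqrt(x_eg)                       # GD iterate in the u-parameterization
for _ in range(T):
    x_eg = x_eg * np.exp(-eta * grad(x_eg))     # unnormalized EG step
    u = u - (eta / 4) * 2 * u * grad(u * u)     # GD step on h(u) = f(u*u)

print("max gap between EG and reparameterized GD iterates:",
      np.max(np.abs(x_eg - u * u)))
```

The step-size choice follows from the chain rule: GD on h(u) = f(u ⊙ u) with step eta/4 multiplies each coordinate of u by (1 - (eta/2) g), so the induced update on x = u ⊙ u matches the multiplicative factor exp(-eta * g) up to O(eta^2) per step, which is why the printed gap stays small.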