Online Improper Learning with an Approximation Oracle
Authors: Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our results are summarized in Table 1 below. We present these two algorithms and their guarantees in Section 3 and Appendix B. |
| Researcher Affiliation | Collaboration | Elad Hazan, Princeton University & Google AI Princeton (ehazan@cs.princeton.edu); Wei Hu, Princeton University (huwei@cs.princeton.edu); Yuanzhi Li, Stanford University (yuanzhil@stanford.edu); Zhiyuan Li, Princeton University (zhiyuanli@cs.princeton.edu) |
| Pseudocode | Yes | Algorithm 1: Online Mirror Descent using a Projection-and-Separation Oracle; Algorithm 2: Projection-and-Decomposition Oracle (PAD); Algorithm 3: Online Stochastic Mirror Descent with Barycentric Regularization (a generic mirror-descent sketch is given below the table) |
| Open Source Code | No | The paper does not provide any statements about releasing its own source code or links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets, so no dataset availability information for training is provided. |
| Dataset Splits | No | The paper does not discuss empirical validation or dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe running experiments; therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and focuses on algorithms and proofs; it does not mention any specific software dependencies with version numbers for implementation. |
| Experiment Setup | No | The paper is theoretical and does not describe an empirical experimental setup with hyperparameters or system-level training settings. |
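
The Pseudocode row lists variants of online mirror descent. For orientation only, below is a minimal sketch of a single online-mirror-descent step with the entropic regularizer over the probability simplex, which reduces to a multiplicative-weights update. This is an illustrative assumption, not the paper's Algorithm 1: it omits the projection-and-separation and projection-and-decomposition machinery built from the approximation oracle, which is the paper's contribution, and the step size `eta` and the toy loss vectors are hypothetical.

```python
import numpy as np

def omd_entropic_step(x, grad, eta):
    """One online-mirror-descent step with the negative-entropy regularizer
    over the probability simplex: the mirror update is multiplicative, and
    renormalizing performs the Bregman projection back onto the simplex."""
    w = x * np.exp(-eta * grad)  # unnormalized multiplicative update
    return w / w.sum()           # normalize (project onto the simplex)

# Toy run with random linear losses over a 3-point simplex (illustrative only).
rng = np.random.default_rng(0)
x = np.ones(3) / 3               # start at the uniform distribution
eta = 0.1                        # hypothetical step size
for t in range(100):
    loss_vector = rng.uniform(size=3)  # round-t linear loss
    x = omd_entropic_step(x, loss_vector, eta)
print(x)
```

The paper's algorithms replace the exact projection step above with subroutines that only query an approximation oracle for the decision set; this sketch shows only the bare mirror-descent template they build on.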