Online Control for Meta-optimization
Authors: Xinyi Chen, Elad Hazan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Illustrative experimental results of our meta-optimization methods are given in Section 4. |
| Researcher Affiliation | Collaboration | Xinyi Chen, Princeton University and Google DeepMind (xinyic@princeton.edu); Elad Hazan, Princeton University and Google DeepMind (ehazan@princeton.edu) |
| Pseudocode | Yes | Algorithm 1 Meta-optimization; Algorithm 2 Gradient perturbation controller for meta-optimization |
| Open Source Code | No | The paper cites external libraries such as JAX and the DeepMind JAX Ecosystem, but provides no link to, or statement about, source code for its own method or implementation. |
| Open Datasets | Yes | For a proof-of-concept nonconvex task, we consider MNIST classification with a neural network. |
| Dataset Splits | No | The paper reports the number of episodes and epochs and the batch size used in experiments, but does not specify training, validation, or test dataset splits or percentages. |
| Hardware Specification | No | The paper does not describe the hardware (e.g., GPU/CPU models, memory) used to run its experiments; it specifies only model details such as the number of layers in the neural network. |
| Software Dependencies | No | The paper mentions frameworks such as JAX in its references, but does not provide version numbers for JAX or any other software dependencies needed for replication. |
| Experiment Setup | Yes | For meta-optimization, we use L = 3, δ = 0, and a base learning rate of 0.001. We take L = 20, and the network is trained using stochastic gradients from batches of size 256. The hyperparameters of the baselines are tuned. |
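
As a rough aid to reproduction, the sketch below wires together only the settings quoted above: MNIST classification with a neural network, stochastic gradients from batches of size 256, and a base learning rate of 0.001. Everything else, including the two-layer MLP, the vanilla SGD update, and the `load_mnist_batches` helper, is an illustrative assumption and is not taken from the paper or its meta-optimization algorithms.

```python
# Hedged reproduction sketch in plain JAX, using only the quoted settings:
# MNIST classification, batch size 256, base learning rate 0.001.
# The MLP architecture, the vanilla SGD update, and `load_mnist_batches`
# are illustrative assumptions, not the authors' implementation.
import jax
import jax.numpy as jnp

LEARNING_RATE = 1e-3  # base learning rate 0.001, as quoted in the paper
BATCH_SIZE = 256      # batch size 256, as quoted in the paper


def init_params(key, sizes=(784, 128, 10)):
    """Initialize a small MLP; the hidden width 128 is an assumption."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
            for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]


def forward(params, x):
    """ReLU MLP producing class logits for flattened 28x28 MNIST images."""
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b


def loss_fn(params, x, y):
    """Mean cross-entropy between logits and integer labels."""
    log_probs = jax.nn.log_softmax(forward(params, x))
    return -jnp.mean(jnp.sum(jax.nn.one_hot(y, 10) * log_probs, axis=-1))


@jax.jit
def sgd_step(params, x, y):
    """One stochastic-gradient step at the quoted base learning rate."""
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - LEARNING_RATE * g, params, grads)


# Hypothetical usage; `load_mnist_batches` is a stand-in data loader, not from the paper.
# params = init_params(jax.random.PRNGKey(0))
# for x_batch, y_batch in load_mnist_batches(batch_size=BATCH_SIZE):
#     params = sgd_step(params, x_batch, y_batch)
```

The paper's meta-optimization layer (Algorithms 1 and 2) would replace the fixed SGD step above by adapting the optimization across episodes; since no source code is released, that part is not sketched here.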