Meta-Descent for Online, Continual Prediction
Authors: Andrew Jacobsen, Matthew Schlegel, Cameron Linke, Thomas Degris, Adam White, Martha White
AAAI 2019, pp. 3943-3950 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide an extensive empirical comparison on (1) canonical optimization problems that are difficult to optimize with large flat regions, (2) an online, supervised tracking problem where the optimal step-sizes can be computed, (3) a finite Markov Decision Process with linear features that cause conventional temporal difference learning to diverge, and (4) a high-dimensional time-series prediction problem using data generated from a real mobile robot. |
| Researcher Affiliation | Collaboration | ¹University of Alberta, Edmonton, Canada; ²DeepMind, London, UK; ³DeepMind, Edmonton, Canada |
| Pseudocode | No | The paper derives recursive update forms and provides equations but does not present a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code, nor does it include links to a code repository. |
| Open Datasets | Yes | Using the freely available nexting data set (144,000 samples, corresponding to 3.4 hours of runtime on the robot), we incrementally processed the data on each step constructing a feature vector from the sensor vector, and making one prediction for each sensor. |
| Dataset Splits | No | The paper mentions 'extensively searching the meta-parameters' but does not provide specific details on train/validation/test splits (percentages, counts, or explicit methodology). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or processing units used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers. |
| Experiment Setup | No | The paper mentions that meta-parameters were 'extensively swept' and 'optimized' but does not provide specific numerical values for hyperparameters or other detailed experimental setup configurations in the main text. |
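Since the paper provides only recursive update equations and no pseudocode or code release, the following is a minimal sketch of the general meta-descent technique the paper builds on: Sutton's IDBD update, which adapts a per-weight step-size online by gradient descent on the squared error. This is an illustration of the step-size-adaptation idea only, not the paper's own algorithm; the function and parameter names (`idbd_step`, `meta_rate`) are ours.

```python
import numpy as np

def idbd_step(w, h, beta, x, y, meta_rate=0.01):
    """One IDBD update: each weight i has step-size alpha_i = exp(beta_i),
    and beta_i is itself adapted by a meta-gradient step on the squared error.
    Illustrative sketch of meta-descent, not the paper's update."""
    delta = y - w @ x                            # prediction error
    beta = beta + meta_rate * delta * x * h      # meta-gradient step on log step-sizes
    alpha = np.exp(beta)                         # per-weight step-sizes
    w = w + alpha * delta * x                    # LMS step with adapted rates
    # h traces the effect of past step-size changes on the weights
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, h, beta

# Usage: learn a fixed linear target online, one sample at a time.
rng = np.random.default_rng(0)
n = 5
w_true = rng.normal(size=n)
w = np.zeros(n)
h = np.zeros(n)
beta = np.full(n, np.log(0.05))                  # initial step-size 0.05 per weight
errs = []
for t in range(2000):
    x = rng.normal(size=n)
    y = w_true @ x
    errs.append((y - w @ x) ** 2)
    w, h, beta = idbd_step(w, h, beta, x, y)
```

On this noise-free linear problem the squared error drops steadily as both the weights and their per-weight step-sizes adapt.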