Automatic Construction and Natural-Language Description of Nonparametric Regression Models
Authors: James Lloyd, David Duvenaud, Roger Grosse, Joshua Tenenbaum, Zoubin Ghahramani
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare ABCD against existing model construction techniques in terms of predictive performance at extrapolation, and we find state-of-the-art performance on 13 time series. We evaluate the performance of the algorithms listed below on 13 real time-series from various domains from the time series data library. |
| Researcher Affiliation | Academia | James Robert Lloyd Department of Engineering University of Cambridge; David Duvenaud Department of Engineering University of Cambridge; Roger Grosse Brain and Cognitive Sciences Massachusetts Institute of Technology; Joshua B. Tenenbaum Brain and Cognitive Sciences Massachusetts Institute of Technology; Zoubin Ghahramani Department of Engineering University of Cambridge |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Source code to perform all experiments is available on GitHub: http://www.github.com/jamesrobertlloyd/gpss-research |
| Open Datasets | Yes | We evaluate the performance of the algorithms listed below on 13 real time-series from various domains from the time series data library (Hyndman, Accessed summer 2013); plots of the data can be found at the beginning of the reports in the supplementary material. |
| Dataset Splits | Yes | As a heuristic, we order components by always adding next the component which most reduces the 10-fold cross-validated mean absolute error. (A sketch of this greedy ordering appears below the table.) |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions using the 'GPML toolbox' for GP parameter optimisation but does not provide a version number for it, nor does it list any other software dependencies. |
| Experiment Setup | Yes | After each model is proposed, its kernel parameters are optimised by conjugate gradient descent. We evaluate each optimised model M using the Bayesian Information Criterion (Schwarz, 1978): BIC(M) = -2 log p(D \| M) + \|M\| log n, where \|M\| is the number of kernel parameters and n is the number of data points. We use the default mean absolute error criterion when using Eureqa. (A sketch of the BIC computation also appears below the table.) |
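
The component-ordering heuristic quoted in the Dataset Splits row can be sketched in a few lines. The helpers `cv_mae` and `fit_partial` below are hypothetical stand-ins, not the paper's API (the authors' implementation lives in the linked gpss-research repository); the sketch only illustrates the greedy 10-fold cross-validated MAE ordering.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_mae(fit, X, y, n_splits=10):
    """10-fold cross-validated mean absolute error of a model-fitting routine.

    `fit(X_train, y_train)` must return a predictor callable mapping X to
    predictions; both helpers here are illustrative, not the paper's API.
    """
    errors = []
    for train, test in KFold(n_splits=n_splits).split(X):
        predictor = fit(X[train], y[train])
        errors.append(np.mean(np.abs(predictor(X[test]) - y[test])))
    return float(np.mean(errors))

def order_components(components, fit_partial, X, y):
    """Greedy ordering: repeatedly add the component whose inclusion most
    reduces the 10-fold CV MAE of the partial additive model.

    `fit_partial(subset)` is a hypothetical factory returning a fitting
    routine for that subset of kernel components.
    """
    remaining, ordered = list(components), []
    while remaining:
        best = min(remaining,
                   key=lambda c: cv_mae(fit_partial(ordered + [c]), X, y))
        remaining.remove(best)
        ordered.append(best)
    return ordered
```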
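
The BIC criterion quoted in the Experiment Setup row is straightforward to compute once the GP's log marginal likelihood is in hand. Below is a minimal Python sketch; the function name and the example numbers are illustrative, not taken from the paper.

```python
import math

def bic(log_marginal_likelihood, num_params, n):
    """BIC(M) = -2 log p(D | M) + |M| log n (Schwarz, 1978).

    Lower is better: the |M| log n term penalises kernels with many
    hyperparameters relative to how well they explain the data.
    """
    return -2.0 * log_marginal_likelihood + num_params * math.log(n)

# Hypothetical numbers: a kernel with 5 hyperparameters fit to 300 points,
# reaching a log marginal likelihood of -412.7 after conjugate gradients.
print(bic(-412.7, num_params=5, n=300))  # approx. 853.9
```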