Tree-structured Gaussian Process Approximations

Authors: Thang D Bui, Richard E Turner

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the new approximation method on three challenging real-world prediction tasks via a speed-accuracy trade-off, as recommended in [21].
Researcher Affiliation | Academia | Thang Bui (tdb40@cam.ac.uk), Richard Turner (ret26@cam.ac.uk), Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK
Pseudocode | No | The paper does not contain any structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code is available at ... http://mlg.eng.cam.ac.uk/thang/ [Tree+VFE].
Open Datasets | Yes | The speech signal was taken from the TIMIT database (see fig. 4)...; the terrain dataset is available at http://data.gov.uk/dataset/os-terrain-50-dtm.
Dataset Splits | No | For Experiment 3, the paper states: 'In total, this translates into about 200k/40k training/test points.' However, it does not provide a separate validation split or complete train/validation/test counts or percentages for all experiments.
Hardware Specification | No | The paper does not report hardware details such as GPU/CPU models, processor types, or memory amounts used to run its experiments.
Software Dependencies | No | The paper notes that code for various methods (including the authors' own) is available, implying a software stack such as MATLAB, but it does not list any dependencies with version numbers (e.g., 'MATLAB R20XXa' or 'PyTorch 1.x').
Experiment Setup | No | The paper describes the kernel functions and how pseudo-dataset sizes were varied, but it does not give specific training hyperparameters such as learning rates, convergence criteria, or settings for the BFGS optimizer it mentions.
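To make the last row concrete, the sketch below shows the kind of optimizer configuration the paper leaves unreported. This is a hypothetical illustration, not the authors' code: it fits a standard GP marginal likelihood (squared-exponential kernel, synthetic data) with SciPy's BFGS, and the tolerance and iteration-limit values are arbitrary examples of exactly the settings the checklist flags as missing.

```python
# Hypothetical sketch only: kernel, data, and all optimizer settings are
# illustrative assumptions, not the paper's actual experimental setup.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

def neg_log_marginal_likelihood(theta):
    # theta = log of [signal variance, lengthscale, noise variance]
    sf2, ell, sn2 = np.exp(theta)
    d2 = (X - X.T) ** 2                       # pairwise squared distances (n x n)
    K = sf2 * np.exp(-0.5 * d2 / ell**2) + (sn2 + 1e-8) * np.eye(len(X))
    L = np.linalg.cholesky(K)                 # jitter above keeps K positive definite
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2 * np.pi))

# The gradient tolerance and iteration cap here are the unreported details;
# these particular values are made up for the example.
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3),
               method='BFGS', options={'gtol': 1e-5, 'maxiter': 200})
print(res.fun, np.exp(res.x))
```

Reporting the initialization (`x0`), `gtol`, and `maxiter` alongside the kernel choice would be enough to reproduce this kind of training run.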