Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior

Authors: Anh Tong, Toan M. Tran, Hung Bui, Jaesik Choi | Pages 9906-9914

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 6, "Experimental Evaluations": "In this section, we first set up choices for compositional kernels. We then justify how the Horseshoe assumption [helps] kernel selection on synthetic data as well as time series data. Finally, we validate our model with regression and classification tasks."
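The Horseshoe assumption referenced above places a shrinkage prior on kernel weights. As a minimal illustration (not the authors' implementation; function names and the global scale `tau=0.1` are assumptions for this sketch), the Horseshoe prior draws each weight as a normal variate scaled by a global scale and a heavy-tailed half-Cauchy local scale, so most weights collapse toward zero while a few remain large:

```python
import math
import random

def sample_half_cauchy(rng):
    # Half-Cauchy(0, 1) via the inverse CDF of a standard Cauchy.
    return abs(math.tan(math.pi * (rng.random() - 0.5)))

def sample_horseshoe_weights(n, tau=0.1, seed=0):
    # Horseshoe prior sketch: w_i = tau * lambda_i * eps_i with
    # lambda_i ~ HalfCauchy(0, 1) (local shrinkage) and eps_i ~ N(0, 1).
    # The small global scale tau pulls most weights toward zero, while
    # the heavy-tailed local scales let a few weights escape shrinkage.
    rng = random.Random(seed)
    return [tau * sample_half_cauchy(rng) * rng.gauss(0.0, 1.0)
            for _ in range(n)]

weights = sample_horseshoe_weights(1000)
small = sum(1 for w in weights if abs(w) < 0.1)
print(f"{small} of {len(weights)} weights lie within 0.1 of zero")
```

This shrink-most, keep-some behavior is what makes the prior useful for pruning components of a compositional kernel.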
Researcher Affiliation | Collaboration | Anh Tong [1], Toan M. Tran [2], Hung Bui [2], Jaesik Choi [3,4] ([1] Ulsan National Institute of Science and Technology; [2] VinAI Research; [3] Korea Advanced Institute of Science and Technology; [4] INEEJI)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code is released nor links to a code repository for the described methodology. A footnote points to an extended arXiv version, which likewise contains no code link.
Open Datasets | Yes | "We conducted experiments on UCI data sets (Asuncion and Newman 2007) including boston, concrete, energy, kin8nm, wine and yatch (see Appendix for detailed descriptions)." ... "We test our model on GEFCOM data set from the Global Energy Forecasting Competition (Tao Hong, Pierre Pinson, and Shu Fan 2014)."
Dataset Splits | No | The paper describes train/test splits (e.g., "90% of the data set for training and held out 10% as test data"; "test data is taken from top 1/15 and bottom 1/15 of the data, the remaining is train data") but specifies no separate validation set or validation split percentage.
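The two split protocols quoted above are easy to pin down in code. Below is a minimal sketch (function names are ours, not the paper's): a shuffled 90/10 interpolation split, and an extrapolation split that holds out the top and bottom 1/15 of the sorted values as test data:

```python
import random

def random_split(n, test_frac=0.1, seed=0):
    # Shuffled 90/10 split: returns train and held-out test indices.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(n * test_frac)
    return idx[n_test:], idx[:n_test]

def extrapolation_split(values):
    # Extrapolation split: the top 1/15 and bottom 1/15 of the sorted
    # values form the test set; the middle 13/15 is the train set.
    order = sorted(range(len(values)), key=lambda i: values[i])
    k = len(values) // 15
    test = order[:k] + order[-k:]
    train = order[k:-k]
    return train, test

train, test = random_split(100)
print(len(train), len(test))  # 90 10
```

Note that neither protocol carves out a validation set, which is exactly the gap the report flags.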
Hardware Specification | No | The paper gives no details of the hardware (e.g., GPU model, CPU type, memory) used to run the experiments.
Software Dependencies | No | "Our model is developed based on (Matthews et al. 2017)." This implies GPflow/TensorFlow, but no version numbers for these or other software dependencies are provided.
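For readers unfamiliar with the paper's building blocks, a compositional kernel combines base kernels through sums and products. The sketch below (plain Python, not the authors' GPflow code; the particular composition and weight names `w_sum`/`w_prod` are illustrative assumptions) evaluates one such composition, whose weights a shrinkage prior would push toward zero to prune unneeded components:

```python
import math

def rbf(x, y, lengthscale=1.0):
    # Squared-exponential (RBF) base kernel.
    return math.exp(-0.5 * ((x - y) / lengthscale) ** 2)

def periodic(x, y, period=1.0, lengthscale=1.0):
    # Standard periodic base kernel.
    return math.exp(-2.0 * math.sin(math.pi * abs(x - y) / period) ** 2
                    / lengthscale ** 2)

def compositional_kernel(x, y, w_sum=1.0, w_prod=1.0):
    # Compositional kernel: a weighted sum of a base kernel and a
    # product of base kernels (RBF + RBF * Periodic). Under a shrinkage
    # prior, w_sum and w_prod would be shrunk toward zero when their
    # components are not supported by the data.
    return w_sum * rbf(x, y) + w_prod * rbf(x, y, 5.0) * periodic(x, y)

print(compositional_kernel(0.0, 0.0))  # 2.0 at x == y
```

In GPflow such compositions are typically built by adding and multiplying kernel objects directly; the point here is only the sum/product structure that the shrinkage prior operates on.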
Experiment Setup | No | The paper mentions hyperparameter initialization strategies but provides no concrete numerical values (e.g., learning rate, batch size, number of epochs) or other detailed training configuration.