Efficient Approximate Inference for Stationary Kernel on Frequency Domain

Authors: Yohan Jung, Kyungwoo Song, Jinkyoo Park

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we provide the experimental results validating the performance of the proposed model using various data sets."
Researcher Affiliation | Academia | "1Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea; 2Department of Artificial Intelligence, University of Seoul, Seoul, South Korea. Correspondence to: Yohan Jung <becre1776@kaist.ac.kr>."
Pseudocode | Yes | "Algorithm 1: Approximate Inference for the spectral density parameters {w_q, μ_q, σ_q}, q = 1, …, Q, and the noise parameter σ_ε."
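The parameters {w_q, μ_q, σ_q} named in Algorithm 1 are those of a Gaussian spectral mixture kernel in the sense of Wilson & Adams (2013), cited elsewhere in this report. A minimal NumPy sketch of evaluating such a kernel follows; the function name and variable layout are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def sm_kernel(tau, w, mu, sigma):
    """Gaussian spectral mixture kernel (Wilson & Adams, 2013):
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 sigma_q^2) * cos(2 pi tau mu_q)

    tau: scalar or array of input distances.
    w, mu, sigma: length-Q arrays of mixture weights, spectral means,
    and spectral standard deviations.
    """
    tau = np.asarray(tau, dtype=float)[..., None]  # broadcast over Q components
    comp = np.exp(-2.0 * np.pi**2 * tau**2 * sigma**2) * np.cos(2.0 * np.pi * tau * mu)
    return (w * comp).sum(axis=-1)

# At tau = 0 the kernel value equals the sum of the mixture weights.
w = np.array([0.6, 0.4])
mu = np.array([0.5, 2.0])
sigma = np.array([0.1, 0.3])
print(sm_kernel(0.0, w, mu, sigma))  # -> 1.0
```

The noise parameter σ_ε from the algorithm would enter separately, as observation noise added to the kernel's diagonal.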
Open Source Code | Yes | "We provide our implementation at https://github.com/becre2021/ABInferGSM and additional results in Appendix D."
Open Datasets | Yes | "Passenger data set used in (Wilson & Adams, 2013). ... bike dataset (N = 17379, D = 17) in the UCI benchmark set (Dua & Graff, 2017)."
Dataset Splits | Yes | "We divide the dataset into five equal partitions and, for each partition, randomly split it into training, validation, and test sets with an 8:1:1 ratio. We pick the kernel parameters with the lowest RMSE on the validation set and use them for prediction on the test set."
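The split protocol quoted above (five equal partitions, each split 8:1:1 into train/validation/test) can be sketched as follows; the helper name, the seeding scheme, and the use of NumPy are illustrative assumptions, not the authors' code:

```python
import numpy as np

def split_partition(n, seed=0):
    """Randomly split n indices into train/validation/test with an 8:1:1 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_va = int(0.8 * n), int(0.1 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# Five equal partitions of the dataset, each split 8:1:1.
N = 17379  # size of the UCI bike dataset quoted above
parts = np.array_split(np.arange(N), 5)
splits = [split_partition(len(p), seed=i) for i, p in enumerate(parts)]
```

Model selection then amounts to evaluating RMSE on each partition's validation indices and keeping the parameters that minimize it, as the quoted response describes.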
Hardware Specification | Yes | "We use PyTorch (1.7.0) (Paszke et al., 2019) and employ an RTX 2080 Ti (11 GB) and a V100 (16 GB) GPU."
Software Dependencies | Yes | "We use PyTorch (1.7.0) (Paszke et al., 2019)."
Experiment Setup | Yes | "For the baseline learning method (maximization of the log marginal likelihood, known as MLE Type-2), SVSS, and SVSS-Ws, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate lr = 0.005. ... We run 1000 and 1200 iterations for training on the Parkinsons and the Bike dataset, respectively. ... We run 1500 iterations for training."
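A minimal PyTorch sketch of the quoted optimizer configuration (Adam with lr = 0.005, run for a fixed number of iterations); the parameter tensors and the quadratic objective are placeholders standing in for the paper's kernel and noise parameters, not its actual model:

```python
import torch

# Placeholder log-parameters standing in for {w_q, mu_q, sigma_q} and sigma_eps.
params = [torch.randn(3, requires_grad=True), torch.zeros(1, requires_grad=True)]

# Adam with the learning rate quoted in the setup (Kingma & Ba, 2014).
optimizer = torch.optim.Adam(params, lr=0.005)

for _ in range(1000):  # e.g. 1000 iterations, as quoted for the Parkinsons dataset
    optimizer.zero_grad()
    # Stand-in objective; the paper maximizes a (variational) log marginal likelihood.
    loss = (params[0] ** 2).sum() + (params[1] ** 2).sum()
    loss.backward()
    optimizer.step()
```

In the actual experiments the loss would be the negative of the objective being maximized (the log marginal likelihood for MLE Type-2, or the variational bound for SVSS / SVSS-Ws).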