Fourier Feature Approximations for Periodic Kernels in Time-Series Modelling
Authors: Anthony Tompkins, Fabio Ramos
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show significantly improved Gram matrix approximation errors, and also demonstrate the method in several time-series problems, comparing against other commonly used approaches such as recurrent neural networks. (A minimal sketch of this kernel approximation follows the table.) |
| Researcher Affiliation | Academia | Anthony Tompkins, Fabio Ramos; School of Information Technologies, The University of Sydney, New South Wales 2006, Australia; atom7176@uni.sydney.edu.au, fabio.ramos@sydney.edu.au |
| Pseudocode | No | The paper describes methods and processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links or explicit statements indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We test on the classic Mauna Loa CO2 dataset, from 1965 to readings from early 2017 (Keeling et al. 2017). This dataset has been examined in great detail in the past (Rasmussen and Williams 2006; Duvenaud et al. 2013) and so provides a good baseline for validating our methodology. |
| Dataset Splits | No | The paper specifies training and testing splits (e.g., 'train on the first 80% of the data and test on the remaining 20%') but does not explicitly mention a separate validation dataset or its split details. (A minimal illustration of such a chronological split follows the table.) |
| Hardware Specification | No | The paper discusses running times and scalability but does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions using 'probabilistic programming and Variational Inference' and references the Edward library, but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper discusses learning kernel hyperparameters and optimizing them, but it does not provide specific values for hyperparameters (e.g., learning rate, batch size) or other detailed system-level training settings for its experiments. |
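For context on the Research Type row: the paper's central object is a finite Fourier (harmonic) feature map whose inner products approximate a periodic kernel's Gram matrix. Below is a minimal NumPy/SciPy sketch of the classical construction such methods build on, in which the periodic (MacKay) kernel expands into a cosine series with modified-Bessel coefficients. The helper names (`periodic_kernel`, `periodic_features`) and all parameter values are illustrative assumptions, not the paper's code, and this shows the standard expansion rather than the authors' exact estimator.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel: ive(n, z) = exp(-z) * I_n(z) for z > 0

def periodic_kernel(x, y, period=1.0, lengthscale=1.0):
    """Exact periodic (MacKay) kernel: k(x, y) = exp(-2 sin^2(pi (x - y) / p) / l^2)."""
    tau = x[:, None] - y[None, :]
    return np.exp(-2.0 * np.sin(np.pi * tau / period) ** 2 / lengthscale ** 2)

def periodic_features(x, n_harmonics=8, period=1.0, lengthscale=1.0):
    """Deterministic harmonic feature map for the periodic kernel.

    Uses exp(z cos t) = I_0(z) + 2 * sum_{n>=1} I_n(z) cos(n t) with z = 1 / l^2,
    so Phi @ Phi.T converges to the exact Gram matrix as n_harmonics grows.
    """
    z = 1.0 / lengthscale ** 2
    n = np.arange(n_harmonics + 1)
    w = ive(n, z)      # series coefficients exp(-z) * I_n(z)
    w[1:] *= 2.0       # harmonics n >= 1 carry a factor of 2
    ang = 2.0 * np.pi * np.outer(x, n) / period
    return np.hstack([np.sqrt(w) * np.cos(ang), np.sqrt(w[1:]) * np.sin(ang[:, 1:])])

x = np.sort(np.random.default_rng(0).uniform(0.0, 3.0, size=200))
K = periodic_kernel(x, x)
for m in (2, 4, 8):
    Phi = periodic_features(x, n_harmonics=m)
    err = np.linalg.norm(K - Phi @ Phi.T, "fro") / np.linalg.norm(K, "fro")
    print(f"{m:2d} harmonics: relative Frobenius error {err:.2e}")
```

Because the Bessel coefficients I_n(z) decay faster than exponentially in n, a handful of harmonics already reproduces the Gram matrix to high accuracy for moderate lengthscales, which is why deterministic harmonic features are attractive for periodic kernels.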
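And for the Dataset Splits row: a minimal sketch of a chronological 80/20 split on a synthetic seasonal series. The data here are a hypothetical stand-in for the Mauna Loa record (trend plus annual cycle plus noise), and the `design` helper and all constants are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(1965.0, 2017.0, 624)  # roughly monthly resolution
# Hypothetical seasonal series: linear trend + annual cycle + noise.
y = (0.15 * (t - 1965.0)
     + 3.0 * np.sin(2.0 * np.pi * (t - 1965.0))
     + 0.3 * rng.standard_normal(t.size))

# Chronological 80/20 split as quoted in the table: no shuffling, so the
# test block lies strictly in the future of the training block.
cut = int(0.8 * t.size)
t_train, t_test = t[:cut], t[cut:]
y_train, y_test = y[:cut], y[cut:]

def design(t, n_harmonics=3):
    """Least-squares design matrix: intercept, trend, and annual harmonics."""
    cols = [np.ones_like(t), t - 1965.0]
    for n in range(1, n_harmonics + 1):
        cols.append(np.cos(2.0 * np.pi * n * (t - 1965.0)))
        cols.append(np.sin(2.0 * np.pi * n * (t - 1965.0)))
    return np.stack(cols, axis=1)

w, *_ = np.linalg.lstsq(design(t_train), y_train, rcond=None)
rmse = np.sqrt(np.mean((design(t_test) @ w - y_test) ** 2))
print(f"test RMSE on the held-out final 20%: {rmse:.3f}")
```

A validation split, which the table notes the paper does not describe, would typically be carved chronologically out of the tail of the training block in the same way.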