Scalable Bayesian dynamic covariance modeling with variational Wishart and inverse Wishart processes
Authors: Creighton Heaukulani, Mark van der Wilk
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experimentation, we demonstrate that some (but not all) model variants outperform multivariate GARCH when forecasting the covariances of returns on financial instruments. |
| Researcher Affiliation | Collaboration | Creighton Heaukulani, No Affiliation, Bangkok, Thailand, c.k.heaukulani@gmail.com; Mark van der Wilk, PROWLER.io, Cambridge, United Kingdom, mark@prowler.io |
| Pseudocode | Yes | Appendix: Implementation in GPflow |
| Open Source Code | No | The paper includes a Python code snippet in the Appendix demonstrating implementation in GPflow, but it does not contain an explicit statement that the full source code for the methodology or experiments is being released, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We implement our variational inference routines on the model variants applied to three datasets of financial returns, which are denoted as follows (note that we take the log returns, which are defined at time t + 1 as log(P_{t+1}/P_t), where P_t is the price of the instrument at time t): Dow 30: Intraday returns on the components of the Dow 30 Industrial Average (as of the changes on Jun. 8, 2009), taken at the close of every five-minute interval from Nov. 17, 2017 through Dec. 6, 2017. The resulting dataset size is N = 978, D = 30. The raw data was from Marjanovic [14]. FX: Daily foreign exchange rates for 20 currency pairs taken from Wu et al. [26]. The dataset size is N = 1,565, D = 20. S&P 500: Daily returns on the closing prices of the S&P 500 index from Feb. 8, 2013 through Feb. 7, 2018, taken from Nugent [17]. |
| Dataset Splits | Yes | The validation sets were the final 2%, 5%, and 5% of the measurements in just one of the training sets for the Dow 30, FX, and S&P 500 datasets, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory, or cloud instance types) used to run the experiments. |
| Software Dependencies | Yes | This implementation is particularly easy with GPflow [16], a Gaussian process toolbox built on Tensorflow, as demonstrated with a code snippet in the Appendix. Appendix: Implementation in GPflow ... version 1.3.0 |
| Experiment Setup | Yes | We used M = 300 inducing points, R = 2 variational samples for the Monte Carlo approximations, and a minibatch size of 300. The gradient ascent step sizes were scheduled according to Adam [11]. We selected the stopping times and an exponential learning rate decay schedule via cross validation, choosing the setting that maximized the test loglikelihood metric (see below) on a validation set. |
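The datasets row above quotes the paper's definition of log returns, log(P_{t+1}/P_t). A minimal sketch of that preprocessing step, using hypothetical prices (the array below is illustrative, not from any of the three datasets):

```python
import numpy as np

# Hypothetical closing prices for a single instrument at consecutive times.
prices = np.array([100.0, 101.5, 100.8, 102.3])

# Log return at time t+1 is log(P_{t+1} / P_t), per the definition
# quoted in the datasets row; one return per consecutive price pair.
log_returns = np.log(prices[1:] / prices[:-1])
```

For D instruments, stacking the per-instrument return series column-wise yields the N x D return matrix whose covariance dynamics the model forecasts.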
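The experiment-setup row describes minibatch gradient ascent with Adam step sizes and an exponential learning-rate decay schedule. The sketch below illustrates that combination on a toy objective; the Adam update rule is standard, but the objective, base learning rate, and decay rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for gradient *ascent* on a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta + lr * m_hat / (np.sqrt(v_hat) + eps)  # ascent step
    return theta, m, v

rng = np.random.default_rng(0)
target = 2.0        # maximiser of the toy objective -(theta - target)^2
theta, m, v = 0.0, 0.0, 0.0
base_lr, decay = 0.1, 0.999  # illustrative schedule, not the paper's values

for t in range(1, 2001):
    # Noisy gradient of -(theta - target)^2, mimicking a minibatch
    # Monte Carlo estimate of the ELBO gradient.
    grad = -2.0 * (theta - target) + 0.1 * rng.standard_normal()
    lr = base_lr * decay ** t    # exponential learning-rate decay
    theta, m, v = adam_step(theta, grad, m, v, t, lr)
```

In the paper the analogous schedule hyperparameters (stopping time and decay rate) were chosen by cross-validation on the held-out validation sets described above.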