Discovering Latent Covariance Structures for Multiple Time Series
Authors: Anh Tong, Jaesik Choi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on five real-world data sets demonstrate that our new model outperforms existing methods in terms of structure discoveries and predictive performances. |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea. Correspondence to: Jaesik Choi <jaesik@unist.ac.kr>. |
| Pseudocode | Yes | Algorithm 1: Partial set expansion of LKM learning (a hedged sketch of this kind of greedy expansion loop appears after the table). |
| Open Source Code | No | The paper does not provide a direct link to a code repository or an explicit statement about the public availability of its source code. |
| Open Datasets | Yes | The US stock price data set consists of 9 stocks (GE, MSFT, XOM, PFE, C, WMT, INTC, BP, and AIG) containing 129 adjusted closes taken from the second half of 2001. The US housing market data set includes the 120-month housing prices of 6 cities (New York, Los Angeles, Chicago, Phoenix, San Diego, San Francisco) from 2004 to 2013. The currency data set includes 4 currency exchange rates from the US dollar to 4 emerging markets: South African Rand (ZAR), Indonesian Rupiah (IDR), Malaysian Ringgit (MYR), and Russian Rouble (RUB). Each currency exchange time series has 132 data points. We collected time series from various domains into a data set. It consists of gold prices, crude oil prices, the NASDAQ composite index, and the USD index from 2015 July 1st to 2018 July 1st. We call this data set GONU (Gold, Oil, NASDAQ, USD index). Each time series has 157 weekly prices or indexes taken from Quandl (2018). We retrieved the epileptic seizure data set (Andrzejak et al., 2002) from the UCI repository (Dheeru & Karra Taniskidou, 2017). |
| Dataset Splits | Yes | All experiments predict future events (extrapolation): each data set is split so that models are trained on the first 90% and tested on the remaining 10%, the standard setting for extrapolation tasks (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions general tools like GPy and the Stan language in its related-work discussion but does not list specific software dependencies with version numbers for its own implementation. |
| Experiment Setup | Yes | All experiments are conducted to predict future events (extrapolation): each data set is split, trained on the first 90%, and tested on the remaining 10%, the standard setting for extrapolation tasks. Root mean square error (RMSE) and mean negative log likelihood (MNLP) (Lázaro-Gredilla et al., 2010) are the main evaluation metrics for all data sets; RMSEs and MNLPs are reported per data set and method (5 independent runs per method). A sketch of both metrics appears after the table. |
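
The paper's Algorithm 1 ("Partial set expansion of LKM learning") is only named above, not reproduced. As a rough illustration of what a greedy kernel-set expansion of this flavor typically looks like in compositional kernel search, here is a minimal sketch in GPy. This is an assumption, not the authors' exact procedure: the base-kernel list, the BIC score, and `greedy_kernel_search` are all hypothetical placeholders.

```python
import numpy as np
import GPy  # the paper mentions GPy; any GP library with composable kernels works


def bic(model, n):
    """Bayesian information criterion; lower is better."""
    k = model.size  # number of optimized hyperparameters
    return -2.0 * model.log_likelihood() + k * np.log(n)


def greedy_kernel_search(X, Y, depth=2):
    """Greedily expand a set of candidate kernels by adding/multiplying
    base kernels, keeping the best-scoring structure at each depth.
    NOTE: a generic compositional-search sketch, not the paper's Algorithm 1."""
    base = [GPy.kern.RBF(1), GPy.kern.StdPeriodic(1), GPy.kern.Linear(1)]
    candidates = [k.copy() for k in base]
    best, best_score = None, np.inf
    for _ in range(depth):
        scored = []
        for kern in candidates:
            m = GPy.models.GPRegression(X, Y, kern.copy())
            m.optimize(messages=False)
            scored.append((bic(m, X.shape[0]), kern))
        scored.sort(key=lambda t: t[0])  # sort by score only
        score, winner = scored[0]
        if score < best_score:
            best, best_score = winner, score
        # expand the winner with every base kernel via + and *
        candidates = [winner.copy() + b.copy() for b in base] + \
                     [winner.copy() * b.copy() for b in base]
    return best, best_score
```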
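
The 90/10 extrapolation split described in the Dataset Splits row is straightforward to reproduce; a minimal sketch (variable names are illustrative, and the synthetic data stands in for any of the series above):

```python
import numpy as np


def extrapolation_split(X, Y, train_frac=0.9):
    """Chronological split for extrapolation: train on the first 90%
    of the series, test on the remaining 10% (no shuffling)."""
    n_train = int(len(X) * train_frac)
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])


# e.g., 129 adjusted closes -> 116 training points, 13 test points
X = np.arange(129, dtype=float).reshape(-1, 1)
Y = np.random.randn(129, 1)
(train_X, train_Y), (test_X, test_Y) = extrapolation_split(X, Y)
```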
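
Both evaluation metrics can be computed from a model's predictive means and variances. A minimal sketch, assuming Gaussian predictive distributions (the standard form of MNLP used by Lázaro-Gredilla et al., 2010); the function names are placeholders:

```python
import numpy as np


def rmse(y_true, y_mean):
    """Root mean square error of the predictive mean."""
    return np.sqrt(np.mean((y_true - y_mean) ** 2))


def mnlp(y_true, y_mean, y_var):
    """Mean negative log predictive likelihood under a Gaussian
    predictive distribution N(y_mean, y_var); lower is better."""
    return np.mean(0.5 * np.log(2 * np.pi * y_var)
                   + 0.5 * (y_true - y_mean) ** 2 / y_var)
```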