Second Order Techniques for Learning Time-series with Structural Breaks

Authors: Takayuki Osogami (pp. 9259-9267)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The effectiveness of the proposed approaches is demonstrated with real time-series. We empirically demonstrate the effectiveness of the proposed techniques with real time-series datasets. We conduct numerical experiments to answer the following questions."
Researcher Affiliation | Industry | Takayuki Osogami, IBM Research - Tokyo, osogami@jp.ibm.com
Pseudocode | Yes | "Algorithm 1: Online learning by following the best hyper forgetting rate (single target)" (a hedged sketch of this procedure appears after the table)
Open Source Code | No | The paper does not include any explicit statements about the release of open-source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use the 10-year (from September 1, 2008 to August 31, 2018) historical data of the daily close price of Standard & Poor's 500 Stock Index (US index; SPX), Nikkei 225 (Japanese index; Nikkei 225), Deutscher Aktienindex (German index; DAX), Financial Times Stock Exchange 100 Index (UK index; FTSE 100), and Shanghai Stock Exchange Composite Index (Chinese index; SSEC)."
Dataset Splits | No | The paper describes an online learning setting where models are continuously updated. It states: "for a time-series of length N, we make a prediction about the next value at every step n for 0 < n < N. When we make a prediction at step n, the time-series up to step n is used to train the models." This does not constitute a traditional training/validation/test split.
Hardware Specification | Yes | "We run our experiments on a workstation having eight Intel Core i7-6700K CPUs running at 4.00 GHz and 64 GB random access memory."
Software Dependencies | No | The paper does not specify version numbers for any software components or libraries used in the experiments.
Experiment Setup | Yes | "Input: Nmod = 30, Nhyp = 11; γ1 = λ1 = 0; γi ∼ Unif[0.5^(1/D), 1] and λi ∼ Unif[0, 1] for i ∈ [2, Nmod]; ηj = 0.89 + 0.01 j for j ∈ [1, Nhyp]. We set (µt, at) = (−10, 0.3) for t < 1,000, and (µt, at) = (10, 0.3) for t ≥ 1,000. We learn the first-order AR model with Algorithm 1." (A hedged configuration sketch appears after the table.)
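
Since no code is released and Algorithm 1 is quoted above only by its caption, the following is a minimal sketch of one plausible reading of "online learning by following the best hyper forgetting rate" for a single target, not the authors' implementation. It assumes that each candidate i is a first-order AR predictor fitted by exponentially discounted least squares with its own forgetting rate γi and regularization λi, that each hyper forgetting rate ηj ranks the candidates by an ηj-discounted loss, and that the step-n prediction follows whichever ηj has performed best so far. The names DiscountedARModel and follow_best_hyper_rate, and the exact update rules, are illustrative assumptions.

```python
import numpy as np


class DiscountedARModel:
    """First-order AR predictor fitted by exponentially discounted least squares.

    gamma is the forgetting rate (1 = no forgetting) and lam an L2 penalty;
    the names and the exact update are illustrative assumptions.
    """

    def __init__(self, gamma, lam):
        self.gamma = gamma
        self.lam = lam
        self.A = np.zeros((2, 2))   # discounted sum of phi phi^T (bias + lag term)
        self.b = np.zeros(2)        # discounted sum of x_next * phi

    def predict(self, x_prev):
        phi = np.array([1.0, x_prev])
        w = np.linalg.pinv(self.A + self.lam * np.eye(2)) @ self.b
        return float(phi @ w)

    def update(self, x_prev, x_next):
        phi = np.array([1.0, x_prev])
        self.A = self.gamma * self.A + np.outer(phi, phi)
        self.b = self.gamma * self.b + x_next * phi


def follow_best_hyper_rate(series, gammas, lams, etas):
    """Online one-step-ahead prediction for a single target.

    Each hyper forgetting rate eta_j keeps its own eta_j-discounted loss per
    candidate model and "selects" the current arg-min candidate; the reported
    prediction at each step comes from the eta_j whose past selections have
    the smallest cumulative squared error so far.
    """
    etas = np.asarray(etas, dtype=float)
    models = [DiscountedARModel(g, l) for g, l in zip(gammas, lams)]
    model_losses = np.zeros((len(etas), len(models)))  # discounted, per (eta, model)
    rule_losses = np.zeros(len(etas))                  # cumulative, per eta
    preds = []
    for n in range(1, len(series)):
        x_prev, x_next = series[n - 1], series[n]
        cand = np.array([m.predict(x_prev) for m in models])
        choices = model_losses.argmin(axis=1)          # model picked by each eta_j
        best_rule = int(np.argmin(rule_losses))        # follow the best eta_j so far
        preds.append(cand[choices[best_rule]])
        sq_err = (cand - x_next) ** 2
        rule_losses += sq_err[choices]
        model_losses = etas[:, None] * model_losses + sq_err[np.newaxis, :]
        for m in models:
            m.update(x_prev, x_next)
    return np.array(preds)
```

Whether the rule-level losses should themselves be discounted, and how ties are broken, are details the paper's Algorithm 1 may handle differently; this sketch only illustrates the overall follow-the-best structure.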
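The Experiment Setup row quotes the hyperparameter pool and a synthetic AR(1) series with a structural break at t = 1,000. The sketch below instantiates that configuration under the reconstructed notation (γi ∼ Unif[0.5^(1/D), 1], λi ∼ Unif[0, 1], ηj = 0.89 + 0.01 j); the sign of the pre-break mean, the AR(1) parameterization, and the noise scale are assumptions rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameter pool as quoted (gamma_1 = lambda_1 = 0, the rest sampled).
# D is the dimension of the target; D = 1 here for the single-target case.
N_mod, N_hyp, D = 30, 11, 1
gammas = np.concatenate(([0.0], rng.uniform(0.5 ** (1.0 / D), 1.0, N_mod - 1)))
lams = np.concatenate(([0.0], rng.uniform(0.0, 1.0, N_mod - 1)))
etas = 0.89 + 0.01 * np.arange(1, N_hyp + 1)   # 0.90, 0.91, ..., 1.00

# Synthetic AR(1) series with a structural break at t = 1,000: the mean mu_t
# jumps from -10 to +10 while a_t stays at 0.3. The mean-reverting
# parameterization and the unit noise scale are assumptions.
T = 2_000
x = np.zeros(T)
for t in range(1, T):
    mu_t, a_t = (-10.0, 0.3) if t < 1_000 else (10.0, 0.3)
    x[t] = mu_t + a_t * (x[t - 1] - mu_t) + rng.normal()

# The series and pools above could then be fed to the follow-the-best sketch
# shown earlier, e.g. follow_best_hyper_rate(x, gammas, lams, etas).
```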