Parsimonious Quantile Regression of Financial Asset Tail Dynamics via Sequential Learning
Authors: Xing Yan, Weizhong Zhang, Lin Ma, Wei Liu, Qi Wu
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments are conducted on three types of time series datasets: simulated data, daily asset returns (of stock indexes, exchange rates, and treasury yields), and intraday 5-minute commodity futures returns. Each time series is divided into three successive parts, for training, validation, and testing respectively. The training set is four fifths of the original series, and the validation and test sets are both one tenth. |
| Researcher Affiliation | Collaboration | Xing Yan³, Weizhong Zhang¹, Lin Ma¹, Wei Liu¹, Qi Wu² (¹Tencent AI Lab; ²School of Data Science, City University of Hong Kong; ³Department of SEEM, The Chinese University of Hong Kong) |
| Pseudocode | No | The paper describes methods using mathematical equations but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or providing a link to a code repository. |
| Open Datasets | No | The paper mentions using 'daily asset returns (of stock indexes, exchange rates, and treasury yields)' and 'intraday 5-minute commodity futures returns' along with 'simulated data'. It lists specific asset names like 'S&P 500, NASDAQ 100, HSI, Nikkei 225, DAX, FTSE 100, exchange rate of USD to EUR/GBP/CHF/JPY/AUD, and U.S. treasury yield of 2/10/30 years' and 'steel rebar, natural rubber, soybean, cotton, and sugar'. However, it does not provide specific URLs, DOIs, or formal citations (author, year) for accessing these datasets, nor does it explicitly state they are publicly available with access instructions. |
| Dataset Splits | Yes | Each time series is divided into three successive parts, for training, validation, and testing respectively. The training set is four fifths of the original series, and the validation and test sets are both one tenth (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4). It mentions 'LSTM' and 'GARCH-type models' as methods, but names no libraries or versions. |
| Experiment Setup | Yes | Our model has two hyper-parameters: the length L of the past series r_{t-1}, …, r_{t-L} on which the time-t HTQF parameters µ_t, σ_t, u_t, v_t depend, and the hidden-state dimension H of the LSTM unit. We denote our model with them by LSTM-HTQF(L, H). … The hyper-parameters are tuned over the sets L ∈ {40, 60, 80, 100}, H ∈ {8, 16}, and s, p, q ∈ {1, 2, 3}. The A in the HTQF is set to 4 arbitrarily. We choose K = 21 probability levels for the τ set: [τ_1, …, τ_21] = [0.01, 0.05, 0.1, …, 0.9, 0.95, 0.99]. The training set is normalized to have sample mean 0 and sample variance 1, and the validation and test sets are then normalized in exactly the same way. The validation set is used for tuning hyper-parameters and for early stopping (training halts when the validation loss begins to increase) to prevent overfitting. The split, the HTQF form, and the implied multi-level quantile loss are sketched after the table. |
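
For reference, here is a minimal sketch of the chronological 4/5 : 1/10 : 1/10 split and train-statistics normalization quoted above. The function name and array handling are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def split_and_normalize(returns: np.ndarray):
    """Split a return series 80/10/10 in time order, then standardize all
    three parts with the training set's sample mean and standard deviation."""
    n = len(returns)
    n_train = n * 4 // 5          # training: four fifths of the series
    n_val = n // 10               # validation: one tenth

    train = returns[:n_train]
    val = returns[n_train:n_train + n_val]
    test = returns[n_train + n_val:]

    # Normalize the training set to mean 0 and variance 1, then apply the
    # exact same affine transform to the validation and test sets.
    mu, sigma = train.mean(), train.std()
    return (train - mu) / sigma, (val - mu) / sigma, (test - mu) / sigma
```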
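The report quotes the HTQF's parameters (µ_t, σ_t, u_t, v_t) and the fixed constant A = 4, but not the function itself. The sketch below uses the heavy-tailed quantile function as we recall it from the paper, a standard-normal quantile z_τ = Φ⁻¹(τ) scaled by two exponential tail factors; treat the exact functional form as an assumption to verify against the paper.

```python
import numpy as np
from scipy.stats import norm

A = 4.0  # fixed in the paper ("set to be 4 arbitrarily")

def htqf(tau, mu, sigma, u, v, a=A):
    """Heavy-tailed quantile function Q(tau | mu, sigma, u, v) -- recalled
    form, not quoted in the report: larger u fattens the right tail,
    larger v fattens the left tail."""
    z = norm.ppf(tau)                 # standard normal quantile
    up = np.exp(u * z) / a + 1.0      # right-tail factor
    down = np.exp(-v * z) / a + 1.0   # left-tail factor
    return mu + sigma * z * up * down
```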
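Fitting conditional quantiles at K = 21 levels implies a pinball (quantile-regression) loss averaged over time steps and levels. The paper's exact objective is not quoted in this report, so the tensor layout below is a standard sketch, not the authors' implementation.

```python
import numpy as np

# The 21 levels quoted in the report: 0.01, then 0.05 .. 0.95 in steps of
# 0.05, then 0.99.
TAUS = np.concatenate(([0.01], np.arange(0.05, 0.951, 0.05), [0.99]))
assert len(TAUS) == 21

def pinball_loss(y: np.ndarray, q: np.ndarray, taus: np.ndarray = TAUS) -> float:
    """Average pinball loss.

    y:  realized returns, shape (T,)
    q:  predicted quantiles, shape (T, K); column k holds the tau_k quantile
    """
    diff = y[:, None] - q                                # (T, K) residuals
    loss = np.maximum(taus * diff, (taus - 1.0) * diff)  # pinball per level
    return float(loss.mean())
```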