Forecasting Asset Dependencies to Reduce Portfolio Risk
Authors: Haoren Zhu, Shih-Yang Liu, Pengfei Zhao, Yingying Chen, Dik Lun Lee
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed framework consistently outperforms the baselines on both future ADM prediction and portfolio risk reduction tasks. |
| Researcher Affiliation | Academia | Hong Kong University of Science and Technology, Beijing Normal University-Hong Kong Baptist University United International College, London School of Economics and Political Science |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | We construct a pool of real stock prices by combining the daily price data of stocks from S&P-100, NASDAQ-100, and DJI-30, including the most influential companies of the recent 15 years (from 2005/09/27 to 2020/08/05). The full list of stocks in each stock dataset is attached in the technical appendix. The paper mentions data sources but does not provide a direct link, DOI, or specific repository for accessing the combined dataset. |
| Dataset Splits | Yes | We select the first 90% of ADM sequences as training samples (including validation) and the remaining 10% as testing samples. A minimal sketch of this chronological split follows the table. |
| Hardware Specification | Yes | We use the Adam optimizer and the experiments are run on a server with 4 NVIDIA GeForce RTX 2080 Ti graphics cards. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'gradual warm-up learning rate scheduler' with 'cosine annealing' but does not specify software names with version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | The initial learning rate for the adaptive learning rate scheduler is set to 5e-4. We have tested the model with the following batch sizes: {128, 256, 384, 512, 640} and finalized the batch size to 512. Horizon h is an application-specific parameter, and since our application is portfolio management with monthly adjustments, we set h = 21. We set k = 10. ... To strike a balance, we set nlag = 42 and n = 32. ... The performance of MoE depends on two crucial parameters: (1) the number of experts nexp, which determines how many experts in total are contained in the network, and (2) topk, which determines how many experts participate in generating the final transformation function (topk ≤ nexp). Table 1 shows how the two parameters affect the learning of the transformation function T and in turn the prediction MSE. Each entry denotes, for a given nexp, the MSE returned by the optimal topk averaged over the 10 stock datasets. For example, when nexp = 8, topk = 4 obtains the optimal MSE on average across the 10 stock datasets. Hedged sketches of the warm-up/cosine learning-rate schedule and the top-k MoE gating described here also follow the table. |
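
The split quoted under Dataset Splits is a plain chronological cut with no shuffling. Below is a minimal, illustrative sketch assuming the ADM sequences are held in a time-ordered Python list; the paper does not release its data pipeline, so the function and variable names here are hypothetical.

```python
# Illustrative only: a chronological 90/10 split as described in the paper,
# where the first 90% of ADM sequences are training samples (including
# validation) and the remaining 10% are testing samples.
def chronological_split(sequences, train_frac=0.9):
    """Split a time-ordered list without shuffling, avoiding look-ahead bias."""
    cut = int(len(sequences) * train_frac)
    return sequences[:cut], sequences[cut:]

if __name__ == "__main__":
    dummy_sequences = list(range(1000))  # placeholder for time-ordered ADM samples
    train, test = chronological_split(dummy_sequences)
    print(len(train), len(test))  # 900 100
```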
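The setup quotes an Adam optimizer with an initial learning rate of 5e-4, a gradual warm-up learning rate scheduler, and cosine annealing. The PyTorch sketch below combines these pieces under stated assumptions: the warm-up length, total epoch count, and model are placeholders, since the paper does not report them.

```python
import math
import torch

# Stand-in model; the paper's actual network is not reproduced here.
model = torch.nn.Linear(32, 32)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # initial LR from the paper

warmup_epochs, total_epochs = 5, 100  # assumed values, not reported in the paper

def warmup_cosine(epoch):
    # Linear warm-up to the base learning rate, then cosine decay toward zero.
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)

for epoch in range(total_epochs):
    # ... one training epoch over batches of size 512 would run here ...
    optimizer.step()   # placeholder step so the sketch runs end to end
    scheduler.step()
```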
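The two MoE parameters tuned in Table 1, nexp and topk, correspond to a standard sparse top-k gating scheme: a router scores all nexp experts and only the topk highest-scoring ones contribute to the output. The following PyTorch sketch shows one plausible reading; the linear experts, input dimension, and class name are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class TopKMoE(torch.nn.Module):
    """Sparse mixture-of-experts with n_exp experts and top_k active per input."""

    def __init__(self, dim, n_exp=8, top_k=4):  # n_exp=8, top_k=4 as in Table 1's example
        super().__init__()
        self.top_k = top_k
        self.gate = torch.nn.Linear(dim, n_exp)  # router producing per-expert scores
        self.experts = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(n_exp)]
        )

    def forward(self, x):
        scores = self.gate(x)                               # (batch, n_exp)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)               # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                # inputs routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE(dim=32)
y = moe(torch.randn(512, 32))  # batch size 512, matching the finalized setting
```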