Graphical Models in Heavy-Tailed Markets
Authors: Jose Vinicius de Miranda Cardoso, Jiaxi Ying, Daniel Palomar
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed methods outperform state-of-the-art benchmarks in an extensive series of practical experiments with publicly available data from the S&P500 index, foreign exchanges, and cryptocurrencies. |
| Researcher Affiliation | Academia | José Vinícius de M. Cardoso, Jiaxi Ying, Daniel P. Palomar; Department of Electronic and Computer Engineering and Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR, China |
| Pseudocode | Yes | Algorithm 1: Student-t graph learning; Algorithm 2: k-component Student-t graph learning (a hedged sketch of the Student-t reweighting idea behind these algorithms appears after the table) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | We perform experiments using historical daily price time series data, available in Yahoo! Finance™, from financial instruments in three scenarios: (i) stocks belonging to the S&P500 index, (ii) foreign exchange markets, and (iii) cryptocurrencies. (A hypothetical data-retrieval sketch appears after the table.) |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions "cvxpy" but does not provide specific version numbers for it or any other software component used in the experiments. |
| Experiment Setup | Yes | In our ADMM algorithms, we set the penalty parameter to ρ = 1 and the hyperparameter η in (16) is adaptively increased until the rank constraint is satisfied. For GLE and NGL, we use grid search on the sparsity hyperparameter such that the resulting graph yields the highest modularity value. The graph weights in Algorithms 1 and 2 are initialized using the same procedure as in [8]. (A sketch of the modularity-based grid search appears after the table.) |
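
The Pseudocode row names Algorithm 1 (Student-t graph learning) and Algorithm 2 (k-component Student-t graph learning). For orientation only, the sketch below implements the standard EM-style reweighting for a multivariate Student-t precision matrix, which is the heavy-tailed ingredient those algorithms build on; it is not the paper's Algorithm 1, since it omits the Laplacian and rank constraints handled there by ADMM, and the fixed degrees-of-freedom value `nu` is an assumption.

```python
import numpy as np

def student_t_precision(X, nu=4.0, n_iter=50, tol=1e-6):
    """EM-style reweighting for a multivariate Student-t precision matrix.

    X  : (n_samples, p) zero-mean data matrix
    nu : assumed (fixed) degrees of freedom, not estimated here

    NOTE: this is only the heavy-tailed reweighting idea; the paper's
    Algorithm 1 additionally constrains the estimate to be a graph
    Laplacian and solves the resulting subproblems with ADMM.
    """
    n, p = X.shape
    Theta = np.linalg.inv(np.cov(X, rowvar=False))  # Gaussian warm start
    for _ in range(n_iter):
        # Per-sample weights: heavy-tailed samples are down-weighted.
        quad = np.einsum("ij,jk,ik->i", X, Theta, X)   # x_i^T Theta x_i
        w = (p + nu) / (nu + quad)
        # Weighted scatter matrix; its inverse is the new precision estimate.
        S = (X * w[:, None]).T @ X / n
        Theta_new = np.linalg.inv(S)
        if np.linalg.norm(Theta_new - Theta, "fro") < tol:
            return Theta_new
        Theta = Theta_new
    return Theta
```

Each iteration down-weights extreme observations before re-estimating the precision matrix, which is what makes the Student-t formulation robust to the outliers common in financial returns.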
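The Open Datasets row points to daily prices from Yahoo! Finance for S&P 500 stocks, foreign exchange rates, and cryptocurrencies. A minimal retrieval sketch follows; the `yfinance` package, the example tickers, and the date range are assumptions rather than details reported in the paper.

```python
import numpy as np
import yfinance as yf  # assumed third-party client for Yahoo! Finance

# Hypothetical tickers: a few S&P 500 stocks, FX pairs, and cryptocurrencies.
tickers = ["AAPL", "MSFT", "JPM", "EURUSD=X", "GBPUSD=X", "BTC-USD", "ETH-USD"]

# Daily (adjusted) close prices over an assumed date range.
prices = yf.download(tickers, start="2019-01-01", end="2021-01-01",
                     auto_adjust=True)["Close"]

# Log-returns are the usual input to graph-learning methods on financial data.
log_returns = np.log(prices).diff().dropna()
print(log_returns.shape)
```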
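The Experiment Setup row states that the GLE and NGL baselines are tuned by grid search on the sparsity hyperparameter, keeping the graph with the highest modularity. One way such a selection loop could look is sketched below; `learn_graph`, the sector labels used as communities, and the candidate grid are hypothetical placeholders, not the authors' code.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

def select_by_modularity(returns, sectors, alphas, learn_graph):
    """Grid search over a sparsity hyperparameter, keeping the graph whose
    weighted modularity (w.r.t. known sector labels) is highest.

    returns     : (n_samples, p) matrix of log-returns
    sectors     : length-p list of sector labels, used as ground-truth communities
    alphas      : candidate sparsity values (the grid itself is an assumption)
    learn_graph : callable (returns, alpha) -> (p, p) weighted adjacency matrix
    """
    communities = [{i for i, s in enumerate(sectors) if s == lab}
                   for lab in sorted(set(sectors))]
    best_alpha, best_mod, best_W = None, -np.inf, None
    for alpha in alphas:
        W = learn_graph(returns, alpha)
        G = nx.from_numpy_array(W)
        mod = modularity(G, communities, weight="weight")
        if mod > best_mod:
            best_alpha, best_mod, best_W = alpha, mod, W
    return best_alpha, best_mod, best_W

# Example usage (hypothetical grid and solver):
# alpha, mod, W = select_by_modularity(log_returns.values, sectors,
#                                      alphas=np.logspace(-3, 0, 20),
#                                      learn_graph=my_gle_solver)
```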