Autonomous Sparse Mean-CVaR Portfolio Optimization

Authors: Yizun Lin, Yangyu Zhang, Zhao-Rong Lai, Cheng Li

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on 6 real-world financial data sets from Kenneth R. French's Data Library."
Researcher Affiliation | Academia | "(1) Department of Mathematics, College of Information Science and Technology, Jinan University, Guangzhou, China; (2) Jinan University-University of Birmingham Joint Institute, Jinan University, Guangzhou, China."
Pseudocode | Yes | "Algorithm 1 ASMCVaR"
Open Source Code | Yes | "The codes for these two modules are available in the folders Sparse Relaxation Test and Pytorch Demo, respectively, accessible via the link: https://github.com/linyizun2024/ASMCVaR."
Open Datasets | Yes | "We conduct extensive experiments on 6 real-world financial data sets from Kenneth R. French's Data Library, whose details are provided in Table 1." The library is available at http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html.
Dataset Splits | No | The paper describes a "standard moving-window trading scheme" with window size T = 60 for all methods, which implicitly defines how data is used for training and testing in a time-series setting, but it does not specify explicit train/validation/test splits with percentages or counts. (A sketch of this moving-window scheme appears after the table.)
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments, such as CPU or GPU models.
Software Dependencies | No | The paper mentions a "PyTorch module" and "Gurobi" (which the authors did not use for their main results due to errors), but it does not provide version numbers for any of the software dependencies used in the experiments.
Experiment Setup | Yes | "The model parameters in (21) are set as follows: the confidence level is set as a conventional one c = 0.99. The expected return level is empirically set as ρ = 0.02... The approximation parameter is set as γ = 10^-5... The algorithm parameters can be conveniently set based on the convergence criteria. For FPPA, we set θ = 1.99/||Q||_2^2. Its maximum iteration and relative difference tolerance are set as MaxIter1 = 200 and tol1 = 0.001. For PALM, the learning rates are set as β1 = 0.99/L1 and β2 = 0.99/L2, respectively. Its maximum iteration and relative difference tolerance are set as MaxIter2 = 10^4 and tol2 = 10^-4." (A parameter-setup sketch follows the table.)
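
For concreteness, here is a minimal Python sketch of the standard moving-window trading scheme noted under Dataset Splits, assuming a returns matrix with one row per trading period. The function and variable names are illustrative and not taken from the authors' code; the synthetic data merely stands in for one of the Kenneth R. French data sets.

```python
import numpy as np

def moving_window_splits(returns, window=60):
    """Yield (history, next_period) pairs under a moving-window
    trading scheme: at each trading period t, the previous `window`
    return vectors are used to estimate the portfolio, which is then
    evaluated out-of-sample on period t itself.

    `returns` is a (T_total, n_assets) array of per-period returns.
    """
    for t in range(window, returns.shape[0]):
        history = returns[t - window:t]   # estimation window of size T = 60
        test = returns[t]                 # out-of-sample trading period
        yield history, test

# Synthetic stand-in data: 300 periods, 25 assets (illustrative only).
rng = np.random.default_rng(0)
synthetic = rng.normal(0.001, 0.05, size=(300, 25))
n_rebalances = sum(1 for _ in moving_window_splits(synthetic, window=60))
print(n_rebalances)  # 300 - 60 = 240 trading periods
```

Note that this scheme fixes the estimation horizon rather than a train/test percentage, which is why the assessment records "No" for explicit dataset splits.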
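
And here is a hedged sketch of how the reported algorithm parameters could be wired up in code. `Q`, `L1`, and `L2` are placeholders for the operator matrix and the two Lipschitz constants defined in the paper; the actual FPPA and PALM update steps are elided, and only the step-size formulas and MaxIter/tol stopping rule quoted above are reproduced.

```python
import numpy as np

def spectral_norm_sq(Q):
    # ||Q||_2^2: the squared largest singular value of Q.
    return np.linalg.norm(Q, ord=2) ** 2

def run_with_tolerance(step, x0, max_iter, tol):
    """Generic iterative loop that stops on relative difference,
    mirroring the MaxIter / tol criteria reported for FPPA and PALM."""
    x = x0
    for _ in range(max_iter):
        x_new = step(x)
        rel_diff = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        if rel_diff < tol:
            break
    return x

# Model parameters as reported: confidence level, expected return
# level, and approximation parameter.
c, rho, gamma = 0.99, 0.02, 1e-5

# FPPA step size theta = 1.99 / ||Q||_2^2; Q here is a placeholder
# for the operator matrix from the paper.
Q = np.eye(4)
theta = 1.99 / spectral_norm_sq(Q)

# PALM learning rates beta_i = 0.99 / L_i; L1, L2 stand in for the
# Lipschitz constants of the two block-wise gradients.
L1, L2 = 10.0, 5.0
beta1, beta2 = 0.99 / L1, 0.99 / L2

# Illustrative use of the stopping rule with a toy contraction step,
# using the FPPA settings MaxIter1 = 200 and tol1 = 0.001.
target = np.full(4, 2.0)
x_star = run_with_tolerance(lambda x: 0.5 * x + 0.5 * target,
                            np.ones(4), max_iter=200, tol=0.001)
print(theta, beta1, beta2, x_star)
```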