Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent

Authors: Wenqing Hu, Chris Junchi Li, Xiangru Lian, Ji Liu, Huizhuo Yuan

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments on risk-averse portfolio management validate the superiority of SARAH-Compositional over a few rival algorithms." "In this section, we study the performance of our algorithm on the risk-averse portfolio management problem and conduct numerical experiments to support our theory."
Researcher Affiliation | Collaboration | Wenqing Hu (Missouri University of Science and Technology, huwen@mst.edu); Chris Junchi Li (Tencent AI Lab, junchi.li.duke@gmail.com); Xiangru Lian (University of Rochester, admin@mail.xrlian.com); Ji Liu (University of Rochester & Kwai Inc., ji.liu.uwisc@gmail.com); Huizhuo Yuan (Peking University, hzyuan@pku.edu.cn)
Pseudocode | Yes | Algorithm 1: SARAH-Compositional, Online Case (resp. Finite-Sum Case). (A hedged sketch of the recursive estimator appears below the table.)
Open Source Code | Yes | "The source code can be found at http://github.com/angeoz/SCGD."
Open Datasets | Yes | "Datasets include different portfolio data formed on Size and Operating Profitability. We choose to use 6 different 25-portfolio datasets where N = 25 and T = 7240, same as the ones adopted by Lin et al. (2018)." Data source: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html. (A sketch of the resulting portfolio objective appears below the table.)
Dataset Splits | No | The paper mentions using 6 different 25-portfolio datasets and specifies mini-batch sizes (S1, S2, S3), but it does not provide explicit details about train/validation/test splits (e.g., percentages, sample counts, or predefined splits with citations) for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models, processor types, or memory amounts. It only discusses the experimental setup at a higher level.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages (e.g., Python 3.x) or specific libraries/frameworks (e.g., PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | "Specifically, we choose S1 = S2 = S3 = 2000 (roughly optimized to improve the numerical performance). Our range of stepsizes is {1×10^-5, 1×10^-4, 2×10^-4, 5×10^-4, 1×10^-3, 1×10^-2}, and we plot the learning curve for each algorithm at its individually optimized stepsize. The q-parameters in both SARAH-Compositional and VRSC-PG are set to 50." (A hedged configuration sketch appears below the table.)
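
To make Algorithm 1 concrete, here is a minimal Python/NumPy sketch of the SARAH-Compositional recursion for a two-level problem min_x f(g(x)): at the start of each length-q epoch, the estimates of g, its Jacobian, and the gradient are refreshed from large mini-batches; on the remaining iterations they are corrected recursively by evaluating a fresh batch at two consecutive iterates. Only that structure follows the paper; the oracle callables (sample, g, jac, grad_f), their signatures, and the default values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sarah_compositional(x0, sample, g, jac, grad_f,
                        eta=1e-4, q=50, T=2000, S1=2000, S2=2000, S3=2000):
    """Hedged sketch of a SARAH-style recursive estimator for min_x f(g(x)).

    Assumed (hypothetical) oracle interface:
      sample(n)        -> mini-batch of n sample indices
      g(x, batch)      -> batch-averaged estimate of g(x)            (shape (m,))
      jac(x, batch)    -> batch-averaged Jacobian estimate of g at x (shape (m, d))
      grad_f(y, batch) -> batch-averaged estimate of grad f at y     (shape (m,))
    """
    x_prev, x = None, np.asarray(x0, dtype=float)
    for t in range(T):
        if t % q == 0:
            # Epoch start: refresh all three estimates with large mini-batches.
            b1, b2, b3 = sample(S1), sample(S2), sample(S3)
            g_est, J_est = g(x, b1), jac(x, b2)
            v = J_est.T @ grad_f(g_est, b3)
        else:
            # Recursive correction: evaluate one fresh batch at both the
            # current and the previous iterate and add the difference.
            b1, b2, b3 = sample(S1), sample(S2), sample(S3)
            g_new = g_est + g(x, b1) - g(x_prev, b1)
            J_new = J_est + jac(x, b2) - jac(x_prev, b2)
            v = v + J_new.T @ grad_f(g_new, b3) - J_est.T @ grad_f(g_est, b3)
            g_est, J_est = g_new, J_new
        x_prev, x = x, x - eta * v  # plain gradient step with the estimate v
    return x
```

With q = 50, the full refresh happens every 50 iterations, which matches the q-parameter quoted in the table above.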
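
The risk-averse portfolio task is a mean-variance objective over the Fama-French 25-portfolio return matrices (N = 25 assets, T = 7240 periods). The sketch below writes that loss directly and as the two-level composition E_j f_j(E_i g_i(x)) that compositional methods exploit; the loader's file layout and the unit risk-aversion weight are assumptions about the raw data and the problem scaling, not the authors' exact preprocessing.

```python
import numpy as np

def load_returns(csv_path):
    """Load a (T, N) return matrix from a Ken French data-library CSV.
    The delimiter/skiprows/usecols settings are assumptions about the file
    layout; adjust them to the actual download."""
    return np.loadtxt(csv_path, delimiter=",", skiprows=1, usecols=range(1, 26))

def objective(x, R):
    """Mean-variance loss: negative mean return plus return variance
    (unit risk-aversion weight, chosen here for simplicity)."""
    port = R @ x                          # per-period portfolio return, shape (T,)
    return -port.mean() + np.mean((port - port.mean()) ** 2)

# The same loss as a two-level composition, the structure SARAH-Compositional
# targets: an inner random map g_i averaged over periods i, an outer random f_j.
def g_i(x, r_i):
    # g_i(x) = (x, r_i^T x); its average over i is (x, rbar^T x).
    return np.concatenate([x, [r_i @ x]])

def f_j(y, r_j):
    # f_j(w, m) = -m + (r_j^T w - m)^2; averaging over j recovers objective().
    w, m = y[:-1], y[-1]
    return -m + (r_j @ w - m) ** 2
```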
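
Finally, the quoted setup (S1 = S2 = S3 = 2000, q = 50, a six-point stepsize grid) maps onto a small tuning sweep. A minimal sketch, assuming the sarah_compositional routine above is wrapped (e.g. via functools.partial) so that only x0 and the hyperparameters remain free; the iteration budget T is a placeholder, since the paper does not quote one.

```python
import numpy as np

# Stepsize grid and batch/epoch parameters quoted from the paper;
# the iteration budget T is an illustrative placeholder.
STEPSIZES = [1e-5, 1e-4, 2e-4, 5e-4, 1e-3, 1e-2]
CONFIG = dict(q=50, S1=2000, S2=2000, S3=2000, T=10_000)

def tune_stepsize(run_fn, loss_fn, x0):
    """Run one optimizer instance per stepsize and keep the best final loss,
    mirroring the 'individually optimized stepsize' protocol in the table."""
    best_loss, best_eta, best_x = np.inf, None, None
    for eta in STEPSIZES:
        x = run_fn(x0, eta=eta, **CONFIG)  # e.g. partial(sarah_compositional, ...)
        loss = loss_fn(x)
        if loss < best_loss:
            best_loss, best_eta, best_x = loss, eta, x
    return best_x, best_eta, best_loss
```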