A Stochastic Composite Gradient Method with Incremental Variance Reduction

Authors: Junyu Zhang, Lin Xiao

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present numerical experiments for a risk-averse portfolio optimization problem. [...] We test these algorithms on three real world portfolio datasets, which contain 30, 38 and 49 industrial portfolios respectively, from the Kenneth R. French Data Library. [...] The experiment results are shown in Figure 1. (The objective is reconstructed below the table.) |
| Researcher Affiliation | Collaboration | Junyu Zhang (University of Minnesota, Minneapolis, Minnesota 55455, zhan4393@umn.edu); Lin Xiao (Microsoft Research, Redmond, Washington 98052, lin.xiao@microsoft.com) |
| Pseudocode | Yes | Algorithm 1: Composite Incremental Variance Reduction (CIVR). (A hedged sketch of the update appears below the table.) |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the CIVR method is publicly available. |
| Open Datasets | Yes | We test these algorithms on three real world portfolio datasets, which contain 30, 38 and 49 industrial portfolios respectively, from the Kenneth R. French Data Library (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). |
| Dataset Splits | No | The paper describes using "three real world portfolio datasets" but does not provide specific details on how these datasets were split into training, validation, or test sets, nor does it refer to predefined splits with citations. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run the experiments, such as CPU or GPU models or memory specifications. |
| Software Dependencies | No | The paper compares various algorithms but does not list specific software dependencies with version numbers, such as the programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | We set the parameter λ = 0.2 in (5) and use an ℓ1 regularization r(x) = 0.01‖x‖₁. [...] Throughout the experiments, the VRSC-PG and C-SAGA algorithms use the batch size S = n^(2/3) while CIVR uses the batch size S = √n, all dictated by their complexity theory. CIVR-adp employs the adaptive batch size S_t = min{10t + 1, √n} for t = 1, ..., T. For the Industrial-30 dataset, all of VRSC-PG, C-SAGA, CIVR and CIVR-adp use the same step size η = 0.1. [...] Similarly, η = 0.001 is chosen for the Industrial-38 dataset and η = 0.0001 for the Industrial-49 dataset. For ASC-PG, we set its step size parameters α_k = 0.001/k and β_k = 1/k (see details in [32]). (The full configuration is sketched below the table.) |
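
For context, problem (5) referenced in the Experiment Setup row is the paper's risk-averse portfolio objective: maximize mean return while penalizing its empirical variance. The following is a hedged reconstruction of that generic mean-variance form from the excerpts above, not a verbatim copy of the paper's equation (5); the reward vectors R_i are an assumed notation.

```latex
% Hedged reconstruction of the risk-averse portfolio objective (5):
% negative mean return plus a variance penalty (weight lambda = 0.2),
% plus the l1 regularizer r(x) from the Experiment Setup row.
% R_i denotes the (assumed) per-period reward vector of the n samples.
\[
\min_{x \in \mathbb{R}^d}\;
  -\frac{1}{n}\sum_{i=1}^{n} \langle R_i, x\rangle
  + \frac{\lambda}{n}\sum_{i=1}^{n}
    \Bigl(\langle R_i, x\rangle
      - \frac{1}{n}\sum_{j=1}^{n}\langle R_j, x\rangle\Bigr)^{2}
  + r(x),
\qquad r(x) = 0.01\,\|x\|_{1}.
\]
```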
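The Pseudocode row points to Algorithm 1 (CIVR) but no released code. Below is a minimal Python sketch of the core loop written from the algorithm's published description, assuming a composite objective min_x f(g(x)) + r(x) with g(x) = (1/n) Σ_i g_i(x): each epoch starts from full-batch estimates of g and its Jacobian, then applies SARAH-style recursive corrections followed by a proximal step. All names, signatures, and defaults are illustrative assumptions, not the authors' implementation.

```python
# Minimal CIVR sketch (not the authors' code); see the hedging note above.
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: the soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def civr(g, jac_g, grad_f, prox_r, x0, n, epochs=20, eta=0.1,
         batch=None, inner=None, rng=None):
    """Composite Incremental Variance Reduction, sketched.

    g(x, idx)      -> mean of g_i(x) over indices idx, shape (m,)
    jac_g(x, idx)  -> mean Jacobian of g_i at x over idx, shape (m, d)
    grad_f(y)      -> gradient of the outer function f, shape (m,)
    prox_r(v, s)   -> proximal step of the regularizer r with step s
    """
    rng = np.random.default_rng() if rng is None else rng
    batch = round(np.sqrt(n)) if batch is None else batch  # S = sqrt(n)
    inner = batch if inner is None else inner
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        full = np.arange(n)
        y = g(x, full)        # fresh estimate of g(x) at epoch start
        z = jac_g(x, full)    # fresh estimate of the Jacobian g'(x)
        for _ in range(inner):
            grad = z.T @ grad_f(y)                 # chain-rule estimate
            x_new = prox_r(x - eta * grad, eta)    # proximal gradient step
            b = rng.choice(n, size=batch, replace=False)
            y = y + g(x_new, b) - g(x, b)          # recursive correction
            z = z + jac_g(x_new, b) - jac_g(x, b)  # recursive correction
            x = x_new
    return x
```

For the ℓ1 setting in the table, one would pass `prox_r=lambda v, s: soft_threshold(v, 0.01 * s)` and the per-dataset η values; setting the inner-loop length equal to the batch size mirrors the epoch structure suggested by the √n theory, but is likewise an assumption.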
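Finally, the Experiment Setup row condenses into the following hypothetical reproduction checklist. Only the numeric values come from the paper's excerpts; every identifier and the helper function are assumed.

```python
# Hypothetical reproduction checklist: numbers from the Experiment Setup row,
# all identifiers assumed.
import math

LAMBDA = 0.2        # risk-aversion weight in problem (5)
L1_WEIGHT = 0.01    # r(x) = 0.01 * ||x||_1

STEP_SIZES = {      # shared by VRSC-PG, C-SAGA, CIVR and CIVR-adp
    "Industrial-30": 1e-1,
    "Industrial-38": 1e-3,
    "Industrial-49": 1e-4,
}

def batch_size(method, n, t=None):
    """Batch sizes dictated by each method's complexity theory."""
    if method in ("VRSC-PG", "C-SAGA"):
        return round(n ** (2.0 / 3.0))
    if method == "CIVR":
        return round(math.sqrt(n))
    if method == "CIVR-adp":              # adaptive schedule, t = 1, ..., T
        return min(10 * t + 1, round(math.sqrt(n)))
    raise ValueError(f"unknown method: {method}")

# ASC-PG instead uses diminishing step sizes alpha_k = 0.001/k, beta_k = 1/k.
```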