Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization

Authors: Adithya M Devraj, Jianshu Chen

NeurIPS 2019

Reproducibility

| Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Finally, we evaluate our proposed algorithms on several real-world benchmarks, and experimental results show that the proposed algorithms significantly outperform existing techniques." and "We evaluate our algorithms on 18 real-world US Research Returns datasets obtained from the Center for Research in Security Prices (CRSP) website" |
| Researcher Affiliation | Collaboration | "Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA. Email: adithyamdevraj@ufl.edu. The work was done during an internship at Tencent AI Lab, Bellevue, WA." and "Tencent AI Lab, Bellevue, WA, USA. Email: jianshuchen@tencent.com." |
| Pseudocode | Yes | "The full algorithm is summarized in Algorithm 1, with its key ideas explained below." (page 3) and "Algorithm 1 SVRPDA-I" (page 4) |
| Open Source Code | No | "The choice of the hyper-parameters can be found in Appendix C.2, and the code will be released publicly." |
| Open Datasets | Yes | "The processed data in the form of a .mat file was obtained from https://github.com/tyDLin/SCVRG" |
| Dataset Splits | No | The paper mentions using "18 real-world US Research Returns datasets" but does not specify the training, validation, or test splits. |
| Hardware Specification | No | The paper does not describe the hardware (e.g., GPU or CPU models, or cloud computing instances) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper states that "The choice of the hyper-parameters can be found in Appendix C.2", but Appendix C.2 provides only general tuning strategies and epoch counts, without concrete numerical values for hyperparameters such as learning rates or batch sizes. |