Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

Authors: Yue Yu, Longbo Huang

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we conduct experiments and compare com-SVR-ADMM to existing algorithms. The experimental results are shown in Figure 1 and Figure 2."
Researcher Affiliation | Academia | Yue Yu and Longbo Huang, Institute for Interdisciplinary Information Sciences, Tsinghua University. yu-y14@mails.tsinghua.edu.cn, longbohuang@tsinghua.edu.cn
Pseudocode | Yes | "Algorithm 1: com-SVR-ADMM for strongly convex stochastic composition optimization"
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the described methodology, nor a link to a code repository.
Open Datasets | No | "Using the same definitions of g_j(x) and f_i(y) and the same parameter-generation method as [Lian et al., 2016], we set the regularizer to R(x) = (µ/2)||x||_2^2, where µ > 0. The experimental results are shown in Figure 1 and Figure 2. Here the y-axis represents the objective value minus the optimal value, and the x-axis is the number of oracle calls or CPU time. We set N = 200, n = 2000. In this experiment, the transition probability is randomly generated and then regularized. The reward is also randomly generated."
Dataset Splits | No | The paper does not specify exact split percentages or absolute sample counts for training, validation, or test sets, nor does it reference predefined splits with citations for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments.
Experiment Setup | No | "We set N = 200, n = 2000. cov is the parameter used for reward covariance matrix generation [Lian et al., 2016]. In Figure 1, cov = 2, and in Figure 2, cov = 10. All shared parameters in the four algorithms, e.g., the stepsize, have the same values."
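The paper's Algorithm 1 (com-SVR-ADMM) combines ADMM with an SVRG-style variance-reduced gradient estimator for the composition objective F(x) = (1/N) Σ_i f_i((1/n) Σ_j g_j(x)). A minimal sketch of that family of estimators is shown below; it is NOT the paper's exact Algorithm 1 (which also handles the ADMM splitting), and the function names `svrg_composition_grad`, `g_vals`, `g_jacs`, and `f_grads` are our own illustrative choices.

```python
import numpy as np

def svrg_composition_grad(x, x_snap, g_vals, g_jacs, f_grads, rng):
    """One variance-reduced gradient estimate of (1/N) sum_i f_i(G(x)),
    where G(x) = (1/n) sum_j g_j(x). `g_vals`/`g_jacs` are lists of the
    inner functions and their Jacobians; `f_grads` lists the outer gradients."""
    n, N = len(g_vals), len(f_grads)
    # Full inner value and Jacobian at the snapshot (computed once per epoch).
    G_snap = np.mean([g(x_snap) for g in g_vals], axis=0)
    J_snap = np.mean([J(x_snap) for J in g_jacs], axis=0)
    # Stochastic correction for the inner value: G_hat approximates G(x).
    j = rng.integers(n)
    G_hat = G_snap + g_vals[j](x) - g_vals[j](x_snap)
    # Stochastic correction for the inner Jacobian.
    k = rng.integers(n)
    J_hat = J_snap + g_jacs[k](x) - g_jacs[k](x_snap)
    # SVRG control variate built from the full gradient at the snapshot.
    i = rng.integers(N)
    full_grad = np.mean([J_snap.T @ f(G_snap) for f in f_grads], axis=0)
    return J_hat.T @ f_grads[i](G_hat) - J_snap.T @ f_grads[i](G_snap) + full_grad
```

At x = x_snap the control variate makes the estimate coincide with the full gradient, which is the variance-reduction property such estimators rely on.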
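The experiment rows describe the setup only loosely: a transition probability matrix that is "randomly generated and then regularized", random rewards, and the l2 regularizer R(x) = (µ/2)||x||_2^2. A minimal sketch of such a data-generation step, under the assumption that "regularized" means row-normalizing the matrix into a stochastic matrix (sizes, seed, and the helper names `make_mdp` and `l2_reg` are illustrative, not from the paper):

```python
import numpy as np

def make_mdp(num_states, rng):
    """Random transition matrix (rows normalized to probabilities) and rewards."""
    P = rng.random((num_states, num_states))
    P /= P.sum(axis=1, keepdims=True)   # each row now sums to 1
    r = rng.random(num_states)          # randomly generated rewards
    return P, r

def l2_reg(x, mu):
    """R(x) = (mu/2) * ||x||_2^2 with mu > 0."""
    return 0.5 * mu * np.dot(x, x)

rng = np.random.default_rng(0)
P, r = make_mdp(4, rng)
```

The exact parameter-generation method (including the `cov` reward-covariance parameter) follows [Lian et al., 2016] and is not reproduced here.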