Adaptive Three Operator Splitting

Authors: Fabian Pedregosa, Gauthier Gidel

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, an empirical comparison with related methods on 6 different problems illustrates the computational advantage of the proposed method.
Researcher Affiliation | Academia | 1 University of California at Berkeley, USA; 2 Department of Computer Science, ETH Zurich, Switzerland; 3 Mila, Université de Montréal, Canada.
Pseudocode | Yes | Algorithm 1: Adaptive Three Operator Splitting (a hedged Python sketch of the splitting step appears after the table).
Open Source Code | No | The paper does not contain an explicit statement that the source code for the described methodology is publicly available, nor does it provide a direct link to such code. While there is a reference to "Pedregosa, F. C-OPT: composite optimization in Python. 2018. doi: 10.5281/zenodo.1283339. URL http://openopt.github.io/copt/", it does not state that the code for this paper's methodology is available there.
Open Datasets | Yes | RCV1 and real-sim. Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 2004. (A dataset-loading sketch appears after the table.)
Dataset Splits | No | The paper mentions using datasets like RCV1 and real-sim and discusses regularization parameters, but it does not specify details like train/validation/test split percentages, absolute sample counts for splits, or reference predefined splits with citations.
Hardware Specification | No | The paper mentions that "Computing time was donated by Amazon through the program AWS Cloud Credits for Research", but it does not specify any exact GPU/CPU models, processor types, or memory amounts used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | Subfigures A and C were run with the regularization parameter chosen to give 50% of sparsity, while B and E were run with higher levels of sparsity, chosen to give 5% of sparsity. For each problem, we show 2 different benchmarks, corresponding to the low and high regularization regimes (denoted low reg and high reg). (A sketch of picking the regularization for a target sparsity level appears after the table.)
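
For readers who want a runnable point of reference for the "Pseudocode" row, the sketch below implements a standard Davis-Yin three operator splitting step together with a simple backtracking rule on the step size. It is a minimal sketch under the usual f + g + h setting (f smooth with Lipschitz gradient, g and h with cheap proximal operators); the sufficient-decrease test and the shrink factor are generic choices, not necessarily the paper's exact Algorithm 1, and all function names are placeholders.

```python
import numpy as np

def adaptive_tos(f, grad_f, prox_g, prox_h, z0, step=1.0, shrink=0.5,
                 max_iter=500, tol=1e-10):
    """Davis-Yin three operator splitting with a backtracking step size.

    Minimizes f(x) + g(x) + h(x), where f is smooth and g, h have
    inexpensive proximal operators prox_g(v, gamma), prox_h(v, gamma).
    The backtracking test is a generic quadratic-upper-bound condition,
    not a verbatim transcription of the paper's adaptive rule.
    """
    z = np.asarray(z0, dtype=float).copy()
    gamma = step
    for _ in range(max_iter):
        # Shrink gamma until f satisfies its quadratic upper bound at x_h;
        # this terminates once gamma is below 1/L for an L-smooth f.
        while True:
            x_g = prox_g(z, gamma)
            grad = grad_f(x_g)
            x_h = prox_h(2 * x_g - z - gamma * grad, gamma)
            diff = x_h - x_g
            bound = f(x_g) + grad.dot(diff) + diff.dot(diff) / (2 * gamma)
            if f(x_h) <= bound + 1e-12:
                break
            gamma *= shrink
        z = z + x_h - x_g
        # Fixed-point residual: x_h and x_g coincide at a solution.
        if np.linalg.norm(diff) <= tol * max(1.0, np.linalg.norm(x_g)):
            break
    return x_g
```

For a problem of the kind benchmarked in the paper, f would be a smooth loss (e.g. least squares or logistic), prox_g soft-thresholding for an l1 term, and prox_h the proximal operator of the second penalty (e.g. a total-variation or group term).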
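
The "Open Datasets" row points to RCV1 and real-sim. A hedged loading sketch: scikit-learn ships a fetcher for RCV1, while real-sim is usually obtained from the LIBSVM binary-classification collection; the local file name below is an assumption about where such a download was saved.

```python
from sklearn.datasets import fetch_rcv1, load_svmlight_file

# RCV1 is available through scikit-learn's dataset fetchers (sparse CSR data).
rcv1 = fetch_rcv1()
X_rcv1, y_rcv1 = rcv1.data, rcv1.target

# real-sim is not bundled with scikit-learn; the path below assumes a copy
# downloaded from the LIBSVM collection (hypothetical file name).
X_rs, y_rs = load_svmlight_file("real-sim.bz2")
```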
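
The "Experiment Setup" row states that the regularization strength was chosen to reach a target sparsity level (50% vs. 5%). The paper does not say how that value was found, so the sweep below is only a plausible reconstruction: it fits an l1-regularized model over a grid of penalties and keeps the one whose coefficient vector has the fraction of zeros closest to the target. Treating "X% of sparsity" as X% zero coefficients is itself an assumption, and the model, grid, and function name are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def penalty_for_sparsity(X, y, target_zero_frac, alphas=None):
    """Return the l1 penalty whose fitted coefficient vector has a fraction
    of exact zeros closest to the requested target."""
    if alphas is None:
        alphas = np.logspace(-5, 1, 30)
    best_alpha, best_gap = alphas[0], np.inf
    for alpha in alphas:
        coef = Lasso(alpha=alpha, max_iter=10_000).fit(X, y).coef_
        gap = abs(np.mean(coef == 0) - target_zero_frac)
        if gap < best_gap:
            best_alpha, best_gap = alpha, gap
    return best_alpha

# e.g. the 50%-sparsity setting mentioned in the paper:
# alpha_low_reg = penalty_for_sparsity(X, y, 0.50)
```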