Optimal Black-Box Reductions Between Optimization Objectives
Authors: Zeyuan Allen-Zhu, Elad Hazan
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments to confirm our theoretical speed-ups obtained for AdaptSmooth and AdaptReg. We work on minimizing Lasso and SVM objectives for the following three well-known datasets that can be found on the LibSVM website [10]: covtype, mnist, and rcv1. |
| Researcher Affiliation | Academia | Zeyuan Allen-Zhu (zeyuan@csail.mit.edu), Institute for Advanced Study & Princeton University; Elad Hazan (ehazan@cs.princeton.edu), Princeton University |
| Pseudocode | Yes | Algorithm 1 (the AdaptReg reduction). Input: an objective F(·) in Case 2 (smooth and not necessarily strongly convex); x₀ a starting vector, σ₀ an initial regularization parameter, T the number of epochs; an algorithm A that solves Case 1 of problem (1.1). Output: x̂_T. (A hedged code sketch of this reduction appears after the table.) |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code available or provide a link to a code repository. |
| Open Datasets | Yes | We work on minimizing Lasso and SVM objectives for the following three well-known datasets that can be found on the LibSVM website [10]: covtype, mnist, and rcv1. [10] Rong-En Fan and Chih-Jen Lin. LIBSVM Data: Classification, Regression and Multi-label. Accessed: 2015-06. |
| Dataset Splits | No | The paper mentions deferring 'dataset and implementation details' to the full version but does not provide specific training/test/validation dataset splits (e.g., percentages, sample counts, or citations to predefined splits) in the provided text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper refers to methods like APCG [20] and SVRG [14] and mentions using datasets from the LibSVM website [10], but it does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper describes some practical implementation details, such as termination criteria for the oracle in the inner loop (e.g., duality gap below 1/4 or Euclidean norm below 1/3 of the previous epoch) and the grid of regularization weights (e.g., {10^k, 3·10^k : k ∈ ℤ}). However, it does not provide specific hyperparameter values such as learning rates, batch sizes, or optimizer settings for the experiments. (Illustrative snippets for this grid and stopping rule appear after the table.) |
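
The Pseudocode row quotes only the input/output header of Algorithm 1 (AdaptReg). The following is a minimal Python sketch of how such a reduction can be organized, under assumptions that are not stated in the excerpt: a plain gradient-oracle interface `F_grad`, a black-box Case-1 solver with the illustrative signature `A(grad, x_start)` (not the paper's exact interface), a quadratic regularizer centered at x₀, and none of the per-epoch accuracy requirements used in the paper's analysis.

```python
import numpy as np


def adapt_reg_sketch(F_grad, A, x0, sigma0, T):
    """Minimal sketch of an AdaptReg-style reduction (illustrative only).

    F_grad : gradient oracle of the smooth, not necessarily strongly convex F (Case 2)
    A      : black-box solver for the smooth and strongly convex case (Case 1);
             assumed here to take a gradient oracle and a warm-start point
    x0     : starting vector
    sigma0 : initial regularization parameter
    T      : number of epochs
    """
    x0 = np.asarray(x0, dtype=float)
    x_t, sigma_t = x0.copy(), float(sigma0)
    for _ in range(T):
        # sigma_t-strongly convex surrogate: F(x) + (sigma_t / 2) * ||x - x0||^2
        def grad_regularized(x, sigma=sigma_t):
            return F_grad(x) + sigma * (x - x0)

        # Hand the regularized subproblem to the Case-1 algorithm,
        # warm-started at the previous epoch's output.
        x_t = A(grad_regularized, x_t)
        # Halve the regularization weight between epochs.
        sigma_t /= 2.0
    return x_t  # plays the role of the output x̂_T


if __name__ == "__main__":
    # Tiny smoke test: F(x) = 0.5 * ||Mx - b||^2, with plain gradient descent
    # standing in for the Case-1 solver A (again, an illustrative stand-in).
    rng = np.random.default_rng(0)
    M, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    F_grad = lambda x: M.T @ (M @ x - b)

    def gd_solver(grad, x_start, steps=200, lr=0.01):
        x = x_start.copy()
        for _ in range(steps):
            x = x - lr * grad(x)
        return x

    x_hat = adapt_reg_sketch(F_grad, gd_solver, np.zeros(5), sigma0=1.0, T=10)
    print(np.linalg.norm(F_grad(x_hat)))  # should be small
```

The defining step is the halving of the regularization weight σ between epochs, which is what lets the reduction shrink the added strong convexity without fixing a target accuracy up front.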
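
The Experiment Setup row reports a sweep over regularization weights of the form {10^k, 3·10^k : k ∈ ℤ} and an epoch-level stopping rule for the inner oracle. Purely as an illustration, and with the range of k and the exact reading of the stopping rule being assumptions rather than details stated in the excerpt:

```python
# Finite slice of the grid {10^k, 3 * 10^k : k in Z}; the range of k used
# here is an assumption for illustration, not taken from the paper.
lambda_grid = sorted(c * 10.0 ** k for k in range(-7, 2) for c in (1, 3))

# One possible reading of the quoted stopping rule for the inner oracle:
# stop when the duality gap drops below 1/4, or the Euclidean norm of the
# epoch's progress drops below 1/3 of its value in the previous epoch.
def oracle_done(duality_gap, step_norm, prev_step_norm):
    return duality_gap < 0.25 or step_norm < prev_step_norm / 3.0
```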