Stochastic Optimization with Importance Sampling for Regularized Loss Minimization
Authors: Peilin Zhao, Tong Zhang
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experimental Results |
| Researcher Affiliation | Collaboration | Data Analytics Department, Institute for Infocomm Research, A*STAR, Singapore; Department of Statistics & Biostatistics, Rutgers University, USA; and Big Data Lab, Baidu Research, China |
| Pseudocode | Yes | Algorithm 1 Proximal Stochastic Mirror Descent with Importance Sampling (Iprox-SMD) |
| Open Source Code | No | No explicit statement or link providing access to the open-source code for the methodology described in the paper was found. |
| Open Datasets | Yes | The experiments were performed on several real-world datasets downloaded from the LIBSVM website www.csie.ntu.edu.tw/~cjlin/libsvmtools/. The dataset characteristics are provided in Table 1. |
| Dataset Splits | No | No specific training/test/validation dataset splits were provided. The paper mentions using real-world datasets but does not detail how they were partitioned for training, validation, or testing. |
| Hardware Specification | No | No specific hardware details (such as CPU/GPU models, memory, or cloud instance types) used for running the experiments were provided. |
| Software Dependencies | No | No specific ancillary software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or solver versions) were provided. |
| Experiment Setup | Yes | The regularization parameter λ of the SVM is set to 10^-4, 10^-6, and 10^-4 for ijcnn1, kdd2010(algebra), and w8a, respectively. For prox-SGD and Iprox-SGD, the step size is set to η_t = 1/(λt) for all the datasets. |
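To make the pseudocode and setup rows above concrete, here is a minimal sketch of stochastic gradient descent with importance sampling for an L2-regularized hinge-loss SVM, using the reported step size η_t = 1/(λt). This is an illustrative reconstruction, not the authors' exact Iprox-SMD: the function name, the sampling distribution p_i ∝ ||x_i|| (proportional to per-example Lipschitz constants), and the hinge-loss objective are assumptions.

```python
import numpy as np

def iprox_sgd_svm(X, y, lam=1e-4, T=1000, seed=0):
    """Sketch of SGD with importance sampling for an L2-regularized
    hinge-loss SVM (assumed form; not the paper's exact Algorithm 1).

    Example i is sampled with probability p_i proportional to ||x_i||,
    and its subgradient is reweighted by 1/(n * p_i) so the stochastic
    gradient remains an unbiased estimate of the full subgradient.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1)
    p = norms / norms.sum()            # importance-sampling distribution
    w = np.zeros(d)
    for t in range(1, T + 1):
        eta = 1.0 / (lam * t)          # step size eta_t = 1/(lambda t)
        i = rng.choice(n, p=p)
        margin = y[i] * X[i] @ w
        # hinge-loss subgradient, reweighted for unbiasedness
        g = -y[i] * X[i] / (n * p[i]) if margin < 1 else np.zeros(d)
        # gradient step on the loss, with the L2 regularizer
        # lambda/2 * ||w||^2 handled by an exact proximal-style shrink
        w = (w - eta * g) / (1.0 + eta * lam)
    return w
```

Uniform sampling corresponds to p_i = 1/n, in which case the reweighting factor 1/(n p_i) is 1 and the update reduces to plain prox-SGD.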