Efficient Generalized Conditional Gradient with Gradient Sliding for Composite Optimization
Authors: Yiu-ming Cheung, Jian Lou
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a CUR-like matrix factorization problem with a group lasso penalty on four real-world datasets demonstrate the efficiency of the proposed method. |
| Researcher Affiliation | Academia | Yiu-ming Cheung (1,2) and Jian Lou (1); (1) Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China; (2) United International College, Beijing Normal University-Hong Kong Baptist University, Zhuhai, China; ymc@comp.hkbu.edu.hk, jianlou@comp.hkbu.edu.hk |
| Pseudocode | Yes | For clarity, we summarize the general GCG-GS in Algorithm (1) and Algorithm (2). Algorithm (3) and Algorithm (4) show the implementation details of the Refined GCG-GS. |
| Open Source Code | No | The paper provides download links for the datasets and for a baseline algorithm (GCG TUM) used for comparison, but it does not provide source code for the proposed GCG-GS method itself. |
| Open Datasets | Yes | We utilized the following four real datasets as used in [Yu et al., 2014]: SRBCT, Brain Tumor 2, 9 Tumor and Leukemia, which are of sizes 83 × 2308, 50 × 10367, 60 × 5762, and 72 × 11225, respectively. (Download from http://www.gems-system.org.) |
| Dataset Splits | No | The paper specifies the datasets used but does not provide specific information about training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split citations). |
| Hardware Specification | Yes | This experiment was conducted by using MATLAB on a laptop computer of Intel Core i7 2.7GHz processor with 8 GB RAM. |
| Software Dependencies | No | The paper states that experiments were conducted using 'MATLAB' but does not specify a version number for MATLAB or any other software dependencies. |
| Experiment Setup | Yes | We set λ = 5 × 10^-4 in our experiment. We set our inner loop estimation m to 3 for all four datasets. Other input sequences were assigned exactly as in the theoretical part. |
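Since the paper's code is not released, the two-loop structure that GCG-GS shares with other sliding-style conditional-gradient methods can only be illustrated generically. The sketch below is NOT the authors' GCG-GS: it is a plain Frank-Wolfe (conditional gradient) loop on an l1 ball for a toy smooth objective, organized as an outer pass with `m = 3` inner gradient steps to mirror the reported inner-loop setting. The objective, the `radius` of the constraint set, and `T` are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def frank_wolfe(A, radius=1.0, T=50, m=3):
    """Minimize f(X) = 0.5*||X - A||_F^2 over the l1 ball ||X||_1 <= radius.

    Toy stand-in objective; `m` mimics the paper's reported inner-loop count.
    """
    X = np.zeros_like(A)
    k = 0                              # global step counter for the step size
    for t in range(T):                 # outer conditional-gradient passes
        for _ in range(m):             # inner loop: m gradient steps per pass
            g = X - A                  # gradient of the smooth part
            # Linear minimization oracle over the l1 ball: a signed,
            # scaled basis vector at the largest-magnitude gradient entry.
            i = np.unravel_index(np.argmax(np.abs(g)), g.shape)
            S = np.zeros_like(A)
            S[i] = -radius * np.sign(g[i])
            gamma = 2.0 / (k + 2)      # standard open-loop step size
            X = (1 - gamma) * X + gamma * S
            k += 1
    return X
```

With a target `A` inside the ball, the iterate converges toward `A` at the usual O(1/k) conditional-gradient rate; the actual GCG-GS of the paper additionally skips gradient recomputations inside the inner loop to reduce per-iteration cost.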