Efficient Algorithms for Sum-Of-Minimum Optimization
Authors: Lisang Ding, Ziang Chen, Xinshang Wang, Wotao Yin
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficiency of our algorithms is numerically examined on multiple tasks, including generalized principal component analysis, mixed linear regression, and small-scale neural network training. |
| Researcher Affiliation | Collaboration | ¹Department of Mathematics, University of California, Los Angeles, Los Angeles, CA, USA; ²Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA; ³Decision Intelligence Lab, Alibaba US, Bellevue, WA, USA. |
| Pseudocode | Yes | The pseudo-code of this algorithm is shown in Algorithm 1. |
| Open Source Code | Yes | Our code with documentation can be found at https://github.com/LisangDing/Sum-of-Minimum_Optimization. |
| Open Datasets | No | The dataset {(a_i, b_i)}_{i=1}^N for the ℓ2-regularized mixed linear regression is synthetically generated in the following way:... (a hedged data-generation sketch follows the table) |
| Dataset Splits | No | No specific dataset split information (e.g., percentages or sample counts for training, validation, and test sets) is provided, and no validation set is mentioned. The paper states: "In our experiments, the training dataset size is N = 1000 and the testing dataset size is 200." |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running experiments are provided in the paper. |
| Software Dependencies | No | Software components such as the ADAM optimizer are mentioned, but no specific version numbers are provided, so the software environment cannot be reproduced exactly. |
| Experiment Setup | Yes | We set the maximum iteration number as 50 for Algorithm 2 with (5) and terminate the algorithm once the objective function stops decreasing... We use the default ADAM learning rate γ = 1e-3. We set r = 10 in Lloyd's Algorithm 2 and fix the cluster number k = 5. (a hedged sketch of this setup follows the table) |
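The paper's synthetic data recipe for ℓ2-regularized mixed linear regression is truncated in the quote above, so the sketch below is only a plausible reconstruction of a standard mixed-linear-regression generator, not the authors' exact procedure. The function name `gen_mixed_linear_data`, the feature dimension, and the noise level are our assumptions; the train/test sizes (1000/200) and the cluster count (k = 5) come from the quoted text.

```python
import numpy as np

def gen_mixed_linear_data(n_samples, dim, k, noise_std=0.1, seed=0):
    """Hypothetical sketch of a mixed linear regression generator.

    The paper's exact recipe is elided in the quote; this assumes the
    common setup: k ground-truth regressors, latent cluster labels,
    Gaussian features, and additive Gaussian label noise.
    """
    rng = np.random.default_rng(seed)
    x_true = rng.standard_normal((k, dim))           # ground-truth regressors
    labels = rng.integers(0, k, size=n_samples)      # latent cluster of each sample
    a = rng.standard_normal((n_samples, dim))        # features a_i
    b = np.einsum("nd,nd->n", a, x_true[labels])     # clean responses a_i^T x_{c_i}
    b += noise_std * rng.standard_normal(n_samples)  # label noise
    return a, b

# Sizes quoted from the paper: 1000 training samples, 200 testing samples.
a_train, b_train = gen_mixed_linear_data(1000, dim=10, k=5, seed=0)
a_test, b_test = gen_mixed_linear_data(200, dim=10, k=5, seed=1)
```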
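The setup row quotes a Lloyd-style alternation (the paper's Algorithm 2) trained with ADAM at γ = 1e-3, k = 5 clusters, at most 50 outer iterations, and early termination once the objective stops decreasing. Below is a minimal sketch of such a Lloyd-type loop for the sum-of-minimum objective min over x_1, ..., x_k of (1/N) Σ_i min_j f_i(x_j); it is not the authors' implementation, and reading r = 10 as the number of ADAM steps per outer iteration is our assumption.

```python
import torch

def per_sample_loss(x, a, b, lam=0.1):
    """ℓ2-regularized linear regression loss f_i(x) for every sample i:
    f_i(x) = (a_i^T x - b_i)^2 + lam * ||x||^2 (lam is our assumption)."""
    return (a @ x - b) ** 2 + lam * x.dot(x)

def lloyd_sum_of_min(a, b, k=5, max_iter=50, adam_steps=10, lr=1e-3):
    """Lloyd-type alternation for min_{x_1..x_k} (1/N) sum_i min_j f_i(x_j).

    k = 5, lr = 1e-3, and max_iter = 50 follow the quoted setup; treating
    r = 10 as `adam_steps` is an assumption of this sketch.
    """
    dim = a.shape[1]
    xs = [torch.randn(dim, requires_grad=True) for _ in range(k)]
    opts = [torch.optim.Adam([x], lr=lr) for x in xs]
    prev_obj = float("inf")
    for _ in range(max_iter):
        # Assignment step: route each sample to its best-fitting model.
        with torch.no_grad():
            losses = torch.stack([per_sample_loss(x, a, b) for x in xs])  # (k, N)
            assign = losses.argmin(dim=0)                                 # (N,)
            obj = losses.min(dim=0).values.mean().item()
        if obj >= prev_obj:  # terminate once the objective stops decreasing
            break
        prev_obj = obj
        # Update step: a few ADAM steps per model on its assigned samples.
        for j, (x, opt) in enumerate(zip(xs, opts)):
            mask = assign == j
            if not mask.any():
                continue  # empty group: keep this model unchanged
            for _ in range(adam_steps):
                opt.zero_grad()
                per_sample_loss(x, a[mask], b[mask]).mean().backward()
                opt.step()
    return xs, prev_obj

# Example run (random stand-in data; in practice, use the generator above).
a_t = torch.randn(1000, 10)
b_t = torch.randn(1000)
models, objective = lloyd_sum_of_min(a_t, b_t)
```

The alternation mirrors classic k-means Lloyd iterations: an assignment step routes each sample to the model with the smallest loss, and an update step refines each model on its assigned samples, with ADAM standing in for the closed-form centroid update because the per-group subproblems are general smooth losses.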