A Conditional-Gradient-Based Augmented Lagrangian Framework
Authors: Alp Yurtsever, Olivier Fercoq, Volkan Cevher
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section presents numerical evidence for the empirical superiority of CGAL, based on the max-cut, clustering, and generalized eigenvector problems. |
| Researcher Affiliation | Academia | ¹LIONS, École Polytechnique Fédérale de Lausanne, Switzerland; ²LTCI, Télécom ParisTech, Université Paris-Saclay, France. |
| Pseudocode | Yes | Algorithm 1 CGAL (for g(Bx) = 0) and Algorithm 2 CGAL for (P) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that code is released or available. |
| Open Datasets | Yes | We use the GD97_b dataset¹... ¹V. Batagelj and A. Mrvar. Pajek datasets, http://vlado.fmf.uni-lj.si/pub/networks/data/ ; We consider a medium-scale experiment, where we compare CGAL, HCGM, and UPD for max-cut with the G1 (800×800) and G40 (2000×2000) datasets²... ²Y. Ye. Gset random graphs. https://www.cise.ufl.edu/research/sparse/matrices/gset/ ; We use the same setup as in (Yurtsever et al., 2018), which is designed and published online by Mixon et al. (2017). This setup contains a 1000×1000-dimensional dataset generated by sampling and preprocessing the MNIST dataset³ using a one-layer neural network. Further details on this setup and the dataset can be found in (Mixon et al., 2017). ³Y. Le Cun and C. Cortes. MNIST handwritten digit database, http://yann.lecun.com/exdb/mnist/ |
| Dataset Splits | No | The paper mentions using specific datasets and tuning parameters, but it does not provide specific details on train/validation/test dataset splits, such as percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, processor types, or memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions tuning parameters like the penalty parameter λ0 and accuracy parameter ϵ, but it does not provide specific hyperparameter values, training configurations, or system-level settings in the main text. |
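Since the paper provides pseudocode (Algorithm 1/2) but no released source code, a minimal sketch of a CGAL-style iteration may help make the table concrete. The sketch below is a hypothetical reconstruction, not the authors' implementation: the step-size rule, the penalty schedule `lam0 * sqrt(k+1)`, the simple bounded dual step, and the toy problem (projection onto the unit simplex with one linear equality constraint) are all assumptions made for illustration.

```python
import numpy as np

def cgal(grad_f, lmo, B, b, x0, y0, lam0=1.0, iters=2000):
    """Hypothetical CGAL-style loop: minimize f(x) over a compact convex
    set X (accessed only through a linear minimization oracle `lmo`)
    subject to the affine constraint B x = b."""
    x, y = x0.copy(), y0.copy()
    for k in range(1, iters + 1):
        lam = lam0 * np.sqrt(k + 1)          # growing penalty parameter (assumed schedule)
        r = B @ x - b                        # constraint residual
        v = grad_f(x) + B.T @ (y + lam * r)  # gradient of the augmented Lagrangian in x
        s = lmo(v)                           # linear minimization oracle over X
        eta = 2.0 / (k + 1)                  # classic Frank-Wolfe step size
        x = x + eta * (s - x)
        # simple decaying dual step (the paper uses a safeguarded update rule)
        y = y + min(1.0, 1.0 / np.sqrt(k + 1)) * (B @ x - b)
    return x, y

# Toy instance: X = unit simplex, f(x) = 0.5 * ||x - c||^2, constraint x[0] = x[1].
n = 5
c = np.arange(n, dtype=float)
grad_f = lambda x: x - c

def lmo_simplex(v):
    # Vertex of the unit simplex minimizing <v, s>.
    s = np.zeros_like(v)
    s[np.argmin(v)] = 1.0
    return s

B = np.zeros((1, n)); B[0, 0], B[0, 1] = 1.0, -1.0
b = np.zeros(1)
x, y = cgal(grad_f, lmo_simplex, B, b, np.ones(n) / n, np.zeros(1))
print(abs(x[0] - x[1]))  # feasibility gap of the final iterate
```

Each iteration costs one gradient evaluation plus one linear minimization over X, which is what makes conditional-gradient-based methods attractive for the SDP relaxations (max-cut, clustering) listed in the table, where the oracle reduces to an approximate eigenvector computation.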