Global optimality for Euclidean CCCP under Riemannian convexity
Authors: Melanie Weber, Suvrit Sra
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the practicality of the resulting methods in numerical experiments. In this section we present several applications that possess the DC structure (1.1). ... We complement our discussion with experimental results for two of the applications discussed above. We show that CCCP performs competitively against several popular Riemannian Optimization methods for the problem of computing matrix square roots (Fig. 1) and for computing Brascamp-Lieb constants (Fig. 2). |
| Researcher Affiliation | Academia | 1Harvard University 2MIT. Correspondence to: Melanie Weber <mweber@seas.harvard.edu>. |
| Pseudocode | Yes | Algorithm 1 Euclidean CCCP for Riemannian DC; Algorithm 2 Incremental CCCP for Riemannian DC with finite-sum structure |
| Open Source Code | No | The paper does not provide any links to source code for the methodology described, nor does it explicitly state that the code is being released or is available in supplementary materials. It mentions using 'Manopt' for comparison but this is a third-party library, not their own code. |
| Open Datasets | No | The paper describes experiments on specific optimization problems (matrix square roots, Brascamp-Lieb constants) and inputs of different sizes. It does not mention using any standard, publicly available datasets for training, validation, or testing, nor does it provide access information for any custom datasets. |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., percentages, sample counts) for training, validation, or testing. The experiments focus on the performance of optimization algorithms on mathematical problems, rather than training models on datasets with predefined splits. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU, GPU models, memory, or cloud computing instances) used to run the experiments. It only mentions running 'numerical experiments'. |
| Software Dependencies | No | The paper mentions comparing against the 'Manopt (Boumal et al., 2014) implementation', indicating Manopt as a software used. However, it does not provide a version number for Manopt or list any other software dependencies for its own implementation or analysis, so the software environment is not fully reproducible. |
| Experiment Setup | No | The paper describes the applications and problem types for which experiments were conducted, and states that inputs of different sizes were used. However, it does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, number of epochs) or specific training configurations for their CCCP algorithm or the compared Riemannian optimization methods. |
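Since the paper's code is not released, the table's "Pseudocode" row (Algorithm 1, Euclidean CCCP for Riemannian DC) is the only algorithmic artifact available. For orientation, the classical CCCP iteration on a difference-of-convex (DC) objective can be sketched as below. This is an illustrative toy problem in one Euclidean variable, not the paper's Riemannian algorithm; the objective f(x) = x^4 - 2x^2 and the closed-form subproblem solution are assumptions chosen so the update is simple.

```python
# Toy sketch of the CCCP iteration for a DC objective f(x) = g(x) - h(x)
# with g, h convex. Here g(x) = x**4 and h(x) = 2*x**2, so
# f(x) = x**4 - 2*x**2, whose minimizers are x = +/- 1.
# This is NOT the paper's Algorithm 1 (which works on a Riemannian
# manifold); it only illustrates the generic convex-concave procedure.

def cccp(x0: float, iters: int = 50) -> float:
    x = x0
    for _ in range(iters):
        # CCCP step: linearize the concave part -h at the current iterate,
        # h'(x) = 4*x, then solve the convex subproblem
        #   min_y  g(y) - h'(x) * y  =  y**4 - 4*x*y.
        # Setting the derivative 4*y**3 - 4*x to zero gives y = cbrt(x).
        x = x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))
    return x

x_star = cccp(0.5)
f_star = x_star ** 4 - 2 * x_star ** 2
# Starting from x0 = 0.5, the iterates converge to the minimizer x = 1,
# where f attains its minimum value -1.
```

Each CCCP step replaces the concave part with its tangent, so the convex surrogate majorizes f and the objective decreases monotonically, which is the descent property the paper's Riemannian analysis generalizes.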