Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Nonconvex-nonconcave min-max optimization on Riemannian manifolds
Authors: Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the empirical benefits of the proposed methods with extensive experiments. ... 7 Experiments In this section, we evaluate the proposed second-order algorithms RFR, RTGDA, RNGD, RNFR, and RNTGDA (proposed and analyzed in Section 6) on a variety of problems and compare them against the below baselines. |
| Researcher Affiliation | Collaboration | Andi Han EMAIL University of Sydney; Bamdev Mishra EMAIL Microsoft India; Pratik Jawanpuria EMAIL Microsoft India; Junbin Gao EMAIL University of Sydney |
| Pseudocode | No | The paper describes methods through mathematical formulations and textual descriptions of updates. There are no explicitly labeled pseudocode or algorithm blocks in the main text of the paper. |
| Open Source Code | Yes | The code is available at https://github.com/andyjm3/nonconvex-nonconcave-mfd. |
| Open Datasets | Yes | We use the fisheriris dataset from Matlab, which consists of n = 150 samples in d = 4 dimension. |
| Dataset Splits | No | The paper uses various datasets and problem setups (e.g., 'fisheriris dataset' and 'mixture of 8 Gaussians example'), but does not explicitly provide details about training/validation/test dataset splits, percentages, or sample counts for these datasets. |
| Hardware Specification | No | The paper states, 'All our experiments are done in Matlab with the Manopt package (Boumal et al., 2014) except for the GAN experiments (Section 7.4), where we use the Geoopt package with Pytorch (Kochurov et al., 2020).' However, it does not provide any specific details about the hardware (e.g., GPU models, CPU types, or memory) used to run these experiments. |
| Software Dependencies | No | The paper mentions 'Matlab', 'Manopt package (Boumal et al., 2014)', 'Geoopt package with Pytorch (Kochurov et al., 2020)'. However, it does not specify any version numbers for Matlab, Manopt, Geoopt, or Pytorch, which are necessary for reproducible software dependencies. |
| Experiment Setup | Yes | Experiment setup and results. For experiments, we consider d = 30 and tune stepsizes for all the methods. For RCON, we also tune γ after fixing the same stepsize as RHM. We fix ζ = 1 for RNFR and RNTGDA. ... The dimension of the prior zi is 100 and the batch size is set as 512. ... For RFR, RNGD, and RNFR, we use 20 iterations for the conjugate gradient method (that is used to compute the Hessian inverse). We fix the stepsize for the generator to be 0.001 for all methods and tune the stepsize of the discriminator. We set the maximum number of iterations to 5000 for RGDA, RCEG, RHM, and RCON, 200 for RFR, and 100 for RNGD and RNFR. |
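For context on the RGDA baseline named in the table above, the following is a minimal illustrative sketch of simultaneous Riemannian gradient descent-ascent on a toy bilinear objective f(x, y) = xᵀAy with both variables constrained to unit spheres. It is not the paper's implementation (which uses Manopt/Geoopt); the objective, stepsize, and projection-based retraction are assumptions chosen only to show the tangent-space-gradient-plus-retraction update pattern.

```python
import numpy as np

def proj_tangent(x, g):
    # Project a Euclidean gradient g onto the tangent space of the unit sphere at x.
    return g - (x @ g) * x

def retract(x, v):
    # Metric-projection retraction: step along v, then renormalize onto the sphere.
    w = x + v
    return w / np.linalg.norm(w)

def rgda(A, steps=1000, eta=0.05, seed=0):
    # Simultaneous Riemannian GDA for f(x, y) = x^T A y on unit spheres:
    # x takes a descent step, y an ascent step, each within its tangent space.
    rng = np.random.default_rng(seed)
    x = retract(rng.standard_normal(A.shape[0]), 0.0)
    y = retract(rng.standard_normal(A.shape[1]), 0.0)
    for _ in range(steps):
        gx = proj_tangent(x, A @ y)    # Riemannian gradient w.r.t. x
        gy = proj_tangent(y, A.T @ x)  # Riemannian gradient w.r.t. y
        x = retract(x, -eta * gx)      # descent on the min player
        y = retract(y, eta * gy)       # ascent on the max player
    return x, y

A = np.diag([3.0, 1.0, 0.5])
x, y = rgda(A)
```

The iterates remain on the manifold by construction, since each update is retracted back to the sphere; as the paper's nonconvex-nonconcave setting suggests, plain GDA of this kind need not converge, which motivates the second-order variants (RFR, RNGD, RNFR) evaluated in the experiments.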