The Burer-Monteiro SDP method can fail even above the Barvinok-Pataki bound
Authors: Liam O'Carroll, Vaidehi Srinivas, Aravindan Vijayaraghavan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also empirically evaluate Theorem 1 in Appendix G. |
| Researcher Affiliation | Academia | Liam O'Carroll, Department of Computer Science, Northwestern University, Evanston, IL 60208, liamocarroll2023@u.northwestern.edu; Vaidehi Srinivas, Department of Computer Science, Northwestern University, Evanston, IL 60208, vaidehi@u.northwestern.edu; Aravindan Vijayaraghavan, Department of Computer Science, Northwestern University, Evanston, IL 60208, aravindv@northwestern.edu |
| Pseudocode | Yes | Input: Initializer Y^(0) ∈ M_p, step size η > 0. For t = 0, 1, 2, ...: Y^(t+1) = R_{Y^(t)}(−η grad OBJ(Y^(t))) |
| Open Source Code | No | The paper provides links to code for its empirical evaluation and visualizations, but it does not explicitly state that this code is released as open source; the main contributions are theoretical and have no associated implementation. |
| Open Datasets | No | The paper constructs specific instances for analysis rather than using external, publicly available datasets for training or evaluation. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce data partitioning, as it primarily deals with theoretical constructions and synthetic instances. |
| Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A] We do not report the runtime of our experiments, as this is not relevant to our claims. |
| Software Dependencies | Yes | All experiments for the empirical evaluation were run using Riemannian gradient descent... The main Python packages used were NumPy 1.23.1 and Matplotlib 3.5.2. |
| Experiment Setup | Yes | All experiments for the empirical evaluation were run using Riemannian gradient descent with a constant step size of 0.001. We initialized the method from 1000 uniformly random points on the manifold. We stopped optimization when the Euclidean norm of the gradient was less than 10^−8 or after 100,000 iterations. |
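The pseudocode and setup above describe plain Riemannian gradient descent: take the Euclidean gradient, project it onto the tangent space of the manifold, step, and retract back. A minimal sketch in NumPy (the package the authors report using), specialized to the unit sphere as an illustrative manifold, with the reported hyperparameters (step size 0.001, gradient-norm tolerance 10^−8, cap of 100,000 iterations). The objective f(x) = xᵀCx and the sphere constraint are illustrative assumptions, not the paper's actual instances:

```python
import numpy as np

def riemannian_gd(C, x0, eta=0.001, tol=1e-8, max_iter=100_000):
    """Riemannian gradient descent for f(x) = x^T C x on the unit sphere.

    Mirrors the reported setup: constant step size 0.001, stop when the
    Riemannian gradient norm drops below 1e-8 or after 100,000 iterations.
    """
    x = x0 / np.linalg.norm(x0)                # start on the manifold
    for _ in range(max_iter):
        egrad = 2.0 * C @ x                    # Euclidean gradient of x^T C x
        rgrad = egrad - (x @ egrad) * x        # project onto tangent space at x
        if np.linalg.norm(rgrad) < tol:        # stopping criterion
            break
        x = x - eta * rgrad                    # gradient step in the tangent space
        x = x / np.linalg.norm(x)              # retraction: renormalize to the sphere
    return x

# Toy usage: minimizing x^T C x over the sphere finds a minimal eigenvector.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = (A + A.T) / 2                              # symmetric test matrix
x = riemannian_gd(C, rng.standard_normal(5))
```

The paper's experiments additionally initialize from 1000 uniformly random points on the manifold; the sketch shows a single run from one initializer.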