Faster proximal algorithms for matrix optimization using Jacobi-based eigenvalue methods
Authors: Hamza Fawzi, Harry Goulbourne
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of this approach on a variety of algorithms and problems, and show that, on a GPU, we can obtain 5 to 10x speed-ups in the evaluation of proximal operators compared to standard CPU or GPU linear algebra routines. Our findings are supported by new theoretical results providing guarantees on the approximation quality of proximal operators obtained using approximate eigenvalue or singular value decompositions. Finally we present numerical experiments in Section 5. |
| Researcher Affiliation | Academia | Hamza Fawzi, Harry Goulbourne; Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK; {hf323, hmg42}@cam.ac.uk |
| Pseudocode | Yes | Algorithm 1: Jacobi method for diagonalizing a symmetric matrix A. while off(A) > tolerance do: choose an off-diagonal index (i, j) such that \|Aij\| > 0; construct a Givens matrix G such that (GAG^T)ij = 0; update A ← GAG^T; end |
| Open Source Code | No | The paper states "All our code is in Python and the latter functions are invoked respectively by NumPy's [HMvdW+20] and CuPy's [OUN+17] functions linalg.eigh and linalg.svd." It mentions using specific libraries but does not provide a link or explicit statement that the authors' own implementation code for the described methodology is open-source or publicly available. A link to a third-party MATLAB implementation is provided for a baseline method, not their own. |
| Open Datasets | No | The paper describes using "random Gaussian n × n matrix", "random instances of (12)", and "random matrix-completion instances of (13)" for its experiments. It does not refer to a pre-existing, publicly available dataset with concrete access information (link, DOI, citation with authors/year) or a well-known benchmark dataset. |
| Dataset Splits | No | The paper uses synthetically generated "random instances" for its experiments and does not specify any training, validation, or test dataset splits, percentages, or methodology for data partitioning. |
| Hardware Specification | Yes | The GPU we used is an NVIDIA TITAN Xp with CUDA 11.0, and the CPU is an Intel Core i5-6500 with 4 cores at 3.20 GHz and 32GB RAM, running Ubuntu 18.04. |
| Software Dependencies | No | The paper mentions using the "NVIDIA CUSOLVER library", "LAPACK/CUSOLVER", "NumPy", and "CuPy", along with "CUDA 11.0". However, it does not provide specific version numbers for NumPy, CuPy, CUSOLVER, or LAPACK, which are required for full reproducibility of software dependencies. |
| Experiment Setup | Yes | For the Jacobi method we used a relative tolerance that decays like 1/k², more precisely we set ε_k = ‖Y_k‖_F / (1 + k²), where Y_k is the matrix to diagonalize at iteration k. The number of iterations (14) was fixed and set to 1000, with a time limit of 1 hour on the total running time. |
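The Jacobi iteration quoted in the Pseudocode row can be sketched in plain NumPy as below. This is an illustrative cyclic-Jacobi implementation written for this report, not the authors' GPU code: the sweep order, the `off(A)` stopping rule, and the tolerance default are assumptions made for the example.

```python
import numpy as np

def off(A):
    """Frobenius norm of the off-diagonal part of A."""
    return np.sqrt(np.sum(A ** 2) - np.sum(np.diag(A) ** 2))

def jacobi_eigh(A, tol=1e-10, max_sweeps=50):
    """Cyclic Jacobi method for a symmetric matrix A.

    Repeatedly applies Givens rotations G so that (G A G^T)_ij = 0,
    updating A <- G A G^T until off(A) <= tol.
    Returns (eigenvalues, eigenvectors) with A ~= V @ diag(w) @ V.T.
    """
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        if off(A) <= tol:
            break
        for i in range(n - 1):
            for j in range(i + 1, n):
                if abs(A[i, j]) < 1e-30:
                    continue
                # Rotation angle that zeroes A[i, j]:
                # tan(2*theta) = 2*A[i,j] / (A[i,i] - A[j,j])
                theta = 0.5 * np.arctan2(2 * A[i, j], A[i, i] - A[j, j])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[i, i] = G[j, j] = c
                G[i, j], G[j, i] = s, -s
                A = G @ A @ G.T          # update A <- G A G^T
                V = V @ G.T              # accumulate eigenvector basis
    return np.diag(A), V
```

Each rotation zeroes one off-diagonal entry while preserving the eigenvalues, so `off(A)` shrinks toward zero and the diagonal of `A` converges to the spectrum.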
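The paper's central point is that many proximal operators of spectral functions reduce to an eigenvalue decomposition followed by a cheap scalar map on the eigenvalues, which is why faster (approximate) eigensolvers speed up the prox evaluation. As a minimal illustration, here is the projection onto the positive semidefinite cone (the prox of its indicator function); the choice of this particular operator is ours, for illustration only.

```python
import numpy as np

def prox_psd(Y):
    """Project a symmetric matrix Y onto the PSD cone.

    Compute an eigendecomposition Y = V diag(w) V^T, then clip the
    negative eigenvalues at zero and reassemble. Any eigensolver
    (exact or approximate, e.g. a truncated Jacobi method) can be
    substituted for np.linalg.eigh here.
    """
    w, V = np.linalg.eigh(Y)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

The eigendecomposition dominates the cost, so replacing `np.linalg.eigh` with a GPU-friendly approximate solver changes only the first line while the scalar clipping step is unchanged.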