Riemannian coordinate descent algorithms on matrix manifolds

Authors: Andi Han, Pratik Jawanpuria, Bamdev Mishra

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5. Experiments. We now benchmark the performance of the proposed RCD and RCDlin algorithms in terms of computational efficiency (flop counts and/or runtime) and convergence quality (distance to optimality)."
Researcher Affiliation | Collaboration | "(1) Riken AIP, Japan; (2) Microsoft India. Correspondence to: Andi Han <andi.han@riken.jp>."
Pseudocode | Yes | "Algorithm 1: Riemannian coordinate descent (RCD/RCDlin)"
Open Source Code | Yes | "Our codes are implemented using the Manopt toolbox (Boumal et al., 2014) and run on a laptop with an i5-10500 3.1GHz CPU processor. The codes are available at https://github.com/andyjm3."
Open Datasets | Yes | "For experiment settings, we train 5-dimensional embeddings (n = 5) for the WordNet mammals subtree (Miller, 1998)."
Dataset Splits | No | The paper generates synthetic data for some experiments and uses WordNet, but provides no explicit train/validation/test splits (e.g., percentages, sample counts, or predefined partitions), so the data partitioning cannot be reproduced.
Hardware Specification | Yes | "Our codes are implemented using the Manopt toolbox (Boumal et al., 2014) and run on a laptop with an i5-10500 3.1GHz CPU processor."
Software Dependencies | No | "Our codes are implemented using the Manopt toolbox (Boumal et al., 2014)." However, no version numbers are given for Manopt or any other software dependency.
Experiment Setup | Yes | "For all the methods, we tune the stepsize. ... For RCDlin-c and RCDlin-tc, we use a linearly-decaying stepsize, i.e., η/(1 + 0.1 epoch). For RGD we use a fixed stepsize η which generally leads to better convergence. We tune and set η = 1.0 for RCDlin and 0.5 for RGD. ... We set S = np/5 and select the coordinates randomly without replacement."
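The setup excerpt quotes a linearly-decaying stepsize η/(1 + 0.1 epoch) and coordinate selection without replacement, but Algorithm 1 itself is not reproduced on this page. As a rough, hypothetical illustration only — the unit sphere stands in for the paper's matrix manifolds, and `rcd_sphere`, `A`, and `x0` are made-up names, not the authors' code — a coordinate-descent loop combining that schedule with per-coordinate Riemannian updates might look like:

```python
import numpy as np

def rcd_sphere(A, x0, eta=0.1, epochs=50, seed=0):
    """Hypothetical sketch of Riemannian coordinate descent on the unit
    sphere for f(x) = x^T A x. This is NOT the paper's RCD/RCDlin on
    matrix manifolds; it only illustrates the quoted stepsize schedule
    and without-replacement coordinate sampling."""
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    n = x.size
    for epoch in range(epochs):
        # linearly-decaying stepsize quoted in the paper: eta / (1 + 0.1 * epoch)
        step = eta / (1.0 + 0.1 * epoch)
        # visit each coordinate once per epoch, in random order (no replacement)
        for i in rng.permutation(n):
            egrad = np.zeros(n)
            egrad[i] = 2.0 * (A @ x)[i]          # one coordinate of the Euclidean gradient
            rgrad = egrad - (x @ egrad) * x      # project onto the tangent space at x
            x = x - step * rgrad                  # coordinate step in the tangent direction
            x = x / np.linalg.norm(x)             # retraction: renormalise back to the sphere
    return x
```

For a diagonal `A`, the iterate drifts toward the eigenvector of the smallest eigenvalue while staying on the sphere; a full implementation would follow the paper's Algorithm 1 and the Manopt toolbox's manifold retractions instead of this toy projection-and-renormalise step.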