Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Riemannian Search for Eigenvector Computation

Authors: Zhiqiang Xu, Ping Li

JMLR 2021

Reproducibility variables, classified results, and supporting LLM responses:

Research Type: Experimental
LLM Response: "Experiments demonstrate that the proposed search method is able to deliver significantly better performance than projection methods by taking advantages of step-size schemes. [...] Section 6. Experiments: We now test our proposed eigensolvers on both synthetic and real data."

Researcher Affiliation: Industry
LLM Response: "Zhiqiang Xu EMAIL Ping Li EMAIL Cognitive Computing Lab, Baidu Research, 10900 NE 8th St., Bellevue, WA 98004, USA"

Pseudocode: Yes
LLM Response: "Algorithm 1 Shift-and-Invert Preconditioned Riemannian Gradient Eigensolver (SI-rg EIGS) [...] Algorithm 6 Shift by Lanczos with Matrix Pairs"

Open Source Code: No
LLM Response: The paper contains no explicit statement about releasing source code, no direct link to a code repository, and no mention of code in supplementary materials.

Open Datasets: Yes
LLM Response: "We download real data from the sparse matrix collection [footnote 5: www.cise.ufl.edu/research/sparse/matrices/]. The statistics of the matrix data is given in Table 1."

Dataset Splits: No
LLM Response: The paper mentions generating synthetic data from A's full eigenvalue decomposition and downloading real data from a sparse matrix collection; for CCA, it uses the JW11 and MNIST datasets. However, it provides no training/validation/test split details (e.g., percentages, counts, or predefined splits) for any of these datasets.

Hardware Specification: No
LLM Response: The paper states: "All the algorithms were implemented in matlab and running single threaded." This describes the software environment and execution mode but gives no specific hardware details such as CPU/GPU models or memory.

Software Dependencies: No
LLM Response: MATLAB is mentioned, but no version number is provided. Other named components, such as Nesterov's accelerated gradient descent and SVRG, are algorithms rather than software libraries with version numbers.

Experiment Setup: Yes
LLM Response: "Throughout experiments, four iterations for the least-squares solvers are run to approximately solve those least-squares subproblems. Nesterov's accelerated gradient descent is used mostly for this purpose. [...] Step-sizes are constant and hand-tuned for both rg EIGS and SI-rg EIGS. [...] The shift parameter is obtained using Algorithm 3 with parameter m = 9 and used for all the shift-and-invert preconditioning based algorithms. [...] we use the following simplified Riemannian Barzilai-Borwein (BB) step-sizes [...] with initial step-sizes set to α0 = 10^-2. [...] The SVRG is used as the least-squares solver. It runs four epochs with each running n iterations as well as the step-size ηx = 1/max_i ||Xi||^2_2 or ηy = 1/max_i ||Yi||^2_2 in our experiments. [...] The shift parameter is obtained using Algorithm 8 for which the parameters are set as follows: δ = 0.06 (following Wang et al. (2016)), m1 = 2, and ϵ = 1 / (3084 * (δ / 18)^(m1 - 1)). The SI-rg Gen EIS adopts the BB step-size scheme with initial step-size α0 = 10^-2."
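The pseudocode and experiment-setup entries above reference shift-and-invert preconditioning with approximate inner least-squares solves. For orientation, here is a minimal sketch of the generic shift-and-invert power iteration that such eigensolvers build on. This is not the paper's Algorithm 1: the function name, the exact linear solve, and the stopping rule are illustrative assumptions. The paper instead uses a Riemannian gradient update and replaces the exact solve with roughly four iterations of a least-squares solver.

```python
import numpy as np

def shift_invert_power(A, sigma, iters=200, tol=1e-10, seed=0):
    """Generic shift-and-invert power iteration (illustrative sketch).

    Repeatedly applies (sigma*I - A)^{-1} to a vector, which amplifies
    the eigenvector of A whose eigenvalue is closest to sigma.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    M = sigma * np.eye(n) - A
    for _ in range(iters):
        # Exact solve for clarity; the paper approximates this step
        # with ~4 iterations of a least-squares solver instead.
        w = np.linalg.solve(M, x)
        x_new = w / np.linalg.norm(w)
        if 1.0 - abs(x_new @ x) < tol:  # direction has stopped changing
            return x_new
        x = x_new
    return x
```

With sigma chosen slightly above the largest eigenvalue λ1 (as shift-and-invert preconditioning arranges), the iteration converges to the top eigenvector at a linear rate governed by (sigma - λ1)/(sigma - λ2), which is fast when sigma is close to λ1.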