Entrywise convergence of iterative methods for eigenproblems

Authors: Vasileios Charisopoulos, Austin R. Benson, Anil Damle

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement our analysis with a practical stopping criterion and demonstrate its applicability via numerical experiments. In this section, we present a set of numerical experiments illustrating the results of our analysis in practice, as well as the advantages of the proposed stopping criterion.
Researcher Affiliation | Academia | Vasileios Charisopoulos, Department of Operations Research & Information Engineering, Cornell University, Ithaca, NY 14853, vc333@cornell.edu; Austin R. Benson, Department of Computer Science, Cornell University, Ithaca, NY 14853, arb@cs.cornell.edu; Anil Damle, Department of Computer Science, Cornell University, Ithaca, NY 14853, damle@cornell.edu
Pseudocode | Yes | Algorithm 1 (Subspace iteration). Input: initial guess Q_0 ∈ O_{n,k}, symmetric matrix A, number of iterations T. For t = 1, 2, ..., T do: V^(t) := A Q_{t-1}; (Q_t, R_t) := qr(V^(t)) (QR decomposition). End for; return Q_T. (A code sketch of this algorithm appears after the table.)
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Table 1 (summary statistics of network datasets): CA-HEPPH [32], 11204 nodes, 117649 edges; CA-ASTROPH, 17903 nodes, 197031 edges; GEMSEC-FACEBOOK-ARTIST [46], 50515 nodes, 819306 edges; COM-DBLP [55], 317080 nodes, 1049866 edges; COM-LIVEJOURNAL, 3997962 nodes, 34681189 edges.
Dataset Splits | No | The paper describes generating synthetic data and using real-world graph datasets but does not provide specific details on training, validation, or test splits (e.g., percentages, counts, or a standard split reference).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | No | The paper mentions that 'The supplementary material contains more details about the implementation and the experimental setup,' but does not include specific hyperparameters, training configurations, or system-level settings in the main text.
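The pseudocode quoted in the Pseudocode row maps directly onto a few lines of NumPy. The sketch below is illustrative only: the function name subspace_iteration, the fixed iteration count, and the synthetic test matrix are assumptions made for this example; the paper's own (unreleased) implementation and its practical stopping criterion are not reproduced here.

```python
# Minimal sketch of Algorithm 1 (subspace iteration), assuming NumPy;
# names and the fixed iteration count are illustrative, not the authors' code.
import numpy as np

def subspace_iteration(A, Q0, T):
    """Run T steps of subspace iteration on a symmetric matrix A.

    A  : (n, n) symmetric ndarray
    Q0 : (n, k) ndarray with orthonormal columns (initial guess)
    T  : number of iterations
    Returns Q_T, an (n, k) orthonormal basis approximating the dominant
    k-dimensional invariant subspace of A.
    """
    Q = Q0
    for _ in range(T):
        V = A @ Q                 # V^(t) := A Q_{t-1}
        Q, _ = np.linalg.qr(V)    # (Q_t, R_t) := qr(V^(t))
    return Q

# Usage example on a random symmetric matrix (hypothetical test setup).
rng = np.random.default_rng(0)
n, k = 200, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetrize
Q0, _ = np.linalg.qr(rng.standard_normal((n, k)))
Q = subspace_iteration(A, Q0, T=100)
```

In practice one would replace the fixed iteration count T with a convergence test; the paper proposes an entrywise stopping criterion for this purpose, whose details are not restated here.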