Fast methods for estimating the numerical rank of large matrices

Authors: Shashanka Ubaru, Yousef Saad

ICML 2016

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "In this section, we illustrate the performance of the rank estimation techniques on matrices from various typical applications. In the first experiment, we use a 5,981 × 5,981 matrix named ukerbe1 from the AG-Monien group (the matrix is a Laplacian of an undirected graph), available in the University of Florida Sparse Matrix Collection (Davis & Hu, 2011) database. The performances of the Chebyshev polynomial filter method and the extended McWeeny filter method for estimating the numerical rank of this matrix are shown in Figure 3."

Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: "Shashanka Ubaru UBARU001@UMN.EDU Yousef Saad SAAD@CS.UMN.EDU Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN USA"

Reproducibility Variable: Pseudocode
Result: Yes
LLM Response: "Algorithm 1 describes our approach for estimating the approximate rank rε by the two polynomial filtering methods discussed earlier."

Reproducibility Variable: Open Source Code
Result: Yes
LLM Response: "Matlab codes are available at http://www-users.cs.umn.edu/~ubaru/codes/rank_estimation.zip"

Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: "In the first experiment, we use a 5,981 × 5,981 matrix named ukerbe1 from the AG-Monien group (the matrix is a Laplacian of an undirected graph), available in the University of Florida Sparse Matrix Collection (Davis & Hu, 2011) database."

Reproducibility Variable: Dataset Splits
Result: No
LLM Response: No explicit train/test/validation splits are mentioned for the datasets used in the experiments. The paper uses existing matrices from databases or image datasets for evaluation.

Reproducibility Variable: Hardware Specification
Result: Yes
LLM Response: "The estimation of its rank by the Chebyshev filter method took only 7.18 secs on average (over 10 trials) on a standard 3.3GHz Intel-i5 machine."

Reproducibility Variable: Software Dependencies
Result: No
LLM Response: No specific software versions are mentioned. The paper only states "Matlab codes are available...".

Reproducibility Variable: Experiment Setup
Result: No
LLM Response: No specific experimental setup details such as hyperparameters, learning rates, or optimizer settings are provided. The paper describes the general methods and their application to matrices.
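The Pseudocode entry above refers to the paper's Algorithm 1, which estimates the approximate rank rε by polynomial filtering. As an illustrative sketch only (the authors' actual implementation is the Matlab code linked above), the Chebyshev-filter idea can be reconstructed as follows: approximate the step function that is 1 on eigenvalues above the threshold ε by a Chebyshev expansion, then estimate the trace of the filtered matrix with Hutchinson's stochastic estimator. The function name, defaults, and the use of an exact spectral norm for the interval bound are my own choices for a small dense demo, not the paper's.

```python
import numpy as np

def estimate_rank_chebyshev(A, eps, degree=100, n_vectors=30, rng=None):
    """Estimate the numerical rank of a symmetric PSD matrix A, i.e. the
    number of eigenvalues larger than eps, via a Chebyshev polynomial
    approximation of the step function combined with Hutchinson's
    stochastic trace estimator."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Spectral interval [0, lmax]; exact norm is fine for a small demo
    # (a Lanczos-based bound would be used for large sparse A).
    lmax = np.linalg.norm(A, 2)
    # Affine map from [0, lmax] onto [-1, 1]: t = (lam - c) / h
    c, h = lmax / 2.0, lmax / 2.0
    t0 = (eps - c) / h                       # mapped threshold
    theta0 = np.arccos(np.clip(t0, -1.0, 1.0))
    # Chebyshev coefficients of the step function 1_{[t0, 1]}:
    # c_0 = theta0 / pi, c_k = 2 sin(k * theta0) / (k * pi).
    k = np.arange(1, degree + 1)
    coeffs = np.empty(degree + 1)
    coeffs[0] = theta0 / np.pi
    coeffs[1:] = 2.0 * np.sin(k * theta0) / (k * np.pi)
    # Hutchinson estimate of trace(p(A)) via the three-term recurrence
    # T_{j+1}(x) = 2 x T_j(x) - T_{j-1}(x) applied to probe vectors.
    trace_est = 0.0
    for _ in range(n_vectors):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        w_prev = v                            # T_0(Atilde) v
        w = (A @ v - c * v) / h               # T_1(Atilde) v
        acc = coeffs[0] * (v @ w_prev) + coeffs[1] * (v @ w)
        for _j in range(2, degree + 1):
            w_next = 2.0 * (A @ w - c * w) / h - w_prev
            w_prev, w = w, w_next
            acc += coeffs[_j] * (v @ w)
        trace_est += acc
    return trace_est / n_vectors

if __name__ == "__main__":
    # Demo on a matrix with 20 eigenvalues well above eps and 80 near zero.
    rng = np.random.default_rng(0)
    n = 100
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = np.concatenate([np.ones(20), 1e-6 * np.ones(80)])
    A = (Q * eigs) @ Q.T                      # symmetric PSD, rank_eps = 20
    print("estimated numerical rank:", estimate_rank_chebyshev(A, eps=0.1))
```

Because the trace is estimated stochastically, the result fluctuates around the true count of eigenvalues above ε; averaging over more probe vectors or raising the polynomial degree tightens the estimate, exactly the trade-off the paper's timing experiments explore.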