Symmetric Matrix Completion with ReLU Sampling
Authors: Huikang Liu, Peng Wang, Longxiu Huang, Qing Qu, Laura Balzano
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct numerical experiments to validate our theoretical developments and demonstrate the performance of several state-of-the-art algorithms on the MC problem with ReLU sampling. |
| Researcher Affiliation | Academia | 1Antai College of Economics and Management, Shanghai Jiao Tong University 2Department of Electrical Engineering and Computer Science, University of Michigan 3Department of Computational Mathematics, Science and Engineering & Department of Mathematics, Michigan State University. Correspondence to: Laura Balzano <girasole@umich.edu>. |
| Pseudocode | Yes | Algorithm 1 GD for MC with ReLU sampling |
| Open Source Code | No | The paper states it uses MATLAB codes for *other* methods (Scaled GD, momentum PAM, GNMR) from external sources, but it does not provide explicit links or statements for the open-source code of *their own* proposed methodology. |
| Open Datasets | No | In our experiments, we set n = 200 and r = 5. We generate data matrix M according to the model (1) with different noise levels σ ∈ {0, 10⁻⁴, 10⁻²} and sample the observed entries via ReLU sampling, e.g. (2). For each noise level, we generate 20 data matrices and run GD with our proposed, RI, and RS initialization on each data matrix, respectively. |
| Dataset Splits | No | The paper generates synthetic data for each experiment run rather than using pre-defined datasets with explicit training, validation, or test splits. It mentions '20 data matrices' are generated and used for runs, but does not specify a partitioning strategy like 80/10/10 for a single dataset. |
| Hardware Specification | Yes | All of our experiments are implemented in MATLAB R2023a on a PC with 32GB memory and Intel(R) Core(TM) i7-11800H 2.3GHz CPU. |
| Software Dependencies | Yes | All of our experiments are implemented in MATLAB R2023a on a PC with 32GB memory and Intel(R) Core(TM) i7-11800H 2.3GHz CPU. |
| Experiment Setup | Yes | In our experiments, we set n = 200 and r = 5. We generate data matrix M according to the model (1) with different noise levels σ ∈ {0, 10⁻⁴, 10⁻²}... In each test, we terminate the algorithm when the Frobenius norm of the gradient at the T-th iteration is less than 10⁻⁶ or T ≥ 5000. |
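The experiment setup quoted above can be sketched as follows: generate a symmetric rank-r ground truth, observe only its positive entries (ReLU sampling), and run gradient descent on the factorized objective f(X) = ¼‖P_Ω(XXᵀ − M)‖_F², stopping when the gradient's Frobenius norm drops below 10⁻⁶ or the iteration cap is reached. This is a minimal NumPy sketch, not the paper's MATLAB code: the function name, data scaling, step size, and small random initialization are all illustrative assumptions.

```python
import numpy as np

def relu_sampling_mc_gd(n=200, r=5, sigma=0.0, tol=1e-6, max_iter=5000, seed=0):
    """Sketch of GD for symmetric MC with ReLU sampling (assumed setup).

    Minimizes f(X) = 1/4 * ||P_Omega(X X^T - M)||_F^2, where Omega keeps
    only the positive entries of the (noisy) data matrix M.
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, r))             # planted factor (assumed model)
    M_star = U @ U.T                            # symmetric rank-r ground truth
    E = rng.standard_normal((n, n))
    M = M_star + sigma * (E + E.T) / 2          # symmetric additive noise
    mask = (M > 0).astype(float)                # ReLU sampling: observe M_ij > 0

    eta = 0.25 / np.linalg.norm(mask * M, 2)    # conservative step size (assumption)
    X = 0.1 * rng.standard_normal((n, r))       # small random initialization
    for _ in range(max_iter):
        R = mask * (X @ X.T - M)                # residual on observed entries
        grad = R @ X                            # gradient of f (R is symmetric)
        if np.linalg.norm(grad) < tol:          # stopping rule from the paper
            break
        X -= eta * grad
    rel_err = np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star)
    return X, rel_err
```

The relative error against the ground-truth matrix mirrors the recovery metric one would track across the 20 random trials per noise level described above.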