Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Efficient Active Manifold Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization
Authors: Hao Wang, Ye Wang, Xiangyu Yang
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct numerical experiments using both synthetic and real data to showcase our algorithm's efficiency and superiority over existing methods. |
| Researcher Affiliation | Academia | Hao Wang (EMAIL), School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Ye Wang (EMAIL), School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Xiangyu Yang* (EMAIL), School of Mathematics and Statistics, Henan University, Kaifeng, 475000, China, and Center for Applied Mathematics of Henan Province, Henan University, Zhengzhou, 450046, China |
| Pseudocode | Yes | Algorithm 1: Extrapolated Iteratively Reweighted Nuclear Norm with Active Manifold Identification (EIRNAMI); Algorithm 2: Update perturbation ϵ. |
| Open Source Code | Yes | We have made the source code publicly available at the GitHub repository https://github.com/Optimizater/Low-rank-optimization-with-active-manifold-identification. |
| Open Datasets | No | In this section, we conduct low-rank matrix completion tasks using synthetic data and natural color images to demonstrate the effectiveness and efficiency of the proposed EIRNAMI algorithm. For synthetic data, we adopt λ = 10⁻¹‖M‖. For natural color images, we begin by testing the regularization parameter λ with an initial large value λ₀ = 2⁸. |
| Dataset Splits | No | For synthetic data, we generate a low-rank matrix X̂ with Rank(X̂) = r, where X̂ = BC. Here, B ∈ ℝ^{m×r} and C ∈ ℝ^{r×n} are generated randomly with i.i.d. standard Gaussian entries. We consider r ∈ {5, 10, 15} for the original matrix X̂ in our tests. We then uniformly sample a subset Ω with SR = 0.5, and then form the observed matrix M = P_Ω(X̂). In these experiments, the row and column indices of the missing entries in each image channel are randomly selected and the corresponding pixel values are set to zero, resulting in a missing rate of 50%. |
| Hardware Specification | Yes | All methods tested in this section were implemented in MATLAB on a desktop equipped with an Intel(R) Xeon(R) CPU E5-2620 v2 (2.10 GHz) and 64GB RAM, running 64-bit Windows 10 Enterprise. |
| Software Dependencies | No | All methods tested in this section were implemented in MATLAB on a desktop equipped with an Intel(R) Xeon(R) CPU E5-2620 v2 (2.10 GHz) and 64GB RAM, running 64-bit Windows 10 Enterprise. |
| Experiment Setup | Yes | For IRNAMI and EIRNAMI, we set the parameters as follows: p = 0.5, β = 1.1 > L_f, µ = 0.1, tol1 = 10⁻⁵ and tol2 = 10⁻⁷. In addition, we initialize ϵ₀ = 10⁻³e. For the extrapolation parameter α, we consider values in the range α ∈ {0, 0.1, 0.3, 0.5, 0.7, 0.9}. We then select the values that yield the best performance in most cases. Our experimental results shown in Figure 2 indicate that α = 0.7 is a reasonable choice. In all experiments, we terminate the proposed algorithm if RelErr ≤ tol1 or RelDist ≤ tol1 or the number of iterations exceeds the prespecified maximum number of iterations IterMax = 3 × 10³. In addition, we also use another termination condition ‖X^{k+1} − X^k‖ ≤ tol2 according to the criterion (3.16). |
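To make the quoted setup concrete, the sketch below reproduces the two reproducible pieces of the protocol: the synthetic low-rank completion instance from the Dataset Splits row (X̂ = BC with i.i.d. standard Gaussian factors, uniform sampling with SR = 0.5) and the stopping logic from the Experiment Setup row (tolerances tol1, tol2 and the iteration cap). This is a minimal NumPy sketch, not the authors' MATLAB code; the function names, the use of relative Frobenius change as the stopping quantity, and the random seed are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (our choice)

def make_completion_instance(m, n, r, sr=0.5):
    """Synthetic instance as described in the paper's Dataset Splits quote.

    Builds a rank-r ground truth Xhat = B @ C with i.i.d. standard Gaussian
    factors, then observes each entry independently with probability sr
    (sampling ratio SR = 0.5 in the paper) and zeroes the rest, i.e.
    M = P_Omega(Xhat)."""
    B = rng.standard_normal((m, r))
    C = rng.standard_normal((r, n))
    Xhat = B @ C                       # ground-truth low-rank matrix
    mask = rng.random((m, n)) < sr     # Omega: indices of observed entries
    M = np.where(mask, Xhat, 0.0)      # observed matrix with missing entries set to 0
    return Xhat, mask, M

def should_stop(X_new, X_old, k, tol1=1e-5, tol2=1e-7, iter_max=3000):
    """Stopping test mirroring the quoted criteria (a hypothetical stand-in:
    the paper's RelErr/RelDist definitions are not given in the quote, so a
    relative Frobenius change is used here for illustration)."""
    diff = np.linalg.norm(X_new - X_old, "fro")
    rel = diff / max(np.linalg.norm(X_old, "fro"), 1.0)
    return rel <= tol1 or diff <= tol2 or k >= iter_max
```

A usage example: `Xhat, mask, M = make_completion_instance(100, 80, r=5)` yields a rank-5 target with roughly half of its entries observed, and `should_stop` would be checked once per iteration of the solver loop.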