Local Search Algorithms for Rank-Constrained Convex Optimization

Authors: Kyriakos Axiotis, Maxim Sviridenko

ICLR 2021

Reproducibility assessment — each entry lists the variable, the assessed result, and the supporting LLM response:
Research Type: Experimental
Evidence: "We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis. ... In Section 3, we evaluate the proposed greedy and local search algorithms on optimization problems like robust PCA. Then, in Section 4 we evaluate their generalization performance in machine learning problems like matrix completion."
Researcher Affiliation: Collaboration
Evidence: Kyriakos Axiotis (MIT, kaxiotis@mit.edu); Maxim Sviridenko (Yahoo! Research NYC, sviri@verizonmedia.com)
Pseudocode: Yes
Evidence: Algorithm 1 (Greedy); Algorithm 2 (Local Search); Algorithm 3 (Fast Inner Optimization); Algorithm 4 (Fast Rank Reduction); Algorithm 5 (Fast Local Search)
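The greedy template underlying such rank-constrained methods repeatedly adds the top singular direction of the negative gradient to the current iterate. A minimal sketch for the simplest case, the quadratic loss f(A) = ½‖A − M‖²_F, where the exact line-search step length is the top singular value of the residual (the function name and the choice of loss are illustrative; the paper's Algorithms 1–5 add re-optimization and faster inner solvers):

```python
import numpy as np

def greedy_low_rank(M, r):
    """Greedy sketch: r times, add the top singular direction of the
    negative gradient of f(A) = 0.5 * ||A - M||_F^2, with an exact
    line-search step. For this quadratic loss, the result after r
    rounds coincides with the rank-r truncated SVD of M."""
    A = np.zeros_like(M, dtype=float)
    for _ in range(r):
        G = A - M                       # gradient of f at A
        U, s, Vt = np.linalg.svd(-G)    # top direction of -G
        u, v = U[:, 0], Vt[0]
        A += s[0] * np.outer(u, v)      # optimal step for quadratic f
    return A
```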
Open Source Code: No
Evidence: The paper provides no links to, or statements about, open-sourcing its own code; it mentions implementations of other algorithms, but not the authors' own.
Open Datasets: Yes
Evidence: "In this section we compare our algorithms on the task of movie recommendation on the Movielens datasets (Harper & Konstan, 2015)."
Dataset Splits: Yes
Evidence: "In order to evaluate the algorithms, we perform random 80%-20% train-test splits that are the same for all algorithms and measure the mean RMSE in the test set."
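The evaluation protocol quoted above can be sketched as follows. This is a generic illustration, assuming ratings are stored as (user, item, rating) triples; the function names and seeding are ours, not the authors':

```python
import numpy as np

def train_test_split_ratings(ratings, test_frac=0.2, seed=0):
    """Randomly split (user, item, rating) triples 80%-20%.
    Fixing the seed yields the same split for every algorithm,
    as the paper's protocol requires."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))
    n_test = int(len(ratings) * test_frac)
    test_idx = set(idx[:n_test].tolist())
    train = [r for i, r in enumerate(ratings) if i not in test_idx]
    test = [r for i, r in enumerate(ratings) if i in test_idx]
    return train, test

def rmse(predictions, targets):
    """Root mean squared error over the held-out entries."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.sqrt(np.mean((predictions - targets) ** 2)))
```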
Hardware Specification: Yes
Evidence: "Code written in python and tested on an Intel Skylake CPU with 16 vCPUs."
Software Dependencies: No
Evidence: The paper mentions Python and specific algorithms such as LSQR and L-BFGS, but it does not give version numbers for any software dependencies.
Experiment Setup: Yes
Evidence: "We solve the inner optimization problem by applying 10 L-BFGS iterations. ... We have implemented the Fast Greedy and Fast Local Search algorithms with 3 inner optimization iterations. ... For the inner optimization steps we have used the LSQR algorithm with 2 iterations in the 100K and 1M datasets, and with 3 iterations in the 10M dataset. ... We clip the entries of A between 1 and 5 when computing R(A) in Algorithm 2.3 and Algorithm 5."
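The truncated inner solves and the rating clipping described above can be illustrated with the following hedged sketch: plain gradient descent with a fixed iteration budget stands in for the LSQR/L-BFGS inner solvers the paper uses, and `clip_ratings` is a hypothetical helper for clipping the entries of A to [1, 5]:

```python
import numpy as np

def clip_ratings(A, lo=1.0, hi=5.0):
    """Clip predicted entries into the valid rating range before
    computing the error R(A), as in the quoted setup."""
    return np.clip(A, lo, hi)

def inner_solve(U, b, n_iter=2):
    """Approximately solve min_x ||U x - b||^2 with a small, fixed
    number of iterations, mirroring the truncated inner solves above
    (the paper uses LSQR with 2-3 iterations; gradient descent with
    step 1/L is a simple stand-in)."""
    L = np.linalg.norm(U, 2) ** 2  # smoothness constant of the objective
    x = np.zeros(U.shape[1])
    for _ in range(n_iter):
        x -= (U.T @ (U @ x - b)) / L
    return x
```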