Projection Free Rank-Drop Steps

Authors: Edward Cheung, Yuying Li

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 6, Experimental Results: "We validate the rank-drop steps on a matrix completion task using various datasets from MovieLens. We compare the proposed Rank-Drop Frank-Wolfe (RDFW) against the aforementioned FW variants, FW [Frank and Wolfe, 1956], AFW [Lacoste-Julien and Jaggi, 2015], and IF(0, ∞) [Freund et al., 2015], as well as a state-of-the-art nuclear norm regularized solver in Active ALT [Hsieh and Olsen, 2014]."
Researcher Affiliation | Academia | Edward Cheung and Yuying Li, Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada ({eycheung, yuying}@uwaterloo.ca)
Pseudocode | Yes | Algorithm 1: Frank-Wolfe (FW); Algorithm 2: (Atomic) Away Steps Frank-Wolfe (AFW); Algorithm 3: Compute Rank-Drop Direction (rankDrop); Algorithm 4: Rank-Drop Frank-Wolfe (RDFW)
Open Source Code | No | The paper does not provide any links to source code or an explicit statement about its release.
Open Datasets | Yes | "We validate the rank-drop steps on a matrix completion task using various datasets from MovieLens" (footnote 2: http://grouplens.org/datasets/movielens/); see also Table 2: MovieLens Data.
Dataset Splits | Yes | "Following [Yao et al., 2016], we randomly partition each dataset into 50% training, 25% validation, and 25% testing."
Hardware Specification | No | The paper only states: 'All simulations were run in MATLAB.' No specific hardware details (e.g., CPU/GPU models, memory) are provided.
Software Dependencies | No | The paper only states: 'All simulations were run in MATLAB.' It does not specify a version of MATLAB or any other software dependencies.
Experiment Setup | Yes | "The δ value in (1) is tuned with δ = µ_j‖Y‖_F, where ‖Y‖_F is the Frobenius norm of the training data matrix, and µ_j = 2 + 0.2j, j ∈ ℕ. We increase j until the mean RMSE on the validation set does not improve by more than 10⁻³. We terminate the algorithm when an upper bound on the relative optimality gap ensures (f(X_k) − f*)/f* < 10⁻² or a maximum iteration count of 1000 is reached." See also Table 3: Parameters used for each dataset.
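The quoted experimental protocol (50/25/25 random split, the δ = µ_j‖Y‖_F sweep over µ_j = 2 + 0.2j, and the relative-gap stopping rule) can be sketched in code. The sketch below is illustrative only: the function names (`split_indices`, `tune_delta`, `should_stop`) and the `val_rmse_at` callback are hypothetical, and the actual nuclear-norm-constrained solver for problem (1) is abstracted behind that callback.

```python
import numpy as np

def split_indices(n, seed=0):
    """Randomly partition n observed entries into 50% train, 25% validation, 25% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_tr, n_val = n // 2, n // 4
    return perm[:n_tr], perm[n_tr:n_tr + n_val], perm[n_tr + n_val:]

def tune_delta(Y_train, val_rmse_at, tol=1e-3, max_j=50):
    """Sweep delta = mu_j * ||Y||_F with mu_j = 2 + 0.2*j, increasing j until the
    validation RMSE stops improving by more than tol (10^-3 in the paper)."""
    y_norm = np.linalg.norm(Y_train, 'fro')
    best_rmse, best_delta = np.inf, None
    for j in range(max_j):
        delta = (2 + 0.2 * j) * y_norm
        rmse = val_rmse_at(delta)  # hypothetical: solve (1) at this delta, score validation RMSE
        if best_rmse - rmse <= tol:
            break
        best_rmse, best_delta = rmse, delta
    return best_delta

def should_stop(f_xk, f_lower, k, gap_tol=1e-2, max_iter=1000):
    """Stop when an upper bound on the relative gap, (f(X_k) - f_lower)/f_lower with
    f_lower <= f* (assumed positive), falls below gap_tol, or after max_iter iterations."""
    return k >= max_iter or (f_xk - f_lower) / f_lower < gap_tol
```

The dual (Frank-Wolfe) gap evaluated at each iterate would supply the lower bound `f_lower` in practice; the callback-based tuning loop simply mirrors the "increase j until validation RMSE stalls" rule quoted above.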