Probabilistic low-rank matrix completion on finite alphabets

Authors: Jean Lafond, Olga Klopp, Eric Moulines, Joseph Salmon

NeurIPS 2014

Reproducibility Variable Result LLM Response
Research Type Experimental 3 Numerical Experiments
Researcher Affiliation Academia Jean Lafond, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI; Olga Klopp, CREST et MODAL'X, Université Paris Ouest; Eric Moulines, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI; Joseph Salmon, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI
Pseudocode Yes Algorithm 1: Multinomial lifted coordinate gradient descent
Open Source Code No The paper states 'Algorithm 1 was implemented in C' but does not provide a link to public source code or an explicit statement that the code was released.
Open Datasets Yes We have also run the same estimators on the MovieLens 100k dataset.
Dataset Splits Yes Therefore, to compare the prediction errors, we randomly selected 20% of the entries as a test set, and the remaining entries were split between a training set (80%) and a validation set (20%).
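The split described above (20% of observed entries held out as a test set, the remainder divided 80% training / 20% validation) can be sketched as follows; the set of observed indices and the random seed are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical set of observed entry indices (row, col) of the rating matrix.
observed = np.array([(i, j) for i in range(100) for j in range(50)])
rng.shuffle(observed)

# 20% of the observed entries form the test set.
n_test = int(0.2 * len(observed))
test, rest = observed[:n_test], observed[n_test:]

# The remaining entries are split 80% training / 20% validation.
n_val = int(0.2 * len(rest))
val, train = rest[:n_val], rest[n_val:]
```

With 5000 observed entries this yields 1000 test, 800 validation, and 3200 training entries.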
Hardware Specification Yes Algorithm 1 was implemented in C and Table 1 gives a rough idea of the execution time for the case of two classes on a 3.07 GHz W3550 Xeon CPU (RAM 1.66 Go, Cache 8 Mo).
Software Dependencies No The paper states 'Algorithm 1 was implemented in C' but does not provide specific version numbers for any software libraries, frameworks, or dependencies used in their implementation.
Experiment Setup Yes Data were simulated according to a multinomial logit distribution. ... We then generated matrices of rank 5, such that X^j = Γ √(m1 m2) Σ_{k=1}^{5} α_k u_k^j (v_k^j)^⊤, with (α_1, …, α_5) = (2, 1, 0.5, 0.25, 0.1) and Γ a scaling factor. The λ parameter was set for both methods by performing 5-fold cross-validation on a geometric grid of size 0.8 log(n).
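A minimal NumPy sketch of the setup above: a rank-5 matrix built from the stated α values, plus a geometric grid of λ candidates of size 0.8 log(n). The matrix dimensions, the value of Γ, the Gaussian factors u_k and v_k, and the grid endpoints are all assumptions for illustration, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 100, 100          # matrix dimensions (assumed; not stated in the excerpt)
alphas = np.array([2, 1, 0.5, 0.25, 0.1])
gamma = 1.0                # scaling factor Γ (assumed value)

# Rank-5 parameter matrix X = Γ·sqrt(m1·m2)·Σ_k α_k u_k v_kᵀ,
# with assumed Gaussian factor vectors u_k, v_k.
U = rng.standard_normal((m1, 5))
V = rng.standard_normal((m2, 5))
X = gamma * np.sqrt(m1 * m2) * (U * alphas) @ V.T

# Geometric grid of λ candidates of size ceil(0.8·log(n)) for cross-validation;
# the grid endpoints here are assumptions.
n = m1 * m2
grid_size = int(np.ceil(0.8 * np.log(n)))
lam_grid = np.geomspace(1e-3, 1e1, num=grid_size)
```

Here `(U * alphas) @ V.T` computes the weighted sum of rank-one terms in one matrix product.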