Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization
Authors: Yang Song, Jun Zhu
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real datasets demonstrate encouraging results on rank recovery and collaborative filtering, with notably good results for very sparse matrices. |
| Researcher Affiliation | Academia | Yang Song, Department of Physics, Tsinghua University (yang.song@zoho.com); Jun Zhu, Department of Computer Science & Tech., State Key Lab of Intell. Tech. & Sys.; CBICR Center; Tsinghua National Lab for Information Science and Tech., Tsinghua University (dcszj@mail.tsinghua.edu.cn) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes the inference steps in prose. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository. Footnote 1 links to supplementary material, not code. |
| Open Datasets | Yes | MovieLens 1M (footnote 2) and EachMovie datasets, and compare results with various strong competitors... Footnote 2: "MovieLens datasets can be downloaded from http://grouplens.org/datasets/movielens/." |
| Dataset Splits | Yes | We randomly split the dataset into 80% training and 20% test. We further split 20% of the training data for validation for M3F, iPM3F, SoftImpute, SoftImpute-ALS and HASI to tune their hyperparameters. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions the 'R package softImpute' for SoftImpute and SoftImpute-ALS, and states using 'the code provided by the corresponding authors' for other methods, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | For each matrix Z, the iteration number was fixed to 1000 and the result was averaged from last 200 samples (with first 800 discarded as burn-in). We simply initialize our sampler with uniformly distributed U and V with norms fixed to 0.9 and all d fixed to zero. |
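The split protocol quoted in the Dataset Splits row (80% train / 20% test, then a further 20% of the training portion held out for validation) can be sketched as follows. This is a minimal reconstruction, not the authors' code; the function name `split_ratings` and the use of a `(user, item, rating)` triple array are assumptions for illustration.

```python
import numpy as np

def split_ratings(ratings, seed=0):
    """Hedged sketch of the quoted protocol: 80% train / 20% test,
    then hold out 20% of the training ratings for validation.
    `ratings` is an (n, 3) array of (user, item, rating) triples.
    (Illustrative only; the paper does not publish its splitting code.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))
    n_test = int(0.2 * len(ratings))          # 20% held out as the test set
    test = ratings[idx[:n_test]]
    rest = ratings[idx[n_test:]]              # remaining 80% is training data
    n_val = int(0.2 * len(rest))              # 20% of training -> validation
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test
```

Note that under this reading the validation set is 16% of the full data and the final training set is 64%, which is what "further split 20% of the training data" implies.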
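The Experiment Setup row describes the sampler configuration: uniformly distributed U and V with norms fixed to 0.9, relaxation variables d initialized to zero, 1000 iterations with the first 800 discarded as burn-in and the last 200 averaged. A minimal sketch of that configuration, assuming the norm constraint applies to each whole factor matrix (the paper's quote does not specify per-row vs. whole-matrix norms), and with hypothetical helper names `init_factors` / `posterior_mean`:

```python
import numpy as np

def init_factors(n, m, rank, norm=0.9, seed=0):
    """Sketch of the quoted initialization: U and V drawn uniformly,
    rescaled to Frobenius norm 0.9 (one plausible reading of the quote),
    and all relaxation variables d set to zero."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(n, rank))
    V = rng.uniform(-1.0, 1.0, size=(m, rank))
    U *= norm / np.linalg.norm(U)   # fix ||U||_F = 0.9
    V *= norm / np.linalg.norm(V)   # fix ||V||_F = 0.9
    d = np.zeros(rank)              # all d fixed to zero
    return U, V, d

def posterior_mean(samples, burn_in=800):
    """Average the post-burn-in draws: with 1000 iterations and
    burn_in=800, this averages the last 200 samples, as quoted."""
    return np.mean(samples[burn_in:], axis=0)
```

The burn-in convention here (discard-then-average) matches the quoted protocol; everything else about the sampler's transition kernel is outside the quote and deliberately omitted.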