Fine-grained Generalization Analysis of Inductive Matrix Completion
Authors: Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments further confirm that our strategy outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool in the arsenal of methods for matrix completion using side information. |
| Researcher Affiliation | Academia | Antoine Ledent, TU Kaiserslautern, ledent@cs.uni-kl.de; Rodrigo Alves, TU Kaiserslautern, alves@cs.uni-kl.de; Yunwen Lei, University of Birmingham, y.lei@bham.ac.uk; Marius Kloft, TU Kaiserslautern, kloft@cs.uni-kl.de |
| Pseudocode | No | The paper describes methods mathematically and textually but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code for the described methodology publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate the performance of our model on three real life datasets: Douban, Last FM and Movie Lens (further described in the supplementary). |
| Dataset Splits | No | The paper describes how synthetic data was constructed and sampled ('sampled drω entries') and mentions 'training set' in theoretical sections, but it does not specify explicit training/validation/test splits (e.g., percentages or absolute counts) for any of the datasets used in the experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions certain algorithms and frameworks (e.g., Soft Impute, IMCNF) but does not provide specific version numbers for any software dependencies or libraries used in its implementation. |
| Experiment Setup | Yes | In all experiments, we work with the square loss. We construct square data matrices in ℝ^{n×n} with a given rank r ≤ d for several combinations of n, d, r. The sampling distribution is a power-type law depending on Λ, such that Λ = 0 yields uniform sampling (details in appendix). We compare three approaches: (1) standard inductive matrix completion with the side information matrices X, Y (IMC); (2) our smoothed adjusted regulariser λ‖M̃‖ (for several values of α) (ATR); and (3) our smoothed empirically adjusted regulariser λ‖M̆‖ (for several values of α) (E-ATR). For each n ∈ {100, 200} we evaluate the following (d, r) combinations: (30, 4), (50, 6) and (80, 10). In order to study a meaningful data-sparsity regime, in each case we sampled drω entries, where ω ∈ {1, 2, 3, 4, 5}. For real data, the optimization problem (18) includes parameters λ1 and λ2, and Table 3 shows results for E-ATR-0.5, E-ATR-0.75, and E-ATR-1.0, indicating specific α values. |
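The synthetic setup quoted above is concrete enough to sketch in code. The snippet below is a minimal, hypothetical reconstruction of the data-generation step only, not the authors' implementation: the paper states that the exact power-type sampling law and the data construction are detailed in its appendix, so the particular choices here (orthonormal side-information matrices X, Y and an entrywise sampling weight proportional to (i·j)^(-Λ)) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, r = 100, 30, 4   # matrix size, side-information dimension, rank (one (d, r) combination from the paper)
omega = 3              # data-sparsity multiplier: sample d*r*omega entries, omega in {1, ..., 5}
Lam = 0.5              # power-law exponent Lambda; Lam = 0 recovers uniform sampling

# Side-information matrices X, Y in R^{n x d} with orthonormal columns (an assumption;
# the paper's exact construction is in its appendix).
X, _ = np.linalg.qr(rng.standard_normal((n, d)))
Y, _ = np.linalg.qr(rng.standard_normal((n, d)))

# Ground-truth matrix G = X A Y^T with rank(A) = r <= d.
A = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
G = X @ A @ Y.T

# Power-type sampling law over entries: weight on entry (i, j) proportional to
# (i * j)^(-Lam). This precise form is a guess standing in for the appendix's definition.
w = np.arange(1, n + 1, dtype=float) ** -Lam
P = np.outer(w, w)
P /= P.sum()

# Observe d*r*omega entries without replacement, with additive Gaussian noise.
m = d * r * omega
idx = rng.choice(n * n, size=m, replace=False, p=P.ravel())
rows, cols = np.unravel_index(idx, (n, n))
obs = G[rows, cols] + 0.1 * rng.standard_normal(m)
```

Fitting the three compared methods on `(rows, cols, obs)` then amounts to minimizing the square loss over matrices of the form Z = X A Yᵀ, with the penalty swapped between the plain trace-norm-style regulariser (IMC) and the adjusted norms λ‖M̃‖ (ATR) and λ‖M̆‖ (E-ATR) described in the paper.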