Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information
Authors: Changhun Jo, Kangwook Lee
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our algorithm is robust to model errors and outperforms the existing algorithms in terms of prediction performance on both synthetic and real data. In Sec. 6, experiments are conducted on synthetic and real data to compare the performance between our algorithm and existing algorithms in the literature. |
| Researcher Affiliation | Academia | ¹Department of Mathematics, University of Wisconsin-Madison, Madison, Wisconsin, USA; ²Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA. |
| Pseudocode | Yes | Now we provide a high-level description of our algorithm while deferring the pseudocode to the appendix. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | First, we take Facebook graph data (Traud et al., 2012) as graph side information... Furthermore, we evaluate the performance of our algorithm on a real rating/real graph dataset called Epinions (Massa & Avesani, 2007; Massa et al., 2008). |
| Dataset Splits | Yes | We use 80% (randomly sampled) of Ω as a training set (Ω_tr) and the remaining 20% of Ω as a test set (Ω_t). We use 5-fold cross-validation to determine hyperparameters. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions using 'LibRec (Guo et al., 2015a)' for baseline comparisons but does not specify version numbers for this or any other software dependencies crucial for reproducing the experiments. |
| Experiment Setup | Yes | For each observation rate p, we generate synthetic data (N^Ω, G) 100 times at random and then report the average errors. We use 80% (randomly sampled) of Ω as a training set (Ω_tr) and the remaining 20% of Ω as a test set (Ω_t). We use the mean absolute error (MAE), $\frac{1}{|\Omega_t|}\sum_{(i,j)\in\Omega_t}\left|N^{\Omega}_{ij}-(2\hat{R}_{ij}-1)\right|$, as the performance metric (see the sketch after this table). We use 5-fold cross-validation to determine hyperparameters. |
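
The evaluation protocol in the "Experiment Setup" row is compact enough to sketch directly. Below is a minimal Python illustration, not the authors' code, of the 80/20 split of the observed set Ω and the MAE metric as reconstructed above; the synthetic data and all names (`obs`, `mae`, `R_hat`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed set Omega: rows of (user i, item j, binary rating R_ij).
n_users, n_items, n_obs = 100, 50, 2000
obs = np.column_stack([
    rng.integers(0, n_users, n_obs),   # user indices i
    rng.integers(0, n_items, n_obs),   # item indices j
    rng.integers(0, 2, n_obs),         # ratings R_ij in {0, 1}
])

# 80% (randomly sampled) of Omega as training set Omega_tr, remaining 20%
# as test set Omega_t. Hyperparameters would be chosen by 5-fold CV on train.
perm = rng.permutation(n_obs)
split = int(0.8 * n_obs)
train, test = obs[perm[:split]], obs[perm[split:]]

def mae(R_hat, test_set):
    """MAE = (1/|Omega_t|) * sum over (i,j) in Omega_t of
    |N^Omega_ij - (2*R_hat_ij - 1)|, where N^Omega_ij = 2*R_ij - 1
    maps observed binary ratings onto the {-1, +1} scale."""
    i, j, r = test_set[:, 0], test_set[:, 1], test_set[:, 2]
    n_omega = 2 * r - 1            # observed entries on the {-1, +1} scale
    pred = 2 * R_hat[i, j] - 1     # estimated entries on the same scale
    return np.abs(n_omega - pred).mean()

# A trivial constant estimate just to exercise the metric; the paper's
# algorithm would instead produce R_hat from train plus graph side information.
R_hat = np.ones((n_users, n_items))
print(mae(R_hat, test))
```

Averaging this MAE over 100 random draws of the synthetic data, for each observation rate p, matches the reporting scheme described in the row above.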