Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tensor denoising and completion based on ordinal observations
Authors: Chanwoo Lee, Miaoyan Wang
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the empirical performance of our methods. We investigate both the complete and the incomplete settings, and we compare the recovery accuracy with other tensor-based methods. Unless otherwise stated, the ordinal data tensors are generated from model (1) using the standard probit link f. We consider the setting with K = 3, d1 = d2 = d3 = d, and r1 = r2 = r3 = r. The parameter tensors are simulated based on (6), where the core tensor entries are i.i.d. drawn from N(0, 1), and the factors Mk are uniformly sampled (with respect to Haar measure) from matrices with orthonormal columns. We set the cut-off points b_ℓ = f^{-1}(ℓ/L) for ℓ ∈ [L], such that f(b_ℓ) are evenly spaced from 0 to 1. In each simulation study, we report the summary statistics across nsim = 30 replications. |
| Researcher Affiliation | Academia | 1Department of Statistics, University of Wisconsin-Madison, Wisconsin, USA. Correspondence to: Miaoyan Wang <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Ordinal tensor decomposition |
| Open Source Code | Yes | Software package: https://CRAN.R-project.org/package=tensorordinal |
| Open Datasets | Yes | We apply our ordinal tensor method to two real-world datasets. In the first application, we use our model to analyze an ordinal tensor consisting of structural connectivities among 68 brain regions for 136 individuals from the Human Connectome Project (HCP) (Geddes, 2016). In the second application, we perform tensor completion to an ordinal dataset with missing values. The data tensor records the ratings of 139 songs on a scale of 1 to 5 from 42 users on 26 contexts (Baltrunas et al., 2011). |
| Dataset Splits | Yes | Table 3 summarizes the prediction error via 5-fold stratified cross-validation averaged over 10 runs. |
| Hardware Specification | No | The paper mentions running times (e.g., "(4.18 sec/iter)") but does not provide specific details on the hardware used (e.g., CPU/GPU models, memory, or cloud instances) for the experiments. |
| Software Dependencies | No | The paper mentions an R package (tensorordinal) but does not provide specific version numbers for R or any other software libraries used. |
| Experiment Setup | Yes | The parameter tensors are simulated based on (6), where the core tensor entries are i.i.d. drawn from N(0, 1), and the factors Mk are uniformly sampled (with respect to Haar measure) from matrices with orthonormal columns. We set the cut-off points b_ℓ = f^{-1}(ℓ/L) for ℓ ∈ [L], such that f(b_ℓ) are evenly spaced from 0 to 1. [...] Random initialization of core tensor C^(0), factor matrices {M_k^(0)}, and cut-off points b^(0). |
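The simulation setup quoted in the Research Type and Experiment Setup cells can be sketched as follows. This is a hypothetical Python rendering, not the authors' R code (the released package is `tensorordinal` on CRAN); the dimensions and the latent-variable discretization of the cumulative probit model are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, r, L, K = 20, 3, 5, 3  # dimension, Tucker rank, ordinal levels, tensor order (illustrative values)

# Core tensor with i.i.d. N(0, 1) entries.
C = rng.standard_normal((r,) * K)

# Factor matrices M_k with orthonormal columns, Haar-uniform:
# QR of a Gaussian matrix with a sign correction yields a Haar-distributed
# orthogonal matrix; keep its first r columns.
Ms = []
for _ in range(K):
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Ms.append((Q * np.sign(np.diag(R)))[:, :r])

# Parameter tensor Theta = C x_1 M_1 x_2 M_2 x_3 M_3 (Tucker product).
Theta = np.einsum('abc,ia,jb,kc->ijk', C, Ms[0], Ms[1], Ms[2])

# Cut-off points b_l = f^{-1}(l/L), l = 1, ..., L-1, so that the probit
# link f maps them to evenly spaced probabilities in (0, 1).
b = norm.ppf(np.arange(1, L) / L)

# One latent-variable reading of the cumulative probit model: add standard
# Gaussian noise to Theta and discretize at the cut-offs to get labels 1..L.
Y = np.digitize(Theta + rng.standard_normal(Theta.shape), b) + 1
```

The QR-with-sign-correction step is a standard way to sample from the Haar measure on orthogonal matrices; the discretization step is one common generative form of an ordinal (cumulative link) model, shown here only to make the quoted setup concrete.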