Multi-Mode Deep Matrix and Tensor Factorization
Author: Jicong Fan
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on synthetic data and real datasets showed that the proposed methods have much higher recovery accuracy than many baselines. |
| Researcher Affiliation | Academia | Jicong Fan¹·² — ¹School of Data Science, The Chinese University of Hong Kong (Shenzhen), China; ²Shenzhen Research Institute of Big Data, China |
| Pseudocode | Yes | Algorithm 1 Gradient-based optimization for M2DMTF (12) |
| Open Source Code | Yes | Codes link: https://github.com/jicongfan/Multi-Mode-Deep-Matrix-and-Tensor-Factorization |
| Open Datasets | Yes | We consider two benchmark datasets: MovieLens-100k and MovieLens-1M... We compare the proposed method M2DMTF with the baselines on the following datasets: Amino acid fluorescence (Bro, 1997) (5 × 201 × 61), Flow injection (Nørgaard & Ridder, 1994) (12 × 100 × 89), and SW-NIR kinetic data (Bijlsma & Smilde, 2000) (301 × 241 × 8). |
| Dataset Splits | No | The paper mentions determining "the hyperparameters of all methods via cross-validation" but does not explicitly state a separate validation split with specific percentages or sample counts for the reported results; the tables only show Train/Test ratios. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions using MATLAB and Python, and refers to optimizers such as iRprop+ and Adam, but does not provide specific version numbers for any software or libraries. |
| Experiment Setup | Yes | In MF (problem (1) in the main paper), the factorization dimension d is 5 because it outperforms other choices. The λ is chosen from {0.01, 0.1, 1} and the optimizer is iRprop+. The maximum iteration is 2000. ... In M2DMTF, L = 2, d1 = d2 = 3, h_1^(1) = h_1^(2) = 10, m1 = m2 = 20, and λ1 = λ2 = 1. ... The activation function is the hyperbolic tangent function. The optimizer is iRprop+ and the maximum iteration is 3000. |
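The quoted M2DMTF setup (two modes, one hidden layer per mode, tanh activations, ℓ2 regularization, gradient-based training on observed entries) can be sketched as below. All sizes, variable names, and the numerical-gradient/plain-gradient-descent trainer are illustrative assumptions for a tiny problem, not the paper's implementation, which uses iRprop+ and the settings quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative sizes (the paper uses d1 = d2 = 3, hidden width 10, etc.)
m, n, d, h, r = 6, 5, 2, 4, 3
lam = 0.1  # stands in for the paper's lambda regularization weights

def mlp(Z, W1, b1, W2):
    # per-mode deep factor: one hidden layer with tanh, as in the paper's setup
    return np.tanh(Z @ W1 + b1) @ W2

shapes = [(m, d), (d, h), (h,), (h, r),   # mode-1 latent Z1 + its network
          (n, d), (d, h), (h,), (h, r)]   # mode-2 latent Z2 + its network

def unpack(theta):
    out, i = [], 0
    for s in shapes:
        k = int(np.prod(s))
        out.append(theta[i:i + k].reshape(s))
        i += k
    return out

def loss(theta, X, mask):
    # masked reconstruction error on observed entries + l2 penalty
    Z1, A1, c1, B1, Z2, A2, c2, B2 = unpack(theta)
    Xhat = mlp(Z1, A1, c1, B1) @ mlp(Z2, A2, c2, B2).T
    return 0.5 * np.sum(mask * (X - Xhat) ** 2) + 0.5 * lam * np.sum(theta ** 2)

# synthetic low-rank matrix with ~20% of entries hidden (illustrative data)
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) > 0.2

theta = 0.1 * rng.standard_normal(sum(int(np.prod(s)) for s in shapes))

def num_grad(f, x, eps=1e-5):
    # central finite differences; fine for this toy parameter count
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

loss0 = loss(theta, X, mask)
# plain gradient descent stands in for the iRprop+/Adam optimizers in the paper
for _ in range(150):
    theta -= 0.005 * num_grad(lambda t: loss(t, X, mask), theta)
loss1 = loss(theta, X, mask)
```

The key structural point mirrored here is that each mode of the data gets its own latent matrix passed through its own small network, and only observed entries (the mask) contribute to the reconstruction loss.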