Implicit Regularization in Tensor Factorization
Authors: Noam Razin, Asaf Maman, Nadav Cohen
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper we provide the first theoretical analysis of implicit regularization in tensor factorization... Experiments validate our analysis, demonstrating implicit regularization towards low tensor rank in a wide array of configurations... we empirically explore its potential to serve as a measure of complexity for multivariable predictors. |
| Researcher Affiliation | Academia | Blavatnik School of Computer Science, Tel Aviv University, Israel. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a link or explicit statement of code release) for the source code. |
| Open Datasets | Yes | MNIST (Le Cun, 1998) and Fashion-MNIST (Xiao et al., 2017) |
| Dataset Splits | Yes | For each problem, we associate the label 1 with the active category and 0 with all the rest, and then attempt to fit training examples with predictors of low tensor rank, reporting the resulting mean squared error, i.e. the residual of the fit... We report this mean squared error, as well as that obtained by the predictor on the test set. |
| Hardware Specification | No | The paper does not mention any specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper cites PyTorch and scikit-learn, but does not specify the version numbers of these or any other software dependencies used for their experiments. |
| Experiment Setup | Yes | The first (left) three plots show (Frobenius) norms of the ten largest components for initialization standard deviations 0.05, 0.01, and 0.005... The tensor factorizations were initialized randomly with components drawn from a normal distribution with mean zero and standard deviation 0.01 (unless stated otherwise). Learning rates were kept constant at 0.01 (unless stated otherwise). |
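
The one-vs-rest fitting protocol quoted in the Dataset Splits row can be illustrated with a short sketch. Since no code is released (see the Open Source Code row), the snippet below is only an assumed reconstruction: loading MNIST via torchvision, the particular active category, and the image flattening are illustrative assumptions; only the 1-vs-0 labeling and the train/test mean squared error come from the quoted text.

```python
import torch
from torchvision import datasets, transforms

# Hypothetical sketch (not the authors' released code): build one-vs-rest
# labels as described in the Dataset Splits row -- label 1 for the "active"
# category and 0 for all the rest -- and measure fit quality by mean squared
# error on both the training and test sets.
ACTIVE_CATEGORY = 0  # illustrative choice of active category (assumption)

def one_vs_rest_split(train: bool):
    # MNIST via torchvision is an assumption; Fashion-MNIST would be analogous.
    ds = datasets.MNIST(root=".", train=train, download=True,
                        transform=transforms.ToTensor())
    x = ds.data.float().view(len(ds), -1) / 255.0   # flatten images (assumption)
    y = (ds.targets == ACTIVE_CATEGORY).float()     # 1 for active category, 0 otherwise
    return x, y

x_train, y_train = one_vs_rest_split(train=True)
x_test, y_test = one_vs_rest_split(train=False)

def mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # "Residual of the fit", reported for both the training and test sets.
    return ((pred - target) ** 2).mean()
```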
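
The training configuration quoted in the Experiment Setup row can likewise be sketched. The tensor order, dimensions, number of components, synthetic target, and iteration count below are placeholder assumptions; only the initialization scale (standard deviation 0.01) and the constant learning rate (0.01) are taken from the quote.

```python
import torch

# Hypothetical sketch of the quoted setup (assumptions noted above): a CP
# (tensor-rank) factorization of an order-3 tensor fitted by plain gradient
# descent with small near-zero initialization and a constant learning rate.
torch.manual_seed(0)

dims, R = (10, 10, 10), 50          # tensor dimensions / #components: assumptions
target = torch.randn(dims)          # placeholder target tensor (assumption)

# Components drawn from a normal distribution with mean zero and std 0.01.
factors = [(0.01 * torch.randn(R, d)).requires_grad_(True) for d in dims]

lr = 0.01                           # constant learning rate, as quoted
for step in range(10_000):          # iteration count is an assumption
    # Reconstruct the tensor as a sum of R rank-one components.
    recon = torch.einsum("ri,rj,rk->ijk", *factors)
    loss = ((recon - target) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        for f in factors:
            f -= lr * f.grad
            f.grad = None

# (Frobenius) norm of each rank-one component, the quantity shown in the
# paper's plots of the ten largest components.
with torch.no_grad():
    component_norms = (factors[0].norm(dim=1)
                       * factors[1].norm(dim=1)
                       * factors[2].norm(dim=1))
```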