Fitting Low-Rank Tensors in Constant Time

Authors: Kohei Hayashi, Yuichi Yoshida

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (5 experiments) | "For the experimental evaluation, we slightly modified our sampling algorithm. ... The results (see Figure 1) show that our method behaved ideally. ... To evaluate how our method worked against real data tensors, we used eight datasets [1, 2, 4, 11, 14, 19] described in Table 1..."
Researcher Affiliation | Academia | Kohei Hayashi, National Institute of Advanced Industrial Science and Technology / RIKEN AIP (hayashi.kohei@gmail.com); Yuichi Yoshida, National Institute of Informatics (yyoshida@nii.ac.jp)
Pseudocode | Yes | Algorithm 1. Input: random access to a tensor X ∈ ℝ^{N1×···×NK}, Tucker rank (R1, ..., RK), and ϵ, δ ∈ (0, 1). For k = 1 to K: Sk ← a sequence of s = s(ϵ, δ) indices uniformly and independently sampled from [Nk]. Construct the mini-tensor X|_{S1,...,SK}. Return ℓ_{R1,...,RK}(X|_{S1,...,SK}).
Open Source Code | Yes | "Python code of our algorithm is available at: https://github.com/hayasick/CTFT."
Open Datasets | Yes | "To evaluate how our method worked against real data tensors, we used eight datasets [1, 2, 4, 11, 14, 19] described in Table 1, where the fluor dataset is order-4 and the others are order-3 tensors."
Dataset Splits | No | No specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit standard split references) were provided in the paper.
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments were provided.
Software Dependencies | No | The paper mentions "HOOI implemented in Python by Nickel" and links to "scikit-tensor" but does not provide specific version numbers for Python or the scikit-tensor library.
Experiment Setup | Yes | "We prepared N × N × N tensors for N ∈ {100, 200, 400, 800}, with a Tucker rank of (15, 15, 15). Each element of the core G ∈ ℝ^{15×15×15} and the factor matrices U(1), U(2), U(3) ∈ ℝ^{N×15} was drawn from a standard normal distribution. We set Y = [G; U(1), U(2), U(3)]. Then, we generated X ∈ ℝ^{N×N×N} as Xijk = Yijk/‖Y‖F + 0.1ϵijk, where ϵijk follows the standard normal distribution for i, j, k ∈ [N]. ... Every data tensor was decomposed with Tucker rank (R1, ..., RK) on the grid Rk ∈ {5, 10, 15, 20} for k ∈ [K]."