Fast Tucker Rank Reduction for Non-Negative Tensors Using Mean-Field Approximation

Authors: Kazu Ghalamkari, Mahito Sugiyama

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Numerical Experiments: We empirically examined the efficiency and the effectiveness of LTR using synthetic and real-world datasets. We compared LTR with two existing non-negative low Tucker-rank approximation methods."
Researcher Affiliation | Academia | Kazu Ghalamkari (1,2), Mahito Sugiyama (1,2); (1) National Institute of Informatics, (2) The Graduate University for Advanced Studies, SOKENDAI; {gkazu,mahito}@nii.ac.jp
Pseudocode | Yes | Algorithm 1 — input: tensor P and target Tucker rank r = (r1, ..., rd); output: rank-reduced tensor Q. The procedure LTR(P, r) calls the subroutine BESTRANK1 on subtensors of P. A hedged sketch of both routines appears after the table.
Open Source Code | No | The paper does not explicitly state that source code for the proposed method (LTR) is released, nor does it link to a code repository.
Open Datasets | Yes | "We evaluated running time and the LS reconstruction error for two real-world datasets." 4DLFD is a (9, 9, 512, 512, 3) tensor [18] and Att Face is a (92, 112, 400) tensor [33].
Dataset Splits | No | The paper describes the generation of synthetic data, the characteristics of the real-world datasets (4DLFD and Att Face), and the target Tucker ranks used in the experiments, but it does not specify training, validation, or test splits.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU model, memory) used to run the experiments.
Software Dependencies | No | The paper names the comparison methods (NTD_KL, NTD_LS, lraSNTD) and refers readers to the Supplement for their implementation details, but the main text lists no software dependencies or version numbers for the proposed LTR method or the experimental setup.
Experiment Setup | Yes | For the 4DLFD dataset, the target Tucker ranks were (1,1,1,1,1), (2,2,2,2,1), (3,3,4,4,1), (3,3,5,5,1), (3,3,6,6,1), (3,3,7,7,1), (3,3,8,8,1), (3,3,16,16,1), (3,3,20,20,1), (3,3,40,40,1), (3,3,60,60,1), and (3,3,80,80,1); for the Att Face dataset, (1,1,1), (3,3,3), (5,5,5), (10,10,10), (15,15,15), (20,20,20), (30,30,30), (40,40,40), (50,50,50), (60,60,60), (70,70,70), and (80,80,80). Algorithm 1 also specifies "Construct {c1, ..., c_rk} ⊆ [Ik] by random sampling from [Ik] without replacement"; see the usage sketch after the table.
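
The two routines named in Algorithm 1 can be sketched concretely. BESTRANK1 follows from the paper's mean-field result: the best rank-1 approximation of a non-negative tensor under KL divergence is the outer product of its mode-wise marginal sums, normalized by the total sum. The surrounding LTR loop below is a hedged reading of the excerpt, not a confirmed reproduction: the mode-by-mode sweep, the interpretation of the sampled indices as boundaries of contiguous blocks, and fixing the leading boundary at index 0 are our assumptions.

import numpy as np


def best_rank1(P):
    """Closed-form best rank-1 approximation of a non-negative tensor
    under KL divergence (the mean-field solution): the outer product of
    the mode-wise marginal sums, normalized by the total sum."""
    d = P.ndim
    S = P.sum()
    if S == 0:  # degenerate all-zero input
        return np.zeros_like(P, dtype=float)
    # Mode-k marginal: sum over every axis except k.
    marginals = [P.sum(axis=tuple(a for a in range(d) if a != k))
                 for k in range(d)]
    Q = marginals[0].astype(float)
    for m in marginals[1:]:
        Q = np.multiply.outer(Q, m)
    return Q / S ** (d - 1)


def ltr(P, ranks, seed=0):
    """Hedged sketch of LTR (our reading, not the authors' code): for
    each mode k, cut the index set into ranks[k] contiguous blocks at
    randomly sampled boundaries and replace each block slab with its
    best rank-1 approximation, which bounds the mode-k rank of the
    result by ranks[k]. Assumes 1 <= ranks[k] <= P.shape[k]."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(P, dtype=float).copy()
    for k, r in enumerate(ranks):
        I_k = Q.shape[k]
        # r - 1 interior boundaries, mirroring "random sampling from
        # [Ik] without replacement" in Algorithm 1.
        cuts = np.sort(rng.choice(np.arange(1, I_k), size=r - 1,
                                  replace=False))
        bounds = np.concatenate(([0], cuts, [I_k]))
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            sl = [slice(None)] * Q.ndim
            sl[k] = slice(lo, hi)
            Q[tuple(sl)] = best_rank1(Q[tuple(sl)])
    return Q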
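
As a usage sketch, the following hypothetical snippet runs the LTR sketch above over the Att Face rank schedule on a random non-negative stand-in tensor of the same (92, 112, 400) shape; the real dataset, its preprocessing, and the paper's exact error computation are not reproduced here, and the LS reconstruction error is taken to be the Frobenius norm of the residual.

rng = np.random.default_rng(42)
P = rng.random((92, 112, 400))  # non-negative stand-in for Att Face

att_face_ranks = [(1, 1, 1), (3, 3, 3), (5, 5, 5), (10, 10, 10),
                  (15, 15, 15), (20, 20, 20), (30, 30, 30), (40, 40, 40),
                  (50, 50, 50), (60, 60, 60), (70, 70, 70), (80, 80, 80)]

for r in att_face_ranks:
    Q = ltr(P, r)
    ls_error = np.linalg.norm(P - Q)  # Frobenius norm of the residual
    print(r, ls_error)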