Multitask learning meets tensor factorization: task imputation via convex optimization

Authors: Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not a priori know which mode is low rank." "We conducted several experiments to evaluate performances of tensor based multitask learning setting we have discussed in Section 3. In Section 4.1, we discuss simulation we conducted using synthetic data sets. In Sections 4.2 and 4.3, we discuss experiments on two real world data sets, namely the Restaurant data set [26] and School Effectiveness data set [3, 4]."
Researcher Affiliation | Academia | Kishan Wimalawarne, Tokyo Institute of Technology, Meguro-ku, Tokyo, Japan (kishan@sg.cs.titech.ac.jp); Masashi Sugiyama, The University of Tokyo, Bunkyo-ku, Tokyo, Japan (sugi@k.u-tokyo.ac.jp); Ryota Tomioka, TTI-C, Chicago, Illinois, USA (tomioka@ttic.edu)
Pseudocode | No | The paper includes mathematical formulations and theoretical analyses but does not provide any structured pseudocode or algorithm blocks. (A hedged sketch of the overlapped trace norm computation follows the table.)
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We discuss experiments on two real world data sets, namely the Restaurant data set [26] and School Effectiveness data set [3, 4]."
Dataset Splits | Yes | "We also selected 250 instances as the validation set and the rest was used as the test set." "... selected the regularization parameter λ using two-fold cross validation on the training set." (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments.
Software Dependencies | No | The paper describes mathematical formulations and computational methods but does not list any specific software dependencies or their version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | "We used the penalty formulation of (6) with the squared loss and selected the regularization parameter λ using two-fold cross validation on the training set from the range 0.01 to 10 with the interval 0.1." "The regularization parameter for each norm was selected by minimizing the mean squared error on the validation set from the candidate values in the interval [50, 1000] for the overlapped, [0.5, 40] for the latent, [6000, 20000] for the scaled latent norms, respectively." (A cross-validation sketch follows the table.)
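
Although the paper gives no pseudocode, the overlapped trace norm underlying its formulations is standard: the sum of nuclear norms of the tensor's mode-k unfoldings. A minimal NumPy sketch of that computation (ours, not the authors'):

```python
import numpy as np

def overlapped_trace_norm(W):
    """Overlapped trace norm: sum of nuclear norms of all mode-k unfoldings of W."""
    total = 0.0
    for k in range(W.ndim):
        # Mode-k unfolding: bring axis k to the front, flatten the remaining axes.
        unfolding = np.moveaxis(W, k, 0).reshape(W.shape[k], -1)
        total += np.linalg.norm(unfolding, ord="nuc")
    return total
```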
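The Dataset Splits row quotes a 250-instance validation set with the remainder used for testing. A minimal sketch of such a split, assuming a hypothetical pool size and an arbitrary seed (the paper fixes only the 250 validation instances):

```python
import numpy as np

rng = np.random.default_rng(0)      # seed chosen for repeatability; not from the paper
n_instances = 1000                  # hypothetical pool size
perm = rng.permutation(n_instances)
val_idx = perm[:250]                # 250 instances for validation, as in the paper
test_idx = perm[250:]               # the rest become the test set
```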
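The Experiment Setup row describes selecting λ by two-fold cross validation under squared loss over the grid 0.01 to 10 with interval 0.1. A sketch of that selection loop, where `fit` and `predict` are hypothetical stand-ins for the paper's unreleased tensor-norm-regularized solver:

```python
import numpy as np

def two_fold_cv_lambda(X, y, fit, predict, lambdas):
    """Select lambda by two-fold cross validation under squared loss.

    `fit(X, y, lam)` and `predict(model, X)` are hypothetical stand-ins
    for the paper's tensor-norm-regularized solver.
    """
    n = len(y)
    half = n // 2
    # Two folds: train on one half, validate on the other, then swap.
    folds = [(np.arange(half), np.arange(half, n)),
             (np.arange(half, n), np.arange(half))]
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        err = 0.0
        for train_idx, val_idx in folds:
            model = fit(X[train_idx], y[train_idx], lam)
            residual = predict(model, X[val_idx]) - y[val_idx]
            err += np.mean(residual ** 2)   # squared loss on the held-out half
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Grid reported in the paper: 0.01 to 10 with interval 0.1 (endpoint handling approximate).
lambdas = np.arange(0.01, 10.0, 0.1)
```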