Multiplicative Sparse Feature Decomposition for Efficient Multi-View Multi-Task Learning

Authors: Lu Sun, Canh Hao Nguyen, Hiroshi Mamitsuka

IJCAI 2019

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on both simulated and real-world datasets validate its efficiency.
Researcher Affiliation Academia 1Bioinformatics Center, Institute for Chemical Research, Kyoto University, Japan 2Department of Computer Science, Aalto University, Finland
Pseudocode No The paper describes the steps of the algorithm but does not present them in a formally structured pseudocode or algorithm block.
Open Source Code Yes We provide the MATLAB code of SPLIT at: https://github.com/futuresun912/SPLIT.git
Open Datasets Yes Mirflickr: https://press.liacs.nl/mirflickr/ Caltech101: http://www.vision.caltech.edu/Image_Datasets/Caltech101/ NUS-Object: http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm
Dataset Splits Yes For each task we randomly select a%, 20% and 20% of its total samples as training set, validation set and testing set, respectively, with a ∈ {10, 20, 30}.
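The quoted splitting protocol (a random a% / 20% / 20% partition per task) can be sketched as below; the function name and seed handling are illustrative, not from the paper:

```python
import numpy as np

def split_indices(n_samples, train_frac, seed=0):
    """Randomly partition sample indices into train (a%), validation (20%),
    and test (20%) subsets, following the protocol quoted above.
    `train_frac` corresponds to a/100 with a in {10, 20, 30}."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(train_frac * n_samples)
    n_val = int(0.20 * n_samples)
    n_test = int(0.20 * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# Example: a = 30, i.e. a 30% / 20% / 20% split of 1000 samples.
train, val, test = split_indices(1000, 0.30)
```

Note that the three subsets are disjoint by construction, and any remaining samples (when a < 60) are simply left unused, as the quoted fractions need not sum to 100%.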
Hardware Specification No The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies No The paper mentions 'MATLAB code' but does not specify its version or the versions of any other software libraries or dependencies used.
Experiment Setup Yes The dimensionality of latent space in MFM is set as 20, as recommended in [Lu et al., 2017]. The number K of latent topics of SPLIT is set according to K/T ∈ {0.3, 0.5, 0.7, 0.9}. For each iterative algorithm, we terminate it once the relative change of its objective is below 10^-5, and set the maximum number of iterations as 1000. Values of regularization coefficients of comparing methods are selected from {10^a : |a| ∈ {0, 1, 2, 3, 4}}. The value of K is varied from 1 to 10 in steps of 1.
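The quoted setup amounts to a logarithmic regularization grid and a relative-change stopping rule. A minimal sketch, assuming a generic iterative solver (the `step` callback and the toy quadratic objective are illustrative, not the paper's algorithm):

```python
# Regularization grid {10^a : |a| in {0, 1, 2, 3, 4}}, i.e. 10^-4 ... 10^4.
reg_grid = sorted(10.0 ** a for a in range(-4, 5))

def run_until_converged(step, x0, tol=1e-5, max_iter=1000):
    """Run an iterative update until the relative change of the objective
    falls below `tol`, or `max_iter` iterations are reached, matching the
    stopping rule quoted above. `step` returns (new_iterate, objective)."""
    x, prev_obj = x0, None
    for it in range(max_iter):
        x, obj = step(x)
        if prev_obj is not None:
            rel_change = abs(prev_obj - obj) / max(abs(prev_obj), 1e-12)
            if rel_change < tol:
                return x, it + 1
        prev_obj = obj
    return x, max_iter

# Toy example: gradient descent on f(x) = (x - 3)^2 + 1.
def gd_step(x):
    x_new = x - 0.1 * 2.0 * (x - 3.0)
    return x_new, (x_new - 3.0) ** 2 + 1.0

x_star, n_iters = run_until_converged(gd_step, 0.0)
```

The guard `max(abs(prev_obj), 1e-12)` simply avoids division by zero when the objective reaches zero; the paper does not specify how that edge case is handled.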