Interest Inference via Structure-Constrained Multi-Source Multi-Task Learning

Authors: Xuemeng Song, Liqiang Nie, Luming Zhang, Maofu Liu, Tat-Seng Chua

IJCAI 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Comprehensive experiments on a real-world dataset validated our scheme." |
| Researcher Affiliation | Academia | National University of Singapore; Wuhan University of Science and Technology. Emails: {sxmustc, nieliqiang, zglumg}@gmail.com, liumaofu@wust.edu.cn, chuats@comp.nus.edu.sg |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: "We have released our compiled dataset... The compiled dataset is currently publicly accessible via: http://msmt.farbox.com/." This refers to the dataset, not to source code for the proposed method. |
| Open Datasets | Yes | "We have released our compiled dataset, which will facilitate other researchers to repeat our approach and to comparatively verify their own ideas. The compiled dataset is currently publicly accessible via: http://msmt.farbox.com/." |
| Dataset Splits | Yes | "Experimental results reported in this work are the average values over 10-fold cross validation." |
| Hardware Specification | Yes | "All the experiments were conducted over a server equipped with an Intel(R) Xeon(R) CPU X5650 at 2.67 GHz, with 48 GB RAM, 24 cores, and a 64-bit CentOS 5.4 operating system." |
| Software Dependencies | No | The paper mentions software such as LIBSVM, BoilerPipe, and LDA but does not provide version numbers for these or any other dependencies. |
| Experiment Setup | Yes | "We adopted the grid search strategy to determine the optimal values for the regularization parameters among the values {10^r : r ∈ {−12, ..., 1}}." Reported results are averages over 10-fold cross validation. Notably, the K in S@K and P@K was tuned from 1 to 10, and the optimal performance was reported for each fold. |
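
The evaluation protocol described in the last row — a grid search over regularization values {10^r : r ∈ {−12, ..., 1}} combined with 10-fold cross validation — can be sketched with scikit-learn's `SVC` (a LIBSVM wrapper, matching the solver the paper mentions). The synthetic data, the linear kernel, and the use of `C` as the regularization parameter are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

# Synthetic stand-in for the paper's real-world dataset (assumption).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Regularization grid {10^r : r in {-12, ..., 1}} -> 14 candidate values.
param_grid = {"C": [10.0 ** r for r in range(-12, 2)]}

# 10-fold cross validation, as reported in the paper.
cv = KFold(n_splits=10, shuffle=True, random_state=0)

# Exhaustive grid search; the best value is chosen by mean CV accuracy.
search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=cv)
search.fit(X, y)

print("best C:", search.best_params_["C"])
print("mean CV accuracy:", round(search.best_score_, 3))
```

In this setup each candidate `C` is fitted and scored ten times (once per fold), and the reported score is the average over folds — the same averaging the paper applies to its results.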